--- abstract: 'The low-temperature dc conductivities of barely metallic samples of p-type Si:B are compared for a series of samples with different dopant concentrations, $n$, in the absence of stress (cubic symmetry), and for a single sample driven from the metallic into the insulating phase by uniaxial compression, $S$. For all values of temperature and stress, the conductivity of the stressed sample collapses onto a single universal scaling curve, $\sigma (S, T) = \sigma_0 (\Delta S/S_c)^\mu G[T/T^*(S)]$, with $T^* \propto (\Delta S)^{z\nu}$. The scaling fit indicates that the conductivity of Si:B is $\propto T^{1/2}$ in the critical range. Our data yield a critical conductivity exponent $\mu = 1.6$, considerably larger than the value reported in earlier experiments where the transition was crossed by varying the dopant concentration. The larger exponent is based on data in a narrow range of stress near the critical value within which scaling holds. We show explicitly that the temperature dependences of the conductivity of stressed and unstressed Si:B are different, suggesting that a direct comparison of the critical behavior and critical exponents for stress-tuned and concentration-tuned transitions may not be warranted.' address: - 'Physics Department, City College of the City University of New York, New York, New York 10031' - 'Department of Electrical Engineering, Princeton University, Princeton, New Jersey 08544-5263' author: - 'S. Bogdanovich and M. P. Sarachik' - 'R. N. 
Bhatt' title: 'Conductivity of Metallic Si:B Near the Metal-Insulator Transition: Comparison between Unstressed and Uniaxially Stressed Samples' --- Introduction {#intro} ============ A continuous metal-insulator transition in the limit of zero temperature has been demonstrated over the past two decades since the pioneering results of Rosenbaum [*et al.*]{} [@Rosen1] in a wide variety of disordered electronic systems, including uncompensated and compensated doped semiconductors, amorphous metal-insulator mixtures, and magnetic semiconductors. The region near the transition has been studied by tuning through the transition using the standard method of varying the concentration of one of the constituents [@Rosen1; @Bishop; @Thomas; @castner; @holcomb; @ourSi:B; @sarachikSi:P; @stupp; @shlimak; @itoh], by applying uniaxial stress [@paalanen; @ourPRL], by using a magnetic field [@vonM] to vary the critical point, or by using persistent photoconductivity to vary the doping in shallow levels [@katsumoto]. In metal-semiconductor mixtures and compensated semiconductors (including the persistent photoconductor, doped $Al_xGa_{1-x}As$ [@katsumoto]), the onset of the conductivity is found to be well described by a particularly simple form in the metallic phase: $$\sigma(t,T) = \sigma(t,0) + B T^{1/2} \label{eq:sqrtT}$$ where $\sigma(t,0) = A(t-t_c)^\mu$ is the zero-temperature conductivity, the critical conductivity exponent $\mu \approx 1$, and the coefficient $B$ of the temperature-dependent term is independent of the tuning parameter $t$ (the metal fraction, dopant concentration, stress, magnetic field, photo-induced carrier density, etc.) as it approaches the critical value $t_c$ at the metal-insulator transition. Measurements of the conductivity for different values of $t$ are thus found to yield a set of parallel straight lines when plotted against $T^{1/2}$. 
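The parallel-straight-lines signature of Eq. (1) can be illustrated with a short numerical sketch; all parameter values below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical parameters for Eq. (1); units are arbitrary.
A, B, mu, t_c = 100.0, 10.0, 1.0, 0.5
T = np.linspace(0.05, 0.5, 20)        # temperature grid (K)

def sigma(t, T):
    """Eq. (1): sigma(t, 0) + B*sqrt(T), with sigma(t, 0) = A*(t - t_c)**mu."""
    return A * (t - t_c)**mu + B * np.sqrt(T)

# Fitting each curve linearly in sqrt(T) returns the same slope B for every t,
# i.e. a family of parallel straight lines with t-dependent intercepts.
slopes = [np.polyfit(np.sqrt(T), sigma(t, T), 1)[0] for t in (0.6, 0.7, 0.8)]
```

The $t$-dependence enters only through the intercept $\sigma(t,0)$, which vanishes as $t \rightarrow t_c$.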
Near a continuous zero-temperature phase transition governed by a quantum critical point, the critical behavior is expected to obey a standard scaling formalism [@Sondhi]. In particular, for a metal-insulator transition, the conductivity in the vicinity of the transition ($t \rightarrow t_c$, $T \rightarrow 0$) is expected to scale as: $$\sigma (t,T) = \sigma_c (T) f[ (t-t_c) T^{-1/z\nu}], \label{eq:scaling}$$ where $\sigma_c (T) \propto T^{\mu/z\nu}$ is the temperature-dependent conductivity at $t = t_c$, $\mu$ is the exponent of the zero-temperature conductivity $\sigma (t,0) \propto (t-t_c)^\mu$, $\nu$ is the exponent of the divergent correlation length $\xi \propto (t-t_c)^{-\nu}$, and $z$ is the dynamical exponent relating spatial and temporal scales near the critical point, $\tau \propto \xi^z$, with the characteristic temporal scale at a temperature $T$ given by $\hbar / k_B T$. By recasting Eq. (1) as $$\sigma (t,T) = B T^{1/2} [ 1 + A(t-t_c)^\mu/BT^{1/2} ], \label{eq:recast}$$ it is easily seen to be a special case of the scaling form (Eq. (2)) with the identification $\mu/z\nu = 1/2$; in conjunction with the experimentally determined value $\mu = 1$, this implies $z\nu = 2$. In contrast, the situation in uncompensated doped semiconductors, such as Si:P, Si:B and Ge:Ga, appears to be much more complicated, and there continues to be debate and controversy (see [*e.g.*]{}, [@TPR; @Lohneysen; @sarachikmott]) concerning the behavior of the conductivity near the metal-insulator transition. Far from the transition, deep in the metallic phase, the conductivity clearly exhibits a $T^{1/2}$ dependence [@Rosen3] at low temperatures, in agreement with Eq. (1) and consistent with perturbative results for a weakly disordered metal [@Altshuler]. The coefficient $B$ is found to depend weakly on dopant concentration deep in the metal, in qualitative agreement with theoretical expectations [@bhattlee]. 
Closer to the transition, however, the dependence of $B$ on concentration becomes rather marked, and actually changes sign from negative to positive as the transition is approached. The full scaling relation, Eq. (2), is not satisfied if one includes in the analysis both negative and positive slopes $B$. (In fact, if the conductivity obeys Eq. (1) near the transition, then scaling requires that the coefficient $B$ scale as a power of $(t-t_c)$, so that reversals in the sign of $B$ are explicitly excluded.) In the region close to the transition where the temperature coefficient of the low-temperature conductivity is positive, even the form of the temperature dependence of the conductivity is not clearly established: it has been reported in different experiments as $\propto T^{1/2}$ and $\propto T^{1/3}$ [@T1/3]. Very different critical conductivity exponents have been obtained in uncompensated Si:P, considered the prototypical doped semiconductor. A value $\mu=0.5$ was found in the classic experiments of Paalanen [*et al.*]{} [@paalanen] down to very low temperatures (below 5 mK), where uniaxial stress was used to tune the transition. In experiments where the transition was approached by reducing the dopant concentration, similar exponents near $0.5$ were found in Si:P [@sarachikSi:P] as well as in a number of other uncompensated doped semiconductors, including Si:As [@castner], double-doped Si:P,As [@holcomb] and Ge:Ga [@itoh]. In contrast, Stupp [*et al.*]{} [@stupp] found $\mu = 1.3$ in Si:P, and Shlimak [*et al.*]{} [@shlimak] deduced $\mu=1$ for uncompensated transmutation-doped Ge:Sb. These large exponents were based on data in a narrow range of dopant concentration near the transition where the coefficient $B$ of Eq. (1) is positive. 
Using dopant concentration to tune the transition, a prior study involving one of the present authors reported $\mu=0.65$ in Si:B, a material in which the impurity states are characterized by an angular momentum $J = 3/2$ arising from spin-orbit coupling characteristic of the valence bands of semiconductors like Si, and where spin-orbit scattering has been found to be strong [@ourSi:B]. We have recently reported [@ourPRL] measurements of the conductivity in Si:B in the immediate vicinity of the transition. By applying a compressive uniaxial stress, $S$, along the \[001\] direction using a pressure cell described elsewhere [@bogdanovich], we have driven a sample of Si:B from the metallic phase toward the transition, and mapped out the conductivity as a function of applied stress $(S)$ and temperature $(T)$ in the range $0.05~{\rm K} < T < 0.5~{\rm K}$. We find that the conductivity is described accurately by the scaling form given by Eq. (2) (with $t=S$) for a range of stresses which yield conductivities that obey Eq. (1) with a constant coefficient $B$. However, the critical conductivity exponent is found to be $\mu \approx 1.6$, considerably larger than the values around $\mu = 0.5 - 0.7$ reported by many workers, including that reported earlier for Si:B [@ourSi:B] where the transition was approached by varying the dopant concentration. In this paper, we describe in detail the measurements on the metallic side of the transition and compare results obtained on a sample subjected to uniaxial stress with those obtained earlier for a series of unstressed samples in Ref. [@ourSi:B]. We are led to the surprising conclusion that the two do not agree in detail, suggesting that further investigation of the issue of critical behavior in the presence of uniaxial stress is warranted. We describe the experimental details and results below, followed by a discussion and summary. 
Experimental Details and Results {#exdetails} ================================ A bar-shaped $8.0 \times 1.25 \times 0.3$ mm$^3$ sample of Si:B was cut with its long dimension along the \[001\] direction. Relatively small uniaxial stress has a pronounced effect on the conductivity of Si:B, driving it initially toward more insulating behavior. A detailed discussion of the effect of stress is contained in a companion paper. The dopant concentration, determined from the ratio of the resistivities [@bogdanovich] at 300 K and 4.2 K, was $4.84 \times 10^{18}$ cm$^{-3}$. Electrical contact was made along four thin boron-implanted strips. Uniaxial compression was applied to the sample along the long \[001\] direction using a pressure cell described elsewhere [@bogdanovich]. Four-terminal measurements were taken at 13 Hz (equivalent to DC) for different fixed values of uniaxial stress at temperatures between 0.05 and 0.75 K. Resistivities were determined from the linear region of the $I$-$V$ curves. As discussed earlier, Eq. (1) is expected to be valid at low temperatures in the weakly disordered metal (perturbative regime) [@Altshuler; @leeramareview], [*i.e.*]{}, not too close to the transition. In the absence of theoretical predictions very near the transition, the conductivity is often fitted to this form everywhere, including the critical regime. Following this generally accepted procedure, we plot the conductivity of Si:B as a function of $T^{1/2}$ for various values of the stress $S$ in Fig. 1. In agreement with experiments where dopant concentration is used to tune the transition, the slope $B$ of the curves changes from negative to positive with increasing stress as the critical value $S_c$ is approached. However, although the apparent straight-line behavior is consistent with Eq. (1), an equally good fit (not shown) is obtained by plotting the data as a function of $T^{1/3}$. This method is therefore not sufficient to distinguish between the two functional forms. 
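The ambiguity between the $T^{1/2}$ and $T^{1/3}$ forms can be illustrated on synthetic data; this is a sketch under an assumed noise level, using the temperature window quoted above, not the actual measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.linspace(0.05, 0.75, 30)       # temperature window of the experiment (K)
# Synthetic conductivity obeying a true T^(1/2) law, with assumed Gaussian noise:
sigma = 20.0 + 10.0 * np.sqrt(T) + rng.normal(0.0, 0.05, T.size)

def r_squared(exponent):
    """Coefficient of determination for a straight-line fit of sigma vs T**exponent."""
    x = T**exponent
    resid = sigma - np.polyval(np.polyfit(x, sigma, 1), x)
    return 1.0 - np.sum(resid**2) / np.sum((sigma - sigma.mean())**2)

r2_half, r2_third = r_squared(0.5), r_squared(1.0 / 3.0)
# Both fits are nearly perfect straight lines over this narrow window, so a
# linear-looking plot cannot by itself discriminate T^(1/2) from T^(1/3).
```

Over such a narrow range $T^{1/3}$ is almost a linear function of $T^{1/2}$, which is why both plots appear straight.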
We now present the results of a full scaling analysis of these data published elsewhere [@ourPRL] and discuss its implications. The critical stress for the sample used in our experiments was determined to be $S_c = 613$ bar; the temperature dependence at this value of stress, [*i.e.*]{}, the critical conductivity, is $\sigma_c (T) \propto T^{0.5}$. We rewrite the scaling form, Eq. (2), as: $$\sigma(S,T) = \sigma(S,0) G[T/(\Delta S/S_c)^{z\nu}] \label{eq:scaling2}$$ where $\Delta S = (S_c - S)$ and $\sigma(S,0) = \sigma_0 (\Delta S/S_c)^\mu$. Guided by this version of the scaling form (Eq. (4)), the quantity $\sigma (S, T)/(\Delta S/S_c)^\mu$ is plotted in Fig. 2(a) as a function of the scaling variable, $T/(\Delta S/S_c)^{z\nu}$, with $z\nu=3.2$ and $\mu=1.6$ chosen to yield the best data collapse [@ourPRL]. The resulting scaling function fully describes the temperature dependence of the conductivity in the conducting phase in the vicinity of the transition. If the usual assumption is made that $\mu = \nu$, then the dynamical exponent $z = 2$, the same as that found in systems described by Eq. (1), such as semiconductor-metal mixtures and persistent photoconductors. To test whether Eq. (1) provides a good description of the conductivity of Si:B very near the transition, we replot the same data as a function of $[T/(\Delta S/S_c)^{z\nu}]^{1/2}$ in Fig. 2(b). The data fall nearly on a straight line, indicating that the temperature dependence of the conductivity of Si:B just on the metallic side of the metal-insulator transition in the scaling regime is rather similar to that of metal-semiconductor mixtures and doped, highly compensated $Al_x Ga_{1-x} As$. This, in turn, implies that the $T^{1/2}$ corrections exhibited by the conductivity in the perturbative regime of the weakly disordered metal extend all the way to the critical point [@castellani]. 
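The data-collapse construction of Eq. (4) can be sketched numerically, using the published values $S_c = 613$ bar, $\mu = 1.6$, $z\nu = 3.2$ and the amplitudes of Eq. (6); the stresses and the synthetic "measurements" are assumptions generated from the scaling form itself, so a perfect collapse is expected here, whereas real data would scatter about the master curve:

```python
import numpy as np

S_c, mu, znu = 613.0, 1.6, 3.2        # published critical stress (bar) and exponents
sigma0, b = 66.0, 10.6                # amplitudes of Eq. (6)

def sigma_model(S, T):
    """Conductivity of Eq. (6) on the metallic side (S < S_c); sigma in (ohm cm)^-1."""
    ds = (S_c - S) / S_c
    return sigma0 * ds**mu + b * np.sqrt(T)

T = np.linspace(0.05, 0.5, 10)        # temperature window of the experiment (K)
collapsed = []
for S in (500.0, 540.0, 580.0):       # hypothetical stresses below S_c (bar)
    ds = (S_c - S) / S_c
    x = T / ds**znu                   # scaling variable T/(dS/S_c)^(z*nu)
    y = sigma_model(S, T) / ds**mu    # scaled conductivity
    collapsed.append((x, y))

# Because mu = z*nu/2, every curve lands on the master curve G(x) = sigma0 + b*sqrt(x):
for x, y in collapsed:
    assert np.allclose(y, sigma0 + b * np.sqrt(x))
```

The final assertion makes the algebra explicit: dividing Eq. (6) by $(\Delta S/S_c)^\mu$ turns the $T^{1/2}$ term into the square root of the scaling variable precisely when $\mu = z\nu/2$.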
Pronounced failure of scaling occurs if we assume a critical temperature dependence in Si:B of $T^{1/3}$ instead of $T^{1/2}$; we are thus able to assert that the temperature dependence of the critical curve and the scaling function are decidedly inconsistent with the $T^{1/3}$ dependence that has been found in some other materials, such as Ge:Ga [@itohpreprint] and Ge:Sb [@shlimak]. Since Si:B and Ge:Ga are both acceptor systems, it would be of importance to see if similar scaling holds in the latter case, and whether the critical curve displays similar $T^{1/3}$ dependence. A best straight-line [@footnote; @note] fit to the data of Fig. 2(b) yields: $$\sigma (S, T)/(\Delta S/S_c)^{\mu} = 66 + 10.6 [T/(\Delta S/S_c)^{z\nu}]^{1/2}. \label{eq:fit}$$ Rearranging terms and making use of the fact that in our case $\mu = z\nu/2 = 1.6$, this can be written as: $$\sigma (S, T) = 66 (\Delta S/S_c)^{1.6} + 10.6 T^{1/2} \label{eq:fit2}$$ where $\sigma$ is in (ohm cm)$^{-1}$ and $T$ is in Kelvin. This is precisely of the form of Eq. (1), as stated earlier. A striking feature of these results is the very large critical conductivity exponent $1.6$ compared to the exponent $0.65$ found in earlier experiments [@ourSi:B] where the transition was approached by tuning the dopant concentration. This is further illustrated in Fig. 3, which shows the zero-temperature conductivity of the stressed sample plotted as a function of $\Delta t/t_c$ on a linear scale compared with the conductivity obtained from a series of unstressed samples with varying dopant density. The symbols represent zero-temperature extrapolations obtained from the $T^{1/2}$ curves of Fig. 1 and the lower solid curve represents the first term on the right of Eq. (6); here the tuning parameter $t=S$. The upper solid curve represents the zero-temperature conductivity as a function of dopant concentration taken from Reference 6; here the tuning parameter $t=n$. 
The difference between the results for stressed and unstressed samples is clear and dramatic. To probe these differences further, we show in Fig. 4 the temperature dependence of the conductivities of a series of unstressed metallic samples close to the metal-insulator transition from Ref. [@ourSi:B] (shown as open circles) along with the data of Fig. 1. The two sets of data clearly do not overlap, contrary to what would be expected if tuning through the transition by varying stress or dopant concentration were equivalent. Although magnetic field-tuned transitions have long been recognized as different and belonging to a different universality class, it has generally been assumed that stress-tuned and concentration-tuned transitions are equivalent, allowing for direct comparisons of the critical behavior and critical exponents. The data of Fig. 4 seem to indicate that this is not the case. We discuss this point further in the next section. Discussion and Concluding Remarks {#discussion} ================================= The conductivity data in a metallic sample of Si:B subjected to a uniaxial stress along the \[001\] direction show clear evidence of scaling with temperature and stress as the metal-insulator transition is approached, in contrast with most previous data on uncompensated doped semiconductors. The scaling behavior enables one to determine with much more confidence the critical behavior at the transition, $\sigma_c(T) \propto T^{0.5}$, than is possible from the temperature dependence of individual samples. However, the scaling yields a much larger critical exponent $\mu \approx 1.6$ characterizing the zero-temperature conductivity, $\sigma(S,0) \propto (S_c-S)^\mu$, than in the absence of stress. This large difference naturally raises a number of questions. 
As stated earlier, acceptors in semiconductors are characterized by an angular momentum variable corresponding to $J = 3/2$, and therefore have a four-fold degeneracy in the unstressed cubic crystal that is lifted in the presence of uniaxial stress. However, time-reversal symmetry, which is broken in the presence of a magnetic field, is maintained in the presence of stress (the acceptor state is now two-fold degenerate). Consequently, the change in universality class expected in the presence of a magnetic field is not expected for uniaxial stress. If, however, the breaking of the four-fold degeneracy leads to some (as yet unknown) new universality class, this effect should be easy to confirm experimentally: in Si:P, where there is no such degeneracy in zero stress, the unstressed and uniaxially stressed data should not have the large discrepancies seen in Si:B. A potential source of error in the determination of the critical exponent is strong nonlinearity in the stress dependence of the critical density. In both n- and p-doped Si (or Ge), the change in the critical density with stress can be attributed to a change in the impurity wavefunction [@bhatt1]. In Si:P, the change at low stresses due to mixing of the central-cell split excited 1s states into the ground state can be calculated [@bhatt2] and shown to be quadratic in $S$, [*i.e.*]{}, $n_c(S) = n_c(0) - aS^2$. At large stresses, on the other hand, the impurity wavefunction is derived (for most directions of stress) from the two lowest conduction band minima, and so the critical density $n_c(S)$ saturates as $S \rightarrow \infty$. As a result, $n_c(S)$ is a monotonic function of $S$, with an "S-shaped" curve, which can be reasonably approximated by a linear curve for small excursions around a critical value $S_c$, except for very small and very large $S$; the characteristic $S$ scale is set by the strain corresponding to the central-cell splitting divided by the conduction band deformation potential. 
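A toy interpolation capturing the two limits quoted above for Si:P, a quadratic decrease at small stress and saturation at large stress, shows the monotonic behavior described; the functional form and all numbers here are assumptions for illustration, not the calculation of the cited references:

```python
# Hypothetical numbers: n_c in units of 1e18 cm^-3, stress S in kbar.
n_c0, n_cinf, S0 = 3.7, 3.0, 2.0

def n_c(S):
    """Toy critical density: ~ n_c(0) - a*S^2 at small S, saturating to n_cinf as S -> inf."""
    return n_cinf + (n_c0 - n_cinf) / (1.0 + (S / S0)**2)

values = [n_c(0.5 * k) for k in range(13)]            # 0 to 6 kbar
monotonic = all(a > b for a, b in zip(values, values[1:]))
```

Near $S = 0$ this expands as $n_c(0) - (n_c(0) - n_c(\infty))(S/S_0)^2$, reproducing the quadratic onset, and the curve is nearly linear for excursions around intermediate stresses.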
In Si:B, stress initially splits the $J = 3/2$ acceptor state linearly, causing a much more dramatic dependence of the critical density on stress. A calculation of the acceptor wavefunction at low stress [@Chroboczek; @Durst] does not explain this large dependence; instead the predominant effect must come from the disappearance of the freedom to choose between orbitally distinct wavefunctions (as in the case of effective mass donors [@bhatt3]). For large compressive stresses along the \[001\] direction, on the other hand, the acceptor wavefunctions must be derived predominantly from the light-hole valence band, and therefore $n_c$ is expected to decrease, as the acceptor wavefunction expands [@Chroboczek; @Durst]. Consequently, $n_c(S)$ actually exhibits a maximum as a function of $S$, so that an appropriately doped sample of Si:B should exhibit a reentrant metal-insulator-metal transition as a function of stress. In the absence of a quantitative theory for $n_c(S)$, we base our assumption that nonlinearities are not significant on the experimental finding that both the stress dependence of the conductivity of our sample, $\sigma(S,T)$, at a high temperature ($T = 4.2$ K), and the dopant density dependence of the conductivity of a series of closely spaced unstressed samples, $\sigma(n,T)$, at $T = 4.2$ K are [*linear*]{} in $S$ and $n$, respectively, over the range of the control parameter around the critical value used in our analysis. Further, the critical stress for our sample, $S_c = 613$ bar, lies well away from zero stress (where one might expect some complications from local strains due to a Jahn-Teller splitting of the acceptor state) and from the stress corresponding to the maximum resistivity at low $T$ ($S_{max} = 3.5$ kbar), and is therefore less likely to be affected by nonlinearities in $n_c(S)$. Confirmation of this must await results on a series of samples with differing values of the critical stress $S_c$. 
Another possible source for the unusually large exponent obtained in the current experiments is an inhomogeneous distribution of stresses resulting in a spread of $\Delta S$'s and a consequent averaging over a distribution of conducting paths, some further and some closer to the transition. However, such a distribution might well be expected to give rise to measurable deviations from scaling, whereas the quality of the data collapse shown in Fig. 2 is excellent. One also needs to consider possible effects associated with anisotropic conductivities in uniaxially stressed samples. For a sample under \[001\] stress, the conductivity along the stress direction $\sigma_l(S)$ differs from the conductivities along the transverse \[100\] and \[010\] directions, $\sigma_t(S)$. Assuming a normal Fermi liquid metallic phase, and since the critical stress $S_c$ is nonzero, the conductivity anisotropy $$\alpha (S) = 3 [ \sigma_l(S) - \sigma_t(S) ] / [\sigma_l(S) + 2 \sigma_t(S)] \label{eq:alpha}$$ may be expanded in a Taylor series around $S_c$: $$\alpha (S) = \alpha (S_c) + (d \alpha /dS)_{S_c} (S - S_c), \label{eq:alphaexp}$$ which can easily be shown to lead to a subleading correction to the conductivity onset when measured in any direction, [*i.e.*]{}, if we take: $$\sigma_{tr} (S) = [ \sigma_l (S) + 2 \sigma_t (S) ]/3 = \sigma_0 [(S_c-S)/S_c]^\mu \label{eq:sigmatr}$$ we obtain $$\sigma_l(S) , \sigma_t(S) \propto (S_c-S)^\mu [ 1 + O(S_c-S) ] \label{eq:sigmalt}$$ where the coefficient of the term of order $(S_c - S)$ in the square brackets will be proportional to $(d \alpha /dS)_{S_c}$. The anisotropy also affects the comparison between unstressed and uniaxially stressed samples shown in Fig. 4. In particular, one expects to be able to compare the stress dependence of the angle-averaged value $\sigma_{tr} (S)$ to the concentration dependence of $\sigma (n)$ of the unstressed (cubic) samples. 
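The algebra of Eqs. (7)-(10) can be checked numerically; the anisotropy values below are hypothetical, while $\sigma_0$, $\mu$ and $S_c$ are the values quoted in the text:

```python
# Hypothetical anisotropy parameters; sigma0, mu, S_c as quoted in the text.
S_c, mu, sigma0 = 613.0, 1.6, 66.0
alpha_c, dalpha = 0.1, 2e-4            # assumed alpha(S_c) and (dalpha/dS)_{S_c} (1/bar)

def sigma_components(S):
    """Build sigma_l, sigma_t from the angle average sigma_tr (Eq. (9)) and alpha (Eq. (8))."""
    sigma_tr = sigma0 * ((S_c - S) / S_c)**mu
    alpha = alpha_c + dalpha * (S - S_c)
    sigma_l = sigma_tr * (1.0 + 2.0 * alpha / 3.0)
    sigma_t = sigma_tr * (1.0 - alpha / 3.0)
    return sigma_l, sigma_t, sigma_tr, alpha

for S in (550.0, 580.0, 600.0):        # metallic side, S < S_c (bar)
    sigma_l, sigma_t, sigma_tr, alpha = sigma_components(S)
    # Definition Eq. (7) holds identically, and the angle average recovers Eq. (9):
    assert abs(3 * (sigma_l - sigma_t) / (sigma_l + 2 * sigma_t) - alpha) < 1e-12
    assert abs((sigma_l + 2 * sigma_t) / 3 - sigma_tr) < 1e-12
```

Since $\sigma_l/\sigma_{tr} = 1 + 2\alpha(S)/3$ and $\alpha$ is linear in $(S - S_c)$, both $\sigma_l$ and $\sigma_t$ retain the leading $(S_c - S)^\mu$ onset of Eq. (10), with only a relative correction of order $(S_c - S)$.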
Consequently, the longitudinal conductivities $\sigma_l (S) $ for the uniaxially stressed samples (closed circles in Fig. 4) should be divided by a stress-dependent anisotropy factor $[1 + 2 \alpha (S) /3 ]$ when comparing with the unstressed samples. (In providing a direct comparison in Fig. 4, we have assumed that $\alpha $ is small at the stresses applied.) To test this, we multiplied each of the unstressed curves (open circles) by an arbitrary factor chosen to make it coincide with corresponding curves for the stressed samples. Our best attempt, shown in Fig. 5, requires rather large anisotropy values $\alpha$; moreover, the $\alpha$’s (listed in the caption to Fig. 5) are unphysical: they are nonmonotonic functions of the stress, and decrease with increasing stress in the critical region. We therefore conclude that the difference between the temperature dependence of the uniaxially stressed and unstressed samples is intrinsic and not due to effects associated with anisotropic conductivities. Conclusive proof would require measurements of the conductivities in both the longitudinal and transverse directions in the presence of stress. It should be noted that the scaling is found to hold only in a relatively small window of metallic conductivities for control parameter values rather close to the critical value. The much smaller exponent, $ \mu \approx 0.5 - 0.7 $, is derived from data over a much wider range. There has been much debate about the unusually small correlation length exponent $\nu$ that such a small $\mu$ implies, and possible violation of the bound derived for disordered systems $ \nu \geq (2/d) $[@chayes]. It is not clear whether such systems become inhomogeneous at long length scales and are then governed by percolation near the transition[@zimanyi]. Such a scenario would offer the attractive possibility of reconciling the many different results found in Si:P and Si:B. 
We point out that reports of large conductivity exponents [@stupp; @ourPRL] are confined to a region very close to the critical value of the tuning parameter, where percolation may well result from such inhomogeneities. Further, the observed conductivity exponent is close to that expected for classical percolation in three dimensions [@percolation]. Finally, this might account for earlier observations [@paalanen] of differing conductivities in different samples very close to the metal-insulator transition; the percolative paths could be rather sensitive to precise details of the dopant distribution, and lead to nonuniversal amplitudes, especially in a crossover region. Summary ======= In summary, we have used compressive uniaxial stress applied along the \[001\] direction to approach the metal-insulator transition from the metallic side in Si:B. The conductivity scales with stress and temperature over the narrow range within which Eq. (1) is obeyed with a constant coefficient $B$. The temperature dependence of the conductivity at the critical value of the tuning parameter (uniaxial stress in our case) is found to be proportional to $T^{0.5}$. The critical exponent characterizing the onset of the zero-temperature conductivity is found to be $\mu = 1.6$, considerably larger than the exponent found in experiments where the transition was approached by reducing dopant concentration. The temperature dependence of the conductivity is qualitatively and quantitatively different for stressed and unstressed Si:B, however, suggesting that a direct comparison of the critical exponents is not possible. Our data call for a systematic study of the stress-tuned transition in other donor and acceptor systems, as well as for a critical reexamination of the assumption that stress-driven and concentration-driven metal-insulator transitions are equivalent for all doped semiconductors. Acknowledgments =============== We are grateful to Jonathan Friedman, D. Simonian and S. V. 
Kravchenko for their participation in some phases of these experiments. We acknowledge valuable experimental contributions by L. Walkowicz. Our heartfelt thanks go to G. A. Thomas for his generous support and expert advice, help and interest throughout this project. We thank T. F. Rosenbaum, M. Paalanen, E. Smith and S. Han for valuable experimental advice and the loan of equipment, and F. Pollak for useful suggestions and some samples. M. P. S. thanks G. Kotliar and D. Belitz for numerous discussions, and John Davies for several discussions regarding the possible effects of inhomogeneous stress distributions. This work was supported by the US Department of Energy Grant No. DE-FG02-84ER45153. R. N. B. was supported by NSF Grant Nos. DMR-9400362 and DMR-9809483. T. F. Rosenbaum, K. Andres, G. A. Thomas and R. N. Bhatt, Phys. Rev. Lett. [**45**]{}, 1723 (1980). G. Hertel, D. J. Bishop, E. G. Spencer, J. M. Rowell and R. C. Dynes, Phys. Rev. Lett. [**50**]{}, 743 (1983); D. J. Bishop, E. G. Spencer, and R. C. Dynes, Solid State Electronics [**28**]{}, 73 (1985). G. A. Thomas, Y. Ootuka, S. Katsumoto, S. Kobayashi and W. Sasaki, Phys. Rev. B [**25**]{}, 4288 (1982). P. F. Newman and D. F. Holcomb, Phys. Rev. B [**28**]{}, 638 (1983); D. W. Koon and T. G. Castner, Phys. Rev. B [**40**]{}, 1216 (1989). P. F. Newman and D. F. Holcomb, Phys. Rev. Lett. [**51**]{}, 2144 (1983). P. Dai, Y. Zhang, and M. P. Sarachik, Phys. Rev. Lett. [**66**]{}, 1914 (1991); Phys. Rev. B [**45**]{}, 3984 (1992). P. Dai, Y. Zhang, S. Bogdanovich, and M. P. Sarachik, Phys. Rev. Lett. [**66**]{}, 1914 (1991); Phys. Rev. B [**48**]{}, 4941 (1993). H. Stupp, M. Hornung, M. Lakner, O. Madel, and H. v. Lohneysen, Phys. Rev. Lett. [**71**]{}, 2634 (1993). I. Shlimak, M. Kaveh, R. Ussyshkin, V. Ginodman, and L. Resnick, Phys. Rev. Lett. [**77**]{}, 1103 (1996). K. M. Itoh, E. E. Haller, J. W. Beeman, W. L. Hansen, J. Emes, L. A. Reichertz, E. Kreysa, T. Shutt, A. Cummings, W. Stockwell, B. Sadoulet, J. 
Muto, J. W. Farmer, and V. I. Ozhogin, Phys. Rev. Lett. [**77**]{}, 4058 (1996). M. A. Paalanen, T. F. Rosenbaum, G. A. Thomas, and R. N. Bhatt, Phys. Rev. Lett. [**48**]{}, 1284 (1982). S. Bogdanovich, M. P. Sarachik, and R. N. Bhatt, submitted to Phys. Rev. Lett. S. von Molnar, A. Briggs, J. Floquet and G. Remenyi, Phys. Rev. Lett. [**51**]{}, 706 (1983). S. Katsumoto, F. Komori, N. Sano, and S. Kobayashi, J. Phys. Soc. Jpn. [**56**]{}, 2259 (1987). See [*e.g.*]{}, S. L. Sondhi, S. M. Girvin, J. P. Carini and D. Shahar, Rev. Mod. Phys. [**69**]{}, 315 (1997). T. F. Rosenbaum, G. A. Thomas, and M. A. Paalanen, Phys. Rev. Lett. [**72**]{}, 2121(C) (1994). H. Stupp, M. Hornung, M. Lakner, O. Madel, and H. v. Lohneysen, Phys. Rev. Lett. [**72**]{}, 2122(C) (1994). For a review, see M. P. Sarachik, in [*Metal-Insulator Transitions Revisited*]{}, ed. by P. P. Edwards and C. N. Rao (Taylor and Francis, London, 1995). T. F. Rosenbaum, K. Andres, G. A. Thomas, and P. A. Lee, Phys. Rev. Lett. [**46**]{}, 568 (1981); T. F. Rosenbaum, R. F. Milligan, G. A. Thomas, P. A. Lee, T. V. Ramakrishnan, R. N. Bhatt, K. DeConde, H. Hess, and T. Perry, Phys. Rev. Lett. [**47**]{}, 1758 (1981). B. L. Altshuler and A. G. Aronov, Zh. Eksp. Teor. Fiz. [**77**]{}, 2028 (1979) \[Sov. Phys. JETP [**50**]{}, 968 (1979)\]; Pisma Zh. Eksp. Teor. Fiz. [**30**]{}, 514 (1979) \[Sov. Phys. JETP Lett. [**30**]{}, 514 (1979)\]; Solid State Commun. [**46**]{}, 429 (1983). R. N. Bhatt and P. A. Lee, Solid State Commun. [**48**]{}, 755 (1983). $T^{1/2}$ was measured in Si:P by T. F. Rosenbaum, R. F. Milligan, M. A. Paalanen, G. A. Thomas, R. N. Bhatt, and W. Lin, Phys. Rev. B [**27**]{}, 7509 (1983); $T^{1/3}$ was first reported in $n$-type GaAs by M. Maliepaard, M. Pepper, R. Newbury, J. E. F. Frost, D. C. Peacock, D. A. Ritchie, G. A. C. Jones, and G. Hill, Phys. Rev. B [**39**]{}, 1430 (1989). S. Bogdanovich, Thesis (City College of CUNY, 1998). P. A. Lee and T. V. Ramakrishnan, Rev. Mod. Phys. 
[**57**]{}, 287 (1985). M. Watanabe, Y. Ootuka, K. M. Itoh, and E. E. Haller, preprint. A $T^{1/2}$ dependence of the conductivity at the critical value of the tuning parameter has been calculated for the case with strong spin-orbit scattering by C. Castellani, C. Di Castro, G. Forgacs and S. Sorella, Solid State Commun. [**52**]{}, 261 (1984). Data for the stress closest to the critical value $S_c$ were not included in the fit. The slope $B$ for the $S = 583$-bar curve is smaller and atypical, due possibly to an inhomogeneous distribution of stress resulting in insulating behavior for some regions of the sample. A somewhat better fit is obtained using $T^{0.47}$. This represents the uncertainty associated with the present analysis. R. N. Bhatt, Proceedings of the Nuclear Physics and Solid State Physics Symposium, Department of Atomic Energy, India [**25A**]{}, 49 (1982). R. N. Bhatt, Phys. Rev. B [**26**]{}, 1082 (1982). R. N. Bhatt, Phys. Rev. B [**24**]{}, 3630 (1981). J. A. Chroboczek, F. H. Pollak, and H. F. Staunton, Phil. Mag. B [**50**]{}, 113 (1984). A. Durst and R. N. Bhatt (unpublished results). J. Chayes, L. Chayes, D. S. Fisher and T. Spencer, Phys. Rev. Lett. [**57**]{}, 2999 (1987) and references to N. F. Mott and to A. B. Harris therein. G. Zimanyi (private communication). It should also be mentioned that the exponent for the Anderson localization transition for noninteracting electrons in three dimensions is also close to 1.6 (see [*e.g.*]{} K. Slevin and T. Ohtsuki, Phys. Rev. Lett. [**78**]{}, 4083 (1997)). However, there is considerable evidence for electron interaction effects in both the insulating and metallic phases in Si:B; see, for example, Ref. [@sarachikmott].
--- author: - 'A. D. Supanitsky' - 'and G. Medina-Tanco' title: 'Ultra high energy cosmic rays from super-heavy dark matter in the context of large exposure observatories' --- Introduction {#sec:intro} ============ The nature of the ultra high energy cosmic rays (UHECRs, $E\geq 10^{18}$ eV) is still unknown. The main observables used to study their origin are the energy spectrum, the composition profile as a function of primary energy, and the distribution of their arrival directions. The UHECR flux has been measured with good statistics by the Pierre Auger Observatory and Telescope Array. It presents two main features: a hardening at $\sim 10^{18.7}$ eV, known as the ankle, and a suppression at the highest energies. This suppression is observed by Auger at $10^{(19.62 \pm 0.02)}$ eV and by Telescope Array at a larger energy, $10^{(19.78 \pm 0.06)}$ eV [@AugerTA:17]. Moreover, the Auger spectrum lies below the one measured by Telescope Array. The discrepancies between the two observations can be diminished by shifting the energy scales of both experiments within their systematic uncertainties. However, some differences are still present in the suppression region [@AugerTA:17]. The composition of the UHECRs is determined by comparing experimental data with air shower simulations, which make use of high-energy hadronic interaction models. These models present non-negligible systematic uncertainties since the hadronic interactions at the highest energies cannot be deduced from first principles. As a consequence, the composition determination is subject to important systematic uncertainties. One of the parameters most sensitive to the nature of the primary is the atmospheric depth of the maximum shower development, $X_{max}$. It can be obtained on an event-by-event basis from the data taken by the fluorescence telescopes of Auger and Telescope Array. 
The mean value of $X_{max}$ obtained by Auger [@AugerXmax:14], interpreted by using the updated versions of the current high energy hadronic interaction models, shows that the composition is light from $\sim 10^{18}$ up to $\sim 10^{18.6}$ eV. Above $\sim 10^{18.6}$ eV, the composition becomes progressively heavier with increasing primary energy. This trend is consistent with the results obtained by using the standard deviation of the $X_{max}$ distribution [@AugerXmax:14]. On the other hand, the $X_{max}$ parameter reconstructed from the data taken by the fluorescence telescopes of Telescope Array is also compatible with a light composition at energies below the ankle, when interpreted by using the current hadronic interaction models [@TA:18]. It is worth mentioning that the $X_{max}$ distributions as a function of primary energy obtained by Auger and Telescope Array are compatible within systematic uncertainties [@Souza:17]. However, the presence of heavier primaries at energies above the ankle cannot be confirmed by the Telescope Array data due to the limited statistics of the event sample [@Souza:17]. The distribution of the arrival directions of the events with primary energies above $\sim 10^{18.9}$ eV detected by Auger presents an anisotropy that can be described as a dipole of $\sim 6.5$% amplitude [@Science:17]. The significance of this detection is larger than $5.2 \sigma$. The dipole direction is such that a scenario in which the flux is dominated by a galactic component is disfavored [@Science:17]. Regarding point source searches, Auger has found an indication of a correlation between the arrival directions of the events with primary energy larger than $10^{19.6}$ eV and nearby starburst galaxies [@AugerStar:18]. The significance of this correlation is at the $\sim 4\sigma$ level. The Auger data also present an excess above $10^{19.76}$ eV in the region of the radio galaxy Centaurus A [@AugerAnisICRC:17; @AugerAnis:15].
However, the statistical significance of this excess is $\sim 3.1\sigma$. The Telescope Array Collaboration has also found an excess above $10^{19.75}$ eV in a direction of the sky contained in the supergalactic plane [@HotSpot:14; @HotSpot:17]. The statistical significance of this excess is $\sim 3.4\sigma$. The experimental data suggest that the cosmic ray flux above the ankle is dominated by a component originating in extragalactic sources, some of which are possibly starburst galaxies. Moreover, these sources appear to accelerate not only protons but also heavier nuclei, provided that current high energy hadronic interaction models do not suffer from overly large systematic uncertainties. However, a minority component of different origin that could dominate the flux beyond the suppression is still compatible with the experimental data [@Alcantara:19]. The possibility that the by-products of the decay of unstable super-heavy dark matter (SHDM) particles contribute to the UHECR flux has been studied extensively in the past (see for instance [@MedinaTanco:99; @Aloisio:08; @Kalashev:08; @Aloisio:15; @Kalshev:17; @Marzola:17]). In these models the dark matter is composed of supermassive particles produced gravitationally during inflation [@Kuzmin:98; @Chung:98; @Kuzmin:99a; @Kuzmin:99b]. These particles would be clustered in the halos of galaxies, including ours. The spectrum of particles produced in SHDM decays is expected to be dominated by gamma rays, protons, and neutrinos. The upper limits on the gamma-ray flux obtained by Auger and the non-detection of events above $10^{20.3}$ eV by Auger impose tight constraints on the flux corresponding to this hypothetical SHDM component. Therefore, to test the hypothesis of the existence of this component, observatories of very large exposure are required. In this article we study the possibility of identifying this scenario in the context of the next generation UHECR observatories, which will have a much larger exposure than current ones.
In this study, besides the contribution of the galactic halo, the contribution of extragalactic halos located in the nearby universe is also included. Cosmic rays from galactic and extragalactic SHDM ================================================ The rest mass and the decay time of the SHDM particles are free parameters in models in which a minority component of the UHECRs originates from the decay of these unstable particles. In the energy range of interest these parameters are constrained by cosmic ray observations. In particular, the most restrictive constraints are imposed by the upper limits on the photon flux found by Auger [@Kalashev:16] and by the non-observation by Auger of events above $10^{20.3}$ eV [@Alcantara:19]. The latter analysis imposes more restrictive constraints than those based on the upper limits on the photon flux for scenarios in which $M_X>10^{23}$ eV. Therefore, the mass of the SHDM particles considered in this work is $M_X=10^{22.3}$ eV, for which only the constraints coming from the upper limits on the photon flux obtained by Auger are relevant.
Given the mass of the SHDM particles, the decay time corresponding to the scenario with the largest SHDM cosmic ray flux compatible with the upper limits on the photon fraction obtained by Auger can be estimated from the predicted integral gamma-ray flux, which is given by, $$\begin{aligned} J_\gamma(>E)=&& \frac{1}{4 \pi\ M_X c^2\ \tau_X}\ \sum_{s=1}^N \int_E^\infty dE'\ \frac{dN_{\gamma,\, s}}{dE'}(E',D_s) \int_0^\infty d\xi \int_0^{2 \pi} d\alpha\ \int_0^{\pi} d\delta \cos\delta \times \nonumber \\ \label{IntG} &&\rho_{X,\, s}(r(\xi,\alpha,\delta,\alpha_s,\delta_s)) \ \varepsilon(\delta),\end{aligned}$$ where $M_X$ is the rest mass of the SHDM particle, $\tau_X$ is its decay time, $c$ is the speed of light, $N$ is the number of dark matter halos considered, $\rho_{X,s}$ is the energy density of the $s$-th dark matter halo, $r$ is the distance from the center of the halo to a given point in space, $\alpha_s$ and $\delta_s$ are the right ascension and declination of the center of the $s$-th halo, $\xi$ is the distance from the Earth in the direction defined by the angles $\alpha$ and $\delta$, $dN_{\gamma,s}/dE$ is the number of gamma rays per unit energy produced in a single decay, including the effects of propagation, and $D_s$ is the comoving distance from the Earth to the center of the $s$-th halo. Here $\varepsilon(\delta)$ is the relative exposure of Auger, which fulfills the normalization condition, $$\int_0^{\pi} d\delta \cos\delta \ \varepsilon(\delta) = 1.$$ An analytical expression for $\varepsilon(\delta)$ can be found in Ref. [@Sommers:01]. Gamma rays generated in SHDM decays can interact during propagation with low energy photons of the radiation fields present in the universe. The relevant low energy photon backgrounds are the cosmic microwave background (CMB) and the radio background (RB).
The main processes undergone by gamma rays are pair production ($\gamma+\gamma_b \rightarrow e^+ + e^-$) and double pair production ($\gamma+\gamma_b \rightarrow e^+ + e^- + e^+ + e^-$). Gamma rays that originate in our Galaxy are assumed to propagate freely, since the distances they travel from production to the Earth are much smaller than their mean free path. In contrast, the spectrum of gamma rays originating from SHDM decays in extragalactic halos is modified by the interactions they undergo during propagation. Therefore, the energy spectrum of the gamma rays at Earth takes the following form, $$\begin{aligned} \label{SpecG} &&\frac{dN_\gamma}{dE}(E,D)=\frac{dN_{\gamma}^0}{dE}(E) \ \ \ \ \textrm{(Galactic gamma rays)} \\ &&\frac{dN_\gamma}{dE}(E,D)=\exp\left[ -\frac{D}{\lambda_{\gamma \gamma}(E)}\right]\ \frac{dN_\gamma^0}{dE}(E) \ \ \ \ \textrm{(Extragalactic gamma rays)}, \label{SpecEG}\end{aligned}$$ where $dN_\gamma^0/dE$ is the energy distribution at decay, $\lambda_{\gamma \gamma}$ is the mean free path of gamma rays in the photon backgrounds, and $D$ is the distance from the center of the halo to the Earth. Note that Eq. (\[SpecEG\]) is valid for a non-expanding universe, which in our case is a good approximation since the extragalactic halos considered are at distances smaller than $\sim 140$ Mpc. The energy distributions $dN_{\gamma,\, p}^0/dE$ of the gamma rays and protons (the secondary particles considered in this work) generated in the decay of the SHDM particles are calculated by using the SHdecay program [@Barbot:04]. Figure \[GammaMFP\] shows the mean free path of gamma rays in the CMB and RB for the relevant processes. The radio background model used for the calculation is the one developed in Ref. [@Protheroe:96]. The calculation is performed by using the tools developed for the package CRPropa 3, which are available at Ref. [@CRPropa3Data].
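As a minimal numerical sketch of Eqs. (\[SpecG\])-(\[SpecEG\]), the galactic spectrum is simply the injected one, while the extragalactic spectrum is suppressed by the survival probability $\exp(-D/\lambda_{\gamma\gamma})$. The injected power law and the constant mean free path below are hypothetical illustrative choices, not values from this work:

```python
import math

def attenuated_spectrum(dN0_dE, D_mpc, mfp_mpc):
    """Return dN/dE(E) at Earth for a halo at comoving distance D_mpc,
    suppressing the injected spectrum by exp(-D/lambda(E)), Eq. (SpecEG)."""
    def dN_dE(E):
        return math.exp(-D_mpc / mfp_mpc(E)) * dN0_dE(E)
    return dN_dE

injected = lambda E: E ** -1.9  # toy injection spectrum (hypothetical)
# D -> 0 recovers the unattenuated (galactic) case, Eq. (SpecG):
galactic = attenuated_spectrum(injected, 0.0, lambda E: 3.0)
# Halo at D ~ 0.78 Mpc (Andromeda-like distance), toy 3 Mpc mean free path:
andromeda = attenuated_spectrum(injected, 0.78, lambda E: 3.0)
```

With an energy-dependent $\lambda_{\gamma\gamma}(E)$ taken from Fig. \[GammaMFP\], the same closure reproduces the energy-dependent attenuation used in the text.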
It can be seen that for energies above $10^{19}$ eV the total mean free path is larger than 1 Mpc, which supports the assumption that gamma rays originating in the halo of our Galaxy propagate freely. It can also be seen that below $\sim 10^{19.5}$ eV the mean free path is dominated by pair production in the CMB, from $\sim 10^{19.5}$ eV to $\sim 10^{23.5}$ eV the relevant process is pair production in the RB, and finally above $\sim 10^{23.5}$ eV the dominant process is double pair production in the CMB. ![Mean free path of gamma rays in the CMB and RB as a function of the gamma ray energy. \[GammaMFP\]](GammaMFP.eps){width="10cm"} The Burkert dark matter profile [@Burkert:95] is considered in this work. It is given by, $$\rho_X(r)=\frac{\rho_B}{\left(1+\frac{r}{r_B}\right) \left( 1+\left(\frac{r}{r_B}\right)^2 \right)},$$ where $\rho_B$ and $r_B$ depend on the halo under consideration. For the Milky Way the parameters are $\rho_B=1.187$ GeV cm$^{-3}$ and $r_B=10$ kpc [@Nesti:13]. The dark matter halos considered are the ones of the DMCat catalog [@Lisanti:18a; @Lisanti:18b], which is based on the galaxy group catalogs of Refs. [@Tully:15; @Kourkchi:17]. That catalog contains 17021 halos with comoving distances smaller than $\sim 140$ Mpc. The parameters of the Burkert profile corresponding to these extragalactic halos can be obtained from the catalog. Figure \[IntGammaAuger\] shows the integral gamma-ray flux obtained by using Eq. (\[IntG\]) for $\tau_X = 5.4\times 10^{22}$ yr. This value of the decay time corresponds to the largest integral gamma-ray flux compatible with the Auger upper limits [@PhLimits:15], which are also shown in the figure. The contributions of our Galaxy and of the halos in the DMCat catalog are included. ![Integral gamma-ray flux as a function of the logarithm of the energy. The galactic and extragalactic (halos in the DMCat catalog) contributions are included.
The arrows correspond to the 95% CL upper limits obtained by Auger [@PhLimits:15]. \[IntGammaAuger\]](IntGammaAuger.eps){width="10cm"} The cosmic ray flux originating in SHDM decays, for an observatory with uniform exposure, is given by, $$\begin{aligned} J_i(E)=&& \frac{1}{(4 \pi)^2 \ M_X c^2\ \tau_X}\ \sum_{s=1}^N \frac{dN_{i,\, s}}{dE}(E,D_s) \int_0^\infty d\xi \int_0^{2 \pi} d\alpha\ \int_0^{\pi} d\delta \cos\delta \times \nonumber \\ \label{IntI} &&\rho_{X,\, s}(r(\xi,\alpha,\delta,\alpha_s,\delta_s)),\end{aligned}$$ where $i\in\{p,\gamma\}$, $dN_{\gamma,\, s}/dE$ is given by Eq. (\[SpecG\]) for the galactic halo and Eq. (\[SpecEG\]) for the extragalactic halos, and $$\begin{aligned} \label{SpecPrG} &&\frac{dN_p}{dE}(E,D)=\frac{dN_{p}^0}{dE}(E) \ \ \ \ \textrm{(Galactic protons)} \\ &&\frac{dN_p}{dE}(E,D)=\int_0^\infty dE'\ P(E|E',D)\ \frac{dN_p^0}{dE'}(E') \ \ \ \ \textrm{(Extragalactic protons)}. \label{SpecPrEG}\end{aligned}$$ Here $dN_p^0/dE$ is the proton energy distribution at decay and $P(E|E',D)$ is the energy distribution at Earth of a proton injected at a comoving distance $D$ with energy $E'$. Like gamma rays, extragalactic protons can undergo interactions with the low energy photon backgrounds during propagation through the universe. The main processes are pair production ($p+\gamma_b \rightarrow p+e^+ + e^-$) and photopion production ($p+\gamma_b \rightarrow N+\pi s$, where $N$ denotes a nucleon). The distribution function $P(E|E',D)$ is obtained from simulations performed by using the CRPropa 3 package [@CRPropa3:16]. This program takes into account all relevant processes, including the interactions of the ultra high energy protons with the low energy photons of the extragalactic background light (see Ref. [@CRPropa3:16] for details). Figure \[FluxAuger\] shows the energy spectrum observed by Auger [@AugerSpec:17] fitted with the function defined in Ref.
[@AugerSpec:17] (see appendix \[FluxAstro\]). In the scenario considered in this work it is assumed that this component is of astrophysical origin. The Auger data are compatible with a SHDM component that starts to dominate the flux above $10^{20}$ eV. The SHDM contribution corresponding to the considered scenario, in which $M_X=10^{22.3}$ eV and $\tau_X=5.4\times10^{22}$ yr, is also shown in the figure. As before, the contributions of the galactic halo and of the extragalactic halos from the DMCat catalog are included. From the figure it can be seen that the contribution from the extragalactic halos is a small fraction of the total SHDM flux. The propagation effects on the proton and gamma-ray components are also evident; in particular, the proton component presents a pile-up originating from the photopion production process [@Bere:88]. ![Ultra high energy cosmic ray flux as a function of the logarithm of the primary energy. The data points correspond to the Auger measurements [@AugerSpec:17]. The double dot-dashed line corresponds to a fit of the Auger data. Dashed lines correspond to the proton and gamma-ray components originating from SHDM decays, and the triple dot-dashed and dot-dashed lines correspond to the proton and gamma-ray contributions from the extragalactic halos of the DMCat catalog, respectively. The solid line corresponds to the total contribution. \[FluxAuger\]](FitAugerFlux.eps){width="11cm"} One of the most important characteristics of next generation space-based UHECR observatories, like JEM-EUSO [@JEMEUSO] and POEMMA [@POEMMA], is their very large exposure. Fig. \[NEvents\] shows the expected number of events originating from SHDM decays, corresponding to each halo in the DMCat catalog, as a function of the comoving distance for a constant exposure of $\mathcal{E} = 10^6$ km$^2$ yr sr above $10^{20}$ eV [@POEMMA]. The left panel of the figure shows the separate proton and gamma ray contributions and the right panel shows the sum of the two.
Note that the number of events corresponding to proton (photon) primaries is given by the integral above $10^{20}$ eV of the corresponding term in Eq. (\[IntI\]), with $i=p$ ($i=\gamma$), multiplied by the exposure $\mathcal{E}$. From the left panel of the figure it can be seen that the number of events corresponding to gamma rays decreases much faster with the comoving distance than the one corresponding to protons. This is because gamma rays that interact are removed from the flux, whereas protons, or more precisely nucleons, lose only a fraction of their energy in each interaction and still contribute to the flux. From the right panel of the figure it can be seen that there is only one halo, located close to the Earth, for which the expected number of events is of order one; for the rest of the halos it is smaller than $\sim 0.3$. The halo that contributes most to the number of events is that of the Andromeda galaxy, also known as M31. Note that in the two plots of Fig. \[NEvents\] two different regions can be identified, separated at a comoving distance of $\sim 55$ Mpc. These two regions correspond to the two different galaxy catalogs used to build the DMCat catalog [@Lisanti:18a]. ![Expected number of events originating from SHDM decays, corresponding to each halo in the DMCat catalog, as a function of the comoving distance $D$ for exposure $\mathcal{E} = 10^6$ km$^2$ yr sr and energy above $10^{20}$ eV. Left panel: expected number of events for protons and gamma rays. Right panel: total number of events irrespective of the type. \[NEvents\]](NevExpMax_PrPh.eps "fig:"){width="7.7cm"} ![Expected number of events originating from SHDM decays, corresponding to each halo in the DMCat catalog, as a function of the comoving distance $D$ for exposure $\mathcal{E} = 10^6$ km$^2$ yr sr and energy above $10^{20}$ eV. Left panel: expected number of events for protons and gamma rays.
Right panel: total number of events irrespective of the type. \[NEvents\]](NevExpMax_All.eps "fig:"){width="7.7cm"} Although the contribution from a given extragalactic halo is much smaller than the one corresponding to the galactic halo, it can be important because the gamma rays and protons originating in such a halo come from a narrow region of the sky. In that region the contribution of the extragalactic halo can be more important than the one corresponding to our Galaxy, especially in regions far from the galactic center where the galactic contribution decreases considerably. In order to study this possibility the angular distributions of gamma rays and protons are required. The flux from a given halo, $s$, is given by, $$\begin{aligned} J_{s,i}(E,\theta) = && \frac{\textrm{sr}^{-1}}{4 \pi\ M_X c^2\ \tau_X}\ \frac{dN_{s,i}}{dE}\ \left[ 2\ \Theta\left(\frac{\pi}{2}- \theta\right) \int_{D_s \sin\theta}^{D_s} dr \frac{r\ \rho_{X,\, s}(r)}{\sqrt{r^2-D_s^2 \sin^2\theta}} + \right. \nonumber \\ &&\left. \int_{D_s}^{\infty} dr \frac{r\ \rho_{X,\, s}(r)}{\sqrt{r^2-D_s^2 \sin^2\theta}} \right], \label{JSHDMAng}\end{aligned}$$ where $\Theta(x)$ is the Heaviside function (i.e. $\Theta(x)=1$ for $x \geq 0$ and $\Theta(x)=0$ otherwise) and $\theta \in [0,\pi]$ is the angle between the direction of the center of the halo and the direction of observation. Note that, for the Burkert dark matter profile, the integral in Eq. (\[JSHDMAng\]) can be done analytically (see appendix \[AngDist\] for details). Figure \[AngDistPlot\] shows the angular distribution calculated from Eq. (\[JSHDMAng\]) (see appendix \[AngDist\]), normalized to its value at $\theta=0$, for the Milky Way (left panel) and Andromeda (right panel). It can be seen from the figure that even though the distribution of Andromeda is much narrower than that of the Milky Way, it has a non-negligible angular width, larger than $4^\circ$.
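Eq. (\[JSHDMAng\]) is, up to the prefactor, a line-of-sight integral of the halo density: $J(\theta)\propto\int_0^\infty d\xi\,\rho(r(\xi,\theta))$ with $r^2=\xi^2+D^2-2\xi D\cos\theta$. A numerical cross-check with the Burkert profile is straightforward; the Milky Way parameters are the ones quoted in the text, while the observer distance, truncation radius and step count are illustrative choices:

```python
import math

def burkert(r, rho_b=1.187, r_b=10.0):
    """Burkert profile rho_B / ((1 + r/r_B)(1 + (r/r_B)^2)), GeV cm^-3;
    defaults are the Milky Way values quoted in the text (r in kpc)."""
    x = r / r_b
    return rho_b / ((1.0 + x) * (1.0 + x * x))

def los_integral(theta, D, rho, r_max=500.0, n=20000):
    """Midpoint-rule integral of rho along the line of sight at angle
    theta (rad) from the halo center, for an observer at distance D (kpc):
    r(xi)^2 = xi^2 + D^2 - 2 xi D cos(theta)."""
    dxi = r_max / n
    total = 0.0
    for i in range(n):
        xi = (i + 0.5) * dxi
        r = math.sqrt(xi * xi + D * D - 2.0 * xi * D * math.cos(theta))
        total += rho(r) * dxi
    return total

# Milky-Way-like halo seen from D = 8.5 kpc (illustrative solar distance):
j0 = los_integral(0.0, 8.5, burkert)
profile = [los_integral(math.radians(a), 8.5, burkert) / j0
           for a in (0.0, 30.0, 90.0, 180.0)]
```

The normalized profile peaks toward the halo center and decreases monotonically with $\theta$, in qualitative agreement with the left panel of Fig. \[AngDistPlot\].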
![Angular distribution, normalized to its value at $\theta=0$, as a function of $\theta$ for the Milky Way (left panel) and for Andromeda (right panel). \[AngDistPlot\]](AngDistMW.eps "fig:"){width="7.7cm"} ![Angular distribution, normalized to its value at $\theta=0$, as a function of $\theta$ for the Milky Way (left panel) and for Andromeda (right panel). \[AngDistPlot\]](AngDistAnd.eps "fig:"){width="7.7cm"} The HEALPix library [@Healpix] is used to study the contribution of the different UHECR sources in a given region of the sky. It is assumed that the arrival directions of the cosmic rays of astrophysical origin are uniformly distributed. Given a pixelization of the sphere, the average number of events with primary energies above $E_{min}$ and arrival directions contained in the $j$-th pixel is $\langle n_j \rangle(E_{min})=\langle n_j \rangle_{Astro}(E_{min}) + \langle n_j \rangle_{SHDM}(E_{min})$, where $$\begin{aligned} \label{Nastro} &&\langle n_j \rangle_{Astro}(E_{min}) = \mathcal{E} \ \frac{\Omega_j}{4 \pi} \ \int_{E_{min}}^\infty dE \ J_{Astro} (E), \\ \label{Nshdm} &&\langle n_j \rangle_{SHDM}(E_{min}) = \mathcal{E} \ \frac{1}{4 \pi} \ \int_{E_{min}}^\infty dE \int_{\Omega_j} dl\ db \cos b \sum_{s=1}^N \sum_{i=\gamma,p} J_{s,i} (E,l,b).\end{aligned}$$ Here $\langle n_j \rangle_{Astro}(E_{min})$ corresponds to the average number of events of astrophysical origin, $\Omega_j$ is the solid angle subtended by the $j$-th pixel, $J_{Astro} (E)$ is the flux of astrophysical origin (see appendix \[FluxAstro\]), $\langle n_j\rangle_{SHDM}(E_{min})$ corresponds to the average number of events of the SHDM component, $l$ and $b$ are the galactic longitude and latitude, respectively, and $J_{s,i} (E,l,b)$ is given by Eq. (\[JSHDMAng\]) written as a function of the galactic coordinates. Note that an observatory with uniform exposure is considered.
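HEALPix divides the sphere into $12\,N_{side}^2$ equal-area pixels, so the pixel count and effective angular radius used below follow from $N_{side}$ alone; a quick stdlib check (no healpy needed, taking the radius of the circle with the same solid angle as one pixel):

```python
import math

def healpix_pixel_stats(nside):
    """Number of equal-area HEALPix pixels and the radius (deg) of the
    circle subtending the same solid angle as one pixel."""
    npix = 12 * nside ** 2                 # total number of pixels
    omega = 4.0 * math.pi / npix           # solid angle per pixel, sr
    radius_deg = math.degrees(math.sqrt(omega / math.pi))
    return npix, radius_deg

npix, radius_deg = healpix_pixel_stats(8)  # 768 pixels, radius ~ 4.1 deg
```

This reproduces the 768 pixels and the $\sim 4^\circ$ angular radius quoted for the $N_{side}=8$ pixelization adopted in this work.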
A pixelization of the sphere with 768 pixels is considered (corresponding to $N_{side} = 8$ [@Healpix]). In this pixelization each pixel has an angular radius of the order of $4^\circ$. It is worth mentioning that the reconstruction uncertainties are not included in the calculation because at these energies the angular resolution is in general better than $1^\circ$ [@POEMMA], much smaller than the pixel radius. The angular integrals in Eq. (\[Nshdm\]) are performed by using the Monte Carlo technique. The left panel of Fig. \[SkyMapNev\] shows the average number of events expected in each pixel for the extragalactic halos from the DMCat catalog, for $E_{min}=10^{20}$ eV and $\mathcal{E} = 10^6$ km$^2$ yr sr. Note that the color scale is logarithmic. From the figure it can be seen that the region of the sky with the largest average number of events corresponds to the surroundings of Andromeda. However, a larger exposure is required in order to increase the probability of observing at least one event in the pixels of the surroundings of Andromeda. The right panel of the figure shows the average total number of events, which includes the contributions from the decay of SHDM in the galactic halo, SHDM in the DMCat catalog halos, and the astrophysical component. It can be seen that the galactic halo dominates the average total number of events. However, since Andromeda lies in a region far from the galactic center, its contribution can become significant provided the exposure is large enough. ![Left panel: Average number of events for the extragalactic halos from the DMCat catalog; a logarithmic color scale is used in this case. Right panel: Average total number of events including the contributions from the decay of SHDM in the galactic halo, SHDM in the DMCat catalog halos, and the astrophysical component. Here $E_{min}=10^{20}$ eV and $\mathcal{E} = 10^6$ km$^2$ yr sr. The red star corresponds to the position of Andromeda.
\[SkyMapNev\]](SM_AngDist_EG_logE20_0_ExpMax.eps "fig:"){width="7.7cm"} ![Left panel: Average number of events for the extragalactic halos from the DMCat catalog; a logarithmic color scale is used in this case. Right panel: Average total number of events including the contributions from the decay of SHDM in the galactic halo, SHDM in the DMCat catalog halos, and the astrophysical component. Here $E_{min}=10^{20}$ eV and $\mathcal{E} = 10^6$ km$^2$ yr sr. The red star corresponds to the position of Andromeda. \[SkyMapNev\]](SM_AngDist_GEG_logE20_0_ExpMax.eps "fig:"){width="7.7cm"} The number of events observed in a given pixel follows a Poisson distribution, whose mean strongly depends on the scenario considered. In particular, for a given value of the exposure, the probability to observe at least one event in the pixels of the galactic center region is larger when the contribution from SHDM decays is non-negligible. The probability to observe at least one event in the $j$-th pixel is given by, $$\label{Proba} P(n_j \geq 1 | E_{min})= 1-\exp(-\mu_j),$$ where $\mu_j = \langle n_j \rangle(E_{min})$ or $\mu_j = \langle n_j \rangle_{Astro}(E_{min})$ for the cases in which the SHDM contribution is non-negligible or negligible, respectively. Therefore, since $\mu_j$ is proportional to the exposure, the exposure required to measure at least one event in the $j$-th pixel with probability $p_0$ is given by, $$\widetilde{\mathcal{E}}_j(p_0)=-\frac{\mathcal{E}}{\mu_j}\ \ln(1-p_0).$$ The left panel of Fig. \[SkyMapExp\] shows the exposure required to observe at least one event in each pixel with 0.95 probability, i.e. $\widetilde{\mathcal{E}}_j(0.95)$, for $E_{min}=10^{20}$ eV and for the case in which the contribution from SHDM is non-negligible.
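Since the expected count in a pixel scales linearly with the exposure, the detection probability and the required exposure are most easily written in terms of the expected number of events per unit exposure. A minimal sketch, where the pixel rate is a hypothetical number rather than a value from this work:

```python
import math

def prob_at_least_one(mu):
    """Poisson probability P(n_j >= 1) = 1 - exp(-mu_j) of Eq. (Proba)."""
    return 1.0 - math.exp(-mu)

def required_exposure(rate, p0):
    """Exposure (km^2 yr sr) giving P(n_j >= 1) = p0, where `rate` is the
    expected number of events per unit exposure in the pixel."""
    return -math.log(1.0 - p0) / rate

rate = 3.0e-6                          # hypothetical events per km^2 yr sr
exp95 = required_exposure(rate, 0.95)  # exposure for 95% detection probability
```

By construction, evaluating `prob_at_least_one(rate * exp95)` recovers the target probability 0.95.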
It can be seen that $\widetilde{\mathcal{E}}_j(0.95)$ for the pixels in the center of the Galaxy is more than a factor of two smaller than for regions far from the galactic center. For the Andromeda region, $\widetilde{\mathcal{E}}_j(0.95)$ is $\sim 1.3$ times smaller than for regions far from the galactic center. The right panel of Fig. \[SkyMapExp\] shows the ratio between $\widetilde{\mathcal{E}}_j(0.95)$ for the scenario with a negligible SHDM contribution and for the one with a non-negligible SHDM contribution. It can be seen that in the galactic center region $\widetilde{\mathcal{E}}_j(0.95)$ for the scenario with a negligible SHDM contribution is three times larger than the one corresponding to the case with a non-negligible SHDM component. In the Andromeda region this ratio is $\sim 1.8$. ![Left panel: Exposure required to observe at least one event in each pixel with 0.95 probability, including all contributions considered. Right panel: Ratio between the exposures required to observe at least one event in each pixel with 0.95 probability without and with the inclusion of a non-negligible SHDM component. The minimum energy is $E_{min}=10^{20}$ eV and the red star corresponds to the position of Andromeda. \[SkyMapExp\]](SM_AngDist_GEG_logE20_0_Expo.eps "fig:"){width="7.7cm"} ![Left panel: Exposure required to observe at least one event in each pixel with 0.95 probability, including all contributions considered. Right panel: Ratio between the exposures required to observe at least one event in each pixel with 0.95 probability without and with the inclusion of a non-negligible SHDM component. The minimum energy is $E_{min}=10^{20}$ eV and the red star corresponds to the position of Andromeda. \[SkyMapExp\]](SM_AngDist_RatioExpo_logE20_0.eps "fig:"){width="7.7cm"} For $E_{min}=10^{20.3}$ eV, $\widetilde{\mathcal{E}}_j(0.95)$ increases (see the left panel of Fig. \[SkyMapExp20.3\] of appendix \[RExpo\]).
In this case $\widetilde{\mathcal{E}}_j(0.95)$ ranges from $6.3\times 10^6$ km$^2$ yr sr to $2.5\times 10^7$ km$^2$ yr sr. The relative importance of the SHDM component of the flux also increases with the minimum energy, making the differences between the two scenarios considered more pronounced. In particular, in the galactic center region, $\widetilde{\mathcal{E}}_j(0.95)$ for the case with a negligible SHDM contribution is 17 times larger than the one corresponding to the case with a non-negligible SHDM component; in the Andromeda region it is $\sim 7.5$ times larger (see the right panel of Fig. \[SkyMapExp20.3\] of appendix \[RExpo\]). Therefore, the observation of at least one event in one of those pixels for a given exposure can be used to discriminate between these two scenarios. The probability to observe at least one event in a given set of pixels is given by, $$P_{set}(n \geq 1 | E_{min}, \mathcal{E})= 1-\exp\left[-\sum_{j\in S_p} \mu_j(E_{min}, \mathcal{E}) \right],$$ where $S_p$ is the set of pixels considered. Fig. \[Proba\] shows the probability to observe at least one event in two different sets of pixels as a function of the exposure, for the two scenarios considered and for $E_{min}=10^{20}$ eV and $E_{min}=10^{20.3}$ eV. The two sets of pixels considered are: *i*) the four pixels closest to the galactic center, $S_{GC}$, and *ii*) the two hottest pixels in the surroundings of Andromeda (see Fig. \[SkyMapNev\]), $S_{M31}$. From the figure it can be seen that the probability approaches one at smaller values of the exposure in the scenario with a non-negligible SHDM contribution, and, as expected, in the case of $S_{GC}$. ![Probability to observe at least one event in a given set of pixels (see text for details) as a function of the exposure for $E_{min}=10^{20}$ eV (left panel) and $E_{min}=10^{20.3}$ eV (right panel).
\[Proba\]](ProbaNg1.eps "fig:"){width="7.7cm"} ![Probability to observe at least one event in a given set of pixels (see text for details) as a function of the exposure for $E_{min}=10^{20}$ eV (left panel) and $E_{min}=10^{20.3}$ eV (right panel). \[Proba\]](ProbaNg1_20_3.eps "fig:"){width="7.7cm"} For a given set of pixels, let us consider the exposure for which the probability to observe at least one event in the scenario without a component originating from SHDM is $0.1$, denoted as $\mathcal{E}_{10}$. If, for this value of the exposure reached by a given observatory, at least one event is observed in this set of pixels, the null hypothesis stating that the UHECR flux is entirely of astrophysical origin is rejected at $90\%$ confidence level (CL). Evaluating, at the exposure $\mathcal{E}_{10}$, the probability to observe at least one event in the same set of pixels under the model that includes the SHDM contribution considered before gives the probability of rejecting the null hypothesis, denoted as $P_{rej}$. Table \[P10E10\] shows the values of $\mathcal{E}_{10}$ and $P_{rej}$ for $E_{min}=10^{20}$ eV and $E_{min}=10^{20.3}$ eV and for the two sets of pixels considered. From the table it can be seen that for $E_{min}=10^{20}$ eV, $P_{rej}$ is smaller than 0.5 for both sets of pixels (0.34 and 0.26 for $S_{GC}$ and $S_{M31}$, respectively). Note that the Auger exposure at present is approximately $8 \times 10^4$ km$^2$ yr sr [@Verzi:19] and can reach values of order $2 \times 10^5$ km$^2$ yr sr by the end of its operation [@Batista:18]. Therefore, with Auger data it will be possible to perform the proposed test for $E_{min}=10^{20}$ eV. For $E_{min}=10^{20.3}$ eV, $P_{rej}$ is larger than 0.5: 0.85 and 0.59 for $S_{GC}$ and $S_{M31}$, respectively. The set $S_{M31}$ requires a larger exposure but can be used to test the null hypothesis independently.
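The $\mathcal{E}_{10}$/$P_{rej}$ construction can be sketched in a few lines: $\mathcal{E}_{10}$ is fixed by requiring a 10% chance of at least one event under the astrophysical-only (null) model, and $P_{rej}$ is the chance of at least one event at that same exposure when the SHDM component is added. Both per-unit-exposure rates below are hypothetical placeholders, not the rates behind Table \[P10E10\]:

```python
import math

def exposure_10(null_rate):
    """Exposure at which the astrophysical-only model gives
    P(n >= 1) = 0.10 in the chosen set of pixels."""
    return -math.log(1.0 - 0.10) / null_rate

def p_reject(total_rate, null_rate):
    """P(n >= 1) at exposure E_10 under the model including SHDM;
    total_rate = astrophysical + SHDM rate per unit exposure."""
    return 1.0 - math.exp(-total_rate * exposure_10(null_rate))

null_rate = 1.0e-6   # hypothetical astrophysical rate per km^2 yr sr
total_rate = 3.0e-6  # hypothetical astrophysical + SHDM rate
p_rej = p_reject(total_rate, null_rate)
```

Observing at least one event at exposure $\mathcal{E}_{10}$ rejects the null hypothesis at 90% CL; `p_rej` is then the power of that test under the SHDM scenario, which grows with the SHDM share of the rate.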
  $E_{min}$ \[eV\]   $\mathcal{E}_{10}$ \[km$^2$ yr sr\] for $S_{GC}$   $P_{rej}$ for $S_{GC}$   $\mathcal{E}_{10}$ \[km$^2$ yr sr\] for $S_{M31}$   $P_{rej}$ for $S_{M31}$
  ------------------ -------------------------------------------------- ------------------------ --------------------------------------------------- -------------------------
  $10^{20}$          $6.1\times 10^4$                                   0.34                     $1.2\times 10^5$                                    0.26
  $10^{20.3}$        $9.4\times 10^5$                                   0.85                     $1.9\times 10^6$                                    0.59

  : $\mathcal{E}_{10}$ and $P_{rej}$ for the two sets of pixels considered and for $E_{min}=10^{20}$ eV and $E_{min}=10^{20.3}$ eV.[]{data-label="P10E10"}

Although for $E_{min}=10^{20.3}$ eV larger values of the exposure are required for the probability to saturate, this threshold allows one to reject the null hypothesis using both sets of pixels considered. The reason is that in this energy range the contribution of the astrophysical component decreases much faster (the flux falls as $\sim E^{-5}$) than that of the SHDM component (see Fig. \[FluxAuger\]), so the component originating from SHDM decays becomes more important. Conclusions =========== In this article the possibility of identifying a scenario in which a non-negligible but minority component, originating from the decay of SHDM particles, dominates the UHECR flux beyond the suppression has been studied. Due to the expected small flux of UHECRs originating from the decay of SHDM particles, these studies have been done in the context of the next generation UHECR observatories, which are planned to have larger exposures than current ones. Besides the contribution from the galactic halo, the contribution from extragalactic halos has also been considered.
The scenario in which the SHDM particles have a mass of $10^{22.3}$ eV and a decay time of $\tau_X = 5.4\times 10^{22}$ yr has been considered. The values of these two parameters are compatible with current constraints. For this scenario it has been found that the halo of the Andromeda galaxy is the one that contributes most to the SHDM extragalactic component. For a uniform exposure of $10^6$ km$^2$ yr sr the mean number of events expected from Andromeda, above $10^{20}$ eV, is of order one. The null hypothesis, which states that the UHECR flux is composed of a uniform flux of astrophysical origin, has a $\sim 85\%$ probability of being rejected considering a set of pixels in the region of the sky close to the galactic center and for primary energies above $10^{20.3}$ eV. In this case the required exposure is $\sim 9.4\times 10^5$ km$^2$ yr sr. For larger values of the exposure, i.e. $1.9\times 10^6$ km$^2$ yr sr, the null hypothesis has a $\sim 59\%$ probability of being rejected considering the hottest pixels in the surroundings of Andromeda, in the same energy range. This can be used as an independent test. Therefore, the next-generation UHECR observatories that reach exposures of the order of $10^6$ km$^2$ yr sr will be able to identify or even constrain the scenarios in which there is a minority component originating from the decay of SHDM particles.

Cosmic rays of astrophysical origin {#FluxAstro}
===================================

The fitting function of Ref.
[@AugerSpec:17] is used to describe the component of astrophysical origin, which is given by,
$$J(E) = J_a \left\{
\begin{array}{ll}
\left( \dfrac{E}{E_a} \right)^{-\gamma_1} & E \leq E_a \\[0.4cm]
\left( \dfrac{E}{E_a} \right)^{-\gamma_2}\,
\dfrac{1+\left( \frac{E_a}{E_s} \right)^{\Delta \gamma}}{1+\left(\frac{E}{E_s} \right)^{\Delta \gamma}} & E > E_a
\end{array}
\right.,
\label{JCR}$$
where $J_a$ is a normalization constant, $E_a=5.08\times 10^{18}$ eV, $E_s=3.9\times 10^{19}$ eV, $\gamma_1=3.293$, $\gamma_2=2.53$, and $\Delta \gamma = 2.5$. The integral flux for $E>E_a$, which is used in the calculations, is given by,
$$\begin{aligned}
J(>E)=&& J_a \left[1+\left( \frac{E_a}{E_s} \right)^{\Delta \gamma} \right] \left( \frac{E_a}{E_s} \right)^{\gamma_2}
\left( \frac{E}{E_s} \right)^{1-\gamma_2-\Delta \gamma} \frac{E_s}{\gamma_2+\Delta \gamma-1} \times \nonumber \\[0.4cm]
&& {}_2F_1\left(1,\frac{\gamma_2+\Delta \gamma-1}{\Delta \gamma},\frac{\gamma_2+2 \Delta \gamma-1}{\Delta \gamma},
-\left(\frac{E_s}{E}\right)^{\Delta \gamma} \right),\end{aligned}$$
where $_2F_1(a,b,c,z)$ is the hypergeometric function.

Angular distribution {#AngDist}
====================

The integral in Eq. (\[JSHDMAng\]) can be performed analytically for the case of the Burkert dark matter profile. It can be expressed as,
$$J(E,\theta) = \frac{\textrm{sr}^{-1}}{4 \pi\, M_X c^2\, \tau_X}\, \frac{dN}{dE} \times
\left\{
\begin{array}{ll}
I_1(\theta,D) + I_2(\theta,D) & \ \ \ 0 \leq \theta \leq \frac{\pi}{2} \\[0.4cm]
I_2(\theta,D) - I_1(\theta,D) & \ \ \ \frac{\pi}{2} < \theta \leq \pi
\end{array}
\right.,
\label{JSHDMAngI}$$
where subscripts $i$ and $s$ are omitted for clarity and,
$$\begin{aligned}
I_1(\theta,D) = && \frac{\rho_{B}\, r_{B}}{2} \left[ \frac{r_{B}}{\sqrt{D^2 \sin^2\theta + r_B^2}}
\left( \arctan\left[ \frac{D |\cos\theta|}{\sqrt{D^2 \sin^2\theta + r_B^2}} \right] + \right. \right. \nonumber \\[0.3cm]
&& \left. \left. \textrm{artanh} \left[ \frac{r_B |\cos\theta|}{\sqrt{D^2 \sin^2\theta + r_B^2}} \right]\right) -\xi_1(\theta,D)
\right], \\[0.5cm]
I_2(\theta,D) = && \frac{\rho_{B}\, r_{B}}{2} \left[ \frac{r_{B}}{\sqrt{D^2 \sin^2\theta + r_B^2}}
\left( \frac{\pi}{2} + \textrm{artanh} \left[ \frac{r_B |\cos\theta|}{\sqrt{D^2 \sin^2\theta + r_B^2}} \right] \right) \right. \nonumber \\[0.3cm]
&& -\xi_2(\theta,D) \Bigg].\end{aligned}$$
Here,
$$\xi_1(\theta,D)=\left\{
\begin{array}{ll}
\frac{r_B}{\sqrt{D^2 \sin^2\theta - r_B^2}}\ \arctan\left[ \frac{|\cos\theta| \sqrt{D^2 \sin^2\theta - r_B^2}}{D \sin^2\theta + r_B}\right]
& \ \ \ D \sin \theta > r_B \\[0.3cm]
\sqrt{\frac{D-r_B}{D+r_B}} & \ \ \ D \sin \theta = r_B \\[0.4cm]
\frac{r_B}{\sqrt{r_B^2-D^2 \sin^2\theta}}\ \textrm{artanh}\left[ \frac{|\cos\theta| \sqrt{r_B^2-D^2 \sin^2\theta}}{D \sin^2\theta + r_B}\right]
& \ \ \ D \sin \theta < r_B
\end{array}
\right.$$
and
$$\xi_2(\theta,D)=\left\{
\begin{array}{ll}
\frac{r_B}{\sqrt{D^2 \sin^2\theta - r_B^2}}\ \arctan\left[ \frac{\sqrt{D^2 \sin^2\theta - r_B^2}}{r_B}\right]
& \ \ \ D \sin \theta > r_B \\[0.5cm]
1 & \ \ \ D \sin \theta = r_B \\[0.3cm]
\frac{r_B}{\sqrt{r_B^2-D^2 \sin^2\theta}}\ \textrm{artanh}\left[ \frac{\sqrt{r_B^2-D^2 \sin^2\theta}}{r_B}\right]
& \ \ \ D \sin \theta < r_B
\end{array}
\right..$$

Required exposure for larger minimum energy {#RExpo}
===========================================

The left panel of Fig. \[SkyMapExp20.3\] shows the exposure required to observe at least one event in each pixel for $E_{min}=10^{20.3}$ eV and $p_0=0.95$ and for the case in which the contribution from SHDM is non-negligible.
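Returning to the flux of Appendix \[FluxAstro\]: the closed form for $J(>E)$ can be cross-checked numerically against direct integration of the differential flux. A quick sketch (energies in EeV, arbitrary normalization $J_a=1$, assuming SciPy is available; only the $E>E_a$ branch with index $\gamma_2$ enters):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

# Spectral parameters of the astrophysical fit (energies in EeV, J_a = 1)
Ja, Ea, Es = 1.0, 5.08, 39.0
g2, dg = 2.53, 2.5                      # gamma_2 and Delta gamma

def J(E):
    """Differential flux for E > E_a."""
    return Ja * (E / Ea)**(-g2) * (1 + (Ea / Es)**dg) / (1 + (E / Es)**dg)

def J_above(E):
    """Closed-form integral flux J(>E) in terms of 2F1."""
    pref = (Ja * (1 + (Ea / Es)**dg) * (Ea / Es)**g2
            * (E / Es)**(1 - g2 - dg) * Es / (g2 + dg - 1))
    return pref * hyp2f1(1, (g2 + dg - 1) / dg, (g2 + 2 * dg - 1) / dg,
                         -(Es / E)**dg)

numeric, _ = quad(J, 100.0, np.inf)     # J(>E) at E = 10^20 eV by quadrature
print(numeric, J_above(100.0))          # the two values should agree
```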
The right panel of the figure shows the ratio between the exposure required to observe at least one event in each pixel with 0.95 probability in the scenario without a SHDM contribution and in the one with a non-negligible SHDM contribution. ![Left panel: Exposure required to observe at least one event in each pixel with 0.95 probability including all contributions considered. Right panel: Ratio between the exposure required to observe at least one event in each pixel with 0.95 probability without and with the inclusion of the SHDM component. The minimum energy is $E_{min}=10^{20.3}$ eV and the red star corresponds to the position of Andromeda. \[SkyMapExp20.3\]](SM_AngDist_GEG_logE20_3_Expo.eps "fig:"){width="7.7cm"} ![Left panel: Exposure required to observe at least one event in each pixel with 0.95 probability including all contributions considered. Right panel: Ratio between the exposure required to observe at least one event in each pixel with 0.95 probability without and with the inclusion of the SHDM component. The minimum energy is $E_{min}=10^{20.3}$ eV and the red star corresponds to the position of Andromeda. \[SkyMapExp20.3\]](SM_AngDist_RatioExpo_logE20_3.eps "fig:"){width="7.7cm"} A. D. S. is a member of the Carrera del Investigador Científico of CONICET, Argentina. This work is supported by ANPCyT PICT-2015-2752, Argentina. The authors thank the members of the Pierre Auger Collaboration for useful discussions and R. Clay for reviewing the manuscript. [99]{} D. Ivanov, for the Pierre Auger Collaboration and the Telescope Array Collaboration, *Report of the Telescope Array-Pierre Auger Observatory Working Group on Energy Spectrum*, *PoS(ICRC2017)* (2017) 498. A. Aab et al. (The Pierre Auger collaboration), *Depth of Maximum of Air-Shower Profiles at the Auger Observatory: Measurements at Energies above 10$^{17.8}$ eV*, *Phys. Rev. D* [**90**]{} (2014) 122005. R. Abbasi et al.
(The Telescope Array collaboration), *Depth of Ultra High Energy Cosmic Ray Induced Air Shower Maxima Measured by the Telescope Array Black Rock and Long Ridge FADC Fluorescence Detectors and Surface Array in Hybrid Mode*, *Astrophys. J.* [**858**]{} (2018) 76. V. de Souza for The Pierre Auger Collaboration and Telescope Array Collaboration, *Testing the agreement between the $X_{max}$ distributions measured by the Pierre Auger and Telescope Array Observatories*, *PoS (ICRC2017)* (2017) 522. A. Aab et al. (The Pierre Auger collaboration), *Observation of a large-scale anisotropy in the arrival directions of cosmic rays above $8\times 10^{18}$ eV*, *Science* [**357**]{} (2017) 1266. A. Aab et al. (The Pierre Auger collaboration), *An indication of anisotropy in arrival directions of ultra-high-energy cosmic rays through comparison to the flux pattern of extragalactic gamma-ray sources*, *Astrophys. J.* [**853**]{} (2018) L29. U. Giaccari for The Pierre Auger Collaboration, *Arrival directions of the highest-energy cosmic rays detected by the Pierre Auger Observatory*, *PoS (ICRC2017)* (2017) 483. A. Aab et al. (The Pierre Auger collaboration), *Searches for Anisotropies in the Arrival Directions of the Highest Energy Cosmic Rays Detected by the Pierre Auger Observatory*, *Astrophys. J.* [**804**]{} (2015) 15. R. Abbasi et al. (The Telescope Array collaboration), *Indications of intermediate-scale anisotropy of cosmic rays with energy greater than 57 EeV in the Northern sky measured with the surface detector of the Telescope Array experiment*, *Astrophys. J.* [**790**]{} (2014) L21. K. Kawata et al. (The Telescope Array collaboration), *Ultra-high-energy cosmic-ray hotspot observed with the Telescope Array surface detector*, *PoS (ICRC2015)* (2016) 2016. E. Alcantara, L. Anchordoqui, and J. Soriano, *Hunting for super-heavy dark matter with the highest-energy cosmic rays*, *Phys. Rev. D* [**99**]{} (2019) 103016. G. Medina-Tanco and A.
Watson, *Dark matter halos and the anisotropy of ultra-high energy cosmic rays*, *Astropart. Phys.* [**12**]{} (1999) 25. R. Aloisio and F. Tortorici, *Super heavy dark matter and UHECR anisotropy at low energy*, *Astropart. Phys.* [**29**]{} (2008) 307. O. Kalashev et al., *Global anisotropy of arrival directions of ultra-high-energy cosmic rays: capabilities of space-based detectors*, *JCAP* [**0803**]{} (2008) 003. R. Aloisio, S. Matarrese, and A. Olinto, *Super Heavy Dark Matter in light of BICEP2, Planck and Ultra High Energy Cosmic Rays Observations*, *JCAP* [**08**]{} (2015) 024. O. Kalashev and M. Kuznetsov, *Heavy decaying dark matter and large-scale anisotropy of high-energy cosmic rays*, *JETP Lett.* [**106**]{} (2017) 73. L. Marzola and F. Urban, *Astropart. Phys.* [**93**]{} (2017) 56. V. Kuzmin and I. Tkachev, *Ultrahigh-energy cosmic rays, superheavy long living particles, and matter creation after inflation*, *JETP Lett.* [**68**]{} (1998) 271. D. Chung, E. Kolb, and A. Riotto, *Superheavy dark matter*, *Phys. Rev. D* [**59**]{} (1998) 023501. V. Kuzmin and I. Tkachev, *Matter creation via vacuum fluctuations in the early universe and observed ultrahigh-energy cosmic ray events*, *Phys. Rev. D* [**59**]{} (1999) 123006. V. Kuzmin and I. Tkachev, *Ultrahigh-energy cosmic rays and inflation relics*, *Phys. Rep.* [**320**]{} (1999) 199. O. Kalashev and M. Kuznetsov, *Constraining heavy decaying dark matter with the high energy gamma-ray limits*, *Phys. Rev. D* [**94**]{} (2016) 063535. P. Sommers, *Cosmic ray anisotropy analysis with a full-sky observatory*, *Astropart. Phys.* [**14**]{} (2001) 271. C. Barbot, *Decay of super-heavy particles: user guide of the SHdecay program*, *Comput. Phys. Commun.* [**157**]{} (2004) 63. R. Protheroe and P. Biermann, *A new estimate of the extragalactic radio background and implications for ultra-high-energy $\gamma$-ray propagation*, *Astropart. Phys.* [**6**]{} (1996) 45, R. Protheroe and P. Biermann, *Astropart.
Phys. Erratum-ibid* [**7**]{} (1997) 181. https://github.com/CRPropa/CRPropa3-data. A. Burkert, *The structure of dark matter halos in dwarf galaxies*, *Astrophys. J.* [**447**]{} (1995) L25. F. Nesti and P. Salucci, *The Dark Matter halo of the Milky Way*, *JCAP* [**1307**]{} (2013) 016. M. Lisanti et al., *A Search for Dark Matter Annihilation in Galaxy Groups*, *Phys. Rev. Lett.* [**120**]{} (2018) 101101. M. Lisanti et al., *Mapping Extragalactic Dark Matter Annihilation with Galaxy Surveys: A Systematic Study of Stacked Group Searches*, *Phys. Rev. D* [**97**]{} (2018) 063005. R. Brent Tully, *Galaxy Groups: A 2MASS Catalog*, *Astron. J.* [**149**]{} (2015) 171. E. Kourkchi and R. Brent Tully, *Galaxy Groups Within 3500 km s$^{-1}$*, *Astrophys. J.* [**853**]{} (2017) 16. C. Bleve for the Pierre Auger Collaboration, *Update of the neutrino and photon limits from the Pierre Auger Observatory*, *PoS (ICRC2015)* (2015) 1103. R. Batista et al., *CRPropa 3-a Public Astrophysical Simulation Framework for Propagating Extraterrestrial Ultra-High Energy Particles*, *JCAP* [**1605**]{} (2016) 038. F. Fenu for the Pierre Auger Collaboration, *The cosmic ray energy spectrum measured using the Pierre Auger Observatory*, *PoS (ICRC2017)* (2017) 486. V. Berezinsky and S. Grigor’eva, *A bump in the ultra-high energy cosmic ray spectrum*, *Astron. Astrophys.* [**199**]{} (1988) 1. J.H. Adams Jr. et al. (The JEM-EUSO Collaboration), *The JEM-EUSO mission: An introduction*, *Experimental Astronomy* [**40**]{} (2015) 3. L. Anchordoqui et al., *UHECRs with POEMMA*, arxiv:1907.03694. K. Górski et al., *HEALPix: A framework for high-resolution discretization and fast analysis of data distributed on the sphere*, *Astrophys. J.* [**622**]{} (2005) 759. V. Verzi for the Pierre Auger Collaboration, *Measurement of the energy spectrum of ultra-high energy cosmic rays using the Pierre Auger Observatory*, *PoS (ICRC2019)* (2019) 450. R.
Batista et al., *Open Questions in Cosmic-Ray Research at Ultrahigh Energies*, *Front. Astron. Space Sci.* [**6**]{} (2019) 26.
---
abstract: |
    The relation between level lines of Gaussian free fields (GFF) and SLE${}_4$-type curves was discovered by O. Schramm and S. Sheffield. A weak interpretation of this relation is the existence of a coupling of the GFF and a random curve, in which the curve behaves like a level line of the field. In the present paper we study these couplings for the free field with different boundary conditions. We provide a unified way to determine the law of the curve (i.e. to compute the driving process of the Loewner chain) given boundary conditions of the field, and to prove existence of the coupling. The proof is reduced to the verification of two simple properties of the mean and covariance of the field, which always relies on Hadamard’s formula and properties of harmonic functions. Examples include combinations of Dirichlet, Neumann and Riemann-Hilbert boundary conditions. In doubly connected domains, the standard annulus SLE${}_4$ is coupled with a compactified GFF obeying Neumann boundary conditions on the inner boundary. We also consider variants of annulus SLE coupled with free fields having other natural boundary conditions. These include boundary conditions leading to curves connecting two points on different boundary components with prescribed winding, as well as those recently proposed by C. Hagendorf, M. Bauer and D. Bernard.
title: 'Hadamard’s formula and couplings of SLEs with free field'
---

Konstantin Izyurov[^1] and Kalle Kytölä[^2]\
Université de Genève, Section de Mathématiques

Introduction {#sec: intro}
============

The topic of conformally invariant random processes in two dimensions has received a lot of attention during the past decade. Recent developments have enabled a probabilistic approach to problems traditionally studied in theoretical physics by means of conformal field theory. Two fundamental examples of random conformally invariant objects are Schramm-Loewner evolutions (SLE) and Gaussian free fields (GFF).
Schramm-Loewner evolutions are random fractal curves described by growth processes encoded in Loewner chains. Their most important characteristics are captured by one parameter, a positive real number $\kappa$, but still in different setups one needs different variants of SLE${}_\kappa$ as we will again see in this article. The Gaussian free field is a statistical model that fits naturally both in the setup of conformal field theory and in that of probability theory: it is essentially the simplest Euclidean quantum field theory, which describes the free massless boson, but it also admits an easy interpretation as a random generalized function. Informally speaking, the Gaussian free field $\Phi$ in a planar domain ${\Omega}$ is a collection of Gaussian random variables indexed by the points of the domain, $\Phi = \big( \Phi(z)\big)_{z \in {\Omega}}$, such that:

- The mean ${\mathsf{E}}\left[ \Phi(z) \right] = M(z)$ is a harmonic function.

- The covariance ${\mathsf{E}}\left[ \big( \Phi(z_1) - M(z_1) \big) \big( \Phi(z_2) - M(z_2) \big) \right] = C(z_1, z_2)$ is a Green’s function in ${\Omega}$.

To obtain an unambiguous definition of the GFF one has to specify which harmonic function to choose, and what is meant by the Green’s function. We will usually specify $M$ by its boundary conditions. The Green’s functions will be solutions to $-{\triangle}G(\cdot,z_2) = \delta_{z_2}(\cdot)$ with prescribed boundary conditions. From the definition one immediately sees that GFF will possess conformal invariance properties — indeed harmonic functions and Green’s functions are simply transported by conformal maps. If ${\phi}: {\Omega}\rightarrow {\Omega}'$ is a conformal map and $\Phi$ is a GFF in ${\Omega}'$, then $\Phi\circ {\phi}$ is a GFF in ${\Omega}$, boundary conditions in ${\Omega}$ being the pullback of those in ${\Omega}'$. We will mostly deal with boundary conditions that transform nicely under conformal maps.
Note that $\Phi$ being Gaussian, the law is indeed determined by its mean and covariance. Due to the blowup of the covariance as $|z_1-z_2| \rightarrow 0$, however, the field $\Phi$ is not a random function but rather a random distribution (a generalized function). We postpone a formal definition of GFF to [Section \[sec: coupling\]]{}. A typical example of how the mean $M$ and covariance $C$ are specified appears in the works of Schramm and Sheffield [@SS-harmonic_explorer; @SS-contour_lines] which first established a relation between the Gaussian free field and Schramm-Loewner evolutions. In a simply connected domain ${\Omega}$ with boundary ${\partial}{\Omega}$ divided into two complementary arcs $l_1$ and $l_2$ one defines $M$ and $C$ by $$\begin{aligned} \label{eq: SS example} \left\{ \begin{array}{rll} {\triangle}M(z) & = \; 0 \quad & \textrm{ for $z \in {\Omega}$}\\ M(z) & = \; +\lambda \quad & \textrm{ for $z \in l_1$} \\ M(z) & = \; - \lambda \quad & \textrm{ for $z \in l_2$} \end{array} \right. \qquad \textrm{ and } \qquad \left\{ \begin{array}{rll} {\triangle}_z C(z,z_2) & = \; -\delta_{z_2}(z) \quad & \textrm{ for $z \in {\Omega}$}\\ C(z,z_2) & = \; 0 \quad & \textrm{ for $z \in {\partial}{\Omega}$.} \end{array} \right. \end{aligned}$$ Schramm and Sheffield showed that chordal SLE${}_4$ describes the scaling limit of the zero level lines of a discrete Gaussian free field with the above boundary conditions, when the parameter $\lambda$ has the particular value $\lambda = \sqrt{\pi/8}$. In particular, the free field is naturally coupled with a chordal SLE${}_4$, and in the scaling limit the level lines of discrete GFF become discontinuity lines of the GFF of jump $2 \lambda$. We will be interested in couplings of different variants of GFF with random growth processes of SLE type.
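The mean and covariance in (\[eq: SS example\]) are straightforward to realize on a lattice: the discrete mean is the harmonic extension of the boundary values, and the covariance is the inverse of the discrete Dirichlet Laplacian. A minimal numerical sketch (the square domain, the choice of the arcs $l_1$, $l_2$ and the lattice normalization are our illustrative assumptions, not the setup of the paper):

```python
import numpy as np

def sample_discrete_gff(n=30, lam=np.sqrt(np.pi / 8), rng=None):
    """Sample a discrete GFF on the n x n interior grid of a square, with
    boundary values +lam on the bottom edge (a stand-in for the arc l_1)
    and -lam on the rest of the boundary (l_2).  Illustrative sketch only;
    lattice normalizations differ from the continuum field."""
    rng = rng if rng is not None else np.random.default_rng(0)
    N = n * n
    idx = lambda i, j: i * n + j
    lap = np.zeros((N, N))          # discrete Dirichlet Laplacian (-triangle)
    b = np.zeros(N)                 # boundary-value contributions
    bc = lambda i, j: lam if i == n else -lam
    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            lap[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    lap[k, idx(ii, jj)] = -1.0
                else:
                    b[k] += bc(ii, jj)
    mean = np.linalg.solve(lap, b)              # discrete harmonic extension M
    R = np.linalg.cholesky(lap)                 # lap = R R^T
    fluct = np.linalg.solve(R.T, rng.standard_normal(N))  # covariance lap^{-1}
    return (mean + fluct).reshape(n, n), mean.reshape(n, n)
```

Since `fluct` equals $R^{-\mathsf{T}}\xi$ with $\xi$ standard Gaussian, its covariance is $(RR^{\mathsf{T}})^{-1}$, i.e. the discrete Green's function with zero boundary values, as required.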
A variant of GFF is a rule associating to any domain ${\Omega}$ with $n+1$ marked points $x, x_1, x_2, \dots, x_n \in {\partial}{\Omega}$ a free field $\Phi_{\Omega;x, x_1,\dots,x_n}$ — for instance, in the above example (\[eq: SS example\]) the marked points are the two endpoints of the boundary arcs. Take a domain $({\Omega};x,x_1,x_2,\dots)$ and suppose we have a random curve $\gamma \subset \Omega$ growing from $x$. The main property we require of the coupling is: > \[quote: coupling property\] Conditionally on the random curve $\gamma \subset {\Omega}$ starting from $x \in {\partial}{\Omega}$, the law of the free field $\Phi_{({\Omega};x,x_1,\ldots,x_n)}$ is the same as that of the free field $\Phi_{({\widetilde}{{\Omega}};\tilde{x},x_1,\ldots,x_n)}$ in the domain ${\widetilde}{{\Omega}} = {\Omega}\setminus \gamma$, where $\tilde{x} \in {\partial}{\widetilde}{{\Omega}}$ is the tip of the curve $\gamma$. This property also immediately suggests a constructive way of producing the coupling given the random curve and the laws of the free fields in different domains: > \[quote: sampling\] Sample the random curve $\gamma$, and then sample independently the free field $\Phi_{({\widetilde}{{\Omega}}; \tilde{x}, x_1, \ldots, x_n)}$ in the slitted domain ${\widetilde}{{\Omega}} = {\Omega}\setminus \gamma$. The law of the resulting field is the same as the free field $\Phi_{({\Omega}; x, x_1, \ldots, x_n)}$ in the original domain. The motivation for imposing these properties of the coupling is the example of Schramm & Sheffield, in which the discontinuity line of the free field satisfies them. The present article exhibits numerous variations of that basic example. The article is organized as follows. [Sections \[sec: Loewner chains\] and \[sec: SLE\]]{} recall necessary background on Loewner chains and SLE in simply and multiply connected domains. 
[Section \[sec: couplings of SLE and GFF\]]{} is devoted to the general setup for establishing couplings of SLEs and free fields. [Section \[sec: basic equations\]]{} writes the two basic conditions that we will verify in each case to prove the existence of couplings, and [Section \[sec: SSExample\]]{} concretely illustrates these conditions in the simplest example case (\[eq: SS example\]) of Schramm & Sheffield. We define the free field in [Section \[sec: coupling\]]{} and show that the two basic conditions imply a weak form of coupling. Next, in [Section \[sec: Hadamard\]]{} we recall Hadamard’s formula and prove it in a setup appropriate for the present purpose; its variants are crucial to the verification of the basic conditions in all cases. The concrete examples are divided into two sections, treating simply connected domains and doubly connected domains separately. [Section \[sec: simply connected\]]{} presents free fields with different boundary conditions in simply connected domains. The examples here include the coupling of the dipolar SLE${}_4$ with GFF having combined jump-Dirichlet and Neumann boundary conditions, and the coupling of SLE${}_4(\rho)$ with GFF having combined jump-Dirichlet and Riemann-Hilbert boundary conditions. We also show, in the presence of more complicated combinations of boundary conditions, how the coupling determines the law of the curve, i.e. how to compute the Loewner driving process. [Section \[sec: doubly connected\]]{} treats examples in doubly connected domains. We warm up with a simple case of a punctured disc, giving a short proof that the radial SLE${}_\kappa$ is coupled with a compactified free field with jump-Dirichlet boundary conditions as stated in [@Dubedat-SLE_and_free_field].
In an annulus with jump-Dirichlet boundary conditions on one boundary component and Neumann boundary conditions on the other, we show that the compactified free field is coupled with the standard annulus SLE${}_4$ introduced in [@BB-zig_zag; @Zhan-SLE_in_doubly_connected_domains]. We also review the SLE${}_4$ variants proposed in [@HBB-free_field_in_annulus] on grounds of free field partition functions, and show that they indeed admit couplings with the non-compactified free fields with corresponding boundary conditions. Another new example consists in imposing jump-Dirichlet boundary conditions on both boundary components for a compactified free field, leading to a curve with prescribed winding. In [Section \[sec: Dirichlet general kappa\]]{} we show that the cases with Dirichlet boundary conditions admit generalizations to $\kappa \neq 4$. Appendix \[sec: commutation\] explains why extensions at $\kappa \neq 4$ don’t work with all boundary conditions, and Appendix \[sec: Loewner lemma\] contains the proof of a property of Loewner chains we need in conjunction with the general Hadamard’s formulas.

### Relation to other work {#relation-to-other-work .unnumbered}

We note that the relation of the free field and SLE has already been explored beyond the basic example of Schramm and Sheffield. One research direction has been establishing the coupling in a strong sense. Note that in this article we content ourselves with the weak form of the coupling described above, and we only consider restriction of the free field to subdomains almost surely untouched by the curve. Dubédat has however given a procedure to extend couplings from subdomains to the full domain, and shown the strong interpretation of the coupling in which the random curve is a deterministic function of the free field configuration [@Dubedat-SLE_and_free_field].
The effect of the boundary conditions of the free field on the law of the curve is another important generalization of the basic example, and this is the direction we also systematically pursue in the present article. Earlier work in this direction concerns especially the appropriate SLE variants when one allows several jumps in the Dirichlet boundary conditions, discussed in some cases already in [@SS-contour_lines; @Cardy-SLE_kappa_rho] and developed in more generality in [@Dubedat-SLE_and_free_field]. Recently, Hagendorf & Bauer & Bernard [@HBB-free_field_in_annulus] proposed natural SLE variants in the annulus based on computations of free field partition functions with combined jump-Dirichlet and Neumann boundary conditions. Our examples also cover these cases explicitly. It is also worth noting that Schramm & Sheffield themselves indicated how their coupling can be extended to chordal SLE${}_\kappa$ with $\kappa \neq 4$ by modifying the conformal transformation property of the field in the manner dictated by the Coulomb gas formalism of conformal field theory. In our examples which involve piecewise Dirichlet boundary conditions we show how to treat $\kappa \neq 4$, and we give a non-commutation argument explaining why one is constrained to $\kappa=4$ in the presence of other boundary conditions. Generalizations to massive free fields have been treated in [@MS-massive_SLEs_ICMP; @BBC-near_critical]. Many aspects of SLE${}_4$ related conformal field theories are considered in the forthcoming articles [@MZ-free_fields; @KM-free_fields].

Growth processes and Loewner evolutions {#sec: Loewner chains}
---------------------------------------

The Loewner evolution is a way of describing growth processes, curves in particular, in terms of conformal maps. In the case when ${\Omega}$ is a simply-connected domain with analytic boundary, a setup convenient for our purposes is as follows.
To each point of the boundary $x \in {\partial}{\Omega}$ we associate a Loewner vector field $V_x(z) {\partial_{{z}}}$, satisfying the following properties:

- $V_x(z)$ is analytic inside the domain ${\Omega}$ and up to the boundary apart from the point $x$;

- for $z \in {\partial}{\Omega}\setminus {\left\{ {x} \right\}}$ the vector field $V_x(z) {\partial_{{z}}}$ is tangential to the boundary;

- $V_x(z)$ has a simple pole at $x$ with ${\mathrm{Res}_{x} \left( V_x(z) \right)} = 2 \, \tau_x^2$, where $\tau_x$ is a unit tangent to ${\partial}{\Omega}$ at $x$;

- $V_x(z)$ is bounded apart from the neighborhood of $x$.

Given a continuous function $t \mapsto X_t \in {\partial}{\Omega}$ called the driving process, Loewner’s differential equation is $$\begin{aligned} \label{eq: Loewner} {\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} g_t(z) \; = \; V_{X_t}(g_t(z)), \qquad g_0(z) = z \tag{Loe}\end{aligned}$$ where the initial condition is a point of the domain, $z \in {\Omega}$. For all $t\geq0$ we let ${K}_t \subset {\Omega}$ be the set of points $z$ for which the solution fails to exist up to time $t$. The hulls $({K}_t)_{t \geq 0}$ form a growth process, ${K}_{t_1} \subset {K}_{t_2}$ for $t_1 < t_2$. The solution $(g_t)_{t \geq 0}$ is called a Loewner chain for the growth process.
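In the upper half-plane, where the standard choice is $V_x(z) = 2/(z-x)$ (cf. the examples below), the Loewner equation is easy to integrate numerically, and the constant driving function $X_t \equiv 0$ provides an exact benchmark: then $g_t(z) = \sqrt{z^2 + 4t}$ and the hull ${K}_t$ is a vertical slit. A short sketch (the RK4 integrator and step count are our illustrative choices):

```python
import numpy as np

def loewner_flow(z, driver, T, n=4000):
    """Integrate dg/dt = 2/(g - X_t), g_0 = z (chordal Loewner in H) by RK4."""
    g, dt = complex(z), T / n
    f = lambda s, w: 2.0 / (w - driver(s))
    for k in range(n):
        t = k * dt
        k1 = f(t, g)
        k2 = f(t + dt / 2, g + dt / 2 * k1)
        k3 = f(t + dt / 2, g + dt / 2 * k2)
        k4 = f(t + dt, g + dt * k3)
        g += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return g

# For X_t = 0 the chain has the explicit solution g_t(z) = sqrt(z^2 + 4t),
# whose hull K_t is the vertical slit from 0 to 2i*sqrt(t).
z, T = 1 + 1j, 1.0
print(loewner_flow(z, lambda t: 0.0, T), np.sqrt(z**2 + 4 * T))
```

For a point $z$ away from the slit, the numerical flow agrees with the explicit map to high accuracy, and the image stays in the upper half-plane as it must.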
Familiar examples in the half-plane, disc and strip are respectively $$\begin{aligned} & \textrm{{\bf Domain}} & & \textrm{{\bf Vector fields}} & & \textrm{{\bf Flow}} & \vspace{.2cm} \\ \label{eq: Loewner H} & {\mathbb{H}}= {\left\{ {{\Im \mathrm{m } \, }z > 0} \right\}} \qquad & & V_x(z) = \frac{2}{z-x} \qquad & & {\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} g_t(z) = \frac{2}{g_t(z)-X_t} \tag{Loe-${\mathbb{H}}$} \\ \label{eq: Loewner D} & {\mathbb{D}}= {\left\{ {|z|<1} \right\}} \quad & & V_x(z) = -z \frac{z+x}{z-x} \quad & & {\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} g_t(z) = g_t(z) \frac{X_t+g_t(z)}{X_t-g_t(z)} \tag{Loe-${\mathbb{D}}$} \\ \label{eq: Loewner S} & {\mathbb{S}}= {\left\{ {0 < {\Im \mathrm{m } \, }z < \pi} \right\}} \quad & & V_x(z) = \coth \left( \frac{z-x}{2} \right) \quad & & {\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} g_t(z) = \coth \left( \frac{g_t(z)-X_t}{2} \right). \tag{Loe-${\mathbb{S}}$}\end{aligned}$$ The first flow in ${\mathbb{H}}$ fixes the point $\infty$, the second in ${\mathbb{D}}$ fixes $0$, and the third in ${\mathbb{S}}$ fixes both $\pm \infty$. These properties make the chosen flows convenient, but we remark that the choices are by no means unique. In particular it is worth noting that a growth process can be described by several different Loewner chains. In what follows we assume that $V_x(z)$ depends sufficiently nicely on $x$, as is the case with the three examples. The following proposition is standard, and in concrete examples we only use it with the three vector fields listed above. \[prop: hloc\] For all $t>0$, ${\Omega}\setminus {K}_t$ is simply-connected, and $z \mapsto g_t(z)$ is a conformal map from ${\Omega}_t := {\Omega}\setminus {K}_t$ to ${\Omega}$. Moreover, ${\partial_{{t}}} {\mathrm{lhcap}}_{X_0} ({K}_t)|_{t=0} = 2$, where ${\mathrm{lhcap}}$ is the local half-plane capacity. 
The local half-plane capacity of ${K}_t$ is informally defined as follows: if the boundary near the point $X_0$ is a straight line, translate and rotate the domain so that this piece of the boundary becomes part of ${\mathbb{R}}$ and ${K}_t$ becomes a subset of ${\mathbb{H}}$, then take the half-plane capacity. If the boundary is not a straight line, use a conformal map $f$ to $\mathbb{H}$ such that $|f'(X_0)|=1$. Roughly speaking, the last statement of the Proposition means that for small values of $t$, the size of the hull doesn’t depend much on the global structure of the domain and the evolution — to first order it is completely determined by the residue of $V_x(z)$ (which in turn is fixed by our conventions for Loewner vector fields). We postpone the formal definition along with the proof of the Proposition to Appendix \[sec: Loewner lemma\]. Note that the map $g_t$ maps the tip of the growing hull ${K}_t$ to the point $X_t$. The notion of the tip is intuitive if the hulls are growing curves, ${K}_t = \gamma[0,t]$. It is, however, always well-defined, since the Loewner chain satisfies the *local growth property*: $\lim_{{\varepsilon}\searrow 0} ({\overline}{{K}_{t+{\varepsilon}} \setminus {K}_t})$ is always a boundary point of ${\Omega}\setminus {K}_t$ (more precisely, a prime end). We call this point the tip of the hull and denote it by $\tilde{x}(t)$.

### Loewner chains in doubly connected domains {#sec: LoewnerNSC .unnumbered}

For multiply connected domains the Loewner flow $V_x(z)$ cannot be tangential to the boundary on all boundary components — once we start growth, the conformal moduli of the domain change, and $z \mapsto g_t(z)$ cannot be a map from ${\Omega}\setminus {K}_t$ onto ${\Omega}$ anymore. Hence, instead of one domain, we fix a family of representatives of conformal equivalence classes, and $g_t$ maps to one of these.
In the doubly connected case, a natural family is provided by the annuli ${\mathbb{A}}_r=\{ z \in {\mathbb{C}}\; : \; e^{-r}<|z|<1\}$, $r>0$, with the unit circle as their common boundary component. For the Loewner flow to preserve this family, the radial component of the vector field should be constant on the inner boundary circle. This is equivalent to the condition ${\Re \mathrm{e } \, }\left( V^r_x(z)/z \right) = C$ on $|z|=e^{-r}$. On the outer component of the boundary, we want $V_x^r(z) {\partial_{{z}}}$ to be tangential to the boundary, meaning ${\Re \mathrm{e } \, }\left(V^r_x(z)/z \right) = 0$ for $|z|=1$, $z \neq x$. For any value of the constant $C$, there exists a unique harmonic function with such boundary conditions and desired singularity at $x$, but only for one value of $C$ does the harmonic conjugate become a single-valued function. Namely, there exists a unique function $S^r_x(z)$ (Schwarz kernel) satisfying the following properties:

- $S_x^r(z)$ is analytic in the annulus ${\mathbb{A}}_r$;

- ${\Re \mathrm{e } \, }S_x^r(z)=\delta_x(z)$ on the outer boundary ${\left\{ {|z|=1} \right\}}$;

- ${\Re \mathrm{e } \, }S_x^r(z)=\frac{1}{2\pi}$ on the inner boundary ${\left\{ {|z|=e^{-r}} \right\}}$.

There is a complicated explicit expression for $S_x^r(z)$ [@BB-zig_zag; @Zhan-SLE_in_doubly_connected_domains], but we will not need it. We define Loewner vector fields as $V^r_x(z)=2\pi z S_x^r(z)$. With this choice the modulus $r$ decreases at unit speed under the flow analogous to (\[eq: Loewner\]): if ${\Omega}= {\mathbb{A}}_p$, then $g_t({\Omega}\setminus {K}_t) = {\mathbb{A}}_{p-t}$. The modulus therefore directly serves as a time parametrization of the Loewner chain.
$$\begin{aligned} & \textrm{{\bf Domains}} & & \textrm{{\bf Vector fields}} & & \textrm{{\bf Flow}} \\ \label{eq: Loewner A} & {\mathbb{A}}_r = {\left\{ { e^{-r}<|z|<1} \right\}} \qquad & & V^r_x(z) = 2\pi \, z \, S_x^r(z) \qquad & & {\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} g_t(z) \; = \; 2 \pi \, z \, S^{p-t}_{X_t}(z) . \tag{Loe-${\mathbb{A}}$}\end{aligned}$$ The analogue of [Proposition \[prop: hloc\]]{} remains valid for this Loewner chain on the time interval $t \in [0,p)$. Schramm-Loewner evolutions {#sec: SLE} -------------------------- Stochastic Loewner evolutions (or Schramm-Loewner evolutions, SLE) are random growth processes defined via a Loewner chain with random driving process. The random driving process is chosen so that the growth process satisfies two fundamental properties: *conformal invariance* and *domain Markov property* — the reader is referred to one of the many excellent introductions to SLE for details, e.g. [@Werner-random_planar_curves; @Lawler-conformally_invariant_processes; @BB-2d_growth_processes]. In particular the driving process will always be chosen to be a semimartingale (living on the boundary of the domain) whose quadratic variation grows at constant speed $\kappa>0$, indicated by a subscript SLE${}_\kappa$. In the following well known examples the driving process is simply a Brownian motion on ${\partial}{\Omega}$ with the appropriate speed — here and in the sequel $(B_t)_{t\geq0}$ stands for a standard Brownian motion on ${\mathbb{R}}$: - [**Chordal SLE${}_\kappa$ in ${\mathbb{H}}$ from $0$ to $\infty$:**]{}\ The Loewner chain is (\[eq: Loewner H\]) with the driving process $X_t = \sqrt{\kappa} \,B_t$. - [**Radial SLE${}_\kappa$ in ${\mathbb{D}}$ from $1$ to $0$:**]{}\ The Loewner chain is (\[eq: Loewner D\]) with the driving process $X_t = \exp ({\mathfrak{i}}\sqrt{\kappa} \, B_t)$. 
- [**Dipolar SLE${}_\kappa$ in ${\mathbb{S}}$ from $0$ to ${\mathbb{R}}+ {\mathfrak{i}}\pi$:**]{}\ The Loewner chain is (\[eq: Loewner S\]) with the driving process $X_t = \sqrt{\kappa} \, B_t$. Note also that this is a special case of the example of SLE${}_\kappa(\rho)$ in ${\mathbb{S}}$ below, with $\rho = \frac{\kappa-6}{2}$. In other examples the driving process $X$ may have a drift. For instance, if the domain has marked points $x, x_1,x_2,\dots, x_n$ on the boundary, then the slitted domain $(\Omega_t,\tilde{x}(t),x_1,\dots,x_n)$ is in general not conformally equivalent to $\Omega,x_1,x_2,\dots, x_n$. The drift term of the Itô diffusion may therefore depend on conformal moduli of this configuration, as in the first of the following two examples: - [**SLE${}_{\kappa}(\overline{\rho})$ in ${\mathbb{H}}$ started from $0$:**]{}\ Here $\overline{\rho}=(\rho_1, \rho_2,\dots,\rho_n)$ is an $n$-tuple of real parameters. The marked points other than $x=0$ are $x_1, x_2, \dots, x_n \in {\mathbb{R}}$ on the boundary. The Loewner chain is (\[eq: Loewner H\]) with driving process obeying the Itô diffusion ${\mathrm{d}}X_t = \sqrt{\kappa} \, {\mathrm{d}}B_t + \sum_j \frac{\rho_j}{X_t-g_t(x_j)} \, {\mathrm{d}}t$, with $X_0=x=0$. - [**SLE${}_\kappa(\rho)$ in ${\mathbb{S}}$ started from $0$:**]{}\ In the above example if $n=1$ it is convenient to perform a coordinate change from ${\mathbb{H}}$ to ${\mathbb{S}}$ sending $0 \mapsto 0$, $x_1 \mapsto \pm \infty$ and $\infty \mapsto \mp \infty$, see e.g. [@Kytola-SLE_kappa_rho]. The resulting growth process is described, up to a time reparametrization, by a Loewner chain (\[eq: Loewner S\]) with driving process $X_t = \sqrt{\kappa} \, B_t \mp ( \rho + \frac{6-\kappa}{2} ) \, t$. 
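The chordal example above is easy to experiment with numerically. The following sketch (our illustration, not part of the text) approximates the SLE${}_\kappa$ trace in ${\mathbb{H}}$ by freezing the driving function on each small time step; one step of half-plane capacity ${\mathrm{d}}t$ is then solved exactly by the slit map $G(z) = x + \sqrt{(z-x)^2 + 4\,{\mathrm{d}}t}$, and the trace point at step $k$ is obtained by pulling the current tip back through the inverse maps $G_j^{-1}$:

```python
import cmath
import math
import random

def sqrt_h(z):
    """Branch of the square root with values in the closed upper half-plane."""
    r = cmath.sqrt(z)
    return r if r.imag >= 0 else -r

def chordal_sle_trace(kappa, t_max=1.0, n=200, seed=7):
    """Approximate trace of chordal SLE_kappa in H driven by X_t = sqrt(kappa) B_t.
    The driving function is frozen on each step; one step of capacity dt is
    solved exactly by G_j(z) = x_j + sqrt((z - x_j)^2 + 4 dt), whose inverse is
    G_j^{-1}(w) = x_j + sqrt_h((w - x_j)^2 - 4 dt)."""
    rng = random.Random(seed)
    dt = t_max / n
    drive, x = [], 0.0
    for _ in range(n):
        x += rng.gauss(0.0, math.sqrt(kappa * dt))
        drive.append(x)
    pts = []
    for k in range(n):
        z = complex(drive[k], 2.0 * math.sqrt(dt))  # G_k^{-1}(X_{t_k}): tip of step k
        for j in range(k - 1, -1, -1):              # pull back through earlier steps
            z = drive[j] + sqrt_h((z - drive[j]) ** 2 - 4.0 * dt)
        pts.append(z)
    return pts

trace = chordal_sle_trace(kappa=2.0)
```

Refining $n$ improves the approximation; for $\kappa \le 4$ the points approximate a simple curve in the closed upper half-plane.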
The simplest example of SLE${}_\kappa$ in doubly connected domains is the following, proposed independently in [@BB-zig_zag; @Zhan-SLE_in_doubly_connected_domains]: - [**Standard annulus SLE${}_\kappa$ in ${\mathbb{A}}_p$ started from $1$:**]{}\ The Loewner chain is (\[eq: Loewner A\]) with driving process $X_t = \exp({\mathfrak{i}}\sqrt{\kappa} \, B_t)$. There are, of course, more variants of SLE${}_\kappa$. We will find natural free fields admitting coupling with each of the above examples — and when boundary conditions of the free field are more complicated, we find other variants. Couplings of SLEs and Gaussian Free Fields {#sec: couplings of SLE and GFF} ========================================== Basic equations {#sec: basic equations} --------------- Recall that we’re interested in Gaussian free fields coupled with random curves or growth processes in the way described in the introduction. Suppose we have a rule associating to each domain ${\Omega}$ (with marked points) a free field $\Phi_{\Omega}$, determined by a harmonic function $M_{\Omega}: {\Omega}\rightarrow {\mathbb{R}}$ and a Green’s function $C_{{\Omega}}: {\Omega}\times {\Omega}\setminus {\left\{ {z_1 = z_2} \right\}} \rightarrow {\mathbb{R}}$. Consider a random growth process of hulls $({K}_t)_{t \in [0,{\sigma}]}$ in a domain ${\Omega}_0$ and let ${\Omega}_t = {\Omega}_0 \setminus {K}_t$. Now construct a field ${\widetilde}{\Phi}$ by first sampling the final random hull ${K}_{\sigma}$, and then on the remaining random domain ${\Omega}_{\sigma}$ sampling an independent free field with the law of $\Phi_{{\Omega}_{\sigma}}$. Does the law of ${\widetilde}{\Phi}$ coincide with the law of $\Phi_{{\Omega}_0}$, at least on a subset of ${\Omega}_0$ that is almost surely untouched by $K_{\sigma}$? 
A necessary condition for the field ${\widetilde}{\Phi}$ to have the same law as $\Phi_{{\Omega}_0}$ is that the mean and covariance coincide, which can be written as $$\begin{aligned} \label{eq: M-weak} M_{{\Omega}_0}(z) \; \overset{?}{=} \; & {\mathsf{E}}\left[ M_{{\Omega}_{\sigma}}(z) \right] \\ \label{eq: C-weak} C_{{\Omega}_0}(z_1,z_2) + M_{{\Omega}_0} (z_1) M_{{\Omega}_0}(z_2) \; \overset{?}{=} \; & {\mathsf{E}}\left[ C_{{\Omega}_{\sigma}}(z_1,z_2) + M_{{\Omega}_{\sigma}}(z_1) M_{{\Omega}_{\sigma}}(z_2) \right]\end{aligned}$$ for all $z$ in the domain where ${\widetilde}{\Phi}$ is defined. The expected values here refer to averages over the random hull ${K}_{\sigma}$. If we knew a priori that ${\widetilde}{\Phi}$ is Gaussian, then the conditions (\[eq: M-weak\]) and (\[eq: C-weak\]) would imply the desired coincidence of laws of ${\widetilde}{\Phi}$ and $\Phi_{{\Omega}_0}$, since the two Gaussian variables would have equal means and covariances. We will actually impose the following stronger conditions, from which the coincidence of laws will follow, as will be proven in [Section \[sec: coupling\]]{}. We require that $$\begin{aligned} \label{M-eq} M_t(z) \; := \; M_{{\Omega}_t}(z) \; \textrm{ are uniformly bounded continuous martingales} \tag{M-cond} \\ \label{C-eq} \textrm{such that } \; {\big<}M(z_1), M(z_2) {\big>}_t \; = \; C_{{\Omega}_0}(z_1, z_2) - C_{{\Omega}_t}(z_1,z_2). \tag{C-cond}\end{aligned}$$ Here ${\big<}\cdot, \cdot {\big>}_t$ denotes the quadratic cross variation — the second condition is therefore equivalent to $t \mapsto C_{{\Omega}_t} (z_1,z_2)$ being a process of finite variation such that $M_t(z_1)M_t(z_2)+C_{{\Omega}_t} (z_1,z_2)$ is a martingale. Note that by the optional stopping theorem for the martingales $M_t(z)$ and $M_t(z_1) M_t(z_2) + C_{{\Omega}_t}(z_1,z_2)$ at time ${\sigma}$ the above conditions indeed guarantee (\[eq: M-weak\]) and (\[eq: C-weak\]). In practice, verifying the two basic conditions becomes rather explicit.
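The mechanism behind these conditions is the exponential martingale: if $L$ is a bounded continuous martingale, then $\exp({\mathfrak{i}}L_t + \frac{1}{2}{\big<}L,L{\big>}_t)$ is again a martingale, which is exactly how the conditions will be used in [Section \[sec: coupling\]]{}. A toy Monte Carlo check of this fact (our illustration, with $L_t = B_t$, so ${\big<}L,L{\big>}_t = t$ and ${\mathsf{E}}[\exp({\mathfrak{i}}B_t + t/2)] = 1$):

```python
import cmath
import math
import random

def mc_exp_martingale(t=1.0, samples=200000, seed=3):
    """Estimate E[exp(i B_t + t/2)] by Monte Carlo; the exact value is 1
    because exp(i L_t + <L,L>_t / 2) is a martingale for L_t = B_t."""
    rng = random.Random(seed)
    acc = 0j
    for _ in range(samples):
        b = rng.gauss(0.0, math.sqrt(t))
        acc += cmath.exp(1j * b + 0.5 * t)
    return acc / samples

est = mc_exp_martingale()
```

The estimate approaches $1$ as the sample count grows, since the characteristic function of $B_t \sim N(0,t)$ at $1$ is $e^{-t/2}$.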
We mostly deal with strictly conformally invariant boundary conditions in the following sense. Consider simply connected domains ${\Omega}$ with $n+1$ marked points $x,x_1,x_2,\ldots,x_n \in {\partial}{\Omega}$, and associate to them harmonic functions $M_{({\Omega};x,x_1,\ldots,x_n)}$ defined on ${\Omega}$ and Green’s functions $C_{({\Omega};x_1,\ldots,x_n)}$ (we assume the Green’s function not to depend on the marked point $x$). Suppose these are chosen so that for any conformal map ${\phi}: {\Omega}\rightarrow {\Omega}'$ sending $x,x_1,\ldots,x_n$ to $x',x_1',\ldots,x_n'$ we have $$\begin{aligned} \label{eq: strict CI} M_{{\Omega};x,x_1,\ldots,x_n}(z) \; = \; & M_{{\Omega}';x',x_1',\ldots,x_n'} ({\phi}(z)) \qquad \textrm{ and } \\ C_{{\Omega};x_1,\ldots,x_n}(z_1,z_2) \; = \; & C_{{\Omega}';x_1',\ldots,x_n'} ({\phi}(z_1), {\phi}(z_2)) . \tag{conf.inv.}\end{aligned}$$ In particular, taking ${\phi}= g_t$, the conditions (\[M-eq\]) and (\[C-eq\]) require the processes $$\begin{aligned} \label{eq: M-ci} &M_{{\Omega}_0; X_t, g_t(x_1), \ldots, g_t(x_n)} (g_t(z)) \\ \label{eq: C-ci} & M_{{\Omega}_0; X_t, \ldots, g_t(x_n)} (g_t(z_1)) \, M_{{\Omega}_0; X_t, \ldots, g_t(x_n)} (g_t(z_2)) \; + \; C_{{\Omega}_0;g_t(x_1),\ldots,g_t(x_n)}(g_t(z_1), g_t(z_2))\end{aligned}$$ to be martingales. Since $(X_t)_{t \in [0,{\sigma}]}$ is a semimartingale and the flow $(g_t)_{t \in [0,{\sigma}]}$ is governed by [Equation (\[eq: Loewner\])]{}, computing the Itô derivatives of the two processes is now easy. Write first of all the Itô diffusion of the driving process as $$\begin{aligned} \label{eq: general driving process} {\mathrm{d}}X_t = {\mathrm{d}}W_{\kappa t} + \tau_{X_t} D_t \, {\mathrm{d}}t\end{aligned}$$ where $(W_t)_{t \geq 0}$ is a standard Brownian motion on ${\partial}{\Omega}_0$ and $\tau_x$ are positively oriented unit tangents to ${\partial}{\Omega}_0$ at $x$.
Then write $M_{{\Omega}_0;x,x_1,\ldots,x_n}(z)$ as the imaginary part of an analytic function $F(z;x,x_1,\ldots,x_n)$ on ${\Omega}_0$, which we assume to depend smoothly also on the marked points $x,x_1,\ldots,x_n \in {\partial}{\Omega}_0$. The Itô derivative of (\[eq: M-ci\]) can be read from the imaginary part of $$\begin{aligned} \label{eq: Ito derivative of F ci} & {\mathrm{d}}F(g_t(z);X_t,g_t(x_1),\ldots,g_t(x_n)) \\ \nonumber \; = \; & \left( \sqrt{\kappa} {\partial_{{x}}} F \right) \, {\mathrm{d}}B_t + \left\{ V_{X_t} \big(g_t(z) \big) {\partial_{{z}}} F + \frac{\kappa}{2} {\partial_{{x x}}} F + D_t {\partial_{{x}}} F + \sum_{j=1}^n V_{X_t} \big(g_t(x_j) \big) \frac{1}{\tau_{g_t(x_j)}} {\partial_{{x_j}}} F \right\} \, {\mathrm{d}}t ,\end{aligned}$$ the right-hand side being evaluated at $(g_t(z); X_t, g_t(x_1), \ldots, g_t(x_n))$. Note that in the above formula and in what follows $x$ and $x_i$ are points on the boundary, and the derivatives ${\partial_{{x}}}$ and ${\partial_{{x x}}}$ should be understood as first and second derivatives with respect to the length parameter on the boundary (in the direction of the unit tangent $\tau$). We do not assume analyticity with respect to those points. Also, $B_t$ is a standard Brownian motion on ${\mathbb{R}}$ such that $\sqrt{\kappa} B_t$ is the length parameter of $W_{\kappa t}$. In view of [Equation (\[eq: Ito derivative of F ci\])]{}, the condition for the mean (\[eq: M-ci\]) to be a local martingale is $$\begin{aligned} \label{eq: M-eq ci} {\Im \mathrm{m } \, }\left\{ V_{x}(z) {\partial_{{z}}} F + \frac{\kappa}{2} {\partial_{{x x}}} F + D_t {\partial_{{x}}} F + \sum_{j=1}^n V_{x}(x_j) \frac{1}{\tau_{x_j}} {\partial_{{x_j}}} F \right\} \; = \; 0 .
\tag{M-cond'}\end{aligned}$$ If this equation is satisfied then the drift of $M_t(z)$ vanishes and we have $${\mathrm{d}}M_t(z) = \sqrt{\kappa} \; {\Im \mathrm{m } \, }\left( {\partial_{{x}}} F \big( g_t(z);X_t,\ldots,g_t(x_n) \big) \right) \, {\mathrm{d}}B_t ,$$ and the Itô derivative of (\[eq: C-ci\]) simplifies significantly due to the following $$\begin{aligned} {\mathrm{d}}\Big( M_t(z_1) \, M_t(z_2) \Big) = \; & \big( \cdots \big) \, {\mathrm{d}}B_t + \kappa \; {\Im \textrm{m} \left( {\partial_{{x}}} F \big(g_t(z_1); \ldots \big) \right)} \; {\Im \textrm{m} \left( {\partial_{{x}}} F \big( g_t(z_2); \ldots \big) \right)} \; {\mathrm{d}}t .\end{aligned}$$ The condition (\[C-eq\]) thus reduces to the following equation $$\begin{aligned} \label{eq: C-eq ci} & {\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} \, C_{{\Omega}_t; x_1, \ldots, x_n} (z_1, z_2) \tag{C-cond'} \\ \; = \; & - \kappa \; {\Im \textrm{m} \left( {\partial_{{x}}} F \big( g_t(z_1); X_t, \ldots, g_t(x_n) \big) \right)} \; {\Im \textrm{m} \left( {\partial_{{x}}} F \big( g_t(z_2); X_t, \ldots, g_t(x_n) \big) \right)} .\end{aligned}$$ In the strictly conformally invariant cases (\[eq: strict CI\]), the verification of the two basic conditions (\[M-eq\]) and (\[C-eq\]) therefore boils down simply to Equations (\[eq: M-eq ci\]) and (\[eq: C-eq ci\]) as well as appropriate boundedness of $M$. Example: chordal SLEs and Gaussian free fields {#sec: SSExample} ---------------------------------------------- ### Basic conditions for chordal SLE${}_4$ and GFF with jump-Dirichlet boundary conditions {#basic-conditions-for-chordal-sle_4-and-gff-with-jump-dirichlet-boundary-conditions .unnumbered} We will now illustrate the general idea in the example (\[eq: SS example\]) of Schramm and Sheffield, by checking the conditions (\[M-eq\]) and (\[C-eq\]) in this simplest case. We take the upper half-plane as the starting domain $\Omega_0 = {\mathbb{H}}$, and we have two marked points $0$ and $\infty$.
Our Loewner chain (\[eq: Loewner H\]) constructed from the vector fields $V_x(z)=\frac{2}{z-x}$ preserves one of them (infinity), and the other corresponds to the tip of the curve. The driving process of chordal SLE${}_4$ is $X_t = \sqrt{\kappa} B_t$ with $\kappa=4$. Concretely, the equations (\[eq: M-eq ci\]) and (\[eq: C-eq ci\]) now have the following simple form: $$\begin{aligned} \label{MainEq1SS} & {\Im \mathrm{m } \, }\left( \frac{\kappa}{2} {\partial_{{x x}}} F + \frac{2}{z-x} \, {\partial_{{z}}} F \right) \; = \; 0 \qquad \textrm{ and } \\ \label{MainEq2SS} & {\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} C_{\Omega_t}(z_1,z_2) \; = \; - \kappa \; {\Im \mathrm{m } \, }\Big({\partial_{{x}}} F \big( g_t(z_{1});X_t \big) \Big) \; {\Im \mathrm{m } \, }\Big( {\partial_{{x}}} F \big( g_t(z_{2});X_t \big) \Big) .\end{aligned}$$ The harmonic function $M$ in ${\mathbb{H}}$ determined by the boundary conditions (\[eq: SS example\]) is $\frac{2\lambda}{\pi} \arg(z-x) - \lambda$, hence $F(z;x) = \frac{2\lambda}{\pi} \log(z-x) - {\mathfrak{i}}\lambda$, and an easy calculation confirms the validity of Equation (\[MainEq1SS\]) when $\kappa = 4$. The Dirichlet Green’s function in the half-plane is explicitly $$\label{DirGreen} C(z_1,z_2) \; = \; -\frac{1}{2 \pi} \; {\Re \mathrm{e } \, }\log \Big( \frac{z_1-z_2}{z_1-{\overline}{z}_2} \Big).$$ Applying conformal invariance of $C$, (\[eq: C-ci\]), and computing the time derivative of $C(g_t(z_1),g_t(z_2))$, we find that (\[MainEq2SS\]) also holds true provided that the jump size is adjusted to the value found by Schramm and Sheffield, $\lambda = \pm \sqrt{\frac{\pi}{8}}$. \[rem: rigid\] The above calculation is rather rigid in the following sense. Suppose we want to find some coupling of SLE with a free field whose covariance is given by the Dirichlet Green’s function (\[DirGreen\]). Then the equation (\[MainEq2SS\]) in fact determines the function $F$ up to a sign and an additive constant.
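The "easy calculation" can be made concrete: with $F(z;x) = \frac{2\lambda}{\pi}\log(z-x) + \mathrm{const}$ one has ${\partial_{{z}}} F = \frac{2\lambda}{\pi}\frac{1}{z-x}$ and ${\partial_{{x x}}} F = -\frac{2\lambda}{\pi}\frac{1}{(z-x)^2}$, so the expression inside (\[MainEq1SS\]) equals $\frac{2\lambda}{\pi}\big(2-\frac{\kappa}{2}\big)\frac{1}{(z-x)^2}$, which vanishes identically precisely for $\kappa = 4$. A small numeric sketch of this (ours):

```python
import math

PI = math.pi
LAM = math.sqrt(PI / 8.0)  # the Schramm-Sheffield jump size for kappa = 4

def maineq1_lhs(z, x, kappa, lam=LAM):
    """Expression inside (MainEq1SS) for F(z;x) = (2 lam/pi) log(z-x) + const,
    using dF/dz = (2 lam/pi)/(z-x) and d^2F/dx^2 = -(2 lam/pi)/(z-x)^2."""
    c = 2.0 * lam / PI
    w = z - x
    return (kappa / 2.0) * (-c / w ** 2) + (2.0 / w) * (c / w)

residual_4 = maineq1_lhs(1.0 + 2.0j, 0.3, 4.0)  # vanishes identically at kappa = 4
residual_2 = maineq1_lhs(1.0 + 2.0j, 0.3, 2.0)  # equals (4 - kappa) lam / pi / (z-x)^2
```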
One then only has to check that such an $F$ yields a martingale for the SLE, i.e. that Equation (\[MainEq1SS\]) is satisfied. The right-hand side of (\[MainEq2SS\]) is $- \frac{16 \lambda^2}{\pi^2} \; {\Im \mathrm{m } \, }\big( \frac{1}{z_{1}-x} \big) \; {\Im \mathrm{m } \, }\big( \frac{1}{z_{2}-x} \big)$. The terms in this product have an invariant meaning. Namely, they are multiples of the Poisson kernel with zero Dirichlet boundary conditions. This is a general phenomenon and a consequence of a formula of Hadamard type which we discuss in [Section \[sec: Hadamard\]]{}. ### Modification to chordal SLE${}_\kappa$ for $\kappa \neq 4$ {#modification-to-chordal-sle_kappa-for-kappa-neq-4 .unnumbered} There is a way to save the validity of the basic conditions for chordal SLE${}_\kappa$, $\kappa \neq 4$, if one relaxes the assumption (\[eq: strict CI\]) of strict conformal invariance of $M$. By Remark \[rem: rigid\], the choice of the Dirichlet Green’s function together with Equation (\[MainEq2SS\]) implies that we should take the same $F$ as before but with $\lambda = \lambda_\kappa = \pm \sqrt{\frac{\pi}{2\kappa}}$.
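With $\lambda^2 = \pi/8$, the prefactor $\frac{16\lambda^2}{\pi^2}$ equals $\frac{2}{\pi}$, so (\[MainEq2SS\]) asserts that ${\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} C(g_t(z_1),g_t(z_2)) = -\frac{2}{\pi}\,{\Im \mathrm{m } \, }\big(\frac{1}{z_1-X_t}\big)\,{\Im \mathrm{m } \, }\big(\frac{1}{z_2-X_t}\big)$ at $t=0$. This can be cross-checked by a finite difference of (\[DirGreen\]) under one exact Loewner step (a numerical sketch, ours):

```python
import cmath
import math

PI = math.pi

def sqrt_h(z):
    """Branch of the square root with values in the closed upper half-plane."""
    r = cmath.sqrt(z)
    return r if r.imag >= 0 else -r

def green_h(z1, z2):
    """Dirichlet Green's function (DirGreen) of the upper half-plane."""
    return -cmath.log((z1 - z2) / (z1 - z2.conjugate())).real / (2.0 * PI)

def g_step(z, x, dt):
    """Exact chordal Loewner map for driving frozen at x over a time step dt."""
    return x + sqrt_h((z - x) ** 2 + 4.0 * dt)

x, z1, z2, dt = 0.2, 1.0 + 1.5j, -0.7 + 0.8j, 1e-6
numeric = (green_h(g_step(z1, x, dt), g_step(z2, x, dt)) - green_h(z1, z2)) / dt
exact = -(2.0 / PI) * (1.0 / (z1 - x)).imag * (1.0 / (z2 - x)).imag
```

The two values agree up to the $O({\mathrm{d}}t)$ discretization error; the right-hand side is a product of half-plane Poisson kernels, in line with the Hadamard-type formula below.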
However, Equation (\[MainEq1SS\]) fails for general $\kappa$ and we have instead $${\Im \mathrm{m } \, }\left( \frac{\kappa}{2} {\partial_{{x x}}} F + \frac{2}{z-x} \, {\partial_{{z}}} F \right) \; = \; \frac{(4-\kappa) \lambda_\kappa}{\pi} \; {\Im \mathrm{m } \, }\Big( \frac{1}{(z-x)^2} \Big) \; \neq \; 0 .$$ We therefore adjust the definition of the mean $M_{{\Omega}_t}(z)$ for the field in the new domain $\Omega_t$ as follows $$\begin{aligned} \label{eq: ad hoc term} M_{{\Omega}_t} (z) \; = \; {\Im \mathrm{m } \, }\Big( F \big( g_t(z); X_t \big) + E_t(z) \Big) ,\end{aligned}$$ where the extra term $E_t(z)$ is taken to be the integral of the missing part $$\begin{aligned} \label{eq: ad hoc term 2} {\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} E_t(z) \; = \; \frac{(\kappa-4) \lambda_\kappa}{\pi} \; \frac{1}{(g_t(z)-X_t)^2} , \qquad E_0(z) = 0 .\end{aligned}$$ This guarantees that $M_{{\Omega}_t}(z)$ are local martingales. Condition (\[M-eq\]) follows for appropriate stopping times ${\sigma}$, and since the added term $E_t(z)$ is of finite variation, the computation leading to Equation (\[MainEq2SS\]) remains unchanged and implies (\[C-eq\]). The definition (\[eq: ad hoc term 2\]) can be explicitly integrated to give $$\begin{aligned} \label{eq: ad hoc term 3} E_t(z) \; = \; \frac{(4-\kappa) \lambda_\kappa}{2\pi} \; \log g_t'(z) ,\end{aligned}$$ simply using ${\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} g_t'(z) = \frac{-2 g_t'(z)}{(g_t(z)-X_t)^2}$. In particular, ${\Im \mathrm{m } \, }(E_t(z))$ is determined by the domain ${\Omega}_t$ only, and could be interpreted as a multiple of the harmonic interpolation of the argument of the tangent vector $\tau$ of ${\partial}{\Omega}_t$ (“winding of the boundary”) if the boundary were smooth.
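The integration leading to (\[eq: ad hoc term 3\]) rests only on the identity ${\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} g_t'(z) = \frac{-2 g_t'(z)}{(g_t(z)-X_t)^2}$, which can be sanity-checked against the exact one-step map (a numerical sketch, ours: for driving frozen at $x$ the step map is $g(z) = x + \sqrt{(z-x)^2+4\,{\mathrm{d}}t}$, so $g'(z) = (z-x)/\sqrt{(z-x)^2+4\,{\mathrm{d}}t}$):

```python
import cmath

def sqrt_h(z):
    """Branch of the square root with values in the closed upper half-plane."""
    r = cmath.sqrt(z)
    return r if r.imag >= 0 else -r

def g_step_prime(z, x, dt):
    """Derivative in z of the one-step map g(z) = x + sqrt((z-x)^2 + 4 dt)."""
    w = z - x
    return w / sqrt_h(w * w + 4.0 * dt)

z, x, dt = -0.4 + 1.1j, 0.2, 1e-6
numeric = cmath.log(g_step_prime(z, x, dt)) / dt  # log g_0'(z) = 0
exact = -2.0 / (z - x) ** 2                       # claimed d/dt log g_t'(z) at t = 0
```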
In Appendix \[sec: commutation\] we show that for general boundary conditions the mean $M_{{\Omega}_t}$ defined as in (\[eq: ad hoc term\]) can depend on the full history of the Loewner chain $(g_s)_{0 \leq s \leq t}$ and not be determined by the domain ${\Omega}_t$ only. We remark also that the additional term (\[eq: ad hoc term 3\]) is what the Coulomb gas formalism of conformal field theory dictates in the presence of a background charge which modifies the central charge $c$ to its correct value $c(\kappa) = 1 - 6 \, (\frac{\kappa-4}{2 \sqrt{\kappa}})^2$. Basic equations imply coupling {#sec: coupling} ------------------------------ ### Definition of the free fields {#definition-of-the-free-fields .unnumbered} Let us now give a precise definition of our free fields $\Phi$. It is common to define them as random tempered distributions, although they are almost surely somewhat more regular objects. We denote by ${\mathcal{S}}$ the Schwartz class of functions of rapid decrease on ${\mathbb{C}}= {\mathbb{R}}^2$ and by ${{{\mathcal{S}}'}}$ the tempered distributions. Define the function $W : {\mathcal{S}}\rightarrow {\mathbb{C}}$ which will be the characteristic function of $\Phi$ $$W(f) \; = \; \exp \left( {\mathfrak{i}}\int_{\Omega}M(z) f(z) {\mathrm{d}}z - \frac{1}{2} \iint_{{\Omega}\times {\Omega}} f(z) C(z,w) f(w) \, {\mathrm{d}}z \, {\mathrm{d}}w\right) .$$ We clearly have $W(0)=1$. All of our choices of functions $M$ and $C$ will satisfy the following properties: - The function $M : {\Omega}\rightarrow {\mathbb{R}}$ is locally integrable and has at most polynomial growth at infinity; - The function $C$ is locally integrable and has at most polynomial growth at infinity; which imply that $W$ is continuous.
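For finitely many degrees of freedom, $W$ reduces to the familiar Gaussian characteristic function $\exp({\mathfrak{i}}\langle M,f\rangle - \frac{1}{2}\langle f, Cf\rangle)$, and the positive definiteness $\sum_{j,k}\zeta_j{\overline}{\zeta}_k W(f_j-f_k) \ge 0$ needed below for Minlos’ theorem can be observed in a toy computation (our illustration, with an arbitrary positive semi-definite matrix $C = A^T A$):

```python
import cmath
import random

def char_fn(f, M, C):
    """Finite-dimensional analogue of W: exp(i <M,f> - <f, C f>/2)."""
    d = len(f)
    mf = sum(M[i] * f[i] for i in range(d))
    cff = sum(f[i] * C[i][j] * f[j] for i in range(d) for j in range(d))
    return cmath.exp(1j * mf - 0.5 * cff)

rng = random.Random(0)
d, n = 3, 5
A = [[rng.uniform(-1.0, 1.0) for _ in range(d)] for _ in range(d)]
C = [[sum(A[k][i] * A[k][j] for k in range(d)) for j in range(d)] for i in range(d)]
M = [rng.uniform(-1.0, 1.0) for _ in range(d)]
fs = [[rng.uniform(-1.0, 1.0) for _ in range(d)] for _ in range(n)]
zs = [complex(rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)) for _ in range(n)]
# positive definiteness: sum over j,k of zeta_j * conj(zeta_k) * W(f_j - f_k)
s = sum(zs[j] * zs[k].conjugate()
        * char_fn([fs[j][i] - fs[k][i] for i in range(d)], M, C)
        for j in range(n) for k in range(n))
```

The sum $s$ is real and nonnegative for any such choice, by Bochner's characterization of characteristic functions.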
Furthermore, all of our choices of $C$ will have the property - For all $f_1, \ldots, f_n \in {\mathcal{S}}$ the $n \times n$ real matrix with entries $C_{j,k} = \iint f_j(z) C(z,w) f_k(w) \, {\mathrm{d}}z \, {\mathrm{d}}w$ is positive semi-definite so a standard argument shows that for all $\zeta_1, \ldots, \zeta_n \in {\mathbb{C}}$ and $f_1, \ldots, f_n \in {\mathcal{S}}$ we have $\sum_{j,k} \zeta_j {\overline}{\zeta}_k W(f_j-f_k) \geq 0$. These conditions guarantee, by Minlos’ theorem, that $W$ is indeed a characteristic function of a probability measure on ${{{\mathcal{S}}'}}$, which is our definition of the law of the massless free field with mean $M$ and covariance $C$. It is evident from the definition of $W$ that the free field is almost surely supported on ${\overline}{{\Omega}}$. ### Coupling {#coupling .unnumbered} \[thm: coupling\] Let $A \subset {\Omega}$ be compact and let $B \subset \Omega$ be an open neighborhood of $A$. Let $(g_t)_{t \geq 0}$ be a random Loewner chain of hulls $({K}_t)_{t \geq 0}$ and suppose that ${\sigma}$ is a stopping time for which ${\overline}{{K}}_{\sigma}\cap (B \cup {\left\{ {x_1, \ldots, x_n} \right\}}) = \emptyset$. Denote the tip of the hull at time $t$ by $\tilde{x}(t)$ and the complement by ${\Omega}_t = {\Omega}\setminus {K}_t$. Assume conditions (\[M-eq\]) and (\[C-eq\]). Then the random Loewner chain $(g_t)_{t \in [0,{\sigma}]}$ can be coupled with a free field ${\widetilde}{\Phi}$ defined on $A$ such that the following holds. - Let ${{\sigma'}}\leq {\sigma}$ be a stopping time. Conditionally on $(g_s)_{0 \leq s \leq {{\sigma'}}}$, the law of ${\widetilde}{\Phi}$ is the restriction to $A$ of the free field corresponding to the domain ${\Omega}_{{\sigma'}}$, that is the free field with mean $M_{({\Omega}_{{\sigma'}},\tilde{x}({{\sigma'}}),x_1,\ldots,x_n)}$ and covariance $C_{({\Omega}_{{\sigma'}},\tilde{x}({{\sigma'}}),x_1,\ldots,x_n)}$.
The theorem will be proved by showing that for any test function $f$ the expectation ${\mathsf{E}}_{\mathrm{GFF}} [\exp( {\mathfrak{i}}\,{\big<}{\widetilde}{\Phi}_t, f{\big>})]$ is a martingale, where ${\widetilde}{\Phi}_t$ has the law of the free field in ${\Omega}_t = {\Omega}\setminus {K}_t$. Denote by $M_t$ the mean associated to the domain ${\Omega}_t$ with marked points $\tilde{x}(t), x_1, \ldots, x_n$, and by $C_t$ the covariance associated to that domain. Define ${\widetilde}{\Phi}$ by sampling a free field in ${\Omega}_{\sigma}$ with mean and covariance $M_{\sigma}$ and $C_{\sigma}$, and then restricting to $A$. Given $f \in {\mathcal{S}}$, ${{\mathrm{supp}(f)}} \subset A$, we define first of all the process $$L_t \; = \; \int_A M_t(z) f(z) {\mathrm{d}}z .$$ By the assumption (\[M-eq\]), $(L_t)_{t \in [0,{\sigma}]}$ is a bounded continuous martingale. Its quadratic variation follows from assumption (\[C-eq\]): $${\big<}L, L {\big>}_t \; = \; \iint_{A \times A} f(z) \Big( C_0(z,w) - C_t(z,w) \Big) f(w) \, {\mathrm{d}}z \, {\mathrm{d}}w .$$ For any $f \in {\mathcal{S}}$ such that ${{\mathrm{supp}(f)}} \subset A$, define the random process $({\widetilde}{W}_t(f))_{t \in [0,{\sigma}]}$ by $${\widetilde}{W}_t(f) \; = \; \exp \left( {\mathfrak{i}}\int_A M_t(z) f(z) {\mathrm{d}}z - \frac{1}{2} \iint_{A \times A} f(z) C_t(z,w) f(w) \, {\mathrm{d}}z \, {\mathrm{d}}w \right) .$$ Note that ${\widetilde}{W}_t(f)$ is, up to a multiplicative constant, the exponential martingale $\exp \Big( {\mathfrak{i}}\, L_t + \frac{1}{2} {\big<}L,L {\big>}_t \Big)$, so in particular it is a bounded martingale. It is now easy to describe the law of the random distribution ${\widetilde}{\Phi}$ conditionally on $(g_s)_{0 \leq s \leq t}$. The law is encoded in the characteristic function ${\mathsf{E}}[ \exp( {\mathfrak{i}}\,{\big<}{\widetilde}{\Phi}, f {\big>}) \, | \, {\mathcal{F}}_t ]$.
At time $t = {\sigma}$ this is exactly the characteristic function of ${\widetilde}{\Phi}$, that is ${\widetilde}{W}_{\sigma}(f)$. By construction it is a bounded martingale and therefore coincides with $${\mathsf{E}}[ \exp( {\mathfrak{i}}\,{\big<}{\widetilde}{\Phi}, f {\big>}) \, | \, {\mathcal{F}}_t ] \; = \; {\widetilde}{W}_t(f) .$$ Since ${\widetilde}{W}_t$ is by construction the characteristic function of the free field with mean and covariance $M_t$ and $C_t$, the assertion follows. Hadamard’s variational formulas for Loewner chains {#sec: Hadamard} -------------------------------------------------- Hadamard’s formula gives the variation of the Green’s function in a smooth domain when the boundary changes in a smooth way. In this section we prove a version of Hadamard’s formula for Loewner chains. The need for this stems from the second basic condition for coupling: there we need the derivative of the Green’s functions in the domains ${\Omega}_t$ with respect to the time $t$ of the Loewner chain. \[theorem: Hadamard\] Let $(g_t)_{t \geq 0}$ be a Loewner chain in a simply or doubly connected domain as in [Section \[sec: Loewner chains\]]{}, and let ${\Omega}_t = {\Omega}_0 \setminus {K}_t$. Let $G_{{\Omega}_t}(z_1,z_2)$ be the Green’s function in ${\Omega}_t$ with zero Dirichlet boundary values. Then $$\label{eq: Hadamard thm} {\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} G_{{\Omega}_t}(z_1,z_2) \big|_{t=0} \; = \; - 2 \pi \; P_{{\Omega}_0}(X_0,z_1) \; P_{{\Omega}_0}(X_0,z_2) ,$$ where $P_{{\Omega}}$ is the Poisson kernel in ${\Omega}$. Fix the point $z_2$.
The difference $\Gamma_{z_2}(z_1):=G_{{\Omega}_0}(z_1,z_2)-G_{{\Omega}_t}(z_1,z_2)$ is harmonic in ${\Omega}_t$ as a function of $z_1$ and can thus be represented as the integral of its boundary values against the harmonic measure: $$\Gamma_{z_2}(z_1) \; = \; \int_{\partial {\Omega}_t} G_{{\Omega}_0}(z,z_2) \; {\mathrm{d}}{\omega}^{{\Omega}_t}_{z_1}(z) \; = \; \int_{{\partial}{K}_t} G_{{\Omega}_0}(z,z_2) \; {\mathrm{d}}{\omega}^{{\Omega}_t}_{z_1}(z), \label{Had_difference}$$ since the boundary values are zero everywhere but on $\partial {K}_t$. By conformal invariance we may assume that $X_0=0$, ${\Omega}_0 \subset {\mathbb{H}}$ and ${\Omega}_0$ coincides with the upper half-plane ${\mathbb{H}}$ in some neighborhood of $X_0=0$. If $x+iy = z \in \partial {K}_t$, then $$G_{{\Omega}_0}(z,z_2) \; = \; G_{{\Omega}_0}(x,z_2) + y \; {{\partial_{n}}}G_{{\Omega}_0}(x,z_2)+ o(y), \;y\rightarrow 0.$$ The first term on the right-hand side is equal to $0$, and the normal derivative of the Green’s function is the Poisson kernel $P_{{\Omega}_0}(x;z_2)$, which is roughly the same as $P_{{\Omega}_0}(0;z_2)$. More precisely, one has $$G_{{\Omega}_0}(z,z_2) \; = \; y \, P_{{\Omega}_0}(0;z_2)+ y \, O(x)+o(y).$$ Hence, (\[Had\_difference\]) reads $$\Gamma_{z_2}(z_1) \; \approx \; P_{{\Omega}_0}(0;z_2) \; \int_{\partial {K}_t} {\Im \mathrm{m } \, }(z) \; {\mathrm{d}}{\omega}^{{\Omega}_t}_{z_1}(z) \; =: \; P_{{\Omega}_0}(0,z_2) \; \Psi(z_1).$$ The notation “$\approx$” means that the ratio of the two expressions tends to $1$ as the size of the hull tends to zero. Now, take a small $r>0$ such that ${K}_t$ is inside the semicircle $T_r(0)$ of radius $r$ around $0$. Denote ${\Omega}^{(r)} := {\Omega}_0\setminus B_r(0)$. Write $\Psi(z_1)$ as $$\label{HadPsi} \Psi(z_1)=\int_{T_r(0)} \Psi(z) \; {\mathrm{d}}{\omega}^{{\Omega}^{(r)}}_{z_1}(z).$$ We are going to factor out a term that captures the dependence of the latter integral on $z_1$.
To this end, we apply the map $\psi_r(z):=z+\frac{r^2}{z}$, which maps $\mathbb{H} \setminus B_r(0)$ onto $\mathbb{H}$ (and ${\Omega}_0$ onto some domain $\psi_r({\Omega}_0)$). Now, conformal invariance of the harmonic measure yields $${\mathrm{d}}{\omega}^{{\Omega}^{(r)}}_{z_1}(z) \; = \; {\mathrm{d}}{\omega}^{\psi_r({\Omega}^{(r)})}_{\psi_r(z_1)}(\psi_r(z)) \; = \; P_{\psi_r({\Omega})}(\psi_r(z);\psi_r(z_1)) \; {\mathrm{d}}x .$$ Since $\psi_r(z)-z$ is small when $r$ is small, we have $$\label{PoissonClose} P_{\psi_r({\Omega})}(\psi_r(z);\psi_r(z_1)) \; \approx \; P_{\psi_r({\Omega})}(0;\psi_r(z_1)) \; \approx \; P_{{\Omega}}(0;z_1) .$$ Hence the equation (\[HadPsi\]) reads $$\Psi(z_1) \; \approx \; P_{{\Omega}}(0,z_1) \; \int^{\pi}_{\theta=0} \Psi(r e^{{\mathfrak{i}}\theta}) 2 \sin (\theta) r \; {\mathrm{d}}\theta.$$ The integral on the right-hand side is by definition equal to $\pi \, L^{\Omega}_{K_t,r}$, where $L^{\Omega}_{K_t,r}$ is the local half-plane capacity, see (\[lhcap\]), and we apply [Proposition \[prop: hloc\]]{} to finish the proof. It is easy to generalize this theorem to other boundary conditions. One possible generalization is as follows. Let the boundary of the domain ${\Omega}={\Omega}_0$ consist of several connected components, which in turn are divided into several arcs each. Without loss of generality, assume $\partial {\Omega}$ to be piecewise smooth, and let $\tilde{G}_{{\Omega}}(z_1,z_2)$ be the Green’s function with zero Dirichlet boundary conditions on some of those arcs and Neumann boundary conditions on the others. Let $({\Omega}_t)$ be a family of domains defined by a Loewner chain (the setup for the chain being analogous to that of Section \[sec: Loewner chains\], with residue of absolute value $2$ at the marked point).
We demand that the point of growth $X_0 \in \partial {\Omega}$ of the Loewner chain belong to the “Dirichlet” part of the boundary, and by definition $\tilde{G}_{{\Omega}_t}(z_1,z_2)$ assumes zero Dirichlet boundary values on ${K}_t$. Then we have the following proposition: \[prop: HadGeneral\] $$\label{Hadamard} {\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} \tilde{G}_{{\Omega}_t}(z_1,z_2) \big|_{t=0} \; = \; -2\pi \; \tilde{P}_{\Omega}(X_0; z_1) \, \tilde{P}_{\Omega}(X_0; z_2).$$ where $\tilde{P}_{\Omega}$ is the Poisson kernel with the same boundary conditions as $\tilde{G}$. The proof repeats that of [Theorem \[theorem: Hadamard\]]{} almost verbatim; there are only two places where we have used the specific nature of the boundary conditions far away from the point $X_0$. One is the continuity of the Poisson kernel with respect to small variations of the domain (equation (\[PoissonClose\])). This is also clear in the present case. The other is the definition of ${\mathrm{lhcap}}({K}_t)$, namely, the boundary conditions for $\Psi$ in (\[lhcap\]). It is clear, however, that they can be replaced by Neumann ones far from the point $X_0$, and the difference at distance $r$ from $X_0$ is of order $o(r^2)$, which is negligible when computing $\partial_t {\mathrm{lhcap}}({K}_t) |_{t=0}$. Various boundary conditions in simply connected domains {#sec: simply connected} ======================================================= SLE${}_4$ in the strip and Riemann-Hilbert boundary conditions {#sec: strip RH} -------------------------------------------------------------- In this subsection, we develop a coupling of SLE${}_4$ and GFF in the following situation. We take ${\Omega}$ to be a simply connected domain with three marked points $x_0,x_1,x_2$ on the boundary, dividing the boundary into three arcs $l_{12}, l_{01}, l_{20}$.
The mean $M (z) = M_{{\Omega};x_0,x_1,x_2}(z)$ of the field will be a harmonic function determined by the boundary conditions $$\begin{aligned} \label{eq: Riemann-Hilbert M} \left\{ \begin{array}{ll} M(z) = - \lambda & \textrm{ for $z \in l_{01}$} \\ M(z) = \lambda & \textrm{ for $z \in l_{20}$} \\ \alpha \, {{\partial_{n}}}M(z) + \beta \, {{\partial_{\mathbf \tau}}}M(z) = 0 & \textrm{ for $z \in l_{12}$.} \end{array} \right.\end{aligned}$$ The third condition can be reformulated in the following way: if $M(z) = {\Im \mathrm{m } \, }F(z)$, then on $l_{12}$ the derivative of $F$ in the direction of the boundary has a constant argument modulo $\pi$. If at some point of the arc $l_{12}$ the function $F$ vanishes, this implies that $F$ itself has the same argument (modulo $\pi$) on $l_{12}$. As the covariance $C(z_1,z_2) = C_{{\Omega};x_1,x_2}(z_1,z_2)$ we take the Green’s function in ${\Omega}$ having zero Dirichlet boundary conditions on $l_{20}$ and $l_{01}$ and the above type of Riemann-Hilbert boundary conditions on $l_{12}$: for all $z_2 \in {\Omega}$ we require $$\begin{aligned} \label{eq: RH for C} \left\{ \begin{array}{ll} C(\cdot,z_2) = 0 \quad & \textrm{ on $l_{20} \cup l_{01}$} \\ \alpha \, {{\partial_{n}}}C(\cdot,z_2) + \beta \, {{\partial_{\mathbf \tau}}}C(\cdot,z_2) \equiv 0 \quad & \textrm{ on $l_{12}$.} \end{array} \right.\end{aligned}$$ These boundary conditions are conformally invariant in the sense of [Equation (\[eq: strict CI\])]{}, so the essential part of establishing a coupling consists of verifying [Equations (\[eq: M-eq ci\]) and (\[eq: C-eq ci\])]{}. We already remark that Dirichlet and Neumann boundary conditions on $l_{12}$ are particular cases corresponding to vanishing $\alpha$ and vanishing $\beta$, respectively, and after having made the coupling explicit we return to comment on an interpolation between the two. 
A convenient choice of Loewner chain for the domains $({\Omega};x_0,x_1,x_2)$ with three marked boundary points is to keep $x_1$ and $x_2$ as fixed points. We therefore take the initial domain to be the strip, ${\Omega}_0 = {\mathbb{S}}$, with $x_1$ and $x_2$ at $+\infty$ and $-\infty$ respectively, and we use (\[eq: Loewner S\]) to encode the growth process. We furthermore choose $X_0=x_0=0$. Below $F(\cdot;x)$ denotes an analytic function in ${\mathbb{S}}$ whose imaginary part is the harmonic function $M_{{\mathbb{S}};x,+\infty,-\infty}$ determined by (\[eq: Riemann-Hilbert M\]). As the marked points $x_1$, $x_2$ are chosen to be fixed by the Loewner flow, the basic equations (\[eq: M-eq ci\]) and (\[eq: C-eq ci\]) take the simple form $$\begin{aligned} \label{MainEq1Strip} & {\Im \mathrm{m } \, }\left\{ 2 \; {\partial_{{x x}}} F(z;x) + \coth \big( \frac{z-x}{2} \big) \; {\partial_{{z}}} F(z;x) + D_t \; {\partial_{{x}}} F(z;x) \right\} \; = \; 0 \qquad \textrm{ and } \\ \label{MainEq2Strip} & {\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} C_{{\Omega}_t}(z_1,z_2) = - 4 \; {\Im \mathrm{m } \, }\left( {\partial_{{x}}} F(g_t(z_{1});X_t) \right) \; {\Im \mathrm{m } \, }\left( {\partial_{{x}}} F(g_t(z_{2});X_t) \right) .\end{aligned}$$ [Proposition \[prop: HadGeneral\]]{} combined with conformal invariance readily gives the expression $${\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} C_{{\Omega}_t}(z_1,z_2) \; = \; - 2 \pi \; \tilde{P}(X_t ; g_t(z_1)) \; \tilde{P}(X_t ; g_t(z_2)),$$ where $\tilde{P}$ is the Poisson kernel in $\mathbb{S}$ having the same boundary conditions as the Green’s function (\[eq: RH for C\]).
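As a purely illustrative numerical check, not used in any proof, the Hadamard-type formula can be tested in the most classical special case: Dirichlet boundary conditions in the upper half-plane ${\mathbb{H}}$, with the hull grown by the slit maps $g_t(z)=\sqrt{z^2+4t}$ (half-plane capacity $2t$). The Python sketch below compares a finite-difference derivative of the Green's function along this flow with $-2\pi P(0;z_1)P(0;z_2)$; all normalizations are the standard chordal ones.

```python
import cmath, math

def G(z, w):
    # Dirichlet Green's function of the upper half-plane H
    return -math.log(abs((z - w) / (z - w.conjugate()))) / (2 * math.pi)

def P(x, z):
    # Poisson kernel of H, normalized consistently with G
    return z.imag / (math.pi * abs(z - x) ** 2)

def g(z, t):
    # Loewner map removing a vertical slit at 0, half-plane capacity 2t
    w = cmath.sqrt(z * z + 4 * t)
    return w if w.imag >= 0 else -w  # pick the branch mapping into H

z1, z2, t = 1 + 1j, -0.5 + 2j, 1e-6
lhs = (G(g(z1, t), g(z2, t)) - G(z1, z2)) / t  # d/dt of G at t = 0
rhs = -2 * math.pi * P(0, z1) * P(0, z2)
print(abs(lhs - rhs))  # agreement up to the O(t) discretization error
```

With $t = 10^{-6}$ the two sides agree to the discretization accuracy.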
As before, [Equation (\[MainEq2Strip\])]{} therefore determines ${\partial_{{x}}} F(z;x)$ up to a sign and a constant $$\begin{aligned} \label{eq: F generic formula} {\partial_{{x}}} F(z,x) \; = \; \pm {\mathfrak{i}}\; \sqrt{\frac{\pi}{2}} \; \tilde{S}_x(z) + \textrm{real constant},\end{aligned}$$ where $\tilde{S}_x(z)$ is the Schwarz kernel corresponding to the present boundary conditions — an analytic function in ${\mathbb{S}}$ such that ${\Re \mathrm{e } \, }( \tilde{S}_x(z) ) = \tilde{P}_x(z)$. We should then verify (\[MainEq1Strip\]). Note first that our function $F$ is invariant under shifts, $${\partial_{{x}}} F + {\partial_{{z}}} F = 0,$$ and hence (\[MainEq1Strip\]) reads equivalently $$\label{StripHarmonic} {\Im \mathrm{m } \, }\left\{ 2 \; {\partial_{{x x}}} F(z;x) - \coth(\frac{z-x}{2}) \; {\partial_{{x}}} F(z;x) + D_t \; {\partial_{{x}}} F(z;x) \right\} \; = \; 0 .$$ This identity could be checked for a correctly chosen $D_t$ by a direct calculation using an explicit expression for $\tilde{S}$, but we prefer an argument which identifies the drift $D_t$ in a way that generalizes directly to other cases where explicit expressions may in practice be unavailable. A similar technique was used by Zhan in the context of loop-erased random walks in multiply connected domains [@Zhan-thesis]. The function on the left-hand side of (\[StripHarmonic\]) is harmonic in $\mathbb{S}$, it is zero on ${\mathbb{R}}$ and bounded apart from a possible singularity at $x$. On the upper part of the boundary, the first and third terms clearly satisfy the $(\alpha, \beta)$ Riemann-Hilbert boundary condition. In order to prove the same condition for the second term, recall that ${\partial_{{x}}} F$ was defined up to a real constant.
If we now choose that constant so that ${\Re \mathrm{e } \, }({\partial_{{x}}} F) = 0$ at $-\infty$, then clearly ${\partial_{{x}}} F = 0$ at $-\infty$, and the Riemann-Hilbert boundary condition for ${\Im \mathrm{m } \, }({\partial_{{x}}} F)$ can be stated in the form that $\arg {\partial_{{x}}} F$ modulo $\pi$ is fixed on ${\mathbb{R}}+ {\mathfrak{i}}\pi$. Since $\coth(\frac{z-x}{2})$ is purely real, multiplication by it does not affect this condition. It remains to prove that the singularities of the left-hand side of (\[StripHarmonic\]) at the point $x$ actually cancel out. Expansions at $x$ for the Schwarz kernel and the Loewner vector field give $$\begin{aligned} \partial_x F(z;x) \; = \; & \frac{C}{z-x} + C \, \mu + o(1) , \\ \coth(\frac{z-x}{2}) \; = \; & \frac{2}{z-x}+o(1),\end{aligned}$$ where $C$ and $\mu$ are real since the Schwarz kernel $\tilde{S}_x(z)$ is purely imaginary on the real line. Hence, the left-hand side of (\[StripHarmonic\]) is bounded if and only if $$D_t \; \equiv \; 2 \, \mu ,$$ which determines the drift $D_t$ of the driving process (\[eq: general driving process\]) and establishes the condition (\[eq: M-eq ci\]) for the correctly chosen drift. In order to find $\mu$ in terms of $\alpha$ and $\beta$, we need the explicit formula for the function ${\partial_{{x}}} F$. Note that for $-\frac{1}{2} < \theta < \frac{1}{2}$ the expression $$\begin{aligned} \label{eq: explicit RH Schwarz} \tilde{S}_x(z) \; = \; \frac{{\mathfrak{i}}}{2 \pi}\frac{e^{\theta (z-x)}}{\sinh(\frac{z-x}{2})}\end{aligned}$$ gives a Schwarz kernel in $\mathbb{S}$ satisfying $$\arg \partial_\tau \tilde{S} = \pi \theta \; \mod \pi \qquad \textrm{ on ${\mathbb{R}}+ {\mathfrak{i}}\pi$,}$$ so for such boundary conditions $\mu=\theta$. We have proven the following proposition.
Choose $\lambda = \sqrt{\pi/8}$ and $$\alpha = \cos(\pi \theta) , \qquad \beta = -\sin(\pi \theta)$$ and let $\Phi$ be the Gaussian free fields with means $M_{{\Omega}; x_0,x_1,x_2}(z)$ determined by boundary conditions (\[eq: Riemann-Hilbert M\]), and covariances $C_{{\Omega}; x_1, x_2}$ determined by (\[eq: RH for C\]). Then $\Phi$ are coupled, in the sense of [Theorem \[thm: coupling\]]{}, with the SLE${}_4(\rho)$ in ${\mathbb{S}}$ with $\rho = 2 \theta - 1$. Free fields and SLEs are conformally invariant if we allow for (random) time reparametrizations of the Loewner chains, so the given coupling works in any other domain $({\Omega}; x_0,x_1,x_2)$, too. Both cases $\theta \rightarrow \pm {\frac{1}{2}}$ correspond to Dirichlet boundary conditions also on $l_{12} = {\mathbb{R}}+ {\mathfrak{i}}\pi$. Correspondingly, the curves become just chordal SLE${}_4$ in the strip from $0$ to $\pm \infty$, and these cases can be seen as mere coordinate changes of the case of Schramm & Sheffield discussed in [Section \[sec: SSExample\]]{}. The symmetric value $\theta = 0$ corresponds to Neumann boundary conditions on ${\mathbb{R}}+ {\mathfrak{i}}\pi$. The drift $D_t$ then vanishes and the curve is a dipolar SLE${}_4$. It appears that this case was first conjectured in [@BBH-dipolar]. As $\theta$ varies from $-{\frac{1}{2}}$ to ${\frac{1}{2}}$, the free fields and the curves interpolate between the above cases. This was suggested in [@Kytola-SLE_kappa_rho], where $\tilde{S}_x(z)$ was also used to give a formula for left passage probability of the SLE${}_4(\rho)$ curve. One would like, as in the chordal case, to extend the coupling to $\kappa \neq 4$. Again, [Equation (\[MainEq2Strip\])]{} and Hadamard’s formula for $C$ leave us essentially no choice but ${\partial_{{x}}} F(z;x) = 2 {\mathfrak{i}}\, \lambda_\kappa \, \tilde{S}_x(z)$ with $\lambda_\kappa = \sqrt{\frac{\pi}{2 \kappa}}$. 
[Equation (\[MainEq1Strip\])]{} then fails, giving instead $${\Im \mathrm{m } \, }\left\{ \frac{\kappa}{2} \; {\partial_{{x x}}} F(z;x) + \coth \big( \frac{z-x}{2} \big) \; {\partial_{{z}}} F(z;x) + D_t \; {\partial_{{x}}} F(z;x) \right\} \; = \; (\kappa-4) \lambda_\kappa \; {\Im \mathrm{m } \, }\left( {\mathfrak{i}}\, {\partial_{{x}}} \tilde{S}_x(z) \right) \; \neq \; 0 .$$ As in (\[eq: ad hoc term\]), we could try to save the basic conditions by adding a non-conformally invariant term $E_t$ to the mean of the field: $M_{{\Omega}_t}(z) = {\Im \mathrm{m } \, }\big( F(g_t(z);X_t) + E_t(z) \big)$, now taken to be $$\begin{aligned} \label{eq: ad hoc strip} E_t(z) \; = \; (4-\kappa) \lambda_\kappa \; \int_0^t \left( {\mathfrak{i}}\, {\partial_{{x}}} \tilde{S}_{X_s}(g_s(z)) \right) \, {\mathrm{d}}s .\end{aligned}$$ One observes that $E_t$, thus defined, satisfies the following properties: - ${\Im \mathrm{m } \, }(E_t)$ has the same Riemann-Hilbert boundary conditions as ${\Im \mathrm{m } \, }(F)$ on ${\mathbb{R}}+ {\mathfrak{i}}\pi$ - ${\Im \mathrm{m } \, }(E_t) \equiv 0$ on ${\partial}{\Omega}_t \cap {\mathbb{R}}$ - If $z \in {\partial}{K}_t$ for some $t$, then ${\Im \mathrm{m } \, }(E_s(z)) = {\Im \mathrm{m } \, }(E_t(z))$ for all $s>t$ unless the point $z$ is swallowed by time $s$. Thus, the boundary value of ${\Im \mathrm{m } \, }E$ on the curve is determined at the instant the point becomes a part of the boundary. Note that this property also held for the winding boundary conditions (\[eq: ad hoc term 3\]) which generalized the chordal coupling to $\kappa \neq 4$. Despite the above properties, there is a crucial difference from the case of jump-Dirichlet boundary conditions: the mean (\[eq: ad hoc term\]) will be determined by the domain only if the commutation condition of Appendix \[sec: commutation\] is satisfied — and for [Equation (\[eq: ad hoc strip\])]{} it is not.
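As an aside, the defining properties of the explicit kernel (\[eq: explicit RH Schwarz\]), namely purely imaginary values on ${\mathbb{R}}$, constant argument $\pi\theta$ modulo $\pi$ on ${\mathbb{R}}+{\mathfrak{i}}\pi$, and the expansion coefficient $\mu = \theta$, can all be confirmed numerically. A short Python sketch, given for illustration only, with $w = z - x$:

```python
import cmath, math

theta = 0.3  # any fixed value in (-1/2, 1/2)

def S(w):
    # explicit Riemann-Hilbert Schwarz kernel of the strip, w = z - x
    return 1j * cmath.exp(theta * w) / (2 * math.pi * cmath.sinh(w / 2))

# purely imaginary on the lower boundary R (away from the pole at w = 0)
assert abs(S(1.7).real) < 1e-12
# constant argument pi*theta (mod pi) on the upper boundary R + i*pi
for s in (-2.0, 0.5, 3.0):
    assert abs(cmath.phase(S(s + 1j * math.pi)) % math.pi - math.pi * theta) < 1e-9
# second expansion coefficient at the pole: mu = theta
w = 1e-5
mu = ((S(w) - 1j / (math.pi * w)) * math.pi / 1j).real
print(mu)  # ~ theta
```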
More marked points ------------------ In this section, we show how to compute the driving process of the SLE${}_4$ variant coupled with a free field whose boundary conditions change also at additional marked points $-\infty+ {\mathfrak{i}}\, \pi = x_0, x_1, x_2, \dots,x_{n+1} = \infty+{\mathfrak{i}}\, \pi$ on the upper boundary of $\mathbb{S}$. In our example, the mean of the field will satisfy the following boundary conditions: $$\begin{aligned} \left\{ \begin{array}{ll} M(z,x,x_1,\dots,x_n)=-\lambda & \textrm{ for } z \in (x,+\infty) \\ M(z,x,x_1,\dots,x_n)=+\lambda & \textrm{ for } z \in (-\infty,x) \\ M(z,x,x_1,\dots,x_n) \textrm{ obeys BC${}_i$ } & \textrm{ for } z \in l_i := (x_i, x_{i+1}) \subset {\mathbb{R}}+ {\mathfrak{i}}\pi \end{array} \right.\end{aligned}$$ Here BC${}_i$ may stand either for a constant Dirichlet condition $M \equiv \lambda_i$ or for the zero Neumann boundary condition ${{\partial_{n}}}M \equiv 0$. The covariance $C(z_1,z_2;x,x_1,\dots,x_n)$ is taken to have zero Dirichlet boundary conditions on ${\mathbb{R}}$, and BC’${}_i$ on $l_i$, where BC’${}_i$ stands for the homogeneous condition corresponding to BC${}_i$. We have only given the mean and covariance in $({\mathbb{S}}; x, -\infty, x_1, \ldots, x_n, +\infty)$, but it is understood that the definitions are transported to other domains with marked points by [Equation (\[eq: strict CI\])]{}. The initial position of the growth is $X_0=0$. Let $\tilde{M}$ be the harmonic conjugate to $M$ normalized to be equal to $0$ at $-\infty$, and let $\tilde{S}_x(z)$ be the Schwarz kernel with BC’${}_i$ boundary conditions on the corresponding segments of the upper boundary and with the same normalization at $-\infty$.
We have the following proposition: For $\lambda=\sqrt{\frac{\pi}{8}}$, there exists a unique function $D(x,x_1,x_2,\dots,x_n)$ such that the SLE${}_4$ variant defined by (\[eq: Loewner S\]) with the driving process $${\mathrm{d}}X_t \; = \; 2 \; {\mathrm{d}}B_t + D(X_t, g_t(x_1), \dots, g_t(x_n)) \; {\mathrm{d}}t$$ is coupled with the GFF described above. The function $D(x,x_1,x_2,\dots,x_n)$ is given by $$\label{defineD} D(x,x_1,\dots,x_n) \; = \; 2 \; \mu(x, x_1, \dots, x_n) - 2 \; \sum_{i=0}^{n} {\partial_{{x_i}}} \tilde{M}(x,x,x_1,\dots,x_n) ,$$ where $\mu$ is the second coefficient in the expansion at $z=x$ of the Schwarz kernel $\tilde{S}_x(z)$, $$\mu := \frac{\pi}{{\mathfrak{i}}} \lim_{z\rightarrow x} \left( \tilde{S}_x(z) - \frac{{\mathfrak{i}}}{\pi(z-x)} \right) .$$ Hadamard’s formula implies that [Equation (\[eq: C-eq ci\])]{} will hold provided that when we write $M = {\Im \mathrm{m } \, }(F)$ the function $F$ satisfies ${\partial_{{x}}} F \, = \, 2 {\mathfrak{i}}\lambda \, \tilde{S}$, where $\tilde{S}$ is the Schwarz kernel with the corresponding boundary conditions. The first equation (\[eq: M-eq ci\]) now reads $$\label{MainEq1StripMP} {\Im \mathrm{m } \, }\left\{ 2\, {\partial_{{x x}}} F + \coth(\frac{z-x}{2}) \; {\partial_{{z}}} F + \sum_i \coth(\frac{x_i-x}{2}) \; {\partial_{{x_i}}} F + D_t \; {\partial_{{x}}} F \right\} \; = \; 0 .$$ Obviously the function in the braces in (\[MainEq1StripMP\]) is purely real when $z \in {\mathbb{R}}\setminus \{x\}$. We show that it satisfies the homogeneous BC’${}_i$ boundary conditions on the upper part of the boundary, and that for an appropriate choice of $D_t$ the singularities at $x$ cancel out.
It is clear that if the function ${\Im \mathrm{m } \, }(F)$ satisfies Dirichlet or zero Neumann boundary conditions on $l_i=(x_i,x_{i+1})\subset {\mathbb{R}}+ {\mathfrak{i}}\pi$, then ${\Im \mathrm{m } \, }({\partial_{{x x}}} F)$, ${\Im \mathrm{m } \, }({\partial_{{x_i}}} F)$ and ${\Im \mathrm{m } \, }({\partial_{{x}}} F)$ satisfy the corresponding homogeneous conditions. The function ${\partial_{{z}}} F$ is purely real where BC${}_i$ is Dirichlet and purely imaginary where BC${}_i$ is Neumann. Multiplication by a real constant does not affect homogeneous Dirichlet or Neumann boundary conditions, and multiplication by the real function $\coth(\frac{z-x}{2})$ does not change the argument of ${\partial_{{z}}} F$ modulo $\pi$. So all terms in (\[MainEq1StripMP\]) satisfy BC’${}_i$ on $l_i$. Our function $F$ is invariant under simultaneous translation of all arguments, i.e. $${\partial_{{z}}} F \;= \; - {\partial_{{x}}} F - \sum_i {\partial_{{x_i}}}F .$$ So, we rewrite the equation (\[MainEq1StripMP\]) as $$\label{StripFunnyEq} {\Im \mathrm{m } \, }\left\{ 2 \; {\partial_{{x x}}} F - \coth(\frac{z-x}{2}) \; {\partial_{{x}}} F - \coth(\frac{z-x}{2}) \; \sum_i {\partial_{{x_i}}} F + \sum_i \coth(\frac{x_i-x}{2}) \, {\partial_{{x_i}}} F + D_t \; {\partial_{{x}}} F \right\} \; = \; 0 .$$ Note that $\partial_{x_i} F$ might have a singularity at $x_i$ — however, it can only be of order $O((z-x_i)^{-1})$, so in the above expression these singularities cancel out, and the function is bounded near the $x_i$’s. It remains to handle the singularity at $x$. To do so, note that the expansion of ${\partial_{{x}}} F$ at $x$ is $${\partial_{{x}}} F \; = \; \frac{C}{z-x} + C \, \mu + o(1) ,$$ where $C$ is a real constant and $\mu$ is as specified in the statement. Hence the second-order singularities, which only come from the first two terms, cancel out. The first-order singularities come from the second, the third and the last term in (\[StripFunnyEq\]).
Clearly, there is a unique choice of $D_t$, specified in the statement of the proposition, for which they also cancel out. If one wishes, one may allow some of the BC${}_i$’s to be Riemann-Hilbert boundary conditions. The proof is similar to the above one, and we leave it to the reader. We do not focus on this case in order to avoid a discussion of the existence and positivity of the Green’s function with these boundary conditions and of the uniqueness of the solution to the boundary value problem. A comparison of two simple particular cases of the Proposition leads to a curious observation. Take the entire upper boundary with homogeneous Dirichlet or homogeneous Neumann boundary conditions. In both cases the drift $D_t$ vanishes. These two Gaussian free fields with mutually singular laws are therefore both coupled with dipolar SLE${}_4$. Couplings in doubly connected domains {#sec: doubly connected} ===================================== In this section we address the question of couplings of SLE and GFF in doubly connected domains. We first consider the punctured disc and exhibit a coupling of GFF with radial SLE, and then consider annuli ${\mathbb{A}}_p$. The lack of simple connectivity requires in many cases nontrivial monodromies of the free field — in order to obtain couplings with single-valued fields we need to compactify the field, that is, to consider a free field with values on a circle. In the physics literature, considerations of lattice model height functions in multiply connected domains or in the presence of vortices, and considerations of operator algebras and modular invariance of conformal field theories, have both led to the study of such compactified free fields. Throughout this chapter, $x$, $x_1$, $\dots$ denote points on the boundary of an annulus ${\mathbb{A}}_r$ for some $r>0$, and the derivatives ${\partial_{{x}}}$, ${\partial_{{x_1}}}$, $\dots$ will be taken in the counterclockwise direction, both for the inner and the outer boundary.
Compactified GFF and radial SLE${}_4$ {#sec: radial SLE} ------------------------------------- We first investigate the solutions to the basic equations (\[eq: M-eq ci\]) and (\[eq: C-eq ci\]) in the radial case. We will see that the solution to these equations will not be a harmonic function in the disc, but rather a harmonic function with monodromy. This situation has also been considered in [@Dubedat-SLE_and_free_field]. We use the Loewner chain (\[eq: Loewner D\]) to describe the growth process in ${\Omega}_0 = {\mathbb{D}}$, and for radial SLE${}_4$ we have the driving process $X_t = \exp( {\mathfrak{i}}2 B_t )$, so in the absence of marked points other than the tip of the growth the basic equations (\[eq: M-eq ci\]) and (\[eq: C-eq ci\]) read $$\begin{aligned} \label{MainEqRad1} & {\Im \mathrm{m } \, }\left\{ 2 \; {\partial_{{x x}}} F + z \frac{x+z}{x-z} \; {\partial_{{z}}} F \right\} \; = \; 0 \qquad \textrm{ and } \\ \label{MainEqRad2} & {\frac{{\mathrm{d}}}{{\mathrm{d}}{t}}} C_{{\Omega}_t}(z_1,z_2) \; = \; - 4 \; {\Im \mathrm{m } \, }\Big( {\partial_{{x}}} F(z_{1};x) \Big) \; {\Im \mathrm{m } \, }\Big( {\partial_{{x}}} F(z_{2};x) \Big) .\end{aligned}$$ As usual, with $C_{{\Omega}_t}$ the Dirichlet Green’s function, Hadamard’s formula suggests the solution to (\[MainEqRad2\]), up to the ambiguity of a sign and an additive real constant. Namely, we have $${\partial_{{x}}} F(z;x) \; = \; \pm 2 {\mathfrak{i}}\lambda \; S_x(z) + {\mathrm{const.}}\; = \; \pm {\mathfrak{i}}\; \frac{\lambda}{\pi} \; \frac{x+z}{x-z} + {\mathrm{const.}},$$ expressed in terms of the Schwarz kernel $S_x(z)$ in the unit disc. The sign of ${\partial_{{x}}} F$ is unimportant, but the constant will have to vanish. We warn the reader that here, for the first time, it is important not to confuse the derivatives ${\partial_{{x}}}$ w.r.t. the length parameter on the boundary with derivatives w.r.t. the position of the marked point $x$.
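As a quick numerical illustration (with the additive constant set to zero), one can check that this candidate for ${\partial_{{x}}} F$ indeed solves (\[MainEqRad2\]): a finite-difference derivative of the Dirichlet Green's function of ${\mathbb{D}}$ along the radial Loewner vector field $z\,\frac{x+z}{x-z}$ matches the right-hand side. The sketch assumes $\lambda=\sqrt{\pi/8}$ and plays no role in the argument:

```python
import math

lam = math.sqrt(math.pi / 8)
x = 1.0  # driving point on the unit circle

def G(z, w):
    # Dirichlet Green's function of the unit disc D
    return -math.log(abs((z - w) / (1 - z * w.conjugate()))) / (2 * math.pi)

def dxF(z):
    # candidate solution: 2*i*lambda times the Schwarz kernel of D
    return 1j * lam / math.pi * (x + z) / (x - z)

def vec(z):
    # radial Loewner vector field at the driving point x
    return z * (x + z) / (x - z)

z1, z2, t = 0.3 + 0.4j, -0.2 - 0.5j, 1e-6
lhs = (G(z1 + t * vec(z1), z2 + t * vec(z2)) - G(z1, z2)) / t
rhs = -4 * dxF(z1).imag * dxF(z2).imag
print(abs(lhs - rhs))  # agreement up to the O(t) discretization error
```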
The function $F$ can be taken invariant under rotations, and we get $$\label{rotinv} {\mathfrak{i}}\, z \, {\partial_{{z}}} F + {\partial_{{x}}} F \; = \; 0 .$$ Using this in [Equation (\[MainEqRad1\])]{}, and integrating explicitly gives $$F(z;x) \; = \; \frac{\lambda}{\pi} \big( 2\log (x-z) - \log (z) \big)$$ and $M(z;x) = \frac{\lambda}{\pi} \big( - \arg(z) + 2 \, \arg (x-z) \big)$. The function $M$ is not single-valued. However, all the formulas we have used make sense: as soon as we fix the branch of $M(z)$, the branch of $M(g_t(z))$ will also be fixed by continuity. We can thus define a multi-valued harmonic function $M(z)$ in the punctured disc ${\mathbb{D}}\setminus \{0\}$, such that it has monodromy $2 \lambda = \sqrt{\frac{\pi}{2}}$ around zero, and the boundary conditions have a jump of $2\lambda$ at the point $x$, being otherwise locally constant. Adding this function to a GFF in ${\mathbb{D}}$ with zero Dirichlet boundary values, one obtains a multi-valued GFF $\Phi$ of the same monodromy, which can be interpreted as a single-valued free field with values in ${\mathbb{R}}/ 2 \lambda {\mathbb{Z}}$. [Theorem \[thm: coupling\]]{} is not directly formulated for multivalued free fields, but this problem is superficial. It is easy to see that the corresponding single-valued free field on the universal cover of ${\mathbb{D}}\setminus {\left\{ {0} \right\}}$ (with periodic covariance, in particular) is coupled with the growth process obtained by lifting the radial SLE${}_4$ to the universal cover. Here and in the sequel we nevertheless prefer to talk about either a multivalued free field or a free field with values on the circle ${\mathbb{R}}/ 2 \lambda {\mathbb{Z}}$. Compactified GFF and standard annulus SLE${}_4$ ----------------------------------------------- A natural generalization of the radial SLE to annuli of finite modulus is the standard annulus SLE.
We will now show that at $\kappa=4$ it is coupled with a multivalued free field having Neumann boundary conditions on the inner boundary component of the annulus and jump-Dirichlet boundary conditions on the outer boundary component. The starting domain is taken to be ${\Omega}_0 = {\mathbb{A}}_p$, and we use the Loewner chain (\[eq: Loewner A\]) to describe the growth process. The conformal maps $g_t : {\mathbb{A}}_p \setminus {K}_t \rightarrow {\mathbb{A}}_{p-t}$ uniformize the complements of the hull to thinner annuli, so even with strict conformal invariance we have to specify the mean and the covariance for all annuli ${\mathbb{A}}_{p-t}$. We take the covariance $C_t = C_{{\mathbb{A}}_{p-t}}$ to be the Green’s function with Dirichlet boundary conditions on ${\left\{ {|z|=1} \right\}}$ and Neumann boundary conditions on ${\left\{ {|z|=e^{-p+t}} \right\}}$. The mean $M_t = M_{{\mathbb{A}}_{p-t}}$ will be represented as the imaginary part of a multivalued analytic function $F_t$ defined on ${\mathbb{A}}_{p-t}$. Correspondingly, the equation (\[eq: M-eq ci\]) should be generalized to $$\label{MainEq1Annulus} {\Im \mathrm{m } \, }\left\{ \frac{\kappa}{2} \; {\partial_{{x x}}} F_t(z;x) + V^{p-t}_x(z) \; {\partial_{{z}}} F_t(z;x) + {\partial_{{t}}} F_t(z;x) \right\} \; = \; 0 .$$ Recall that $V^{p-t}_x(z) = 2\pi \, z \, S_{x}^{p-t}(z)$ where $S_{x}^{p-t}(z)$ is the Schwarz kernel in the annulus ${\mathbb{A}}_{p-t}$, as specified in [Section \[sec: Loewner chains\]]{}. The equation (\[eq: C-eq ci\]) is exactly the same as before and Hadamard’s formula applies, so we find ${\Im \mathrm{m } \, }({\partial_{{x}}} F_t(z;x))$ to be equal to a multiple of the Poisson kernel in the annulus ${\mathbb{A}}_{p-t}$ with zero Dirichlet boundary conditions on the outer boundary and Neumann boundary conditions on the inner one. 
We can write $ {\partial_{{x}}} F_t(z;x) = 2 {\mathfrak{i}}\lambda \; \tilde{S}_x^{p-t}(z)$ where $\tilde{S}$ is the Schwarz kernel with the following boundary conditions: ${\Re \mathrm{e } \, }(\tilde{S}_x(z)) = \delta_x(z)$ on $\{|z|=1\}$ and ${\Im \mathrm{m } \, }(\tilde{S})=0$ on $|z|=e^{t-p}$. As in the radial case, we have rotational invariance (\[rotinv\]), which allows us to rewrite (\[MainEq1Annulus\]) as $$\begin{aligned} \label{MainEq1AnnulusBis} & {\Im \mathrm{m } \, }({H}) \; \equiv \; 0 , \qquad \textrm{ where } \\ \nonumber & {H}\; := \; 2{\mathfrak{i}}\; {\partial_{{x}}} \tilde{S}_x^{p-t}(z) - 2\pi \; S_x^{p-t}(z) \; \tilde{S}_x^{p-t}(z) + \frac{1}{2\lambda} \; \partial_t F_t(z;x) .\end{aligned}$$ We now prove that ${\Im \mathrm{m } \, }({H})$ is a harmonic function in the annulus satisfying - ${\Im \mathrm{m } \, }({H}) = 0$ on the outer part of the boundary - ${{\partial_{n}}}\, {\Im \mathrm{m } \, }({H}) = 0$ on the inner part of the boundary - ${\Im \mathrm{m } \, }({H})$ is bounded. This will imply the equation (\[MainEq1AnnulusBis\]), and consequently establish (\[M-eq\]) and (\[C-eq\]). The first two boundary conditions for ${\Im \mathrm{m } \, }(H)$ obviously hold on ${\partial}{\mathbb{A}}_{p-t} \setminus {\left\{ {x} \right\}}$: if $M_{{\Omega}_t}$ satisfies those conditions for all $t$, then so does its drift ${\Im \mathrm{m } \, }({H})$. So, we only need to prove that ${\Im \mathrm{m } \, }({H})$ has no singularity at $x$. Without loss of generality, assume $t=0$. The expansions of the two Schwarz kernels at $z = x$ coincide up to constant order $$\begin{aligned} S_x(z) \; = \; \frac{-x}{\pi (z-x)} - \frac{1}{2\pi} + {\mathcal{O} \left( {z-x} \right)} \qquad \textrm{ and } \qquad \tilde{S}_x(z) \; = \; \frac{-x}{\pi (z-x)} - \frac{1}{2\pi} + {\mathcal{O} \left( {z-x} \right)} , \end{aligned}$$ as follows from the condition that their real parts give the delta function on the outer boundary.
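In the disc, which is the $p \to \infty$ limit with Schwarz kernel $S_x(z)=\frac{1}{2\pi}\frac{x+z}{x-z}$, this expansion is in fact exact, which makes for a simple numerical illustration; for finite $p$ the same argument via the boundary delta function produces the same two coefficients. A small Python check, illustrative only:

```python
import cmath, math

def S_disc(x, z):
    # Schwarz kernel of the unit disc, the p -> infinity limit of S_x^p
    return (x + z) / (2 * math.pi * (x - z))

x = cmath.exp(0.7j)  # boundary point on {|z| = 1}
# Re S_x vanishes on the boundary away from x (the delta sits at x)
assert abs(S_disc(x, cmath.exp(2.0j)).real) < 1e-12
# the expansion S_x(z) = -x/(pi (z-x)) - 1/(2 pi) is exact in the disc
for eps in (1e-2, 1e-3, 1e-4):
    z = x * (1 - eps)  # approach x radially from inside the disc
    remainder = S_disc(x, z) + x / (math.pi * (z - x)) + 1 / (2 * math.pi)
    print(abs(remainder))  # ~ 0 up to rounding
```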
Plugging these into (\[MainEq1AnnulusBis\]) shows that the possible singularities at $x$ cancel out. We summarize the result of this subsection in the following proposition: \[prop: AnnulusNeumann\] For any $p>0$, let $M^p(z;x)$ be the unique multi-valued harmonic function in the annulus ${\mathbb{A}}_p$ satisfying the following properties: - $M^p(z;x)$ obeys zero Neumann boundary conditions on the inner boundary circle $\{|z|=e^{-p}\}$; - $M^p(z;x)$ has jump-Dirichlet boundary conditions on the outer boundary circle, namely, for any branch of $M^p(z;x)$ there exists $n\in {\mathbb{Z}}$ such that $M^p(x e^{\pm i\theta};x) \equiv \mp \lambda + 2\lambda n = \mp\sqrt{\frac{\pi}{8}}+2\sqrt{\frac{\pi}{8}} n$ for small positive $\theta$, and $M^p(z;x)$ is locally constant on $\{|z|=1\} \setminus \{x\}$. As the free field $\Phi$ in ${\mathbb{A}}_p$, $p>0$, take the sum $\Phi(z) = \Phi_0(z) + M^p(z;x)$, where $\Phi_0$ is a GFF in ${\mathbb{A}}_p$ with zero Neumann boundary conditions on $\{|z|=e^{-p}\}$ and zero Dirichlet boundary conditions on $\{|z|=1\}$. In other domains define the free field in the same manner, using conformal invariance. Then the standard annulus SLE${}_4$ is coupled with these free fields in the sense of [Theorem \[thm: coupling\]]{}. More marked points on the outer boundary ---------------------------------------- In this subsection, we extend the result above to the case of additional marked points $x_1,x_2,\dots$ on the outer boundary of the annulus ${\mathbb{A}}_p$. The free field will have locally constant Dirichlet boundary conditions with jumps $2\lambda=\sqrt{\frac{\pi}{2}},2\lambda_1,2\lambda_2,\dots$ at $x,x_1,x_2,\dots$. If we impose zero Neumann boundary conditions on the inner boundary, then, for any choice of $(\lambda_j)$, we find a variant of SLE${}_4$ which is coupled with this field. If the jumps add up to zero, one can also impose Dirichlet boundary conditions on the inner boundary.
In all cases, the drifts of the driving processes are computed explicitly. We start with the case of one additional marked point and Neumann boundary conditions on the inner boundary. Let $\tilde{S}_x(z)$ be as in the previous section. \[prop: AnnulusNeumannMore\] For any $p>0$, let $M^p(z;x,x_1)$ be the multi-valued harmonic function in the annulus ${\mathbb{A}}_p$ satisfying the following properties: - $M^p(z;x,x_1)$ obeys zero Neumann boundary conditions on the inner boundary $\{|z|=e^{-p}\}$; - $M^p(z;x,x_1)$ has jump-Dirichlet boundary conditions on the outer part of the boundary with jumps $-2\lambda=-\sqrt{\frac{\pi}{2}}$ at $x$ and $-2\lambda_1$ at $x_1$. Let $\Phi_0$ be a GFF in ${\mathbb{A}}_p$ with zero Neumann boundary conditions on $\{|z|=e^{-p}\}$ and zero Dirichlet boundary conditions on $\{|z|=1\}$. In ${\mathbb{A}}_p$, $p>0$, take the GFF as $\Phi(z) = M^p(z;x,x_1) + \Phi_0(z)$, and for other domains use conformal invariance. These free fields are coupled with an annulus SLE${}_4$ variant defined using (\[eq: Loewner A\]) with the driving process $${\mathrm{d}}X_t \; = \; {\mathrm{d}}W_{4t} - {\mathfrak{i}}\pi \rho \; \tilde{S}^{p-t}_{g_t(x_1)}(X_t) \; \tau_{X_t} \; {\mathrm{d}}t , $$ where $W$ stands for the Brownian motion on $\{|z|=1\}$ and $\rho = 2 \frac{\lambda_1}{\lambda}$. The letter $\rho$ is used here analogously to the case of ordinary SLE${}_\kappa(\rho)$. Indeed, the drift of the driving process of SLE${}_\kappa(\rho)$ in ${\mathbb{H}}$ is $\frac{\rho}{X_t - g_t(x_1)} = -{\mathfrak{i}}\pi \rho \, S^{{\mathbb{H}}}_{g_t(x_1)}(X_t)$, where $S^{\mathbb{H}}_x(z)$ is the Schwarz kernel in ${\mathbb{H}}$ with Dirichlet boundary conditions. The value $\rho = 2 \frac{\lambda_1}{\lambda}$ is also what one gets in the case of simply connected domains and piecewise constant Dirichlet boundary conditions with a jump of size $2 \lambda_1$ at a marked point. The proof essentially repeats the one of [Proposition \[prop: AnnulusNeumann\]]{}.
Let us stress the differences. The first basic equation (\[eq: M-eq ci\]) now reads $$\label{MainEq1AnMore} {\Im \mathrm{m } \, }\left\{ \frac{\kappa}{2} {\partial_{{x x}}} F + V_x(z) {\partial_{{z}}} F + {\partial_{{t}}} F + \frac{D_t}{{\mathfrak{i}}x} \; {\partial_{{x}}} F - 2\pi {\mathfrak{i}}S_x^{p-t}(x_1) \; {\partial_{{x_1}}} F \right\} \; = \; 0 ,$$ whereas the second one (\[eq: C-eq ci\]) is exactly the same as in [Proposition \[prop: AnnulusNeumann\]]{}. Hence we should choose ${\partial_{{x}}} F_t(z;x) = 2 {\mathfrak{i}}\lambda \, \tilde{S}^{p-t}_{x}(z)$ with the same $\tilde{S}$ (note that this identity holds true for the choice of $M$ made in the assertion). The rotational invariance (\[rotinv\]) now reads $${\mathfrak{i}}z {\partial_{{z}}} F + {\partial_{{x}}} F +{\partial_{{x_1}}} F = 0$$ and we rewrite (\[MainEq1AnMore\]) as $$\begin{aligned} \label{MainEq1AnMoreBis} {\Im \mathrm{m } \, }({H}) \; = \; & 0 , \qquad \textrm{ where } \\ \nonumber {H}\; := \; & 2{\mathfrak{i}}\; {\partial_{{x}}} \tilde{S}^{p-t}_x(z) - 2\pi \; S^{p-t}_x(z) \; \tilde{S}^{p-t}_x(z) \\ \nonumber & + \frac{1}{2 \lambda} \big( 2\pi {\mathfrak{i}}\; S^{p-t}_x(z) \; {\partial_{{x_1}}} F + {\partial_{{t}}} F_t + D_t \; {\partial_{{x}}} F - 2\pi {\mathfrak{i}}\; S_x^{p-t}(x_1) \; {\partial_{{x_1}}}F \big) .\end{aligned}$$ As before, it suffices to show that for an appropriate choice of $D_t$ the function ${H}$ is bounded and has zero imaginary part on the outer part of the boundary and constant real part on the inner one. The boundary conditions on ${\partial}{\mathbb{A}}_{p-t} \setminus \{x,x_1\}$ follow immediately. A first-order singularity at $x_1$ might be produced by the two terms containing ${\partial_{{x_1}}}F$, but we see that their contributions exactly cancel each other.
As we have seen in the proof of [Proposition \[prop: AnnulusNeumann\]]{}, $2{\mathfrak{i}}\, {\partial_{{x}}} \tilde{S}^{p-t}_x(z) - 2\pi \, S^{p-t}_x(z) \, \tilde{S}^{p-t}_x(z)$ is bounded near $x$, and hence there could only be a first-order singularity produced by $2\pi {\mathfrak{i}}\, S_x^{p-t}(z) {\partial_{{x_1}}} F + D_t {\partial_{{x}}} F$. The choice of $D_t$ made in the assertion is exactly the one that guarantees that it vanishes. We now consider the case of Dirichlet boundary conditions on the inner boundary circle. \[prop: AnnulusDirichletMore\] For any $p>0$, let $M^p(z;x,x_1)$ be the unique harmonic function in the annulus ${\mathbb{A}}_p$ satisfying the following boundary conditions: - $M^p(z;x,x_1)=\lambda=\sqrt{\frac{\pi}{8}}$ on the counterclockwise arc from $x_1$ to $x$; - $M^p(z;x,x_1)=-\lambda$ on the counterclockwise arc from $x$ to $x_1$; - $M^p(z;x,x_1)=\mu \in {\mathbb{R}}$ on the inner boundary circle $\{|z|=e^{-p}\}$. Let $\Phi_0(z)$ be a GFF in $A_p$ with zero Dirichlet boundary conditions. Then the GFF $M^p(z;x,x_1)+\Phi_0(z)$, transported to other domains using conformal invariance, is coupled in the sense of [Theorem \[thm: coupling\]]{} with the following annulus SLE${}_4$ variant. The driving process $X_t$ in (\[eq: Loewner A\]) is given by $$\begin{aligned} {\mathrm{d}}X_t \; = \; {\mathrm{d}}W_{4t} + D_t \, \tau_{X_t} \; {\mathrm{d}}t ,\end{aligned}$$ where $W_t$ stands for the Brownian motion on $\{|z|=1\}$, and the drift is explicitly $$\begin{aligned} D_t \; = \; - {\mathfrak{i}}\pi \rho \; S^{p-t}_{g_t(x_1)}(X_t) + \frac{2\pi}{p-t} \left( \frac{\mu}{2 \lambda} + \frac{{{L_{[X_t,g_t(x_1)]}}} - \pi}{2 \pi} \right)\end{aligned}$$ with $\rho = -2$ and ${{L_{[x,x_1]}}}$ denoting the length of the counterclockwise boundary arc from $x$ to $x_1$.
Since $\rho = -2 = \kappa - 6$, it is easy to show, using coordinate changes of the kind described in [@SW-coordinate_changes], that in the limit $p \rightarrow \infty$ one recovers a chordal SLE${}_4$ in ${\mathbb{D}}$ from $x$ to $x_1$. This limit therefore degenerates to the basic example of Schramm & Sheffield discussed in [Section \[sec: SSExample\]]{}. The annulus SLE with the above driving process was proposed in [@HBB-free_field_in_annulus], based on considerations of the regularized free field partition function with these boundary conditions. That article also computes the probabilities that the curve passes to the left or right of the inner boundary circle, and finds that there is a non-zero probability for the curve to touch the inner circle only if $-\lambda < \mu < \lambda$ — as anticipated for a discontinuity line of the free field between the levels $\pm \lambda$. For the above choice of $M$, if $F$ is a holomorphic function such that $M = {\Im \mathrm{m } \, }(F)$, we have that ${\Im \mathrm{m } \, }( {\partial_{{x}}} F )$ is equal to the Poisson kernel with Dirichlet boundary values $P_x^{p-t}(z) = {\Re \mathrm{e } \, }\left( S^{p-t}_x(z) + \frac{1}{2 \pi (p-t)} \log(z) \right)$, exactly as required by (\[eq: C-eq ci\]) and Hadamard’s formula. Observe that the harmonic conjugate of $M$ is not a single-valued function and one should be careful in defining ${\Re \mathrm{e } \, }(F)$. A rotationally invariant definition of $F$ is given by the following formula: $$F_t(z;x,x_1) \; := \; - \lambda {\mathfrak{i}}\; \int_x^{x_1} S_w^{p-t}(z) \, |dw| + \lambda {\mathfrak{i}}\; \int_{x_1}^x S_w^{p-t}(z) \, |dw| + \frac{ {\mathfrak{i}}\; \log(z/x) }{(t-p)} \left( \mu- 2\lambda \frac{\pi-{{L_{[x,x_1]}}}}{2\pi} \right) ,$$ the integrals being taken along the boundary in the counterclockwise direction.
We have the following expressions for derivatives of $F$: $$\begin{aligned} \label{partxF} {\partial_{{x}}} F \; = \; & 2 \lambda {\mathfrak{i}}\; \left( S_x^{p-t}(z) + R_1 \right) \\ \label{partx1F} {\partial_{{x_1}}} F \; = \; & 2\lambda {\mathfrak{i}}\; \left( - S_{x_1}^{p-t}(z) + R_2 \right), \qquad \text{ where } \\ \nonumber R_1(x,x_1,z,t) \; = \; & - {\mathfrak{i}}\; \frac{\frac{\mu}{2\lambda}}{t-p} + {\mathfrak{i}}\; \frac{\pi-{{L_{[x,x_1]}}}}{2\pi(t-p)} - \frac{\log (z/x)}{2 \pi (t-p)} \\ \nonumber R_2(x,x_1,z,t) \; = \; & \frac{\log\frac{z}{x}}{2\pi(t-p)}\end{aligned}$$ As in the proof of [Proposition \[prop: AnnulusNeumannMore\]]{} we write the first basic equation (\[M-eq\]) using the above expressions and rotational invariance as $$\begin{aligned} \label{MainEq1AnMoreBis1} {\Im \mathrm{m } \, }( {H}) \; = \; & 0 ,\qquad \text{ where } \\ \nonumber {H}\; := \; & 2 {\mathfrak{i}}\; {\partial_{{x}}} (S_x^{p-t}(z)+R_1) - 2 \pi S_x^{p-t}(z) (S_x^{p-t}(z) + R_1) \\ \nonumber & + 2 \pi \big( S_x^{p-t}(x_1) - S_x^{p-t}(z) \big) \big( -S_{x_1}^{p-t}(z) + R_2 \big) + \frac{1}{2 \lambda} \Big( {\mathfrak{i}}D_t \; (S_x^{p-t}+R_1) + {\partial_{{t}}} F \Big)\end{aligned}$$ Now the function ${H}$ is possibly multi-valued. To prove (\[MainEq1AnMoreBis1\]) we check that ${H}$ satisfies the following properties: - ${\Im \mathrm{m } \, }({H}) = 0$ on ${\partial}{\mathbb{A}}_{p-t} \setminus \{x,x_1\}$; - Any branch of ${\Im \mathrm{m } \, }({H})$ is bounded near $x$ and $x_1$. Since ${H}$ clearly cannot grow faster than linearly at infinity on the universal cover, these conditions guarantee (\[MainEq1AnMoreBis1\]). The first condition is justified as in the previous propositions. The singularities at $x_1$ clearly cancel out. It remains to handle possible singularities at $x$. 
As before, $2 {\mathfrak{i}}\, {\partial_{{x}}} (S_x^{p-t}(z)+R_1) - 2\pi \, S_x^{p-t}(z) \, S_x^{p-t}(z)$ is bounded near $x$, and three terms that have first-order singularities at $x$ remain in ${H}$: $$-2\pi \; S_x^{p-t}(z) \, R_1 + 2\pi \; S_x^{p-t}(z) \, \left( S_{x_1}^{p-t} - R_2 \right) + {\mathfrak{i}}\, D_t \, S_x^{p-t}(z).$$ We see that since $R_1+R_2$ does not contain $\log z$, the choice ${\mathfrak{i}}\, D_t = 2\pi(-S_{x_1}(x)+ (R_1 + R_2) |_{z=x})$ guarantees vanishing of the singularity for all branches of ${H}$, and we are done. The extension of [Propositions \[prop: AnnulusNeumannMore\] and \[prop: AnnulusDirichletMore\]]{} to the case of several marked points $x_1,x_2,\dots$ with jumps $2\lambda_1,2\lambda_2,\dots$ is straightforward; the proofs are literally the same. The drift term $D_t$ is just the sum $D_t = \sum_j \frac{\lambda_j}{\lambda} D^{j}_t$ where $D^{j}_t(x,x_j)$ is the drift we would have if we only had one jump of size $2\lambda_j$ at $x_j$. Indeed, one may observe that the procedure of determining the drift term for $F$ is in fact linear in $F$. One should remember, however, that in the Dirichlet case the construction only makes sense if all jumps add up to $0$, including the jump of size $2 \lambda = \sqrt{\frac{\pi}{2}}$ at $x$. Compactified GFF with a marked point on the inner boundary ---------------------------------------------------------- In this section we consider the case when an additional marked point $x_1$ lies on the inner part of the boundary. The mean of the field $M$ will be a multi-valued harmonic function obeying Dirichlet boundary conditions with jumps $-2\lambda$ both at $x$ and $x_1$. However, these conditions do not define $M$ completely: we should also specify the change of a fixed branch of the function $M$ along a radius, say from $e^{-p}$ to $1$. 
Let $\hat{M}^p(z)$ be the unique multi-valued harmonic function in ${\mathbb{A}}_p$ such that any continuous branch in any sector $\{ r e^{{\mathfrak{i}}\theta} \, : \, e^{-p} < r < 1 , \; \theta_1 < \theta < \theta_2 \}$ determined by angles $\theta_2 \in [0,2\pi[$ and $\theta_1 \in ]\theta_2-2\pi,\theta_2[$, has boundary values $\lambda \, ({\mathrm{sign}}(\theta) + 2 n)$, with $n \in {\mathbb{Z}}$ depending on the branch, at $z = e^{{\mathfrak{i}}\theta}$ and $z = e^{-p+{\mathfrak{i}}\theta}$. The function $M_t(z;x,x_1)$ in ${\mathbb{A}}_{p-t}$ is constructed by continuously moving the discontinuity points of the boundary conditions of $\hat{M}^{p-t}$ from $1$ to $x$ and from $e^{-p}$ to $x_1$. Hence, $M$ is in fact a multi-valued harmonic function in $z$ that depends on $x$, $x_1$, $t$ and the choice of $\arg x_1 - \arg x$. More precisely, represent $\hat{M}^p$ as the imaginary part of a multivalued analytic function $\hat{F}^p$, and $M_t$ as the imaginary part of $$\begin{aligned} F_t(z;\arg x,\arg x_1) \; = \; & \hat{F}^{p-t}(z) + 2 \lambda {\mathfrak{i}}\; \int_0^{\arg x} S_{e^{{\mathfrak{i}}\theta}}^{p-t}(z) \; {\mathrm{d}}\theta + 2 \lambda {\mathfrak{i}}\; \int_0^{\arg x_1} S^{{\mathrm{inv.}}; p-t}_{e^{{\mathfrak{i}}\theta+t-p}}(z) \; {\mathrm{d}}\theta \\ & \qquad -2 \lambda {\mathfrak{i}}\; \frac{ \log \frac{z}{x}}{ 2 \pi (t-p)} \; \arg x -2 \lambda {\mathfrak{i}}\; \frac{t-p - \log \frac{z}{x}}{2 \pi (t-p)} \; \arg x_1 .\end{aligned}$$ Here $S^{{\mathrm{inv.}};p}_{y}(z):=S^p_{e^{-p}/y}(e^{-p}/z)$. With this definition, the function $F$ is invariant under rotations. We will sometimes write it as a function of $x$ and $x_1$, where the branch of the argument will be clear from the context. We have the following proposition: \[prop: AnnulusJump\] Let $M$ be as above, and let $\Phi_0(z)$ be a GFF in ${\mathbb{A}}_p$ with zero Dirichlet boundary conditions. Consider the multi-valued GFF defined in ${\mathbb{A}}_p$ as $M^p(x,x_1,z)+\Phi_0(z)$ and in other domains by conformal invariance. 
It is coupled in the sense of [Theorem \[thm: coupling\]]{} with the annulus SLE${}_4$ variant whose driving process $X_t$ in (\[eq: Loewner A\]) satisfies $$\begin{aligned} {\mathrm{d}}X_t \; = \; & {\mathrm{d}}W_{4t} + D_t \, \tau_{X_t} \; {\mathrm{d}}t \qquad \textrm{ with} \\ D_t \; = \; & - 2 \pi {\mathfrak{i}}\left( S^{{\mathrm{inv.}}; p-t}_{g_t(x_1)}(X_t) - \frac{1}{2\pi} \right) + \frac{\arg g_t(x_1)-\arg X_t}{p-t} .\end{aligned}$$ The proof literally repeats that of [Proposition \[prop: AnnulusDirichletMore\]]{}. We now have, as in [Equations (\[partxF\]) and (\[partx1F\])]{}, $$\begin{aligned} {\partial_{{x}}} F \; = \; & 2 \lambda {\mathfrak{i}}\; \left( S_x^{p-t}(z)+R_1 \right) \\ e^{-p} \, {\partial_{{x_1}}} F \; = \; & 2\lambda {\mathfrak{i}}\; \left( S^{{\mathrm{inv.}}; p-t}_{x_1}(z) + R_2 \right) \qquad \textrm{, where} \\ R_1(x,x_1,z,t) \; = \; & -\frac{\log\frac{z}{x}}{2\pi(t-p)} + {\mathfrak{i}}\; \frac{(\arg x - \arg x_1) }{2 \pi (t-p)} \\ R_2(x,z,t) \; = \; & - \frac{1}{2\pi}\left(1-\frac{\log\frac{z}{x}}{t-p}\right) .\end{aligned}$$ and we find that the basic equations are verified provided that $${\mathfrak{i}}\, D_t \, = \, 2\pi \left( S^{{\mathrm{inv.}}}_{x_1}(x) + (R_1+R_2)|_{z=x} \right) .$$ One can write explicitly the stochastic differential equation satisfied by the process $\arg(g_t(x_1)) - \arg(X_t)$. It turns out to be a Brownian bridge which hits $0$ at time $t=p$. Therefore, at $t=p$, the curve hits $x_1$ with a winding determined by the initial choice of $\arg(x_1) - \arg(x)$. Generalizations to $\kappa \neq 4$ for Dirichlet boundary conditions {#sec: Dirichlet general kappa} -------------------------------------------------------------------- In the case of Dirichlet boundary conditions, one can generalize the previous couplings to $\kappa\neq 4$. 
As the example of [Section \[sec: SSExample\]]{} shows, the rule that associates a field to a domain is not conformally invariant: if we have a conformal map $\varphi:{\Omega}_1\rightarrow {\Omega}_2$, then $$\label{eq: preSchwarzian} M_{{\Omega}_1;x_1,x_2,\dots}(z) \; = \; M_{{\Omega}_2;\varphi(x_1),\varphi(x_2),\dots}(\varphi(z))+ \alpha_\kappa \; \arg \varphi'(z),$$ where $\alpha_\kappa = \frac{4-\kappa}{2 \sqrt{2\pi\kappa}}$ as in [Equation (\[eq: ad hoc term 3\])]{}. The covariance, however, is still the Dirichlet Green’s function. Consider the annulus ${\mathbb{A}}_p$ with two marked points $x\in\{z:|z|=1\}$, $x_1\in \partial {\mathbb{A}}_p$, and let $M^{p}_4(z,x,x_1)$ be one of the functions $M^{p}$ defined in [Proposition \[prop: AnnulusDirichletMore\]]{} or [Proposition \[prop: AnnulusJump\]]{}. We define $$M^{p}_{\kappa}(z,x,x_1) \; := \; \sqrt{\frac{4}{\kappa}}M^{p}_{4}(z,x,x_1) - \alpha_\kappa \; \arg z.$$ This is a multi-valued harmonic function (with a single-valued derivative); the monodromy is equal to $\big( \kappa - 6 \big) \lambda_\kappa $. For an arbitrary doubly-connected domain, we define the mean of the field by a conformal map to an annulus and the rule (\[eq: preSchwarzian\]); in particular, for ${\mathbb{A}}_p\backslash {K}_t$ we have $$\label{eq: MeanNot4} M^{{\mathbb{A}}_p\backslash {K}_t}_{\kappa}(z,x,x_1)\; = \; \sqrt{\frac{4}{\kappa}} M^{p-t}_{4} \big( g_t(z);g_t(x),g_t(x_1) \big) + \alpha_\kappa \; \Big( \arg g'_t(z)-\arg g_t(z) \Big).$$ We have the following proposition: A GFF defined as above (with marked point on the outer or inner boundary) is coupled with annulus SLE defined using (\[eq: Loewner A\]) with the driving process $${\mathrm{d}}X_t \; = \; {\mathrm{d}}W_{\kappa t} + D_t \, \tau_{X_t} \; {\mathrm{d}}t,$$ $D_t$ being the same as in [Proposition \[prop: AnnulusDirichletMore\]]{} or [Proposition \[prop: AnnulusJump\]]{}, correspondingly. 
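As a quick consistency remark before the proof (ours, not in the text): the coefficient $\alpha_\kappa$ vanishes exactly at $\kappa=4$, so the modified mean $M^{p}_{\kappa}$ reduces to $M^{p}_{4}$ there and the construction is compatible with the previous propositions. A one-line numerical check:

```python
# Illustrative numerical check (not from the paper): the winding-term coefficient
# alpha_kappa = (4 - kappa) / (2 * sqrt(2 * pi * kappa)) vanishes at kappa = 4,
# and changes sign between kappa < 4 and kappa > 4.
import math

def alpha(kappa):
    return (4.0 - kappa) / (2.0 * math.sqrt(2.0 * math.pi * kappa))

print([alpha(k) for k in (2.0, 4.0, 6.0)])
```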
The additional term $\alpha_\kappa \, (\arg g'_t(z)-\arg g_t(z))$ has finite variation, hence the proof will be the same as before (we have adjusted the coefficient in front of $M$ to compensate for the change of speed of $W_{\kappa t}$). Note, however, that without that term the proof of [Proposition \[prop: AnnulusDirichletMore\]]{} (respectively [Proposition \[prop: AnnulusJump\]]{}) would fail for $\kappa\neq 4$, because the coefficient in front of the first term of the definition of ${H}$ (see (\[MainEq1AnMoreBis1\])) changes from $2$ to $\frac{\kappa}{2}$, hence the second-order singularities at $x$ would not cancel out anymore. We now show that the additional term exactly compensates this effect, without destroying zero Dirichlet boundary conditions elsewhere. Simple geometric considerations show that ${\mathrm{d}}(\arg g'_t(z)-\arg g_t(z))=0$ when $g_t(z)\in\partial {\mathbb{A}}_{p-t}\backslash {\left\{ {X_t} \right\}}$. One has $$\begin{aligned} \partial_t \log g'_t(z) \; = \; & V'_{X_t}(g_t(z)) \; = \; 2\pi g_t(z)S'_{X_t}(g_t(z))+2\pi S_{X_t}(z) \qquad \textrm{ and }\\ \partial_t \log g_t(z) \; = \; & 2\pi S_{X_t}(z) .\end{aligned}$$ Recall the rotational invariance of the Schwarz kernel: $\partial_x S_x(z)+ {\mathfrak{i}}z \, S'_x(z)=0$, and the fact that the second-order singularity of ${H}$ comes from its first term $\frac{\kappa}{2} {\mathfrak{i}}\partial_{x}S^{p-t}_{x}(z)$. Comparing the coefficients finishes the proof. Non-commutation at $\kappa \neq 4$ for general boundary conditions {#sec: commutation} ================================================================== This appendix discusses a difference between Dirichlet boundary conditions and other boundary conditions concerning the couplings with SLEs at $\kappa \neq 4$. 
In the case of Schramm & Sheffield treated in [Section \[sec: SSExample\]]{}, as well as those of [Section \[sec: Dirichlet general kappa\]]{}, we have remarked that for the coupling with SLE variants with $\kappa \neq 4$, it suffices to modify the boundary conditions of the one-point function $M$ by a harmonic interpolation of the winding of the boundary. In other cases no such claims were made, and we now explain why these cases indeed do not admit a generalization of this sort. For the sake of concreteness we detail the argument only in the simplest case of combined jump-Dirichlet and Riemann-Hilbert boundary conditions, as treated in [Section \[sec: strip RH\]]{}. Recall that ${\partial_{{x}}} F$ is determined by (\[eq: C-eq ci\]) and the Hadamard formula. One then defines, as in (\[eq: ad hoc term\]), $$M_{{\Omega}_t}(z) = {\Im \mathrm{m } \, }\left( F(g_t(z);X_t) + E_t(z) \right) ,$$ which contains a process of finite variation $(E_t(z))_{t \geq 0}$, introduced in order to restore the martingale property of the mean (\[eq: M-eq ci\]) at the cost of relaxing strict conformal invariance. Concretely, $$\begin{aligned} \label{eq: ad hoc general} E_t(z) = \int_0^t {\Im \mathrm{m } \, }\Big( J_{X_s}(g_s(z)) \Big) \, {\mathrm{d}}s ,\end{aligned}$$ where $J_x(z)$ is a multiple of the derivative of the appropriate Schwarz kernel, see [Equations (\[eq: ad hoc term 2\]) and (\[eq: ad hoc strip\])]{}. A question naturally arises: is the modified formula for $M_{{\Omega}_t}$ consistent with having a function $M_{{\Omega};x,x_1, \ldots, x_n}$ associated to any domain with marked points? Does (\[eq: ad hoc general\]) depend on the full history $(g_s)_{s \in [0,t]}$ of the Loewner chain, or can it be expressed as a function of the domain ${\Omega}_t$ only, as is the case in (\[eq: ad hoc term 3\])? Imagine two different Loewner chains that in the end uniformize the same hull. 
The prototype is a hull $K = {K}_- \cup {K}_+$ consisting of two small pieces ${K}_+$, ${K}_-$ away from each other, located roughly at $\xi_+, \xi_- \in {\partial}{\Omega}_0$. We can uniformize $K$ by first uniformizing one piece and then what remains of the other. Suppose that the local half-plane capacities of ${K}_+$ and ${K}_-$ are ${\varepsilon}_+$ and ${\varepsilon}_-$, respectively. In the calculations below we keep track of terms of order ${\varepsilon}_\pm$ as well as the second order cross terms of type ${\varepsilon}_+ {\varepsilon}_-$, but we omit other second order and higher order terms. Write the uniformizing maps of complements of ${K}_\pm$ constructed by a Loewner chain (\[eq: Loewner\]) as $$\begin{aligned} g_\pm \; : \; & {\Omega}_0 \setminus {K}_\pm \rightarrow {\Omega}_0 \\ g_\pm(z) \; \approx \; & z \; + \; {\varepsilon}_\pm \, V_{\xi_\pm}(z) + \cdots .\end{aligned}$$ After having thus removed one piece ${K}_\pm$, we are left with the hull ${\widetilde}{K}_\mp = g_\pm({K}_\mp)$ whose local half-plane capacity is $${\widetilde}{{\varepsilon}}_\mp \; \approx \; {\varepsilon}_\mp \; |(g_\pm)'(\xi_\mp)|^2 + \cdots \; \approx \; {\varepsilon}_\mp + 2 {\varepsilon}_\pm {\varepsilon}_\mp \; (V_{\xi_\pm})'(\xi_\mp) + \cdots$$ and the hull ${\widetilde}{K}_\mp$ can be uniformized by a map constructed by the same Loewner fields $$\begin{aligned} {\widetilde}{g}_\mp \; : \; & {\Omega}_0 \setminus {\widetilde}{K}_\mp \rightarrow {\Omega}_0 \\ {\widetilde}{g}_\mp(z) \; \approx \; & z \; + \; {\widetilde}{{\varepsilon}}_\mp \, V_{{\widetilde}{\xi}_\mp}(z) + \cdots ,\end{aligned}$$ where ${\widetilde}{\xi}_\mp$ is the location of the hull ${\widetilde}{K}_\mp$ $${\widetilde}{\xi}_\mp = g_\pm(\xi_\mp) \approx \xi_\mp + {\varepsilon}_\pm \, V_{\xi_\pm}(\xi_\mp) + \cdots .$$ We then have two conformal maps $${\widetilde}{g}_+ \circ g_- \; \textrm{ and } \; {\widetilde}{g}_- \circ g_+ \quad : \quad {\Omega}_0 \setminus K \rightarrow {\Omega}_0 .$$ In practice 
the Loewner vector fields are chosen to be the unique ones preserving some normalization condition, so the two maps must actually be equal. In any case, we can ask whether formula (\[eq: ad hoc general\]) gives the same answer for the hull $K$ built in the two possible ways. The two expressions for $E_t$ are approximately $${\varepsilon}_\mp \; J_{\xi_\mp}(z) + {\widetilde}{{\varepsilon}}_\pm \; J_{{\widetilde}{\xi}_\pm} (g_\mp(z)) ,$$ so their difference can be expressed by expanding in all the small parameters $$\begin{aligned} \nonumber \Delta E_t \; \approx \; {\varepsilon}_+ {\varepsilon}_- \Big\{ & 2 \, (V_{\xi_-})'(\xi_+) \, J_{\xi_+}(z) - 2 \, (V_{\xi_+})'(\xi_-) \, J_{\xi_-}(z) \\ \nonumber & + V_{\xi_-}(\xi_+) \,{\partial_{{x}}} J_{\xi_+}(z) - V_{\xi_+}(\xi_-) \,{\partial_{{x}}} J_{\xi_-}(z) \\ \label{eq: commutation} & + V_{\xi_-}(z) \, {\partial_{{z}}} J_{\xi_+}(z) - V_{\xi_+}(z) \, {\partial_{{z}}} J_{\xi_-}(z) \Big\} + \cdots\end{aligned}$$ For $E_t$ to be a function of the hull $K$ only, and not of the history of the Loewner chain, it is necessary that $J$ satisfies the functional equation that makes the above expression vanish identically. As is already clear from considerations of the chordal SLE${}_\kappa$ coupling, in particular [Equation (\[eq: ad hoc term 3\])]{}, the function $J_x(z) = {\mathrm{const.}}\; \frac{1}{(z-x)^2}$ satisfies the appropriate equation with $V_x(z) = \frac{2}{z-x}$ chosen according to the Loewner flow (\[eq: Loewner\]). In the strip ${\mathbb{S}}$ we considered jump-Dirichlet boundary conditions on ${\mathbb{R}}$ and Riemann-Hilbert conditions on ${\mathbb{R}}+ {\mathfrak{i}}\pi$. We chose correspondingly $J_x(z) = {\mathrm{const.}}\; {\partial_{{x}}} \tilde{S}_x(z)$, where $\tilde{S}_x(z)$ is the Schwarz kernel (\[eq: explicit RH Schwarz\]) with the same boundary conditions. 
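The chordal-case claim is easy to verify symbolically. The following check (ours, not part of the paper; the symbols $\xi_\pm$, $z$ and the constant $c$ stand for the quantities appearing in (\[eq: commutation\])) confirms that the curly-bracketed combination vanishes identically for $V_x(z)=2/(z-x)$ and $J_x(z)=c/(z-x)^2$:

```python
# Symbolic verification (ours, not from the paper) that the chordal pair
#   V_x(z) = 2/(z - x),   J_x(z) = c/(z - x)**2
# makes the curly-bracketed combination in the commutation relation vanish
# identically, so that E_t depends on the hull only.
import sympy as sp

z, xp, xm, c = sp.symbols('z xi_p xi_m c')

def V(x, w):
    return 2 / (w - x)

def J(x, w):
    return c / (w - x)**2

bracket = (2 * sp.diff(V(xm, z), z).subs(z, xp) * J(xp, z)
           - 2 * sp.diff(V(xp, z), z).subs(z, xm) * J(xm, z)
           + V(xm, xp) * sp.diff(J(xp, z), xp)
           - V(xp, xm) * sp.diff(J(xm, z), xm)
           + V(xm, z) * sp.diff(J(xp, z), z)
           - V(xp, z) * sp.diff(J(xm, z), z))

print(sp.simplify(bracket))  # simplifies to 0
```

The cancellation is a genuine identity between rational functions, not an order-by-order coincidence, which is why the history-independence works in the chordal Dirichlet setting.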
A direct computation shows that with the appropriate Loewner vector field $V_{x}(z) = \coth(\frac{z-x}{2})$, this $J_x(z)$ produces a non-vanishing difference in (\[eq: commutation\]). It is therefore not possible to generalize the coupling of [Section \[sec: strip RH\]]{} to $\kappa \neq 4$ in a manner analogous to the Dirichlet case. Local half-plane capacity and [Proposition \[prop: hloc\]]{} {#sec: Loewner lemma} ============================================================ Most of the statements of [Proposition \[prop: hloc\]]{} follow from standard Loewner chain techniques (and may be found in the literature for all particular cases we deal with in this paper), so we leave the proof to the reader. We will only discuss the slightly less standard statement about the local half-plane capacity. Let ${\Omega}$ be a planar domain, $x\in\partial {\Omega}$, and let $\partial {\Omega}$ be analytic in a neighborhood of $x$. Let $({K}_t)$ be a family of growing compact hulls in $\overline{{\Omega}}$ with $\lim\limits_{t\rightarrow 0} {K}_t=\{x\}$. Henceforth we assume that $x=0$, the tangent to the boundary at $x$ is parallel to the real line, and that the inner normal at $0$ points to the upper half-plane. Let $\Psi$ be a harmonic function in ${\Omega}\backslash {K}_t$ with the following boundary conditions: - $\Psi(z)={\mathrm{dist}}(z,\partial {\Omega})$ on $\partial {K}_t$ - $\Psi(z)=0$ on $\partial {\Omega}\backslash {K}_t$ Let $r>0$ be small enough that ${\Omega}\cap\{|z|=r\}$ consists of one arc $\{r e^{{\mathfrak{i}}\theta} : \theta_1<\theta<\theta_2\}$. If the diameter of ${K}_t$ does not exceed $r$, define $$\label{lhcap} L^{{\Omega}}_{{K}_t,r}=\frac{1}{\pi}\int_{\theta_1}^{\theta_2} \Psi (r e^{{\mathfrak{i}}\theta}) r \sin (\theta) \; {\mathrm{d}}\theta.$$ If ${\Omega}=\mathbb{H}$, then $L^{{\Omega}}_{{K}_t,r}$ is well known to be the half-plane capacity of ${K}_t$. We will thus call this quantity the *local half-plane capacity at distance $r$*. 
It is easy to see that $L^{{\Omega}}_{{K}_t,r}$ satisfies the following two properties, which express its stability under slight changes of the domain: - Let $\phi:{\Omega}_1\rightarrow {\Omega}_2$ be a conformal map such that $\phi(0)=0$ and $\phi'(0)=1$. Then $|\frac{L^{{\Omega}_1}_{{K}_t,r}}{L^{{\Omega}_2}_{{K}_t,r}}-1|\leq C r$. - Let $R>r$, and ${\Omega}_1\cap B_R(0)={\Omega}_2\cap B_R(0)$. Then $|\frac{L^{{\Omega}_1}_{{K}_t,r}}{L^{{\Omega}_2}_{{K}_t,r}}-1| \leq C \frac{r}{R}$. These properties allow us to define $\partial_t {\mathrm{lhcap}}({K}_t)|_{t=0} := \lim\limits_{r\rightarrow 0}\partial_t L^{{\Omega}}_{{K}_t,r}$. It remains unchanged under conformal maps $\phi$ as in the first property above, and is equal to the derivative of the half-plane capacity of ${K}_t$ if $\partial {\Omega}$ coincides with the real line in some neighborhood of zero. Henceforth we assume without loss of generality that this is the case. Now, let ${K}_t$ be generated by a Loewner chain as in [Proposition \[prop: hloc\]]{}. We first claim that, when computing $\partial_t {\mathrm{lhcap}}({K}_t)|_{t=0}$, we can replace $\Psi (z)$ by ${\Im \mathrm{m } \, }z -{\Im \mathrm{m } \, }g_t(z)$ in the integral (\[lhcap\]). Indeed, the difference $H(z):= \Psi (z)-{\Im \mathrm{m } \, }z +{\Im \mathrm{m } \, }g_t(z)$ is a harmonic function; $H(z)\equiv 0$ on $\partial {\Omega}\cap B_R(0)$ for some constant $R$, and $|H(z)|\leq C t$ elsewhere on $\partial {\Omega}$. Hence $|H(r e^{{\mathfrak{i}}\theta})|\leq C\frac{r}{R} t$, and this is negligible when we take $r$ to zero. However, we have $$\partial_t \, {\Im \mathrm{m } \, }g_t(z)|_{t=0} \; = \; {\Im \mathrm{m } \, }(V_0(z)) \; = \; {\Im \mathrm{m } \, }(\frac{2}{z}) + O(1) \; = \;\partial_t \, {\Im \mathrm{m } \, }h_t(z)|_{t=0} +O(1), \qquad r\rightarrow 0,$$ where $h_t(z)$ is the conformal map from $\mathbb{H}\backslash {K}_t$ to $\mathbb{H}$ (i.e., the solution to the half-plane Loewner equation). 
Since in the half-plane the formula (\[lhcap\]) defines the half-plane capacity, we are done. [**Acknowledgements:**]{} Work supported by the Swiss National Science Foundation and . M. Bauer and D. Bernard, *[SLE]{}, [CFT]{} and zig-zag probabilities*, in *Proceedings of the conference ‘Conformal Invariance and Random Spatial Processes’, Edinburgh*, 2003. M. Bauer and D. Bernard, *2[D]{} growth processes: [SLE]{} and [L]{}oewner chains*, Phys. Rep. **432**(3-4), 115–222 (2006), [\[arXiv:math-ph/0602049\]]{}. M. Bauer, D. Bernard and L. Cantini, *Off-critical SLE(2) and SLE(4): a field theory approach*, J. Stat. Mech. P07037 (2009), [\[arXiv:0903.1023\]]{}. M. Bauer, D. Bernard and J. Houdayer, *Dipolar stochastic [L]{}oewner evolutions*, J. Stat. Mech. (3), P03001, 18 pp. (electronic) (2005). J. Cardy, *SLE(kappa,rho) and Conformal Field Theory*, 2004. J. Dub[é]{}dat, *SLE and the free field: [P]{}artition functions and couplings*, 2007. C. Hagendorf, M. Bauer and D. Bernard, *The Gaussian free field and SLE(4) on doubly connected domains*, 2010. N.-G. Kang and N. Makarov, in preparation. K. Kyt[ö]{}l[ä]{}, *On conformal field theory of [SLE]{}(kappa, rho)*, J. Stat. Phys. **123**(6), 1169–1181 (2006), [\[arXiv:math-ph/0504057\]]{}. G. F. Lawler, *Conformally invariant processes in the plane*, volume 114 of *Mathematical Surveys and Monographs*, American Mathematical Society, Providence, RI, 2005. N. Makarov and S. Smirnov, *Off-critical lattice models and massive SLEs*, in Proceedings of ICMP, 2009, to appear. N. Makarov and D. Zhan, in preparation. O. Schramm and S. Sheffield, *Harmonic explorer and its convergence to [${\rm SLE}\sb 4$]{}*, Ann. Probab. **33**(6), 2127–2148 (2005), [\[arXiv:math.PR/0310210\]]{}. O. Schramm and S. Sheffield, *Contour lines of the two-dimensional discrete Gaussian free field*, Acta Math. **202**, 21–137 (2009), [\[arXiv:math.PR/0605337\]]{}. O. Schramm and D. B. Wilson, *S[LE]{} coordinate changes*, New York J. Math. 
**11**, 659–669 (electronic) (2005), [\[arXiv:math.PR/0505368\]]{}. W. Werner, Random planar curves and [S]{}chramm-[L]{}oewner evolutions, in *Lectures on probability theory and statistics*, volume 1840 of *Lecture Notes in Math.*, pages 107–195, Springer, Berlin, 2004. D. Zhan, *Random Loewner Chains in Riemann Surfaces*, PhD thesis, California Institute of Technology, 2004. D. Zhan, *Stochastic [L]{}oewner evolution in doubly connected domains*, Probab. Th. Rel. Fields **129**(3), 340–380 (2004), [\[arXiv:math/0310350\]]{}. [^1]: `Konstantin.Izyurov@unige.ch` [^2]: `Kalle.Kytola@unige.ch`
--- author: - 'A. Epitropakis' - 'I. E. Papadakis' - 'M. Dovčiak' - 'T. Pecháček' - 'D. Emmanoulopoulos' - 'V. Karas' - 'I. M. M$^{\mathrm{c}}$Hardy' bibliography: - 'refs.bib' date: 'Received .. ...... 2015; accepted .. ...... 2015' title: 'Theoretical modelling of the AGN iron line vs continuum time-lags in the lamp-post geometry' --- [Theoretical modelling of time-lags between variations in the Fe K$\alpha$ emission and the X-ray continuum might shed light on the physics and geometry of the X-ray emitting region in active galaxies (AGN) and X-ray binaries. We here present the results from a systematic analysis of time-lags between variations in two energy bands ($5-7$ vs $2-4\,\mathrm{keV}$) for seven X-ray bright and variable AGN.]{} [We estimate time-lags as accurately as possible and fit them with theoretical models in the context of the lamp-post geometry. We also constrain the geometry of the X-ray emitting region in AGN.]{} [We used all available archival *XMM-Newton* data for the sources in our sample and extracted light curves in the $5-7$ and $2-4\,\mathrm{keV}$ energy bands. We used these light curves and applied a thoroughly tested (through extensive numerical simulations) recipe to estimate time-lags that have minimal bias, approximately follow a Gaussian distribution, and have known errors. Using traditional $\chi^2$ minimisation techniques, we then fitted the observed time-lags with two different models: a phenomenological model where the time-lags have a power-law dependence on frequency, and a physical model, using the reverberation time-lags expected in the lamp-post geometry. The latter were computed assuming a point-like primary X-ray source above a black hole surrounded by a neutral and prograde accretion disc with solar iron abundance. 
We took all relativistic effects into account for various X-ray source heights, inclination angles, and black hole spin values.]{} [Given the available data, time-lags between the two energy bands can only be reliably measured at frequencies between $\sim5\times10^{-5}\,\mathrm{Hz}$ and $\sim10^{-3}\,\mathrm{Hz}$. The power-law and reverberation time-lag models can both fit the data well in terms of formal statistical characteristics. When fitting the observed time-lags to the lamp-post reverberation scenario, we can only constrain the height of the X-ray source. The data require, or are consistent with, a small ($\lesssim10$ gravitational radii) X-ray source height.]{} [In principle, the $5-7\,\mathrm{keV}$ band, which contains most of the Fe K$\alpha$ line emission, could be an ideal band for studying reverberation effects, as it is expected to be dominated by the X-ray reflection component. We here carried out the best possible analysis with *XMM-Newton* data. Time-lags can be reliably estimated over a relatively narrow frequency range, and their errors are rather large. Nevertheless, our results are consistent with the hypothesis of X-ray reflection from the inner accretion disc.]{} Introduction {#sec1} ============ According to the currently accepted paradigm, active galactic nuclei (AGN) contain a central, super-massive ($M_{\mathrm{BH}}\sim10^{6-9}\,\mathrm{M}_{\odot}$) black hole (BH), onto which matter accretes in a disc-like configuration. In the standard $\alpha$-disc model , this accretion disc is optically thick and releases part of its gravitational energy in the form of black-body radiation, which peaks at optical to ultra-violet wavelengths. A fraction of these low-energy thermal photons is assumed to be Compton up-scattered by a population of high-energy ($\sim100\,\mathrm{keV}$) electrons, which is often referred to as the corona. 
The Compton up-scattered disc photons form a power-law spectrum that is observed in the X-ray spectra of AGN at energies $\sim2-10\,\mathrm{keV}$ [e.g. @1991ApJ...380L..51H]. We here refer to this source as the X-ray source and to its spectrum as continuum emission. Depending on the X-ray source and disc geometry, a significant amount of continuum emission may illuminate the disc and be reflected towards a distant observer. The strongest observable features of such a reflection spectrum from neutral material are the fluorescent Fe K$\alpha$ emission line at $\sim6.4\,\mathrm{keV}$ and the so-called Compton hump, which is an excess of emission at energies $\sim10-30\,\mathrm{keV}$ [e.g. @1991MNRAS.249..352G]. Additionally, if the disc is mildly ionised, an excess of emission at energies $\sim0.3-1\,\mathrm{keV}$ can be observed [e.g. @2005MNRAS.358..211R]. In addition to these spectral features, the X-ray reflection scenario also predicts unique timing signatures. For example, it has been shown that X-ray reflection should leave its imprint in the X-ray power spectra. Owing to X-ray illumination, the observed power spectra should show a prominent dip at high frequencies, and an oscillatory behaviour, with decreasing amplitude, at even higher frequencies. These reverberation echo features should be more prominent in energy bands where the reflection component is more pronounced. Furthermore, as a result of the different light travel paths between photons arriving directly at a distant observer and those reflected off the surface of the disc, variations in the reprocessed disc emission are expected to be delayed with respect to continuum variations. The magnitude of these delays will depend on the size and location (with respect to the disc) of the X-ray source, the viewing angle, and the mass and spin of the BH. Hints for such reverberation delays were first reported by @2007MNRAS.382..985M in Ark 564. 
The first statistically robust detection was later reported by @2009Natur.459..540F in 1H 0707–495, where variations in the $0.3-1\,\mathrm{keV}$ band (henceforth, the soft band) were found to lag behind variations in the $1-4\,\mathrm{keV}$ band by $\sim30\,\mathrm{sec}$ on timescales shorter than $\sim30\,\mathrm{min}$. The discovery of these time-lags, commonly referred to as soft lags in the literature, has triggered a significant amount of research over the past few years. Soft lags have been discovered in $\sim20$ AGN so far. A growing number of AGN show evidence of reverberation time-lags between the Fe K$\alpha$ emission line and the continuum [e.g. @2012MNRAS.422..129Z; @2013MNRAS.428.2795K; @2013MNRAS.430.1408K; @2013MNRAS.434.1129K; @2013ApJ...767..121Z; @2014MNRAS.440.2347M], and between the Compton hump and the continuum [e.g. @2014ApJ...789...56Z; @2015MNRAS.446..737K]. Detecting them is a particularly difficult task because of the low sensitivity of most current detectors and the intrinsically low brightness of AGN at Fe K$\alpha$ line and Compton hump energies. Theoretical modelling of X-ray time-lags can elucidate the physical and geometrical nature of the X-ray emitting region in AGN. This requires knowledge of how the disc responds to the continuum emission, and the construction of theoretical time-lag spectra, which can then be fitted to the observed ones. Initial modelling attempts were based on the assumption that this response is a simple top-hat function [e.g. @2011MNRAS.412...59Z; @2011MNRAS.416L..94E]. @2012MNRAS.420.1145C were the first to consider a more realistic scenario, in which relativistic effects and a moving X-ray source were taken into account to quantify the response of the disc. They deduced that, for 1H 0707–495, a more complex physical model is required to explain both the source geometry and intrinsic variability. 
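The tens-of-seconds magnitude of the soft lags quoted above is consistent with light-crossing times close to the BH. A back-of-the-envelope estimate (ours, for illustration; the adopted mass is the rough few-$10^6\,\mathrm{M}_{\odot}$ value for 1H 0707–495):

```python
# Back-of-the-envelope reverberation timescale (illustrative; the mass value is
# an assumption, roughly appropriate for 1H 0707-495). The light-crossing time
# of one gravitational radius, r_g / c = G * M_BH / c**3, sets the lag scale
# for a lamp-post source a few r_g above the disc.
G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8            # speed of light [m/s]
M_sun = 1.989e30       # solar mass [kg]

def rg_over_c(mass_in_msun):
    """Light-crossing time of one gravitational radius, in seconds."""
    return G * mass_in_msun * M_sun / c**3

tau = rg_over_c(2e6)   # about 10 s per r_g for M_BH ~ 2e6 M_sun
print(f"r_g/c = {tau:.1f} s; a source at h ~ 3 r_g gives delays of ~{3 * tau:.0f} s")
```

A source height of a few $r_{\mathrm{g}}$ thus naturally reproduces $\sim30\,\mathrm{sec}$ delays, which is why the lag amplitude is a direct handle on the source geometry.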
More recently, @2013MNRAS.430..247W considered a variety of different geometries for the primary X-ray source and deduced that, in 1H 0707–495, it has a radial extent of $\sim35r_{\mathrm{g}}$ (where $r_{\mathrm{g}}\equiv GM_{\mathrm{BH}}/c^2$ is the gravitational radius) and is located at a height of $\sim2r_{\mathrm{g}}$ above the disc plane. @2014MNRAS.439.3931E [; E14 hereafter] were the first to perform systematic model fitting of the time-lags between the $0.3-1$ and $1.5-4\,\mathrm{keV}$ bands (henceforth, the soft excess vs continuum time-lags) for 12 AGN. They assumed the X-ray source to be point-like and located above the BH, and calculated the response of the disc taking all relativistic effects into account. They deduced that the average X-ray source height is $\sim4r_{\mathrm{g}}$ with little scatter. @2014MNRAS.438.2980C [; C14 hereafter] were the first to model the time-lags between the $5-6\,\mathrm{keV}$ (which contains most of the photons from the red wing of a relativistically broadened Fe K$\alpha$ line) and $2-3\,\mathrm{keV}$ bands in the AGN NGC 4151. They used a similar procedure to E14, and deduced that the X-ray source height is $\sim7r_{\mathrm{g}}$, while the viewing angle of the system is $<30^{\circ}$. More recently, @2015MNRAS.452..333C [; CY15, hereafter] simultaneously fitted, for the first time, the $4-6.5$ vs $2.5-4\,\mathrm{keV}$ time-lags and the $2-10\,\mathrm{keV}$ spectrum of Mrk 335. They found that the X-ray source is located very close to the central BH, at a height of $\sim2r_{\mathrm{g}}$. Our main aim is to study the iron line vs continuum time-lag spectra (hereafter, the iron line vs continuum time-lags), within the context of the lamp-post geometry, similarly to E14, C14, and CY15. 
To this end, we chose the $5-7\,\rm{keV}$ band as representative of the energy band where most of the iron line photons will be (henceforth, the iron line band), and the $2-4\,\rm{keV}$ band as the energy band where the primary X-ray continuum dominates (henceforth, the continuum band). In our case, the exact choice of these two energy bands is relatively unimportant since, contrary to previous works (with the exception of CY15), we take into account the full disc reflection spectrum in both the iron line and continuum bands when constructing the theoretical lamp-post time-lag models, which we subsequently fitted to the observed iron line vs continuum time-lag spectra. Our sample consists of seven AGN. We chose these objects because they are X-ray bright and have been observed many times by *XMM-Newton*. We used all the existing *XMM-Newton* archival data for these objects to estimate their iron line vs continuum time-lags. Our work improves significantly on the estimation of time-lags. We relied on the results of Epitropakis & Papadakis (2016; hereafter EP16) to calculate time-lag estimates that are minimally biased, have known errors, and are approximately distributed as Gaussian variables. These properties render them appropriate for model fitting using traditional $\chi^2$ minimisation techniques. Our results indicate that the data are consistent with a reverberation scenario, although the quality of the data is not high enough to estimate the various model parameters with high accuracy, except for the X-ray source height.

Observations and data reduction {#sec2}
===============================

Table\[table1\] lists the details of the *XMM-Newton* observations we used. Columns1–4 show the source name, mass of the central BH in units of $10^6\,\mathrm{M}_{\odot}$, identification number (ID) of each observation, and net exposure in units of $\mathrm{ks}$, respectively.
  ------------- ------------------------------ ------------ -------
  (1)           (2)                            (3)          (4)
  Source        $M_{\mathrm{BH}}$              Obs. ID      Exp.
                $(10^6\,\mathrm{M}_{\odot})$                (ks)
  1H 0707–495   $2.3\pm0.7\,^a$                110890201    40.7
                                               148010301    78.1
                                               506200201    38.7
                                               506200301    38.7
                                               506200401    40.6
                                               506200501    40.9
                                               511580101    121.6
                                               511580201    102.1
                                               511580301    104.1
                                               511580401    101.8
                                               554710801    96.1
                                               653510301    113.8
                                               653510401    125.7
                                               653510501    116.9
                                               653510601    119.5
  MCG–6-30-15   $5.1^{+3.8}_{-2.4}\,^b$        693781201    127.2
                                               693781301    129.7
                                               693781401    48.5
                                               111570101    33.1
                                               111570201    53.0
                                               029740101    80.6
                                               029740701    122.5
                                               029740801    124.1
  Mrk 766       $1.8^{+1.6}_{-1.4}\,^c$        109141301    116.9
                                               304030101    95.1
                                               304030301    98.5
                                               304030401    93.0
                                               304030501    74.7
                                               304030601    85.2
                                               304030701    29.1
  NGC 4051      $1.7^{+0.6}_{-0.5}\,^d$        109141401    103.0
                                               157560101    50.0
                                               606320101    45.3
                                               606320201    42.0
                                               606320301    21.1
                                               606320401    18.9
                                               606321301    30.2
                                               606321501    34.0
                                               606321601    41.5
                                               606321701    38.4
                                               606321801    18.8
                                               606322001    22.1
                                               606322101    29.2
                                               606322201    30.8
                                               606322301    42.3
  Ark 564       $2.3\pm0.1\,^e$                206400101    98.7
                                               670130201    59.1
                                               670130301    55.5
                                               670130401    56.7
                                               670130501    66.9
                                               670130601    57.0
                                               670130701    43.5
                                               670130801    57.8
                                               670130901    55.5
                                               006810101    10.6
  NGC 7314      $0.8\pm0.1\,^f$                0111790101   43.3
                                               0311190101   77.5
                                               0725200101   124.7
                                               0725200301   130.6
  Mrk 335       $28\pm6\,^g$                   101040101    31.6
                                               306870101    122.5
                                               510010701    16.8
                                               600540501    36.9
                                               600540601    112.3
  ------------- ------------------------------ ------------ -------

We processed data from the *XMM-Newton* satellite using the Scientific Analysis System [SAS, v. 14.0.0; @2004ASPC..314..759G]. We only used EPIC-pn data. Source and background light curves were extracted from circular regions on the CCD, with the former having a fixed radius of 800 pixels ($40^{\prime\prime}$) centred on the source coordinates listed on the NASA/IPAC Extragalactic Database.
The positions and radii of the background regions were determined by placing them sufficiently far from the location of the source, while remaining within the boundaries of the same CCD chip. The source and background light curves were extracted in the iron line and continuum bands with a bin size of $100\,\mathrm{sec}$, using the SAS command evselect. We included the criteria PATTERN==0–4 and FLAG==0 in the extraction process, which select only single- and double-pixel events and reject bad pixels from the edges of the detector CCD chips. Periods of high solar flaring background activity were determined by observing the $10-12\,\mathrm{keV}$ light curves (which contain very few source photons) extracted from the whole surface of the detector, and subsequently excluded during the source and background light curve extraction process. We checked all source light curves for pile-up using the SAS task epatplot and found that only observations 670130201, 670130501, and 670130901 of Ark 564 are affected. For those observations we used annular instead of circular source regions with inner radii of 280, 200, and 250 pixels (the outer radii were held at 800 pixels), respectively, which we found to adequately reduce the effects of pile-up. The background light curves were then subtracted from the corresponding source light curves using the SAS command epiclccorr. Most of the resulting light curves were continuously sampled, except for a few cases that contained a small ($\lesssim5\%$ of the total number of points in the light curve) number of missing points. These were either randomly distributed throughout the duration of an observation, or appeared in groups of $\lesssim10$ points. We replaced the missing points by linear interpolation, with the addition of the appropriate Poisson noise. 
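The gap-filling step described above can be sketched as follows. This is a minimal illustration rather than our actual pipeline; the function name and the toy light curve are ours, assuming count rates in cts/sec and $100\,\mathrm{sec}$ bins:

```python
import numpy as np

rng = np.random.default_rng(0)

def fill_missing(rate, dt=100.0):
    """Fill missing bins (NaN) of a count-rate light curve by linear
    interpolation, then re-draw the filled bins from a Poisson
    distribution so they carry the same counting noise as measured bins."""
    rate = np.asarray(rate, dtype=float).copy()
    t = np.arange(rate.size)
    bad = np.isnan(rate)
    # linear interpolation across the gaps
    rate[bad] = np.interp(t[bad], t[~bad], rate[~bad])
    # convert to expected counts per bin, add Poisson scatter, convert back
    rate[bad] = rng.poisson(rate[bad] * dt) / dt
    return rate

lc = [0.30, 0.32, float("nan"), float("nan"), 0.28, 0.31]
filled = fill_missing(lc)
```

Re-drawing the interpolated bins from a Poisson distribution (rather than inserting the smooth interpolated values directly) keeps the noise level of the filled bins consistent with that of the measured bins.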
Time-lag estimation {#sec3}
===================

  ------------- ------------------ ----------------- -------------------------------- -------------------------------- ----------------------
  Source        Segment duration   No. of segments   Mean count rate                  Mean count rate                  Max. frequency
                (ks)               $m$               (cts/sec; $5-7\,\mathrm{keV}$)   (cts/sec; $2-4\,\mathrm{keV}$)   $\nu_{\mathrm{max}}$ (Hz)
  1H 0707–495   20.2               57                0.02                             0.10                             $6.9\times10^{-4}$
  MCG–6-30-15   24.2               28                0.85                             3.12                             $1.0\times10^{-3}$
  Mrk 766       23.2               24                0.30                             1.22                             $6.9\times10^{-4}$
  NGC 4051      20.6               22                0.35                             1.02                             $1.2\times10^{-3}$
  Ark 564       27.7               18                0.34                             2.15                             $7.2\times10^{-4}$
  NGC 7314      20.7               17                0.59                             1.88                             $1.1\times10^{-3}$
  Mrk 335       26.9               12                0.23                             0.93                             $3.0\times10^{-4}$
  ------------- ------------------ ----------------- -------------------------------- -------------------------------- ----------------------

We used standard Fourier techniques to estimate time-lags between light curves in the iron line and continuum bands for our sample. We denote by $\{x(t_r),y(t_r)\}$ a pair of light curves in two energy bands, where $t_r=\Delta t,2\Delta t,\ldots,N\Delta t$, $N$ is the number of points and $\Delta t=100\,\mathrm{sec}$ is the time bin size. The discrete Fourier transforms (DFTs), $\{\zeta_x(\nu_p),\zeta_y(\nu_p)\}$, of the light curves are $$\begin{aligned} \label{eq1} \zeta_x(\nu_p) &\equiv\sqrt{\frac{\Delta t}{N}}\sum_{r=1}^{N}[x(t_r)-\overline{x}]\mathrm{e}^{-\mathrm{i}2\pi\nu_pt_r}, \\ \label{eq2} \zeta_y(\nu_p) &\equiv\sqrt{\frac{\Delta t}{N}}\sum_{r=1}^{N}[y(t_r)-\overline{y}]\mathrm{e}^{-\mathrm{i}2\pi\nu_pt_r},\end{aligned}$$ where $\overline{x}$ and $\overline{y}$ are the light-curve sample means, and $\nu_p=p/N\Delta t$ ($p=1,2,\ldots,N/2$).
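In a numerical implementation, Eqs.\[eq1\]–\[eq2\] amount to the following direct evaluation (a sketch with names of ours, assuming numpy):

```python
import numpy as np

def dft(x, dt=100.0):
    """Discrete Fourier transform of a light curve, normalised as in
    Eqs. 1-2: zeta(nu_p) = sqrt(dt/N) * sum_r [x(t_r) - mean] e^{-i 2 pi nu_p t_r},
    with t_r = r*dt (r = 1, ..., N) and nu_p = p/(N*dt) (p = 1, ..., N/2)."""
    x = np.asarray(x, dtype=float)
    N = x.size
    t = dt * np.arange(1, N + 1)
    nu = np.arange(1, N // 2 + 1) / (N * dt)
    # (N/2, N) matrix of Fourier phases; one row per frequency
    phase = np.exp(-2j * np.pi * nu[:, None] * t[None, :])
    zeta = np.sqrt(dt / N) * (phase @ (x - x.mean()))
    return nu, zeta
```

With this normalisation, a sinusoid at an exact Fourier frequency produces $|\zeta|=\sqrt{\Delta t N}/2$ at that frequency.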
The cross-periodogram, $I_{xy}(\nu_p)$, of the light-curve pair is defined as [@Priestley:81; henceforth P81] $$\label{eq3} I_{xy}(\nu_p)\equiv\zeta_x(\nu_p)\zeta^{*}_y(\nu_p).$$ The cross-periodogram is an estimator of the intrinsic cross-spectrum (CS), $C_{xy}(\nu_p)$, which is a measure of the cross-correlation between two random signals in Fourier space. The cross-periodogram is generally biased, in the sense that the mean of $I_{xy}(\nu_p)$ is not equal to $C_{xy}(\nu_p)$. The traditional time-lag estimator, which we define below, is based on the cross-periodogram; the statistical properties of the two estimators are therefore closely linked. As shown by EP16, there are two main factors that contribute to the bias of the cross-periodogram: the finite duration of the light curves, and their sampling rate and time bin size (in our work, the sampling rate is equal to the time bin size). Discrete sampling of a continuous process introduces aliasing effects to the CS of the resulting discrete process, which is only defined in the frequency range $[-1/2\Delta t,1/2\Delta t]$ and is equal to the superposition of the intrinsic CS at frequencies $\nu,\nu\pm1/\Delta t,\nu\pm2/\Delta t,$ etc. Aliasing effects are reduced when the light curves are binned. They are similar to the aliasing effects in the power-spectral density (PSD) of a light curve, although, while PSDs are always positive, this is generally not the case for the CS; as a result, aliasing effects are more complex in the cross-spectral case. EP16 found that light-curve binning generally causes the measured time-lags to converge to zero at frequencies $\gtrsim\nu_{\mathrm{Nyq}}/2$, where $\nu_{\mathrm{Nyq}}=1/2\Delta t$ is the Nyquist frequency. In this work $\nu_{\mathrm{Nyq}}=5\times10^{-3}\,\mathrm{Hz}$, and hence we only computed cross-periodograms at frequencies $\le2.5\times10^{-3}\,\mathrm{Hz}$.
Owing to the finite light-curve duration, the mean of the cross-periodogram is equal to the convolution of the intrinsic CS (as modified by aliasing effects) with a particular window function, just like the case of the periodogram [i.e. the traditional PSD estimator; see e.g. @1993MNRAS.261..612P]. However, the effects of this convolution on the time-lag estimates cannot be predicted a priori, since they depend on the shape of the (unknown) intrinsic CS (and not just on the intrinsic time-lag spectrum). They were quantitatively investigated by EP16, who considered three different types of time-lag spectra that are typically observed between X-ray light curves of accreting systems: constant time-lags, time-lags with a power-law dependence on frequency, and time-lags that have a characteristic oscillatory behaviour with frequency, similar to what is expected in a reverberation scenario. For the model CS they considered, they concluded that time-lag estimates based on the cross-periodogram will not be significantly biased, in the sense that their mean will be within $\sim15\%$ (in absolute value) of their corresponding intrinsic values when the light-curve duration is $\gtrsim20\,\mathrm{ks}$. The cross-periodogram has a large and unknown variance. As a result, this feature will be shared by the time-lag estimates computed from it. This problem is ameliorated in practice by either binning together $m$ neighbouring frequency ordinates of the cross-periodogram (a process called smoothing), and/or binning different cross-periodogram ordinates at a given frequency obtained from $m$ distinct light-curve pairs. If, as is often the case in practice, the real and imaginary parts of the intrinsic CS vary in a non-linear fashion over the smoothed frequency range, then smoothing will introduce an additional source of bias to the cross-periodogram. 
This bias can only be taken into account a posteriori when fitting observed time-lags by prescribing a model CS (and not just a model time-lag spectrum), as it affects the cross-periodogram itself. Since this is a complicated model-dependent procedure, we did not perform any smoothing on the cross-periodograms. We instead divided the available *XMM-Newton* observations of each source into shorter segments of duration $20-40\,\mathrm{ks}$. The segment duration for each source (listed in Col.2 of Table\[table2\]) was determined in such a way as to maximise the number of segments, $m$, obtained from the available light curves ($m$ is listed in Col.3 of Table\[table2\]). ![Sample iron line vs continuum coherence function (top panel) and time-lag spectrum (bottom panel) of MCG–6-30-15, estimated using the data listed in Table\[table2\]. The dashed brown line in the top panel shows the best-fit model to the sample coherence. The continuous red vertical line indicates the highest frequency up to which time-lags should be estimated, and the horizontal blue dotted-dashed line indicates the coherence value at this frequency (see Sect.\[sec3\]).[]{data-label="fig1"}](figures/fig1.pdf){width="\hsize"} ![Observed iron line vs continuum time-lag spectra for 1H 0707–495 (first row), MCG–6-30-15 (second row), and Mrk 766 (third row).
The solid brown and dashed red lines indicate the best-fit models A and B, respectively, to each time-lag spectrum (see Sect.\[sec5\] for details on these models).[]{data-label="fig2"}](figures/fig2.pdf){width="\hsize"} ![Same as in Fig.\[fig2\] for NGC 4051 (first row), Ark 564 (second row), NGC 7314 (third row), and Mrk 335 (fourth row).[]{data-label="fig3"}](figures/fig3.pdf){width="\hsize"} For each segment we calculated the cross-periodogram according to Eq.\[eq3\], and adopted $$\label{eq4} \hat{C}_{xy}(\nu_p)=\frac{1}{m}\sum_{k=1}^{m}I^{(k)}_{xy}(\nu_p)$$ and $$\label{eq5} \hat{\tau}_{xy}(\nu_p)\equiv\frac{1}{2\pi\nu_p}\mathrm{arg}[\hat{C}_{xy}(\nu_p)]$$ as our estimates of the CS and time-lag spectrum, respectively ($I^{(k)}_{xy}(\nu_p)$ is the cross-periodogram of the $k$-th segment at frequency $\nu_p$). We adopted the standard convention of defining $\mathrm{arg}[\hat{C}_{xy}(\nu_p)]$ on the interval $(-\pi,\pi]$. The analytic error estimate of $\hat{\tau}_{xy}(\nu_p)$ is given by [e.g. P81; @1999ApJ...510..874N] $$\label{eq6} \sigma_{\hat{\tau}}(\nu_p)\equiv\frac{1}{2\pi\nu_p}\frac{1}{\sqrt{2m}}\sqrt{\frac{1-\hat{\gamma}^2_{xy}(\nu_p)}{\hat{\gamma}^2_{xy}(\nu_p)}},$$ where [e.g. P81; @1997ApJ...474L..43V] $$\label{eq7} \hat{\gamma}^2_{xy}(\nu_p)\equiv\frac{|\hat{C}_{xy}(\nu_p)|^2}{\hat{P}_x(\nu_p)\hat{P}_y(\nu_p)}.$$ $\hat{P}_x(\nu_p)$ and $\hat{P}_y(\nu_p)$ are the traditional periodograms of the two light curves, which are also calculated by binning over $m$ segments. Equation\[eq7\] defines an estimator of the so-called coherence function. This function is defined on the interval $[0,1]$ and quantifies the degree of linear correlation between sinusoidal components of two light curves at each frequency. Figure\[fig1\] shows the sample iron line vs continuum coherence and time-lag spectrum of MCG–6-30-15 (top and bottom panel, respectively), which were calculated using Eqs.\[eq7\] and \[eq5\].
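The estimation chain of Eqs.\[eq4\]–\[eq7\] can be sketched as follows (an illustrative implementation, with names of ours; `np.fft.fft` differs from Eqs.\[eq1\]–\[eq2\] only by a frequency-dependent phase factor that is common to both light curves and therefore cancels in the cross product):

```python
import numpy as np

def timelag_estimates(segs_x, segs_y, dt=100.0):
    """Segment-averaged cross-spectrum (Eq. 4), time-lags (Eq. 5), sample
    coherence (Eq. 7) and analytic time-lag errors (Eq. 6) from m pairs of
    equal-length light-curve segments."""
    m = len(segs_x)
    N = len(segs_x[0])
    nu = np.arange(1, N // 2 + 1) / (N * dt)
    Cxy = np.zeros(nu.size, dtype=complex)
    Px = np.zeros(nu.size)
    Py = np.zeros(nu.size)
    for x, y in zip(segs_x, segs_y):
        zx = np.sqrt(dt / N) * np.fft.fft(x - np.mean(x))[1:N // 2 + 1]
        zy = np.sqrt(dt / N) * np.fft.fft(y - np.mean(y))[1:N // 2 + 1]
        Cxy += zx * np.conj(zy) / m          # Eq. 4
        Px += np.abs(zx) ** 2 / m
        Py += np.abs(zy) ** 2 / m
    tau = np.angle(Cxy) / (2 * np.pi * nu)                     # Eq. 5
    g2 = np.abs(Cxy) ** 2 / (Px * Py)                          # Eq. 7
    err = np.sqrt((1 - g2) / (2 * m * g2)) / (2 * np.pi * nu)  # Eq. 6
    return nu, tau, err, g2
```

With this convention, a positive $\hat{\tau}_{xy}$ at a given frequency means that $x$ leads $y$ there.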
The sample coherence decreases to zero with increasing frequency. This loss of coherence is mostly caused by Poisson noise. In the presence of measurement errors, even if the intrinsic coherence is unity at all frequencies, the resulting coherence will decrease towards zero at frequencies where the amplitude of experimental noise variations dominates the amplitude of the intrinsic variations. The sample coherence will, however, always converge to a constant value $1/m$ at these frequencies. EP16 found that this decrease can be reasonably approximated by an exponential function of the form $$\label{eq8} \hat\gamma^2_{xy}(\nu)=\left(1-\frac{1}{m}\right)\mathrm{exp}[-(\nu/\nu_0)^q]+\frac{1}{m},$$ where $\nu_0$ and $q$ are parameters that are determined by fitting this function to the coherence estimates. This was empirically found by EP16 to fit the sample coherence well, using many simulations of light curves in the case of various model CS and light curve signal-to-noise ratios (S/N). An example of such a fit to the coherence estimates of MCG–6-30-15 is shown in the top panel of Fig.\[fig1\] (brown dashed line). The fit describes the sample coherence function well (this was the case for all sources). According to Eq.\[eq6\], the error of the time-lag estimates increases as the coherence decreases. Therefore, above a certain maximum frequency, $\nu_{\mathrm{max}}$, when the coherence is sufficiently small (i.e. $\sim0$), we expect that Poisson noise will severely affect the reliability of the time-lag estimates. The effects of Poisson noise on the bias and distributions of the time-lag estimates were quantitatively investigated by EP16. They found that $\nu_{\mathrm{max}}$ decreases as the S/N of the light curves decreases. In addition, $\nu_{\mathrm{max}}$ is mainly affected by the energy band with the lowest mean count rate, which in our case corresponds to the iron line band. 
In Cols.4 and 5 of Table\[table2\] we list the mean count rate in the iron line and continuum band, respectively. According to EP16, $\nu_{\mathrm{max}}$ corresponds approximately to the frequency at which the sample coherence function becomes equal to $1.2/(1+0.2m)$. Above $\nu_{\mathrm{max}}$, EP16 found that Poisson noise has the following effects on the time-lag estimates: (a) The analytic error estimate given by Eq.\[eq6\] increasingly underestimates their true scatter, and (b) their distribution becomes uniform and symmetrical about a zero time-lag value. As a result, the time-lag estimates become biased, in the sense that their mean converges to zero, independent of the intrinsic time-lag spectrum. Below $\nu_{\mathrm{max}}$, and as long as $m\gtrsim10$, the mean of the time-lag estimates is not affected, Eq.\[eq6\] provides a reliable estimate of their true scatter, and their distribution is approximately Gaussian. We therefore fitted the coherence estimates of each source to the exponential function given by Eq.\[eq8\] (as we did for MCG–6-30-15), and equated this function to the constant $1.2/(1+0.2m)$ to estimate $\nu_{\mathrm{max}}$ in each case. The values of $\nu_{\mathrm{max}}$ calculated thus are listed in Col.6 of Table\[table2\]. We did not estimate time-lags above this frequency. Instead of using the values of the sample coherence function to determine the errors of the time-lag estimates according to Eq.\[eq6\], we used the values of the best-fit exponential model. We found that the resulting errors are more representative of the observed scatter of the time-lag estimates, although the differences are small ($\lesssim20\%$). The iron line vs continuum time-lag estimates for each source, along with their errors, obtained by the above procedure are shown in Figs.\[fig2\] and \[fig3\]. 
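Once best-fit values of $\nu_0$ and $q$ are in hand, the threshold crossing used above can be computed by inverting Eq.\[eq8\] analytically. A minimal sketch (function names ours; the least-squares fit that yields $\nu_0$ and $q$ is omitted):

```python
import numpy as np

def coherence_model(nu, nu0, q, m):
    """Eq. 8: sample coherence decaying from ~1 to the noise floor 1/m."""
    return (1 - 1 / m) * np.exp(-(nu / nu0) ** q) + 1 / m

def nu_max(nu0, q, m):
    """Frequency at which Eq. 8 equals the EP16 threshold 1.2/(1 + 0.2 m),
    obtained by solving the equation analytically for nu."""
    c = 1.2 / (1 + 0.2 * m)
    # requires c > 1/m, which holds for all m > 1
    return nu0 * (-np.log((c - 1 / m) / (1 - 1 / m))) ** (1 / q)
```

For example, with $m=28$ (the MCG–6-30-15 value of Table\[table2\]) and illustrative best-fit values $\nu_0=5\times10^{-4}\,\mathrm{Hz}$, $q=2$, the threshold is $1.2/6.6\approx0.18$.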
Theoretical modelling of the time-lag spectra {#sec4} ============================================= In this section we describe the basic physical and geometrical properties of the lamp-post model and show how we determined the corresponding theoretical iron line vs continuum time-lag spectra. All physical quantities in the lamp-post model are estimated in geometrised units ($G=c=1$) and scale with $M_{\mathrm{BH}}$. Thus, for instance, time-scales have to be multiplied by a factor $t_{\mathrm{g}}\equiv GM_{\mathrm{BH}}/c^3\sim5(M_{\mathrm{BH}}/10^6\,\mathrm{M}_{\odot})\,\mathrm{sec}$ to be converted into units of seconds. Geometrical layout of the model {#subsec41} ------------------------------- The lamp-post model consists of a BH, surrounded by an equatorial accretion disc, that is illuminated by an X-ray source located on the disc symmetry axis. The parameters of the model are the mass and spin ($a$) of the BH, the height ($h$) of the X-ray source, and the viewing angle ($\theta$) of a distant observer with respect to the disc axis. The disc is assumed to be geometrically thin and Keplerian, co-rotating with the BH, with a radial extent ranging from the innermost stable circular orbit (ISCO), $r_{\mathrm{ISCO}}$, up to an outer radius $r_{\mathrm{out}}$. The BH spin uniquely defines $r_{\mathrm{ISCO}}$. When measured in geometrised units, the spin can attain any value between zero and unity, with $a=0$ ($r_{\mathrm{ISCO}}=6r_{\mathrm{g}}$) and $a=1$ ($r_{\mathrm{ISCO}}=1r_{\mathrm{g}}$) indicating a non-spinning (i.e. Schwarzschild) and maximally spinning (i.e. extreme Kerr) BH, respectively. The X-ray source is assumed to be point-like and located at a fixed position above the BH. It emits isotropically with an intrinsic (i.e. rest-frame) spectrum of $\mathscr{N}(t)E^{-2}\mathrm{exp}(-E/300\,\mathrm{keV})$. We assumed it to be variable in amplitude only, and that $\mathscr{N}(t)$ is a stationary random process (i.e. 
that it has a finite and time-independent mean and variance).

Observed fluxes at infinity {#subsec42}
---------------------------

We assumed that the total flux recorded by an observer at a very large distance in a given energy band $\mathcal{E}=[E_1(\mathrm{keV}),E_2(\mathrm{keV})]$ is $\mathscr{F}_{\mathcal{E}}(t;a,\theta,h,r_{\mathrm{out}})$. This flux is equal to the sum of the continuum and reprocessed flux from the disc, $\mathscr{F}^{(\mathrm{c})}_{\mathcal{E}}(t;a,\theta,h)$ and $\mathscr{F}^{(\mathrm{r})}_{\mathcal{E}}(t;a,\theta,h,r_{\mathrm{out}})$, respectively. In other words, $$\begin{aligned} \label{eq9} \nonumber \mathscr{F}_{\mathcal{E}}(t;a,\theta,h,r_{\mathrm{out}})= & \mathscr{F}^{(\mathrm{c})}_{\mathcal{E}}(t;a,\theta,h)+\mathscr{F}^{(\mathrm{r})}_{\mathcal{E}}(t;a,\theta,h,r_{\mathrm{out}}) \\ \nonumber = & \mathscr{F}^{(\mathrm{c})}_{\mathcal{E}}(t;a,\theta,h) \\ & +\int_{-\infty}^{\infty}\Psi_{\mathcal{E}}(t';a,\theta,h,r_{\mathrm{out}})\mathscr{F}^{(\mathrm{c})}_{\mathcal{E}}(t-t';a,\theta,h)\mathrm{d}t',\end{aligned}$$ where $\Psi_{\mathcal{E}}(t';a,\theta,h,r_{\mathrm{out}})$ is the so-called response function, which quantifies the response of the disc to an instantaneous flare of continuum emission. We define the normalisation of the response function such that its time-integrated value is equal to the observed ratio of reprocessed-to-continuum photons. The observed continuum spectrum differs in amplitude from its rest-frame value as a result of relativistic effects [@2011ApJ...731...75D]. This is quantified by the factor $\mathscr{G}(a,\theta,h)$, such that $\mathscr{F}^{(\mathrm{c})}_{\mathcal{E}}(t;a,\theta,h)=\mathscr{N}_{\mathcal{E}}(t)\mathscr{G}(a,\theta,h)\int_{E_1}^{E_2}E^{-2}\mathrm{exp}(-E/300\,\mathrm{keV})\mathrm{d}E$. The dependence of the various terms on the right-hand side of Eq.\[eq9\] on the parameters of the lamp-post model was explicitly indicated above and is henceforth omitted for brevity.
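As noted in Sect.\[subsec41\], the BH spin uniquely fixes $r_{\mathrm{ISCO}}$; the mapping is the standard prograde-orbit formula of Bardeen, Press & Teukolsky (1972). A minimal sketch (function name ours):

```python
import numpy as np

def r_isco(a):
    """Prograde ISCO radius in units of r_g for dimensionless spin a,
    0 <= a <= 1 (Bardeen, Press & Teukolsky 1972)."""
    z1 = 1 + (1 - a ** 2) ** (1 / 3) * ((1 + a) ** (1 / 3) + (1 - a) ** (1 / 3))
    z2 = np.sqrt(3 * a ** 2 + z1 ** 2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))
```

This reproduces the limits quoted in Sect.\[subsec41\]: `r_isco(0)` $=6$ and `r_isco(1)` $=1$; the intermediate grid value $a=0.676$ used later (Sect.\[sec6\]) corresponds to $r_{\mathrm{ISCO}}\approx3.5r_{\mathrm{g}}$.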
Time-lag spectra {#subsec43} ---------------- We assumed that the total photon fluxes observed in the iron line and continuum bands are $\mathscr{F}_{5-7}(t)$ and $\mathscr{F}_{2-4}(t)$, respectively. According to Eq.\[eq9\], $$\begin{aligned} \label{eq10} \mathscr{F}_{5-7}(t) &=\mathscr{F}^{(\mathrm{c})}_{5-7}(t)+\int_{-\infty}^{\infty}\Psi_{5-7}(t')\mathscr{F}^{(\mathrm{c})}_{5-7}(t-t')\mathrm{d}t', \\ \label{eq11} \mathscr{F}_{2-4}(t) &=\mathscr{F}^{(\mathrm{c})}_{2-4}(t)+\int_{-\infty}^{\infty}\Psi_{2-4}(t')\mathscr{F}^{(\mathrm{c})}_{2-4}(t-t')\mathrm{d}t'.\end{aligned}$$ The cross-correlation function (CCF) between the iron line and continuum bands, $R_{5-7,2-4}(\tau)$, is then $$\label{eq12} R_{5-7,2-4}(\tau)\equiv\mathrm{E}\{[\mathscr{F}_{2-4}(t)-\mu_{2-4}][\mathscr{F}_{5-7}(t+\tau)-\mu_{5-7}]\},$$ where $\mathrm{E}$ is the expectation operator, and $\mu_{5-7}$ ($\mu_{2-4}$) is the mean flux in the iron line (continuum) band. The CS, $C_{5-7,2-4}(\nu)$, between the two energy bands is, by definition, the Fourier transform of the CCF. Hence (see Appendix \[appa\] for a more detailed derivation) $$\begin{aligned} \label{eq13} \nonumber C_{5-7,2-4}(\nu) &\equiv\int_{-\infty}^{\infty}R_{5-7,2-4}(\tau)\mathrm{e}^{-\mathrm{i}2\pi\nu\tau}\mathrm{d}\tau \\ &=C^{(\mathrm{c})}_{5-7,2-4}(\nu)[1+\tilde{\Psi}_{5-7}(\nu)][1+\tilde{\Psi}_{2-4}(\nu)]^{*},\end{aligned}$$ where the asterisk denotes complex conjugation, $\tilde{\Psi}_{\mathcal{E}}(\nu)\equiv\int_{-\infty}^{\infty}\Psi_{\mathcal{E}}(t)\mathrm{e}^{-\mathrm{i}2\pi\nu t}\mathrm{d}t$ is the Fourier transform of the response function, and $C^{(\mathrm{c})}_{5-7,2-4}(\nu)$ is the CS of the continuum emission. The iron line vs continuum time-lag spectrum, $\tau_{5-7,2-4}(\nu)$, is defined as $\tau_{5-7,2-4}(\nu)\equiv(2\pi\nu)^{-1}\mathrm{arg}[C_{5-7,2-4}(\nu)]$. 
Given our adopted convention, a positive value of $\tau_{5-7,2-4}(\nu)$ indicates that variations in the iron line band lead variations in the continuum band (and vice versa). According to Eq.\[eq13\], $$\label{eq14} \tau_{5-7,2-4}(\nu)=\tau^{(\mathrm{c})}_{5-7,2-4}(\nu)+\tau^{(\mathrm{r})}_{5-7,2-4}(\nu).$$ The equation above shows that the time-lags between the observed variations in the two energy bands equal the sum of two terms: time-lags between variations in the X-ray continuum, $\tau^{(\mathrm{c})}_{5-7,2-4}(\nu)$, and time-lags due to reprocessed X-ray emission from the disc, $\tau^{(\mathrm{r})}_{5-7,2-4}(\nu)$ (henceforth, the continuum and reverberation time-lags, respectively). The continuum time-lags are given by $\tau^{(\mathrm{c})}_{5-7,2-4}(\nu)\equiv(2\pi\nu)^{-1}\mathrm{arg}[C^{(\mathrm{c})}_{5-7,2-4}(\nu)]$, while the reverberation time-lags are given by $$\label{eq15} \tau^{(\mathrm{r})}_{5-7,2-4}(\nu)\equiv\frac{1}{2\pi\nu}\mathrm{arg}\{[1+\tilde{\Psi}_{5-7}(\nu)][1+\tilde{\Psi}_{2-4}(\nu)]^{*}\}.$$ This function is uniquely determined by the disc response functions in the iron line and continuum bands.

Response functions in the lamp-post geometry {#subsec44}
--------------------------------------------

To determine the response function of the disc, we assumed that the primary X-ray source isotropically emits a flare of duration equal to $1t_{\mathrm{g}}$. Upon being illuminated, each area element of the disc responds to this flare by isotropically and instantaneously emitting a reflection spectrum in its rest-frame. We assumed that the reprocessed flux is proportional to the incident flux and that the disc material is neutral, with an iron abundance equal to the solar value. We then used the rest-frame reflection spectrum computed with the multi-scattering code NOAR.
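To build intuition for Eq.\[eq15\] before turning to the fully relativistic responses, the following toy computation evaluates the reverberation time-lags for simple, non-relativistic responses sampled on a time grid (all names and numbers are illustrative choices of ours):

```python
import numpy as np

def reverberation_lags(nu, t, psi_line, psi_cont, f_line=0.3, f_cont=0.1):
    """Eq. 15 for toy responses: psi_* are response shapes with unit time
    integral, sampled on the grid t; f_* are the reflected-to-continuum
    photon ratios, i.e. the time-integrals of the corresponding Psi."""
    dtt = t[1] - t[0]
    ft = np.exp(-2j * np.pi * np.outer(nu, t))   # Fourier kernel, one row per nu
    Psi_l = f_line * (ft @ (psi_line * dtt))     # Fourier transform of Psi_{5-7}
    Psi_c = f_cont * (ft @ (psi_cont * dtt))     # Fourier transform of Psi_{2-4}
    return np.angle((1 + Psi_l) * np.conj(1 + Psi_c)) / (2 * np.pi * nu)
```

For instance, a delta-like line-band response delayed by $t_0=500\,\mathrm{sec}$ with $f_{\mathrm{line}}=0.3$ and no continuum-band reflection gives $\tau\rightarrow-f t_0/(1+f)\approx-115\,\mathrm{sec}$ at low frequencies (negative, since the iron line band lags), with oscillatory structure appearing at $\nu\gtrsim1/t_0$.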
We determined the time-varying $0.1-8\,\mathrm{keV}$ disc reflection spectrum at infinity, with a time resolution of $0.1t_{\mathrm{g}}$ and energy resolution of $20\,\mathrm{eV}$, taking all relativistic effects into account [e.g. gravitational and Doppler energy shifts, light bending, and time delays; @2006AN....327..961K]. Finally, we calculated the disc response function in the iron line and continuum bands by integrating the observed disc reflection spectrum in the appropriate energy ranges. In Appendix \[appb\] we show how the disc response functions depend on the parameters of the lamp-post model, while in Appendix \[appc\] we show how we computed the model time-lag spectra given by Eq.\[eq15\] from the numerically computed disc response functions. In Appendix \[appd\] we discuss how these model time-lag spectra depend on the parameters of the lamp-post model. The response functions we computed are similar to those presented by @1999ApJ...514..164R and E14, although our approach is different. They calculated the response function considering only the Fe K$\alpha$ photons emitted in the disc rest-frame. In contrast, we counted all the photons from the reflection component that an observer will detect in an energy band (within $0.1-8\,\mathrm{keV}$) at each time step. We therefore considered the total reflection spectrum, as emitted by the disc rest frame, hence we computed the total reflection response function and self-consistently included the reflection component in both the continuum and iron line bands along with the X-ray continuum emission. This approach is more appropriate for comparing our predictions with data. Our approach is similar to the one adopted by CY15, although we did not consider the effects of disc ionisation.

Fitting procedure {#sec5}
=================

As explained in Sect.\[sec3\], our time-lag estimates should be approximately distributed as Gaussian random variables.
Fitting the observed iron line vs continuum time-lags was therefore based on minimising the $\chi^2$ function, which is defined as $$\label{eq16} \chi^2(a_1,a_2,\ldots,a_q)\equiv\sum_{p=1}^{n}\frac{[\hat{\tau}(\nu_p)-\tau(\nu_p;a_1,a_2,\ldots,a_q)]^2}{\sigma^2_{\hat{\tau}}(\nu_p)},$$ where $\{a_1,a_2,\ldots,a_q\}$ are the parameters of the model, $\hat{\tau}(\nu_p)$ is the time-lag estimate with error $\sigma_{\hat{\tau}}(\nu_p)$, $\tau(\nu_p;a_1,a_2,\ldots,a_q)$ is the model time-lag spectrum, and $n$ is the number of time-lag estimates. The location of the $\chi^2(a_1,a_2,\ldots,a_q)$ minimum, say $\chi^2_{\mathrm{min}}$, determines the best-fit parameter values. Their corresponding 68% (95%) confidence intervals are determined by the standard $\Delta\chi^2=1$ ($\Delta\chi^2=4$) method for one independent parameter. Unless otherwise mentioned, confidence intervals of best-fit parameters are henceforth quoted at the 68% level. As we showed in Sect.\[subsec43\], the observed time-lags should depend on both the continuum and reverberation time-lags. We thus considered two different model time-lag spectra, one for the continuum and the other for the reverberation time-lags. We describe them in more detail below. Model A: Continuum time-lags model {#subsec51} ---------------------------------- In AGN and X-ray binaries, time-lag spectra between X-ray light curves are typically observed to have a power-law dependence on frequency. High-energy bands are delayed with respect to lower energy bands, and the magnitude of the time-lags decreases with increasing frequency, typically following a power-law like form [e.g. @1989Natur.342..773M; @1996MNRAS.280..227N; @1999ApJ...510..874N; @2001ApJ...554L.133P; @2004MNRAS.348..783M; @2006MNRAS.372..401A; @2008MNRAS.388..211A; @2009ApJ...700.1042S]. In addition, the magnitude of these time-lags is observed to increase with increasing energy separation between the two energy bands. 
We therefore considered a power-law model of the form $$\label{eq17} \tau^{(\mathrm{c})}_{5-7,2-4}(\nu)=-A(\nu/10^{-4}\,\mathrm{Hz})^{-s},$$ where $A$ and $s$ are positive, to account for the continuum time-lags. These continuum time-lags are expected to be negative in our case, meaning that variations in the iron line band should be delayed with respect to variations in the continuum band. Model B: Reverberation time-lags model {#subsec52} -------------------------------------- The model B time-lag spectrum corresponds to the function $\tau^{(\mathrm{r})}_{5-7,2-4}(\nu)$ given by Eq.\[eq15\], that is to say, it accounts for the reverberation time-lags. This function is uniquely determined by the Fourier transforms of the response functions in the iron line and continuum bands. Since these response functions are not given by an analytical formula, we had to numerically compute them (following the procedure outlined in Sect.\[subsec44\]) on a grid of points corresponding to different combinations of $\{a,\theta,h,M_{\mathrm{BH}}\}$ values (we set $r_{\rm out}=10^3r_{\mathrm{g}}$ in all cases). 
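The $\chi^2$ minimisation of Eq.\[eq16\] can be illustrated for model A with a brute-force grid search. This is a sketch rather than our actual minimiser (we used the Levenberg-Marquardt method; the grid ranges below are arbitrary choices of ours):

```python
import numpy as np

def fit_model_a(nu, tau_obs, tau_err):
    """Brute-force chi-square fit (Eq. 16) of the model A power law,
    tau(nu) = -A (nu / 1e-4 Hz)^(-s), over a grid of (A, s) values."""
    best = (np.inf, None, None)
    for A in np.linspace(1.0, 400.0, 400):       # A in sec
        for s in np.linspace(0.0, 2.0, 201):     # dimensionless slope
            model = -A * (nu / 1e-4) ** (-s)
            chi2 = np.sum(((tau_obs - model) / tau_err) ** 2)
            if chi2 < best[0]:
                best = (chi2, A, s)
    return best  # (chi2_min, best-fit A, best-fit s)
```

A gradient-based minimiser reaches the same minimum far more efficiently, but the grid version makes the $\chi^2$ surface explicit, which is also how confidence intervals via $\Delta\chi^2=1$ can be read off.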
Results {#sec6}
=======

  ------------- ------------------- --------------------- -------------------------------------- -------------------- ------------------------------ --------------------------------------
  Source        $A$                 $s$                   $\chi^2_{\mathrm{min}}/\mathrm{dof}$   $h$                  $M_{\mathrm{BH}}\,^{a}$        $\chi^2_{\mathrm{min}}/\mathrm{dof}$
                ($\mathrm{sec}$)                                                                 ($r_{\mathrm{g}}$)   ($10^6\,\mathrm{M}_{\odot}$)   $a$ free ($a=0$, $a=1$)$\,^{b}$
  1H 0707–495   $33^{+46}_{-24}$    $<0.9$                $15.9/12$                              $<20$                $2.3$                          $15.9/12$ (16.0, 15.9)
  MCG–6-30-15   $21^{+23}_{-9}$     $<0.6$                $19.0/23$                              $<3$                 $5.1$                          $18.4/23$ (20.7, 27.2)
  Mrk 766       $103\pm39$          $<0.6$                $18.6/13$                              $22^{+12}_{-10}$     $1.8$                          $20.5/13$ (20.7, 20.5)
  NGC 4051      $56^{+24}_{-33}$    $1.0^{+0.8}_{-0.5}$   $28.3/23$                              $<30$                $1.7$                          $29.3/23$ (29.3, 29.5)
  Ark 564       $239^{+57}_{-56}$   $1.4^{+0.2}_{-0.3}$   $13.5/18$                              $>28$                $2.3$                          $23.2/18$ (23.2, 23.2)
  NGC 7314      $94\pm37$           $1.3^{+0.5}_{-0.4}$   $25.9/20$                              $>82$                $0.8$                          $25.2/20$ (25.2, 25.2)
  Mrk 335       $154^{+73}_{-79}$   $1.3^{+0.8}_{-0.5}$   $8.3/6$                                $7^{+2}_{-3}$        $28$                           $7.9/6$ (9.6, 7.9)
  ------------- ------------------- --------------------- -------------------------------------- -------------------- ------------------------------ --------------------------------------

According to Eq.\[eq14\], we should fit the observed time-lags with the sum of models A and B. However, we discovered that due to the limited frequency range of the observed time-lag spectra and the relatively large errors of the time-lag estimates, it was not possible to simultaneously constrain the parameters of both models in a meaningful way. We therefore decided to fit the two models separately to the data and then investigate whether they provided a good fit or not. The only exceptions were Ark 564 and NGC 7314, whose observed time-lag spectra we also fitted to a combined model A+B for reasons we discuss in Sect.\[subsec62\] below. The continuum time-lags model (i.e. model A) is defined by Eq.\[eq17\]. The model has two free parameters ($A$ and $s$).
For each observed time-lag spectrum shown in Figs.\[fig2\] and \[fig3\], we calculated $\chi^2(A,s)$ using Eq.\[eq16\]. We then minimised this function numerically using the Levenberg-Marquardt method, and determined the best-fit values and confidence intervals of the model A parameters. For the reverberation time-lags (i.e. model B), the parameter space we considered for the model parameters $\{a,\theta,h,M_{\mathrm{BH}}\}$ is similar to the one used by E14. First, we considered three spin values, $a=\{0,0.676,1\}$. For each spin value we considered an ensemble of 18 heights ranging from $2.3$ to $100r_{\mathrm{g}}$. For every such combination we finally considered three values for the viewing angle, $\theta=\{20^{\circ},40^{\circ},60^{\circ}\}$, and 1000 values for $M_{\mathrm{BH}}$ ranging from $0.1\times10^6\,\mathrm{M}_{\odot}$ to $100\times10^6\,\mathrm{M}_{\odot}$ with a step of $0.1\times10^6\,\mathrm{M}_{\odot}$. The parameter space thus consists of a grid of $3\times3\times18\times1000=162,000$ points. We computed the response functions in the iron line and continuum bands for each point in the parameter space, and used Eq.\[eq15\] to estimate the corresponding model B time-lag spectrum. We then calculated $\chi^2(a,\theta,h,M_{\mathrm{BH}})$ on the parameter space, based on the observed time-lag spectra of each source, according to Eq.\[eq16\]. The resulting grid of $\chi^2$ points was subsequently interpolated quadratically in the parameters $\{a,\theta\}$, and cubically in $\{h,M_{\mathrm{BH}}\}$. We finally used the continuous, interpolated $\chi^2(a,\theta,h,M_{\mathrm{BH}})$ space to obtain $\chi^2_{\mathrm{min}}$, along with the corresponding best-fit values and confidence intervals of the model B parameters. Model A best-fit results {#subsec61} ------------------------ Model A fits the observed time-lag spectra well for all sources. Our best-fit results are listed in Cols.2–4 of Table\[table3\]. 
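The model A fitting procedure (Levenberg-Marquardt minimisation of the $\chi^2$ statistic of Eq.\[eq16\]) can be sketched in a few lines. The data below are synthetic, and the parameter values ($A=100\,\mathrm{sec}$, $s=0.5$) are purely illustrative, not taken from any of our fits:

```python
import numpy as np
from scipy.optimize import least_squares

def model_a(nu, A, s):
    """Continuum time-lag model (Eq. 17): tau(nu) = -A * (nu / 1e-4 Hz)^(-s)."""
    return -A * (nu / 1e-4) ** (-s)

def residuals(params, nu, lag, lag_err):
    # chi^2(A, s) of Eq. 16 is the sum of these squared normalised residuals.
    A, s = params
    return (lag - model_a(nu, A, s)) / lag_err

# Noise-free synthetic "observed" lags with hypothetical values A = 100 sec,
# s = 0.5, on the frequency range probed by the data (~5e-5 to 1e-3 Hz).
nu = np.logspace(np.log10(5e-5), -3.0, 10)
lag = model_a(nu, 100.0, 0.5)
lag_err = np.full_like(nu, 30.0)

# Levenberg-Marquardt minimisation, as used for the model A fits.
fit = least_squares(residuals, x0=[50.0, 0.3], args=(nu, lag, lag_err), method="lm")
A_best, s_best = fit.x
chi2_min = np.sum(fit.fun ** 2)
```

Confidence intervals then follow from mapping out the $\chi^2$ surface around $(A_{\rm best}, s_{\rm best})$.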
The best-fit models are shown as continuous brown lines in Figs.\[fig2\] and \[fig3\]. The observed time-lag spectra of 1H 0707–495, MCG–6-30-15, and Mrk 766 are flat. The power-law fit is therefore degenerate for these sources, in the sense that the best-fit $s$ value is $\sim0$. In Col.3 of Table\[table3\] we thus list only the upper limit on $s$. We re-fitted the observed time-lag spectra of these sources to a constant delay (i.e. we set $s=0$). The resulting fits are statistically acceptable ($\chi^2_{\mathrm{min}}/\mathrm{dof}=15.9/13$, 19.0/24, and 19.5/14 for 1H 0707–495, MCG–6-30-15, and Mrk 766, respectively), and the best-fit normalization (i.e. the best-fit constant delay in this case) is $A=-29^{+21}_{-20}\,\mathrm{sec}$, $-21\pm9\,\mathrm{sec}$, and $-71\pm19\,\mathrm{sec}$ for 1H 0707–495, MCG–6-30-15, and Mrk 766, respectively. Assuming a Gaussian distribution for the best-fit $A$ values, the best-fit errors for $s=0$ can be used to estimate the probability that $A=0$ (i.e. the probability that the observed time-lag spectrum is identically zero). We find a probability of 16%, 2%, and 0.02% for 1H 0707–495, MCG–6-30-15, and Mrk 766, respectively (these are rough estimates and should only be considered indicative). The time-lag spectra of the remaining sources show evidence of curvature at low frequencies ($\lesssim2\times10^{-4}\,\mathrm{Hz}$), in the sense that model A requires a non-zero best-fit $s$ value.

Model B best-fit results {#subsec62}
------------------------

Model B fits the observed time-lag spectra of all sources well. When allowing all four model B parameters to be free during the fitting procedure, we found that $a$ and $\theta$ are unconstrained, in the sense that even their 68% confidence interval is larger than the broadest allowed range for the parameter value ($0-1$ and $20^{\circ}-60^{\circ}$ for $a$ and $\theta$, respectively).
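The probability estimates quoted above for the constant-delay ($s=0$) fits follow from a two-sided Gaussian tail probability. A minimal sketch, using the best-fit constant delays from the text with symmetrised errors:

```python
from scipy.stats import norm

def prob_zero_lag(A_best, A_err):
    """Two-sided Gaussian probability of obtaining |A| >= |A_best|
    if the true constant delay is zero."""
    return 2.0 * norm.sf(abs(A_best) / A_err)

# Best-fit constant delays (sec) for s = 0 and symmetrised 1-sigma errors.
p_1h0707 = prob_zero_lag(-29.0, 20.5)  # ~16%
p_mcg6 = prob_zero_lag(-21.0, 9.0)     # ~2%
p_mrk766 = prob_zero_lag(-71.0, 19.0)  # ~0.02%
```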
The reason is the large errors of the observed time-lags and, as discussed in Appendix \[appd\], the weak dependence of the model B time-lag spectra on these parameters. Furthermore, for most sources there is a degeneracy between $h$ and $M_{\mathrm{BH}}$, which is caused by the similar dependence of the model B time-lag spectra on these parameters (see Appendix \[appd\]). To constrain $a$ and $h$ as well as possible, we set $\theta=40^{\circ}$ (the mean value found for a similar sample of sources studied by E14) for all sources, and fixed $M_{\mathrm{BH}}$ to the values listed in Col.2 of Table\[table1\]. We then repeated the fitting procedure to obtain the best-fit $a$ and $h$ values. The best-fit models are shown as dashed red lines in Figs.\[fig2\] and \[fig3\]. Even with $\theta$ and $M_{\mathrm{BH}}$ fixed, we found that $a$ cannot be constrained. In the last column of Table\[table3\] we list the $\chi^2_{\mathrm{min}}/\mathrm{dof}$ value when $a$ was left free during the fitting procedure, while in parentheses we list the corresponding $\chi^2_{\mathrm{min}}$ values when we froze $a$ at 0 and 1, respectively. They are very similar for almost all sources, indicating that we are unable to constrain $a$. MCG–6-30-15 is an exception: for this source we obtained a best-fit $a$ value of $0.3^{+0.3}_{-0.2}$. The 95% upper limit is 0.8, which is somewhat inconsistent with the results obtained by modelling the X-ray spectrum of this source, which requires $a\sim1$ [e.g. @2014ApJ...787...83M]. This is due to our choice of $M_{\mathrm{BH}}$ during the fitting procedure. For example, when we set $M_{\mathrm{BH}}=3\times10^6\,\mathrm{M}_{\odot}$ (which is consistent, within the errors, with the value listed in Col.2 of Table\[table1\]), the best-fit value of $a$ is 0.4, while the 95% confidence interval ranges from 0 to 1. Column 5 of Table\[table3\] lists our best-fit results for $h$.
The X-ray source height is well constrained only for Mrk 766 and Mrk 335. For 1H 0707–495 and MCG–6-30-15 the best-fit $h$ values are $4r_{\mathrm{g}}$ and $2.3r_{\mathrm{g}}$ (the lowest allowed fitting value for $h$), respectively. The lower 68% limit is $2.3r_{\rm g}$ for 1H 0707–495. The upper limit is $20r_{\mathrm{g}}$ and $3r_{\mathrm{g}}$ for 1H 0707–495 and MCG–6-30-15, respectively. For NGC 4051 we obtained a best-fit $h$ value of $17r_{\mathrm{g}}$, with lower and upper 68% limits of $2.3r_{\mathrm{g}}$ and $30r_{\mathrm{g}}$, respectively. Given that the lower limit is equal to the lowest value we considered for $h$, we list only the upper limit on $h$ for these three sources. The best-fit $h$ values for Ark 564 and NGC 7314 are $83r_{\mathrm{g}}$ and $100r_{\mathrm{g}}$ (the highest allowed fitting value for $h$), respectively. The X-ray source height is consistent with the value of $100r_{\mathrm{g}}$ for Ark 564. The lower limit on the best-fit $h$ value is $28r_{\mathrm{g}}$ and $82r_{\mathrm{g}}$ for Ark 564 and NGC 7314, respectively. We therefore list the lower limit of this parameter for these two sources in Table\[table3\]. The high $h$ values arise because the observed time-lag spectra of these sources increase (in magnitude) with decreasing frequency. The lower limit of $h$ for NGC 7314 is higher than for Ark 564 because $M_{\mathrm{BH}}$ is lower in the former source. However, it is not certain that these two sources have a large X-ray source height. To investigate this further, we fitted their observed time-lag spectra with a model A+B combination. We kept the X-ray source height fixed at $h=3.7r_\mathrm{g}$ (the mean height found by E14), set $\theta=40^{\circ}$, fixed $M_{\mathrm{BH}}$ to the respective values listed in Col.2 of Table\[table1\], and fixed $a=1$.
In effect, we kept all the model B parameters fixed during the fit (as explained above, we cannot obtain a meaningful fit when all the model A and B parameters are left free), so that the number of degrees of freedom is the same as when we fit the data with model A alone. Our best-fit results in this case are $A_{\rm Ark\,564}=172^{+62}_{-55}\,\mathrm{sec}$, $A_{\rm NGC\,7314}=69^{+38}_{-41}\,\mathrm{sec}$, $s_{\rm Ark\,564}=1.7^{+0.3}_{-0.4}$, and $s_{\rm NGC\,7314}=1.6^{+1.0}_{-0.6}$. As expected, the presence of the reverberation component lowers the best-fit $A$ values and steepens the best-fit $s$ values relative to the respective best-fit model A values listed in Table\[table3\]. The quality of the combined model A+B fit is similar to that of model A: $\chi^2_{\mathrm{min}}/\mathrm{dof}=14.6/18$ and $26.6/20$ in the case of Ark 564 and NGC 7314, respectively. This result shows that the observed iron line vs continuum time-lags of Ark 564 and NGC 7314 can be fitted well by a combination of a continuum plus a reverberation component, the latter corresponding to a low $h$ and high $a$ value.

Discussion and conclusions {#sec7}
==========================

We performed a systematic analysis of the iron line vs continuum ($5-7$ vs $2-4\,\mathrm{keV}$) time-lags in seven AGN. The AGN we studied are X-ray bright and highly variable. The BH mass estimates for these sources are $\lesssim5\times10^6\,\mathrm{M}_{\odot}$, except for Mrk 335, which has an estimate of $\sim3\times10^7\,\mathrm{M}_{\odot}$ (note that these mass estimates are determined from optical techniques like reverberation mapping, and are not derived here). Our measurements are among the best that can currently be achieved, and are likely to remain so for many years to come (with current X-ray satellites).
Our choice of focusing on the iron line band was motivated by the simple fact that its existence indicates the presence of an X-ray reflection component (either from the disc or from distant material) in this band. It is thus a clean band, ideal for investigating whether X-ray reflection operates in the inner part of the putative accretion disc. However, the low number of photons in this band undermines this advantage. Nevertheless, we found that the iron line vs continuum time-lags are consistent with the simplest X-ray reflection scenario. They also imply X-ray source heights that are close to those derived using data from lower energy bands. This result supports the hypothesis that the X-ray soft excess in these sources is a reflection component (see the relevant discussion in Sect.\[subsec73\]).

Estimation of time-lag spectra {#subsec71}
------------------------------

We used all the available archival *XMM-Newton* data for seven X-ray bright and highly variable Seyfert galaxies and employed standard Fourier techniques to estimate the iron line vs continuum time-lag spectrum of each source. These sources have a large ($\gtrsim0.3\,\mathrm{Ms}$) amount of archival *XMM-Newton* data. We also took into account the results of extensive numerical simulations by EP16, who studied the effects of the light-curve characteristics (duration, time bin size, and Poisson noise) on the statistical properties of the traditional time-lag estimators, assuming various intrinsic time-lag spectra commonly observed between X-ray light curves of accreting systems. EP16 found the following:

- (a) Time-lag estimates should be computed at frequencies lower than half the Nyquist frequency. This minimises the effects of light-curve binning on their mean values.
- (b) The cross-periodogram should not be binned over neighbouring frequencies, as this may introduce significant bias that can only be taken into account when a model CS (and not just a model time-lag spectrum) is assumed.

- (c) Time-lags should be estimated from a cross-periodogram that is averaged over pairs of continuous light-curve segments with the same duration.

- (d) If the number of segments, $m$, is $\gtrsim10$, the time-lag estimates will have known errors and approximately follow a Gaussian distribution, provided they are estimated at frequencies at which the sample coherence is $\gtrsim1.2/(1+0.2m)$. This minimises the effects of Poisson noise on their mean values.

Following these results, we chose the segment duration to be $\sim20\,\mathrm{ks}$. This limits the minimum frequency that can be reliably probed to $\sim5\times10^{-5}\,\mathrm{Hz}$. A longer segment duration would allow us to probe even lower frequencies, but at the same time it would decrease the number of available segments and, consequently, increase the error of the resulting time-lag estimates. According to EP16, if the segment duration is $\gtrsim20\,\mathrm{ks}$, then the time-lag bias should be $\lesssim15\%$ of the intrinsic values for the model CS they considered. In Appendix \[appe\], we demonstrate that we do not expect the time-lag bias to be a problem in our study. The maximum frequency that can be reliably probed by the current data is set by point (d) above. The frequency at which the coherence becomes lower than the critical value of $\sim1.2/(1+0.2m)$ depends on the number of segments and is mainly determined by the energy band with the lowest average count rate. This is the iron line band in all cases; the mean count rate of all light curves in our sample is $0.38\pm0.27\,\mathrm{cts/sec}$ and $1.49\pm0.98\,\mathrm{cts/sec}$ for the iron line and continuum bands, respectively. We found that the maximum frequency is $\lesssim10^{-3}\,\mathrm{Hz}$ for all sources.
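The two frequency limits above follow directly from the segment duration and the coherence criterion of point (d). A minimal sketch (the value $m=28$ is the MCG–6-30-15 segment count quoted later in the text):

```python
def min_frequency(segment_duration):
    """Lowest frequency probed by segments of a given duration (in seconds)."""
    return 1.0 / segment_duration

def critical_coherence(m):
    """Coherence threshold of point (d): below 1.2/(1 + 0.2 m), time-lag
    estimates are no longer approximately Gaussian with known errors."""
    return 1.2 / (1.0 + 0.2 * m)

nu_min = min_frequency(20e3)         # 20 ks segments -> 5e-5 Hz
gamma_crit = critical_coherence(28)  # m = 28 segments -> ~0.18
```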
Given that the sources in our sample are X-ray bright and have a large amount of archival data, the available *XMM-Newton* data allow for the reliable estimation of iron line vs continuum time-lags at frequencies between $\sim5\times10^{-5}\,\mathrm{Hz}$ and $\sim10^{-3}\,\mathrm{Hz}$. A direct comparison with published iron line vs continuum time-lags for the sources in our sample is complicated by three factors: the choice of energy bands, the *XMM-Newton* observations used to estimate them, and the cross-periodogram smoothing and/or averaging scheme employed to estimate the time-lags. Similar energy bands to ours have been used for Mrk 335, NGC 7314, NGC 4151, and MCG–5-23-16. For Mrk 335, the time-lag magnitudes and errors we find are consistent with those reported by CY15, although they only used data from a single *XMM-Newton* observation, which corresponds to $\sim40\%$ of the data we used. The iron line vs continuum time-lags reported by @2013ApJ...767..121Z for NGC 7314 are also roughly consistent in magnitude with our findings. They used data from only two *XMM-Newton* observations, which corresponds to $\sim30\%$ of the data we used. Their time-lags are larger (in magnitude) than ours at low frequencies, and they provide time-lag estimates at frequencies lower than ours. Owing to the limited length of the data sets they used, their low-frequency estimates must have been obtained by averaging a small number of cross-spectral estimates at neighbouring frequencies. As a result, according to EP16, these estimates should be far from Gaussian-distributed, and the frequently used time-lag error prescription of @1999ApJ...510..874N should severely underestimate the true scatter of these estimates around their mean.
We did not estimate time-lags for NGC 4151 and MCG–5-23-16, since the available *XMM-Newton* archival light curves at the time we were analysing the data were not long enough to obtain reliable (in the sense explained in Sect.\[sec3\]) time-lag estimates. Modelling the observed time-lag spectra {#subsec72} --------------------------------------- We considered two different model time-lag spectra: (a) a power-law time-lag spectrum that describes delays between X-ray continuum variations in different energy bands (model A), and (b) a reverberation time-lag spectrum that describes delays between the X-ray continuum and reprocessed disc emission in a lamp-post geometry (model B). The first is a phenomenological model, while the second is a physical model that depends on the central source geometry. We calculated the model B time-lag spectra by determining accurate disc response functions in the iron line and continuum bands. We fixed the photon index of the X-ray source at a value of 2 and assumed a neutral, prograde disc with an iron abundance equal to the solar value, around a spinning BH. The inner disc radius was set to the location of the ISCO, and the outer radius was fixed at $10^3r_{\mathrm{g}}$. We took all relativistic effects into account and considered the total reprocessed disc emission (and not just the photons initially emitted by the disc at $6.4\,\mathrm{keV}$) in both the iron line and continuum bands. In this respect, our modelling is more accurate than previous attempts (e.g. E14 and C14). We found that the model B time-lag spectra have a weak dependence on the BH spin and viewing angle. On the other hand, they depend strongly on the BH mass and X-ray source height. These parameters affect the model B time-lag spectra in a similar way. As the height increases, the model B time-lag spectra flatten at lower frequencies, and to a lower level; the same effect can also be produced by a higher BH mass for the same height (in units of $r_{\mathrm{g}}$). 
In addition, the characteristic flattening of the reverberation time-lag spectra to a constant value at low frequencies depends on the outer disc radius. Therefore, the magnitude of this constant level cannot be used in a straightforward way to determine either the X-ray source height or the outer disc radius, even when the BH mass is known. Our modelling can be improved in many ways. For example, we could let the slope of the X-ray continuum spectrum, as well as the iron abundance, be free parameters. These parameters mainly influence the amplitude of the disc response function (as they affect the reflection fraction in each energy band). In this case, these parameters should affect the response functions similarly to the BH spin (at small heights). Consequently, we do not expect the difference in the resulting model time-lag spectra to be significant (see the bottom left panel in Fig.\[figb1\]). As shown by CY15, for instance, disc ionisation also affects the model time-lag spectra and should be included in the determination of the response functions. More importantly, however, the main limitation of our modelling is the adopted geometry. The lamp-post geometry is a simplification of the AGN X-ray emitting region. A different geometry can significantly affect the shape and amplitude of the disc response function, and as a result, it can significantly alter the resulting model time-lag spectrum (see the discussion in Appendices \[appb\] and \[appd\]). We adopted the lamp-post geometry (as has been done by many authors in the past) because the estimation of the disc response is relatively straightforward in this case. Furthermore, our intention was to investigate whether the observed iron line vs continuum time-lag spectra are consistent with the simplest theoretical reverberation model, and to see which constraints they can impose on the X-ray source and disc geometry.
In retrospect, given the results of our study (see the discussion below), the current data sets fail to distinguish between the predictions of the lamp-post model and those from a more detailed approach. Model-fit results {#subsec73} ----------------- We fitted models A and B separately to the observed time-lag spectra because given their quality (limited frequency range and large errors), we would not have been able to constrain the lamp-post parameters by fitting a combined model A+B to the data. Both models provide statistically acceptable fits. We therefore cannot prefer one model based on the quality of the model fits. However, our best-fit results do provide useful hints. For example, the best-fit model A power-law index values for 1H 0707–495, MCG–6-30-15, and Mrk 766 are consistent with zero. The observed time-lags in these sources are flat, and the best-fit model A reduces to just a constant. This result (i.e. that the best-fit power-law model to the data is a horizontal line) leads us to believe that the case for X-ray reverberation time-lags is strong, at least in these three sources. If the observed time-lags were indeed representative of continuum time-lags, we would expect a non-zero best-fit slope. As we showed in Sect.\[subsec43\], the observed time-lags should have both a continuum and a reverberation component. The lack of a significant detection of the expected continuum component for these three sources (at least) is not surprising and can be explained physically. The continuum time-lags depend on the energy separation between the chosen energy bands, which is small in our case. Our best-fit model A amplitude values are systematically lower than the respective best-fit values found by E14. This is what we should expect for continuum time-lags, as the energy separation between the iron line and continuum bands is smaller than the separation between the $1.5-4$ and $0.3-1\,\mathrm{keV}$ bands used by E14. 
When fitting the observed time-lags to the model B time-lag spectrum, we found that the BH spin and inclination cannot be constrained. This is due to the large errors of the time-lag estimates and the weak dependence of the model B time-lag spectra on these parameters. Furthermore, there is a degeneracy between the X-ray source height and the BH mass that is due to the similar dependence of the model B time-lag spectrum on these parameters. We thus froze the BH mass of each source to the most accurate and reliable value we could find in the literature and were then able to constrain the X-ray source height. The observed iron line vs continuum time-lag spectra either require, or are consistent with, small X-ray source heights. For example, the best-fit height estimates are $\lesssim10r_g$ in three sources. The best-fit height for NGC 4051 is also consistent with such a low value. Even for Ark 564 and NGC 7314, the data are consistent with an X-ray source height as small as $\sim4r_{\mathrm{g}}$ when we considered a combined model A+B. Figure\[fig4\] shows our best-fit $h$ values versus the E14 best-fit results. The red dashed line indicates the one-to-one relation. Although most of the points are located above this line, given the large uncertainties, the plot suggests a broad agreement with the results of E14. The direct comparison is complicated because we considered more data sets than E14 for some sources. @2013MNRAS.435.1511A showed that the soft lags of NGC 4051 vary significantly and systematically with source flux. In our case, we cannot fit model B to time-lag spectra estimated from low- and high-flux segments, as the uncertainty on the model parameters would be significantly larger than what we obtain when we fit the overall time-lags.
Nevertheless, if this trend is present in all AGN and in the iron line vs continuum time-lags as well, then when we average over data with a wide flux range, segments with the highest flux may dominate the cross-periodogram, as they may be associated with higher amplitude variations [due to the rms-flux relation; @2001MNRAS.323L..26U]. If the data sets we considered exhibit a wider flux variability range than those of E14, differences in the best-fit results may be easier to explain. In conclusion, the soft excess vs continuum time-lags are consistent with the iron line vs continuum time-lags we presented here, in that they both support the hypothesis of disc reflection from an X-ray source that is located very close to the disc and the central BH.

![Comparison between the best-fit X-ray source heights obtained by fitting the iron line vs continuum time-lags (vertical axis; this work) with those obtained by fitting the soft excess vs continuum time-lags (horizontal axis; E14).[]{data-label="fig4"}](figures/fig4.pdf){width="\hsize"}

Implications for the X-ray reflection scenario {#subsec74}
----------------------------------------------

Except for the source height, we are unable to constrain additional reverberation model parameters such as the BH mass and spin, the viewing angle, and the outer disc radius. Accurate determination of these parameters would require a significant reduction in the errors of the time-lag estimates and/or an increase in the frequency range that can be reliably probed. However, this requires a substantial increase in the number of X-ray observations of AGN. For example, to probe frequencies lower by a factor of $\sim5$ (i.e. to reach a low-frequency limit of $\sim10^{-5}\,\mathrm{Hz}$), segments with a duration of $\sim100\,\mathrm{ks}$ are required.
Assuming the number of segments used for the time-lag estimation remains the same as in the present work, this would require the net *XMM-Newton* exposure times to increase by a factor of $\sim5$ for each source (on average). This would, however, neither decrease the error of the time-lag estimates nor allow us to probe higher frequencies, since both require an increase in the number of segments. For example, to probe frequencies up to $\sim2\times10^{-3}\,\mathrm{Hz}$ for MCG–6-30-15, the critical coherence value has to decrease from its present value of $\sim0.18$ to $\sim0.05$ (see Fig.\[fig1\]). This requires the number of segments to increase from 28 to 115, which corresponds to an increase in the net *XMM-Newton* exposure times by a factor of $\sim4$. This would, in turn, reduce the errors of the time-lag estimates by a factor of $\sim2$. In this case, however, we would be unable to probe lower frequencies, since this requires segments of longer duration. One possibility to extend the frequency range of the observed time-lag spectra would be to use the large volume of available archival data from past and current low-Earth orbit satellites (e.g. *ASCA*, *Chandra*, and *Suzaku*). The idea would be to bin the respective light curves at one orbital period ($\sim96\,\mathrm{min}$) to probe low frequencies, although this requires a large number of long observations. For instance, estimating time-lags at frequencies lower than $\sim10^{-5}\,\mathrm{Hz}$ would require an ensemble of at least ten observations, each at least a few days long. We are currently investigating this possibility to estimate time-lag spectra over a wider frequency range.
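The MCG–6-30-15 arithmetic above can be checked in a few lines: the required exposure scales linearly with the number of segments, while the time-lag errors scale as $1/\sqrt{m}$:

```python
import math

# MCG-6-30-15 numbers from the text: 28 segments now, 115 needed to push the
# critical coherence 1.2/(1 + 0.2 m) down to ~0.05.
m_now, m_needed = 28, 115

exposure_factor = m_needed / m_now             # ~4x more net exposure
error_reduction = math.sqrt(m_needed / m_now)  # time-lag errors shrink by ~2x
gamma_crit_new = 1.2 / (1.0 + 0.2 * m_needed)  # new coherence threshold ~0.05
```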
Given the quality of the present data sets in the iron line band and the resulting iron line vs continuum time-lag spectra, the need for constructing more sophisticated theoretical disc response functions is questionable. It seems that the best way to test the X-ray reverberation scenario and significantly constrain the model parameters is to focus on modelling the soft excess vs continuum time-lags, where the S/N of the existing light curves in the soft band is much higher than that in the iron line band. This would require considering the ionisation structure of the disc in the construction of appropriate disc response functions. Modelling the energy dependence of the time-lag spectra is another possibility. However, we note that the errors of the resulting time-lag estimates are dictated by the energy band with the lower average count rate. As such, the use of light curves over a broad energy band as a reference should not significantly lower the errors of the resulting time-lag estimates, even at the lowest possible frequencies. We plan to model the energy dependence of the observed time-lag spectra in a future work, where we will also consider *NuSTAR* data to study time-lags between the Compton hump and the X-ray continuum. We thank the referee for his/her suggestions, which significantly improved the quality and clarity of the manuscript. This work was supported by the AGNQUEST project, which is implemented under the Aristeia II Action of the Education and Lifelong Learning operational programme of the GSRT, Greece. The research leading to these results has also received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n$^{\rm o}$ 312789, and by the grant PIRSES-GA-2012-31578 EuroCal.
Model CS {#appa} ======== Substituting Eqs.\[eq10\] and \[eq11\] into Eq.\[eq12\], we can compute the CCF between the iron line and continuum bands as follows: $$\begin{aligned} \label{eqa1} \nonumber R_{5-7,2-4}(\tau)= & R^{(\mathrm{c})}_{5-7,2-4}(\tau) \\ \nonumber & +\int_{-\infty}^{\infty}\Psi_{2-4}(t')R^{(\mathrm{c})}_{5-7,2-4}(\tau+t')\mathrm{d}t' \\ \nonumber & +\int_{-\infty}^{\infty}\Psi_{5-7}(t')R^{(\mathrm{c})}_{5-7,2-4}(\tau-t')\mathrm{d}t' \\ \nonumber & +\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\Psi_{2-4}(t')\Psi_{5-7}(t'') \\ & \quad\times R^{(\mathrm{c})}_{5-7,2-4}(\tau+t'-t'')\mathrm{d}t'\mathrm{d}t'',\end{aligned}$$ where $R^{(\mathrm{c})}_{5-7,2-4}(\tau)\equiv\mathrm{E}\{[\mathscr{F}^{(\mathrm{c})}_{2-4}(t)-\mu_{2-4}][\mathscr{F}^{(\mathrm{c})}_{5-7}(t+\tau)-\mu_{5-7}]\}$ is the CCF of the continuum emission. Consequently, the intrinsic CS is $$\begin{aligned} \label{eqa2} \nonumber C_{5-7,2-4}(\nu) &\equiv\int_{-\infty}^{\infty}R_{5-7,2-4}(\tau)\mathrm{e}^{-\mathrm{i}2\pi\nu\tau}\mathrm{d}\tau \\ \nonumber &=\int_{-\infty}^{\infty}R^{(\mathrm{c})}_{5-7,2-4}(\tau)\mathrm{e}^{-\mathrm{i}2\pi\nu\tau}\mathrm{d}\tau \\ \nonumber & \quad +\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\Psi_{2-4}(t')R^{(\mathrm{c})}_{5-7,2-4}(\tau+t')\mathrm{e}^{-\mathrm{i}2\pi\nu\tau}\mathrm{d}t'\mathrm{d}\tau \\ \nonumber & \quad +\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\Psi_{5-7}(t')R^{(\mathrm{c})}_{5-7,2-4}(\tau-t')\mathrm{e}^{-\mathrm{i}2\pi\nu\tau}\mathrm{d}t'\mathrm{d}\tau \\ \nonumber & \quad +\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\Psi_{2-4}(t')\Psi_{5-7}(t'') \\ & \quad\quad \times R^{(\mathrm{c})}_{5-7,2-4}(\tau+t'-t'')\mathrm{e}^{-\mathrm{i}2\pi\nu\tau}\mathrm{d}t'\mathrm{d}t''\mathrm{d}\tau.\end{aligned}$$ Setting $C^{(\mathrm{c})}_{5-7,2-4}(\nu)\equiv\int_{-\infty}^{\infty}R^{(\mathrm{c})}_{5-7,2-4}(\tau)\mathrm{e}^{-\mathrm{i}2\pi\nu\tau}\mathrm{d}\tau$ and applying the convolution theorem, Eq.\[eqa2\] becomes 
$$\begin{aligned} \label{eqa3} \nonumber C_{5-7,2-4}(\nu) &=C^{(\mathrm{c})}_{5-7,2-4}(\nu) \\ \nonumber & \quad +\tilde{\Psi}^{*}_{2-4}(\nu)C^{(\mathrm{c})}_{5-7,2-4}(\nu) \\ \nonumber & \quad +\tilde{\Psi}_{5-7}(\nu)C^{(\mathrm{c})}_{5-7,2-4}(\nu) \\ \nonumber & \quad +\tilde{\Psi}^{*}_{2-4}(\nu)\tilde{\Psi}_{5-7}(\nu)C^{(\mathrm{c})}_{5-7,2-4}(\nu) \\ &=C^{(\mathrm{c})}_{5-7,2-4}(\nu)[1+\tilde{\Psi}_{5-7}(\nu)][1+\tilde{\Psi}_{2-4}(\nu)]^{*}.\end{aligned}$$ Subsequently, the model time-lag spectrum is given by $$\begin{aligned} \label{eqa4} \nonumber \tau_{5-7,2-4}(\nu) &\equiv\frac{1}{2\pi\nu}\mathrm{arg}\{C^{(\mathrm{c})}_{5-7,2-4}(\nu)[1+\tilde{\Psi}_{5-7}(\nu)][1+\tilde{\Psi}_{2-4}(\nu)]^{*}\} \\ \nonumber &=\frac{\mathrm{arg}[C^{(\mathrm{c})}_{5-7,2-4}(\nu)]+\mathrm{arg}\{[1+\tilde{\Psi}_{5-7}(\nu)][1+\tilde{\Psi}_{2-4}(\nu)]^{*}\}}{2\pi\nu} \\ &=\tau_{5-7,2-4}^{(\mathrm{c})}(\nu)+\tau_{5-7,2-4}^{(\mathrm{r})}(\nu),\end{aligned}$$ where we used the property $\mathrm{arg}[z_1z_2]=\mathrm{arg}[z_1]+\mathrm{arg}[z_2]$, for the complex numbers $z_1$, $z_2$. The functions appearing on the right-hand side of Eq.\[eqa4\] are defined as $\tau_{5-7,2-4}^{(\mathrm{c})}(\nu)\equiv(2\pi\nu)^{-1}\mathrm{arg}[C^{(\mathrm{c})}_{5-7,2-4}(\nu)]$, and $\tau_{5-7,2-4}^{(\mathrm{r})}(\nu)\equiv(2\pi\nu)^{-1}\mathrm{arg}\{[1+\tilde{\Psi}_{5-7}(\nu)][1+\tilde{\Psi}_{2-4}(\nu)]^{*}\}$. The total time-lag spectrum is therefore equal to the sum of two time-lag spectra; one due to delays between variations of different energy bands in the continuum, $\tau_{5-7,2-4}^{(\mathrm{c})}(\nu)$, and one due to delays between the reprocessed disc emission and the continuum, $\tau_{5-7,2-4}^{(\mathrm{r})}(\nu)$. 
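The decomposition above can be evaluated numerically. A minimal sketch of the reverberation term $\tau^{(\mathrm{r})}_{5-7,2-4}(\nu)$, using toy top-hat response functions (illustrative only, not our computed general-relativistic responses):

```python
import numpy as np

def reverberation_lags(t, psi_line, psi_cont, nu):
    """tau^(r)(nu) = arg{[1 + Psi~_5-7(nu)][1 + Psi~_2-4(nu)]*} / (2 pi nu),
    where Psi~ is the Fourier transform of each response function (Eq. A4)."""
    dt = t[1] - t[0]
    kernel = np.exp(-2j * np.pi * np.outer(nu, t))  # e^(-i 2 pi nu t)
    ft_line = kernel @ psi_line * dt                # Riemann-sum approximation
    ft_cont = kernel @ psi_cont * dt                # of the continuous FT
    return np.angle((1.0 + ft_line) * np.conj(1.0 + ft_cont)) / (2.0 * np.pi * nu)

# Toy top-hat responses: the iron line band responds later and more strongly
# (integral 0.3 vs 0.03, matching our normalisation convention) than the
# continuum band.
t = np.arange(0.0, 2000.0, 1.0)  # sec
psi_line = np.where((t >= 100) & (t < 600), 0.3 / 500, 0.0)
psi_cont = np.where((t >= 50) & (t < 300), 0.03 / 250, 0.0)
nu = np.array([5e-5, 1e-4, 5e-4, 1e-3])  # Hz

lags = reverberation_lags(t, psi_line, psi_cont, nu)
# Negative values: the 5-7 keV band is delayed with respect to the 2-4 keV
# band, matching the sign convention of the observed time-lag spectra.
```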
The function $\tau_{5-7,2-4}^{(\mathrm{r})}(\nu)$ can be written as follows: $$\begin{aligned} \label{eqa5} \nonumber \tau^{(\mathrm{r})}_{5-7,2-4}(\nu) &\equiv\frac{1}{2\pi\nu}\mathrm{arg}\{[1+\tilde{\Psi}_{5-7}(\nu)][1+\tilde{\Psi}_{2-4}(\nu)]^{*}\} \\ &=\frac{1}{2\pi\nu}\mathrm{arctan}\left\{\frac{\Im\{[1+\tilde{\Psi}_{5-7}(\nu)][1+\tilde{\Psi}^{*}_{2-4}(\nu)]\}}{\Re\{[1+\tilde{\Psi}_{5-7}(\nu)][1+\tilde{\Psi}^{*}_{2-4}(\nu)]\}}\right\},\end{aligned}$$ where $$\begin{aligned} \label{eqa6} \nonumber \Re\{[1+\tilde{\Psi}_{5-7}(\nu)][1+\tilde{\Psi}^{*}_{2-4}(\nu)]\}= & 1+\Re[\tilde{\Psi}_{2-4}(\nu)]+\Re[\tilde{\Psi}_{5-7}(\nu)] \\ \nonumber & +\Re[\tilde{\Psi}_{2-4}(\nu)]\Re[\tilde{\Psi}_{5-7}(\nu)] \\ & +\Im[\tilde{\Psi}_{2-4}(\nu)]\Im[\tilde{\Psi}_{5-7}(\nu)],\end{aligned}$$ $$\begin{aligned} \label{eqa7} \nonumber \Im\{[1+\tilde{\Psi}_{5-7}(\nu)][1+\tilde{\Psi}^{*}_{2-4}(\nu)]\}= & \Im[\tilde{\Psi}_{2-4}(\nu)]-\Im[\tilde{\Psi}_{5-7}(\nu)] \\ \nonumber & -\Re[\tilde{\Psi}_{2-4}(\nu)]\Im[\tilde{\Psi}_{5-7}(\nu)] \\ & +\Im[\tilde{\Psi}_{2-4}(\nu)]\Re[\tilde{\Psi}_{5-7}(\nu)],\end{aligned}$$ $\Re[\tilde{\Psi}_{\mathcal{E}}(\nu)]=\int_{-\infty}^{\infty}\Psi_{\mathcal{E}}(t)\cos(2\pi\nu t)\mathrm{d}t$, and $\Im[\tilde{\Psi}_{\mathcal{E}}(\nu)]=-\int_{-\infty}^{\infty}\Psi_{\mathcal{E}}(t)\sin(2\pi\nu t)\mathrm{d}t$. In other words, the model time-lag spectrum depends in a non-trivial way on the Fourier transforms of the iron line and continuum response functions. Disc response functions {#appb} ======================= ![image](figures/figb1.pdf){width="15.0cm"} The top panels in Fig.\[figb1\] show various disc response functions in the energy bands $5-7\,\mathrm{keV}$, $\Psi_{5-7}(t)$ (solid lines), and $2-4\,\mathrm{keV}$, $\Psi_{2-4}(t)$ (dashed lines), in the case when the disc extends to $10^3r_{\mathrm{g}}$, and $M_{\rm BH}=10^6\,\mathrm{M}_{\odot}$. 
The horizontal axes show time in the observer frame, with the origin ($t=0$) corresponding to the beginning of the primary X-ray flare (which, as we mentioned in Sect.\[subsec44\], ends abruptly at $t=1t_{\mathrm{g}}$). The response functions plotted in these panels are defined such that $\int_{-\infty}^{\infty}\Psi_{\mathcal{E}}(t)\mathrm{d}t$ is equal to the observed ratio of reprocessed-to-continuum photons. The parameters $a$, $\theta$, $h$, and $r_{\mathrm{out}}$ affect both the shape and amplitude of the response function. The parameter $M_{\mathrm{BH}}$ changes its shape (by either uniformly stretching or contracting it in the horizontal direction, depending on whether $M_{\mathrm{BH}}$ is increased or decreased), but not its amplitude. In all cases the response function shows an initial sharp rise at a time $t_{\mathrm{rise}}>0$, followed by a second peak at a later time, and a final gradual decline up to a maximum time $t_{\mathrm{max}}$. The initial rise time corresponds to the instant the observer detects the first reflected emission from the near side of the disc. The second peak appears when the observer detects emission from the far side of the disc. At longer timescales we detect emission from the outer disc radii, where the reflection amplitude is reduced, and hence $\Psi_{\mathcal{E}}(t)$ gradually decreases up until $t_{\mathrm{max}}$, when we detect the last emission from the edge of the far side of the disc, and $\Psi_{\mathcal{E}}(t)$ abruptly drops to zero. $t_{\mathrm{max}}$ depends mainly on $h$ and $r_{\mathrm{out}}$. If $r_{\mathrm{out}}\rightarrow\infty$, then $t_{\mathrm{max}}\rightarrow\infty$ and the response function has a $\sim t^{-2}$ behaviour at long times [e.g. @2013MNRAS.430..247W]. Given our adopted convention, the amplitude of the response function in a given energy band depends on the strength of the reprocessed disc emission relative to the continuum emission in that energy band.
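A toy version of such a response function, with a sharp rise at $t_{\mathrm{rise}}$, a $t^{-2}$-like decline, an abrupt cut-off at $t_{\mathrm{max}}$, and the normalisation convention above, might look as follows. All numbers are hypothetical, and the secondary peak from the far side of the disc is omitted for simplicity; the real shapes come from ray tracing.

```python
import numpy as np

# Toy response function: zero before t_rise, t^-2 decline, cut off at t_max,
# normalised so that its integral equals an assumed photon ratio of 0.1.
t_rise, t_max, ratio = 20.0, 2.0e4, 0.1      # seconds; values are illustrative
t = np.linspace(0.0, 3.0e4, 600001)
dt = t[1] - t[0]

psi = np.zeros_like(t)
mask = (t >= t_rise) & (t <= t_max)
psi[mask] = (t[mask] / t_rise) ** -2.0
psi *= ratio / (psi.sum() * dt)              # integral = reprocessed/continuum ratio

assert abs(psi.sum() * dt - ratio) < 1e-10   # normalisation convention
assert np.all(psi[t > t_max] == 0.0)         # abrupt drop to zero at t_max
```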
Therefore, in almost all cases shown in Fig.\[figb1\], the amplitude of $\Psi_{5-7}(t)$ is larger than the amplitude of $\Psi_{2-4}(t)$ by a factor of $\sim10$. However, for high spin values and small heights, this factor is as small as $\sim4$ (top left panel in Fig.\[figb1\]). Since the amplitudes of both $\Psi_{5-7}(t)$ and $\Psi_{2-4}(t)$ affect the model time-lag spectra (see Appendix \[appa\]), $\Psi_{2-4}(t)$ should not be neglected when estimating the theoretical time-lag spectra. The BH spin affects neither the width of the response function nor $t_{\mathrm{rise}}$ (top left panel in Fig.\[figb1\]). This may seem counter-intuitive at first, as $r_{\mathrm{ISCO}}$ decreases from $6r_{\mathrm{g}}$ for a non-spinning BH, to almost $1r_{\mathrm{g}}$ for a maximally spinning BH, reducing the distance between the X-ray source and the inner disc. However, for the $\theta$ and $h$ values we considered, the reprocessed emission that is initially detected by the distant observer is emitted from a part of the disc that is located towards the observer’s direction at a radius larger than $6r_{\mathrm{g}}$, independent of $a$. Nevertheless, $a$ does affect the amplitude of the response function, especially when $h$ is small. When the X-ray source is located close to the BH, light-bending effects are strong, and most of the continuum emission illuminates the region of the disc close to the BH. For a maximally spinning BH the disc extends to very small radii, and hence the amplitude of the response function when $a=1$ is significantly larger than when $a=0$ (brown and black lines, respectively, in the top left panel of Fig.\[figb1\]). The rise time decreases and the width of the response function increases with increasing inclination (top middle panel in Fig.\[figb1\]). The rise time decrease is due to the decreased light path difference between the continuum and the disc emission with increasing $\theta$.
The increase in the width appears because the difference between the light travel time from the near side of the disc and from the X-ray source increases with increasing $\theta$. The amplitude of the response function decreases with increasing $\theta$, since the projected area of the disc (and hence the observed amount of reflected emission) is proportional to $\cos\theta$. The top right panel in Fig.\[figb1\] shows that $t_{\mathrm{rise}}$ depends mainly on the height of the X-ray source, as it strongly increases with increasing $h$. The reason is the increase in the light travel time between the X-ray source and the disc. The width of the response function also increases with increasing $h$. As $h$ increases, the difference between $t_{\mathrm{rise}}$ and the time when the second peak appears increases as well because the light travel time difference between the near and far side of the disc increases. Effects of finite continuum flare duration on the response function estimation {#appc} ============================================================================== As mentioned in Sect.\[subsec44\], the disc response functions were determined assuming a flare of constant flux continuum emission that lasted for $1t_{\mathrm{g}}$. Our response functions thus formally describe the response of the disc to a flare with a finite duration and a box-like light curve (instead of a delta-function), and we henceforth designate them as $\Psi^{\mathrm{(b)}}_{\mathcal{E}}(t)$. In this appendix we investigate the relation between the Fourier transforms of $\Psi_{\mathcal{E}}(t)$ (i.e. the disc response to an instantaneous continuum flare) and $\Psi^{(\mathrm{b})}_{\mathcal{E}}(t)$. This is necessary because the model time-lag spectra given by Eq.\[eq15\] depend on the Fourier transforms of $\Psi_{\mathcal{E}}(t)$ in the iron line and continuum bands.
We assumed that $\mathscr{F}^{(\mathrm{c})}_{\mathcal{E}}(t)$, as it appears on the right-hand side of Eq.\[eq10\], is equal to $F_0[H(t)-H(t-t_{\mathrm{g}})]t^{-1}_{\mathrm{g}}$, where $H(x)$ is the Heaviside step function ($H(x)$ is defined as being equal to unity when $x\ge0$, and zero otherwise) and $F_0$ is a constant. According to Eq.\[eq10\], the observed flux is then $$\begin{aligned} \label{eqc1} \nonumber \mathscr{F}_{\mathcal{E}}(t) &=F_0[H(t)-H(t-t_{\mathrm{g}})]t^{-1}_{\mathrm{g}}+F_0t^{-1}_{\mathrm{g}}\int_{t}^{t+t_{\mathrm{g}}}\Psi_{\mathcal{E}}(t')\mathrm{d}t' \\ & =F_0[H(t)-H(t-t_{\mathrm{g}})]t^{-1}_{\mathrm{g}}+F_0\Psi^{(\mathrm{b})}_{\mathcal{E}}(t),\end{aligned}$$ where $\Psi^{(\mathrm{b})}_{\mathcal{E}}(t)=t_{\mathrm{g}}^{-1}\int_{t}^{t+t_{\mathrm{g}}}\Psi_{\mathcal{E}}(t')\mathrm{d}t'$. When $t_{\mathrm{g}}\rightarrow0$ (i.e. when the flare becomes instantaneous), $\Psi^{(\mathrm{b})}_{\mathcal{E}}(t)\rightarrow\Psi_{\mathcal{E}}(t)$, as expected. According to Eq.\[eqc1\], $\Psi^{(\mathrm{b})}_{\mathcal{E}}(t)$ is equal to the convolution of $\Psi_{\mathcal{E}}(t)$ with a constant kernel that is non-zero for $-t_{\mathrm{g}}\le t\le0$. The relation between the Fourier transforms of the two functions is therefore $$\begin{aligned} \label{eqc2} \nonumber \tilde{\Psi}_{\mathcal{E}}^{(\mathrm{b})}(\nu) &=\int_{-\infty}^{\infty}\left[\frac{1}{t_{\mathrm{g}}}\int_{t}^{t+t_{\mathrm{g}}}\Psi_{\mathcal{E}}(t')\mathrm{d}t'\right]\mathrm{e}^{-\mathrm{i}2\pi\nu t}\mathrm{d}t \\ & =[\mathrm{e}^{\mathrm{i}\pi\nu t_{\mathrm{g}}}\mathrm{sinc}(\pi\nu t_{\mathrm{g}})]\tilde{\Psi}_{\mathcal{E}}(\nu).\end{aligned}$$ In other words, the Fourier transform of $\Psi^{(\mathrm{b})}_{\mathcal{E}}(t)$ has to be divided by the factor $\mathrm{e}^{\mathrm{i}\pi\nu t_{\mathrm{g}}}\mathrm{sinc}(\pi\nu t_{\mathrm{g}})$ to account for the finite width of the continuum flare.
This correction term becomes important only at frequencies $\nu\gtrsim t^{-1}_{\mathrm{g}}$, which corresponds to $\gtrsim0.2\,\mathrm{Hz}$ for $M_{\mathrm{BH}}=10^6\,\mathrm{M}_{\odot}$. As explained in Sect.\[sec2\], time-lags cannot be reliably estimated at such high frequencies with current data, hence the correction term has a negligible effect on the model time-lag spectra in our work. Nevertheless, we applied Eq.\[eqc2\] to determine $\tilde{\Psi}_{5-7}(\nu)$ and $\tilde{\Psi}_{2-4}(\nu)$, which were subsequently used to calculate the corresponding model iron line vs continuum time-lag spectrum according to Eq.\[eq15\]. Lamp-post model time-lag spectra {#appd} ================================ The bottom panels in Fig.\[figb1\] (and both panels in Fig.\[figd1\]) show various iron line vs continuum model time-lag spectra, $\tau^{(\mathrm{r})}_{5-7,2-4}(\nu)$, calculated using the parameters that we used to estimate the response functions appearing in the same figure. The time-lags are predominantly negative, meaning that variations in the iron line band are delayed with respect to variations in the continuum band. They all share similar characteristics. They all flatten to a constant, negative plateau, $\tau_{\mathrm{plateau}}$, below a frequency $\nu_{\mathrm{plateau}}$. At higher frequencies they rise to a maximum positive bump, $\tau_{\mathrm{bump}}$, at a frequency $\nu_{\mathrm{bump}}$, followed by sinusoidal behaviour with decreasing amplitude around a zero time-lag value. ![*Top panel*: Model time-lag spectra between the $5-7$ and $2-4\,\mathrm{keV}$ bands for various $M_{\mathrm{BH}}$ values. The remaining model parameters are fixed at $a=0.676$, $\theta=40^{\circ}$, $h=11r_{\mathrm{g}}$, and $r_{\mathrm{out}}=10^3r_{\mathrm{g}}$. *Bottom panel*: Model time-lag spectra between the $5-7\,\mathrm{keV}$ and $2-4\,\mathrm{keV}$ bands for various values of $r_{\mathrm{out}}$. 
The remaining model parameters are fixed at $a=0.676$, $\theta=40^{\circ}$, $h=11r_{\mathrm{g}}$, and $M_{\mathrm{BH}}=10^6\,\mathrm{M}_{\odot}$.[]{data-label="figd1"}](figures/figd1.pdf){width="\hsize"} As in the case of the response functions, the parameters $a$, $\theta$, $h$, $M_{\mathrm{BH}}$, and $r_{\mathrm{out}}$ affect both the shape and magnitude of the time-lags. The bottom left panel in Fig.\[figb1\] shows that the BH spin has a weak effect on the model time-lag spectra. The time-lags increase slightly in (absolute) magnitude with increasing $a$. The frequency $\nu_{\mathrm{plateau}}$ does not depend on $a$, while $\nu_{\mathrm{bump}}$ and $\tau_{\mathrm{bump}}$ depend weakly on $a$. The dependence of the model time-lag spectra on $a$ is in contrast to the strong dependence of the response function amplitude on the same parameter (top left panel in Fig.\[figb1\]). This result shows that the response function amplitude does not significantly influence the time-lag characteristics. It also highlights the importance of including the reprocessed emission in the continuum band as well (as we did here), since response functions of comparable magnitudes in two bands will lead to a model time-lag spectrum that is close to zero. In general, when $\Psi_{5-7}(t)=\Psi_{2-4}(t)\ne0$ (i.e. when the observed ratio of reprocessed-to-continuum photons is equal in the two energy bands), Eq.\[eqa5\] reduces to $\tau^{(\mathrm{r})}_{5-7,2-4}(\nu)=0$. As we mentioned in Appendix \[appb\], although $\Psi_{5-7}(t)$ increases with increasing $a$, so does the amplitude of $\Psi_{2-4}(t)$, and in fact even more so. For a rapidly spinning BH and X-ray reflection from the innermost part of the disc, the red wing of the iron line is well into the $2-4\,\mathrm{keV}$ band due to strong gravitational effects. Including reprocessed emission in the continuum band has the well-known effect of decreasing, or diluting, the magnitude of the model time-lag spectrum [e.g. 
@2013MNRAS.430..247W]. This can be seen by taking the limit $\Psi_{2-4}(t)\rightarrow0$ in Eqs.\[eqa6\] and \[eqa7\], in which case $|\Re\{[1+\tilde{\Psi}_{5-7}(\nu)][1+\tilde{\Psi}^{*}_{2-4}(\nu)]\}|$ is minimised, $|\Im\{[1+\tilde{\Psi}_{5-7}(\nu)][1+\tilde{\Psi}^{*}_{2-4}(\nu)]\}|$ is maximised, and hence $|\tau^{(\mathrm{r})}_{5-7,2-4}(\nu)|$ is maximised according to Eq.\[eqa5\]. The bottom middle panel in Fig.\[figb1\] shows that the time-lags increase in magnitude with decreasing inclination. The reason is that the rise time of the response functions decreases as $\theta$ increases. Similarly to the BH spin, $\nu_{\mathrm{plateau}}$ and $\nu_{\mathrm{bump}}$ have a weak dependence on $\theta$. The (absolute) magnitude of $\tau_{\mathrm{plateau}}$ in the middle bottom panel of Fig.\[figb1\] is larger than the corresponding magnitude in the bottom left panel because we considered a larger X-ray source height in the former case. In a lamp-post geometry, the time-lag spectra are mainly affected by the X-ray source height (bottom right panel in the same figure). The (absolute) magnitude of the time-lags and $\tau_{\mathrm{plateau}}$ strongly increase and $\nu_{\mathrm{plateau}}$ and $\nu_{\mathrm{bump}}$ decrease with increasing $h$. Both effects are caused mainly by the fact that $t_{\rm{rise}}$ increases substantially with increasing source height. The (absolute) magnitude of $\tau_{\mathrm{bump}}$ also increases with increasing $h$, although this increase is not as pronounced as in the case of $\tau_{\mathrm{plateau}}$. The BH mass likewise strongly affects the magnitude and shape of the time-lag spectrum, as seen in the top panel of Fig.\[figd1\]. At a given X-ray source height, the magnitude of the time-lags increases while $\nu_{\mathrm{plateau}}$ and $\nu_{\mathrm{bump}}$ decrease with increasing $M_{\mathrm{BH}}$.
This is due to the increased light travel time between the X-ray source and the disc, since the physical size of the X-ray source/disc system scales proportionally with $M_{\mathrm{BH}}$. In other words, since time-lags scale with $t_{\mathrm{g}}$ and frequencies scale with $t^{-1}_{\mathrm{g}}$, when $M_{\mathrm{BH}}$ is increased the time-lag spectrum is uniformly stretched in the vertical (time-lag) direction and compressed towards lower frequencies in the horizontal direction. The dependence of the time-lags on $M_{\mathrm{BH}}$ is thus very similar to their dependence on $h$. The bottom panel in Fig.\[figd1\] shows the time-lag spectrum for various values of the outer disc radius. As $r_{\mathrm{out}}$ increases, $\tau_{\mathrm{plateau}}$ increases in magnitude and $\nu_{\mathrm{plateau}}$ decreases. The time-lag spectrum remains unaffected by $r_{\mathrm{out}}$ at frequencies higher than $\nu_{\mathrm{plateau}}$. In other words, $r_{\mathrm{out}}$ sets the level of the constant plateau at low frequencies, while this plateau occurs at increasingly lower frequencies as $r_{\mathrm{out}}$ increases. Our results are in agreement with CY15, who reported a similar dependence of the magnitude and shape of reverberation time-lag spectra on $r_{\mathrm{out}}$. The frequency $\nu_{\mathrm{plateau}}$ is proportional to $t^{-1}_{\mathrm{max}}$, and thus depends on $h$ and $r_{\mathrm{out}}$. Given our discussion above, it is clear that $\nu_{\mathrm{plateau}}$ and $\tau_{\mathrm{plateau}}$, which are the most pronounced features in the theoretical time-lag spectra, depend on $h$, $M_{\mathrm{BH}}$ and $r_{\mathrm{out}}$. Observationally, $\tau_{\rm plateau}$ appears to depend mainly on $M_{\rm BH}$ [e.g. @2013MNRAS.431.2441D], which implies that $h$ and $r_{\mathrm{out}}$ should be approximately the same in all AGN. Even so, the normalisation of this relation cannot directly indicate the height of the X-ray source, as $\tau_{\mathrm{plateau}}$ also depends on $r_{\mathrm{out}}$.
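The $M_{\mathrm{BH}}$ scaling can be made concrete: time-lags and frequencies scale with $t_{\mathrm{g}}=GM_{\mathrm{BH}}/c^3$, so $\tau(\nu)=t_{\mathrm{g}}f(\nu t_{\mathrm{g}})$ for some dimensionless shape $f$. A minimal sketch follows; the step-function shape $f$ below is invented for illustration and is not the actual model.

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30      # SI units

def t_g(mbh_in_msun):
    """Gravitational time-scale G*M_BH/c^3 in seconds."""
    return G * mbh_in_msun * Msun / c**3

# ~4.9 s for 10^6 Msun, i.e. the ~0.2 Hz scale quoted in Appendix C.
assert abs(t_g(1e6) - 4.93) < 0.05

# Toy dimensionless lag shape: a constant negative plateau below x_p.
x_p, f0 = 1e-3, -2.0
def tau(nu, mbh):
    return t_g(mbh) * (f0 if nu * t_g(mbh) < x_p else 0.0)

# Doubling M_BH doubles the plateau depth while the frequency scale halves.
assert np.isclose(tau(1e-5, 1e6), 2.0 * tau(1e-5, 5e5)) and tau(1e-5, 1e6) < 0
```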
The dependence is not as strong as that on $h$, but is present nevertheless. A more detailed theoretical study of the $M_{\mathrm{BH}}-\tau_{\mathrm{plateau}}$ relation is necessary before reaching conclusions based on the observed normalisation of this relation. This conclusion is strengthened by the fact that the discussion above is based on response functions estimated for the lamp-post geometry. If the X-ray source has a finite size, we expect the response function rise time to be altered. Since this directly affects $\tau_{\mathrm{plateau}}$, a study of the response functions from more complicated geometries is necessary to interpret the observations. Expected time-lag bias {#appe} ====================== The best-fit model B parameters listed in Table\[table3\] could significantly differ from their intrinsic values if the bias of the time-lag estimates has a magnitude comparable to, or larger than, their error. To investigate this possibility, we estimated the time-lag bias for $\{a,h\}=\{1,2.3r_{\mathrm{g}}\}$ and $\{0,100r_{\mathrm{g}}\}$ (we assumed $\theta=40^{\circ}$, $M_{\mathrm{BH}}=2\times10^6\,\mathrm{M}_{\odot}$ and $r_{\mathrm{out}}=10^3r_{\mathrm{g}}$ in both cases). These are two extreme cases in the parameter space we considered. @2013MNRAS.435.1511A were the first to quantify the effects of windowing on the time-lag bias. They showed that the time-lag bias can be up to $\sim30\%$ of the intrinsic value. EP16 also studied these effects in detail by exploring a wider parameter space. They showed that a model CS (not just a model time-lag spectrum) needs to be prescribed to estimate the time-lag bias, as the bias is introduced to the cross-periodogram itself. We assumed a model CS given by Eq.\[eqa3\] and that there are no delays between variations of different energy bands in the continuum. 
To determine the amplitude of the model CS, we assumed that a) the continuum PSD in both energy bands is equal to the characteristic bending power-law shape observed in the X-ray light curves of many AGN , and b) the intrinsic coherence between the two energy bands is unity at all frequencies. This uniquely determines the intrinsic continuum CS, $C^{(\mathrm{c})}_{5-7,2-4}(\nu)$, appearing in Eq.\[eqa3\], which is then given by $$\label{eqe1} C^{(\mathrm{c})}_{5-7,2-4}(\nu)=\frac{\mathscr{A}\nu^{-1}}{1+(\nu/\nu_{\mathrm{b}})^{\alpha-1}},$$ where $\mathscr{A}$ is the amplitude, $\alpha$ the high-frequency slope, and $\nu_{\mathrm{b}}$ the so-called bend-frequency. The typical values for these parameters are $\mathscr{A}\sim0.01$ (in so-called root-mean-square units), $2\lesssim\alpha\lesssim3$, and $\nu_{\mathrm{b}}\sim10^{-5}-10^{-4}\,\mathrm{Hz}$ for $M_{\mathrm{BH}}\sim10^6-10^7\,\mathrm{M}_{\odot}$. We therefore considered the cases $\{\alpha,\nu_{\mathrm{b}}\}=\{2,2\times10^{-4}\,\mathrm{Hz}\}$, $\{3,2\times10^{-4}\,\mathrm{Hz}\}$ and $\{2,2\times10^{-5}\,\mathrm{Hz}\}$ that are appropriate for our sample. We furthermore set $\mathscr{A}=0.01$, as the PSD amplitude was found by EP16 to not affect the time-lag bias. Given our model CS, we finally determined the expected mean of the time-lag estimates computed from $20\,\mathrm{ks}$ segments using Eq.\[eq13\] in EP16. The results are shown in Fig.\[fige1\]. The continuous blue line in the two panels indicates the model time-lag spectrum. Filled black circles, open red squares, and green stars correspond to the expected mean sample time-lag spectrum for different $\{\alpha,\nu_{\mathrm{b}}\}$ values (as noted in the figure). ![image](figures/fige1.pdf){width="15.0cm"} The horizontal axis indicates the widest frequency range for which we were able to obtain reliable time-lag estimates using real data. 
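For reference, Eq.\[eqe1\] with the parameter values above can be sketched as follows; the asymptotic-slope checks simply illustrate that the spectrum bends from $\nu^{-1}$ to $\nu^{-\alpha}$ around $\nu_{\mathrm{b}}$.

```python
import numpy as np

def bending_cs(nu, amp=0.01, alpha=2.0, nu_b=2e-4):
    """Intrinsic continuum cross-spectrum of Eq. (E1):
    C(nu) = amp * nu^-1 / (1 + (nu/nu_b)^(alpha-1))."""
    nu = np.asarray(nu, dtype=float)
    return amp / nu / (1.0 + (nu / nu_b) ** (alpha - 1.0))

# Well below the bend the spectrum is ~nu^-1; well above it is ~nu^-alpha.
lo = bending_cs([1e-6, 2e-6])
hi = bending_cs([1e-2, 2e-2])
assert np.isclose(lo[1] / lo[0], 0.5, rtol=0.02)    # slope -1
assert np.isclose(hi[1] / hi[0], 0.25, rtol=0.02)   # slope -2 for alpha = 2
```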
The range of values in the vertical axis is identical to the corresponding range in Figs.\[fig2\] and \[fig3\] (with the exception of Ark 564). The difference between each point and the corresponding model at a given frequency in Fig.\[fige1\] is equal to the expected time-lag bias, whose magnitude needs to be compared to the error bars in Figs.\[fig2\] and \[fig3\]. As noted by EP16, the mean of the time-lag estimates is always smaller (in magnitude) than the intrinsic values at each frequency. For low X-ray source height values, it is clear that the magnitude of the bias is entirely negligible compared to the time-lag errors and hence should not affect our results. For high X-ray source height values the bias is more significant at low ($\lesssim10^{-4}\,\mathrm{Hz}$) frequencies, although still smaller than the time-lag errors for all the sources in our sample. In this case the best-fit height values may therefore slightly underestimate the intrinsic ones, although this effect should not be significant.
--- abstract: 'The first class constraints in $N = 1$ supergravity in $2 + 1$ dimensions are used to construct a generator of three gauge symmetries (including a local supersymmetry) that leave the action invariant. The algebra of these symmetries closes. This generator is used to quantize the model; a ghost involving both Bosonic and Fermionic components arises.' author: - 'D.G.C. McKeon' title: 'Derivation of All Symmetries from First-Class Constraints and Quantization in $2+1$ Dimensional Supergravity' --- email: dgmckeo2@uwo.ca\ PACS No.: 11.10Ef\ KEY WORDS: Supergravity, constraints Introduction ============ Gravity in $2 + 1$ dimensions has been extensively treated \[1-3\] and its supersymmetric extension considered \[4\]. In these discussions, the local symmetries of the theory have been treated as being manifest. However, it has long been understood that local symmetries (or “gauge” symmetries) are closely linked with the presence of first-class constraints that arise when using the Dirac constraint formalism to analyze their canonical structure \[5\]. Two approaches have been used to derive these gauge symmetries from the first-class constraints. In the “HTZ” approach \[6\], symmetries in the Hamiltonian form of the action are examined directly while in the “C” approach \[7\] symmetries present in the equations of motion are considered. Normally, only Bosonic symmetries have been determined in either of these approaches, but recently Fermionic (or “super”) symmetries have been shown to follow from the presence of Fermionic first-class constraints \[8\]. The spinning particle was used to demonstrate this; we now will employ the Fermionic and Bosonic first-class constraints present in $2 + 1$ dimensional supergravity to find one Fermionic and two Bosonic symmetries present in the model. The algebra of these gauge symmetries will be shown to close.
Quantization is effected through use of the path integral, taking into account both the first- and second-class constraints present. Conventions used are given in the appendix. Supergravity in $2 + 1 D$ ========================= We work with the first order Lagrangian $$\mathcal{L}= \epsilon^{\mu\nu\lambda} \left( b_\mu^i R_{\nu\lambda i} + \overline{\psi}_\mu D_\nu \psi_\lambda\right).$$ There are independent Boson fields $(b_\mu^i , w_\mu^i)$ and the Fermion field $\psi_\mu$ with $$R_{\mu\nu i} = \partial_\mu w_{\nu i} - \partial_\nu w_{\mu i} - \epsilon_{ijk} w_\mu^j w_\nu^k$$ and $$D_\mu = \partial_\mu + \frac{i}{2} \gamma^i w_{\mu i}$$ so that $[D_\mu , D_\nu] = \frac{i}{2} \gamma^i R_{\mu\nu i}$. From eq. (1) it follows that the momenta conjugate to $(b^{0i} \equiv b^i, b^{\alpha i}, w^{0i} \equiv w^i, w^{\alpha i}, \psi_0 \equiv \psi, \psi_\alpha)$ are respectively $$\begin{aligned} p_i &= 0 \\ p_{\alpha i} &= 0 \\ I\!\!P_i &= 0 \\ I\!\!P_{\alpha i} &= 2\epsilon_{\alpha\beta} b^\beta_{\;\,i} \\ \pi &= 0 \\ \pi^\alpha &= -\epsilon^{\alpha\beta}\overline{\psi}_\beta .\end{aligned}$$ The constraints of eqs. (4b,d) and (4f) are obviously second class; they result in the Dirac bracket (DB) $$\begin{aligned} \left\lbrace A,B \right\rbrace^* = \left\lbrace A,B \right\rbrace + \frac{1}{2} \epsilon_{\alpha\beta} & \bigg[ \left\lbrace A, \pi^\alpha + \epsilon^{\alpha\gamma} \overline{\psi}_\gamma \right\rbrace \gamma^0 \left\lbrace \left( \pi^\beta + \epsilon^{\beta \delta} \overline{\psi}_\delta \right)^T, B\right\rbrace \nonumber \\ & + \left\lbrace A, I\!\!P^{\alpha i} - 2 \epsilon^{\alpha\gamma} b_\gamma^{\,i}\right\rbrace \left\lbrace p^\beta_{\;i}, B\right\rbrace \nonumber \\ & - \left\lbrace A, p^{\alpha i} \right\rbrace \left\lbrace I\!\!P^{\beta} _{\;i} - 2 \epsilon^{\beta\gamma} b_{\gamma i}, B \right\rbrace \bigg]\end{aligned}$$ where $A$ and $B$ are dynamical variables. From eq.
(5) it follows that $$\begin{aligned} \left\lbrace b_\alpha^i, w_\beta^j \right\rbrace^* &= \frac{1}{2} \eta^{ij} \epsilon_{\alpha\beta} \\ \left\lbrace \psi_\alpha, \overline{\psi}_\beta \right\rbrace^* &= \frac{1}{2} \epsilon_{\alpha\beta}.\end{aligned}$$ The canonical Hamiltonian now is given by $$\begin{aligned} \mathcal{H}_c = \epsilon^{\alpha\beta} \bigg[ -b_i R_{\alpha\beta}^i &- 2w_i \left(\partial_\alpha b_\beta^i - \epsilon^{ijk} w_{\alpha j} b_{\beta k} - \frac{i}{4} \overline{\psi}_\alpha \gamma^i \psi_\beta \right) \nonumber \\ &- 2 \overline{\psi} (D_\alpha \psi_\beta ) \bigg].\end{aligned}$$ With this $\mathcal{H}_c$, the primary constraints of eqs. (4a,c,e) lead to the secondary constraints $$\begin{aligned} \Phi_1^i &= \epsilon^{\alpha\beta} R_{\alpha\beta}^i \\ \Phi_2^i &= \epsilon^{\alpha\beta} \left( \partial_\alpha b_\beta^i - \epsilon^{ijk} w_{\alpha j} b_{\beta k} - \frac{i}{4} \overline{\psi}_\alpha \gamma^i \psi_\beta \right)\\ \intertext{and} \Psi & = \epsilon^{\alpha\beta} D_\alpha\psi_\beta \end{aligned}$$ respectively. These primary and secondary constraints are all first class and there are no higher generation constraints because of the algebra $$\begin{aligned} \left\lbrace \Phi_1^i, \Phi_2^j \right\rbrace^* &= -\frac{1}{2} \epsilon^{ijk} \Phi_{1k} \\ \left\lbrace \Phi_2^i, \Phi_2^j \right\rbrace^* &= -\frac{1}{2} \epsilon^{ijk} \Phi_{2k} \\ \left\lbrace \Psi, \overline{\Psi} \right\rbrace^* &= -\frac{i}{8} \Phi_{1i} \gamma^i \\ \left\lbrace \Psi, \Phi_2^i \right\rbrace^* &= \frac{i}{4} \gamma^i \Psi\end{aligned}$$ with all other DB between constraints vanishing. It is possible to show that the HTZ approach \[6\] to finding gauge symmetries in a theory from first-class constraints can be used even when Fermionic constraints are present. (In this case, these are the primary constraint $\pi = 0$ and the secondary constraint $\Psi = 0$.) 
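The absence of higher-generation constraints can also be checked directly; the following is a schematic consistency argument (surface terms dropped). From eq. (7), the canonical Hamiltonian is itself a combination of the secondary constraints, $$\mathcal{H}_c = -b_i\Phi_1^i - 2w_i\Phi_2^i - 2\overline{\psi}\,\Psi ,$$ so the time evolution of each secondary constraint closes on the constraint surface; for example, $$\dot{\Phi}_2^i \approx \left\lbrace \Phi_2^i , \int d^2y\, \mathcal{H}_c \right\rbrace^* \approx -b_j \left\lbrace \Phi_2^i , \Phi_1^j \right\rbrace^* - 2w_j \left\lbrace \Phi_2^i , \Phi_2^j \right\rbrace^* - 2\left\lbrace \Phi_2^i , \overline{\psi}\,\Psi \right\rbrace^* \approx 0 ,$$ since each bracket on the right-hand side is proportional to the constraints by eq. (9).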
The form of the generator of gauge symmetries is $$G = \int d^2x \left[ A_1^i p_i + A_2^i \Phi_{1i} + B_1^i I\!\!P_i + B_2^i \Phi_{2i} + C_1^T\pi^T + \overline{C}_2 \Psi \right]$$ where $(A_1^i, B_1^i, A_2^i, B_2^i)$ are Bosonic and $(C_1, C_2)$ are Fermionic. It follows from the DB $$\begin{aligned} \left\lbrace G, \int d^2y \,\mathcal{H}_c \right\rbrace^* = &\int d^2x \bigg[ \left( A_{1k} + \epsilon_{ijk} \left( A_2^i w^j + \frac{1}{2} B_{2i} b^j\right) - \frac{i}{4} \overline{C}_2 \gamma_k \psi \right)\Phi_1^k \nonumber \\ &+ \left( 2 B_{1k} + \epsilon_{ijk} B_2^i w^j\right)\Phi_2^k \nonumber \\ &+ \left( 2 \overline{C}_1 + \frac{i}{2} \left(B_2^i \overline{\psi} \gamma_i - w_i \overline{C}_2 \gamma^i \right) \right) \Psi \bigg] \end{aligned}$$ that the HTZ equation leads to $$\begin{aligned} & \dot{A}_{2k} + A_{1k} + \epsilon_{k\ell m} (A_2^\ell w^m + \frac{1}{2} B_2^\ell b^m) + \frac{i}{4} \overline{C}_2 \gamma_k \psi = 0 \\ & \dot{B}_2^k + 2B_{1k} + \epsilon_{k\ell m} B_2^\ell w^m = 0 \\ & \dot{\overline{C}}_2 + 2\overline{C}_1 + \frac{i}{2} \left( \overline{\psi} B_2\cdot \gamma - \overline{C}_2 w \cdot \gamma\right) = 0.\end{aligned}$$ From eq. (12), we find now that $G$ is given by $$\begin{aligned} G = \int d^2x & \bigg[ - \left( \dot{A}_k + \epsilon_{k\ell m} (A^\ell w^m + \frac{1}{2} B^\ell b^m) + \frac{i}{4} \overline{C}\, \gamma_k\psi \right)p^k \nonumber \\ & - \frac{1}{2} \left( \dot{B}_k + \epsilon_{k\ell m} B^l w^m \right)I\!\!P^k \nonumber \\ & - \frac{1}{2} \left( \dot{\overline{C}} + \frac{i}{2} (\overline{\psi} B \cdot \gamma - \overline{C} w \cdot \gamma) \right)\gamma^0 \pi^T \nonumber \\ & \hspace{.5cm} + A_k \Phi_1^k + B_k \Phi_2^k + \overline{C} \Psi\bigg].\end{aligned}$$ We will now establish the DB algebra of the generator $G$; that is, if $G_I$ is associated with gauge functions $(A_I, B_I, C_I)$ we wish to compute $\left\lbrace G_I, G_J \right\rbrace^*$.
This can be done directly, but it is easier to make use of the following general argument. If one has a set of canonical variables $(Q_i, I\!\!P_i)$, $(q_i, p_i)$ after making use of the second class constraints, and if the canonical Hamiltonian is of the form $$H_c = - Q_i \Phi_i (q,p)$$ with $$\left\lbrace \Phi_i (q,p), \Phi_j (q,p)\right\rbrace^* = c_{ijk} \Phi_k (q,p)$$ then the gauge generator associated with the first-class constraints $(I\!\!P_i, \Phi_i(q,p))$ is, by the HTZ approach \[6\], $$G_I = \left( -\dot{\Lambda}_{Ii} + c_{ijk} \Lambda_{Ij} Q_k \right) I\!\!P_i + \Lambda_{Ii} \Phi_i (q,p)$$ where $\Lambda_{Ii}(t)$ is the gauge parameter. It may now be shown, upon using the Jacobi identity for the quantities $c_{ijk}$ that follow from eq. (15), that $$\left\lbrace G_I, G_J \right\rbrace^* = G_K$$ where $$\Lambda_{Ki} = c_{ijk} \Lambda_{Ij} \Lambda_{Jk}.$$ The model based on eq. (1) is consistent with eqs. (14, 15); we then find from eq. (18) that, with $G$ given by eq. (13), $G_K$ in eq. (17) is given by $$\begin{aligned} A_{Ki} &= -\frac{i}{8} \overline{C}_I \gamma_i C_J + \frac{1}{2} \epsilon_{ijk} \left( A_J^j B_I^k - A_I^j B_J^k\right)\\ B_{Ki} &= -\frac{1}{2} \epsilon_{ijk} B_I^j B_J^k \\ \overline{C}_{K} &= \frac{i}{4} \left( \overline{C}_I B_J \cdot \gamma - \overline{C}_J B_I \cdot \gamma \right)\end{aligned}$$ upon using the structure functions $c_{ijk}$ that follow from eq. (9). The gauge algebra closes without having to introduce auxiliary fields. It follows from eq.
(13) that the variations in $\psi_\mu$, $b_\mu^i$ and $w_\mu^i$ that are generated by $G$ are $$\begin{aligned} \delta\psi_\mu &= -\frac{1}{2} \left( D_\mu C - \frac{i}{2} B \cdot \gamma \psi_\mu \right) \\ \delta b_\mu^i &= -\left[ \mathcal{D}_\mu^{ij} A_j - \frac{1}{2} \epsilon^{ijk} b_{j\mu} B_k + \frac{i}{4} \overline{C} \gamma^i \psi_\mu\right] \\ \intertext{and} \delta w_\mu^i &= -\frac{1}{2}\mathcal{D}_\mu^{ij} B_j\;, \end{aligned}$$ where $$\mathcal{D}_\mu^{ij} = \partial_\mu \delta^{ij} - \epsilon^{ipj} w_{p\mu} \left(\left[ \mathcal{D}_\mu , \mathcal{D}_\nu \right]^{ij} =\epsilon^{ijp} R_{\mu\nu p}\right).$$ We now turn to the problem of quantizing this model using the path integral. Quantization ============ In a model with variables $(q_i(t), p_i(t))$ in phase space, and governed by a canonical Hamiltonian $H_c(q_i, p_i)$, quantization is effected through the path integral for the transitional amplitude $$<\text{out}|\text{in}> = \int Dp_i\, Dq_i \,\exp i \int_{-\infty}^\infty dt (\dot{q}_i p_i - H_c)$$ where $q_i(t) \rightarrow (q_i^{\text{out}}, q_i^{\text{in}})$ as $t \rightarrow \pm \infty$ \[9\]. In the presence of first-class \[16\] and second-class constraints \[17\], the measure for this path integral receives the contribution $$M = \det \left\lbrace \phi_i, \gamma_j \right\rbrace \mathrm{det}^{1/2} \left\lbrace \theta_i, \theta_j \right\rbrace \delta (\phi_i) \delta(\gamma_i) \delta(\theta_i)$$ where $\phi_i$ is the set of first-class constraints, $\gamma_i$ are the associated gauge conditions and $\theta_i$ are the set of second-class constraints. With the second class constraints of eqs. (4b,d,f) we see that $\left\lbrace\theta_i, \theta_j \right\rbrace$ is field independent and so in eq. (23) the second-class constraints just serve to rescale $M$ by a constant. (This is unlike the model discussed in ref. \[10\].) 
The first class constraints which lead to the contributions $\det \left\lbrace\phi_i, \gamma_j\right\rbrace\delta(\phi_i)\delta(\gamma_i)$ in eq. (23) can be handled in an alternate manner that accommodates covariant gauge fixing \[11\]. In this alternate approach, the Faddeev-Popov quantization procedure is adapted so as to be applicable to a path integral in phase space. The constant factor $$\begin{aligned} K = \int DA_i\, & DB_i \,DC\, \delta\left( (\mathcal{D} \cdot b)^i + \left\lbrace (\mathcal{D} \cdot b)^i, G \right\rbrace^* - k_b^i \right)\\ & \delta\left( \partial \cdot w^i + \left\lbrace \partial \cdot w^i, G\right\rbrace^* - k_w^i\right)\nonumber\\ & \hspace{1.5cm} \delta \left( (D \cdot \psi ) + \left\lbrace (D \cdot \psi), G \right\rbrace^* - k_\psi \right)\Delta \nonumber\end{aligned}$$ is first defined. From the changes in $\psi_\mu$, $b_\mu^i$ and $w_\mu^i$ induced by $G$ that are given in eq. (20), it follows that $$\Delta = s\,\det \left( \begin{array}{ccc} (\mathcal{D}^2)^{ij} & -\frac{1}{2}(\mathcal{D}_\mu^{ip})(\epsilon_{pq}^{\;\;\;\;\,\ell} b^{q\mu}) & -\frac{i}{4} \mathcal{D}_\mu^{ip} \overline{\psi}^\mu \gamma_p \\ & &\\ 0 & \frac{1}{2}\partial \cdot \mathcal{D}^{k\ell} & 0 \\ & & \\ 0 & - \frac{i}{2} D^\mu\gamma^\ell\psi_\mu & \frac{1}{2} D^2 \end{array} \right)$$ in order that $K$ be a constant. We now introduce $K$ into the phase-space path integral whose form is given by eq. (22) with the canonical Hamiltonian of eq. (7). The change of variables induced by $-G$ is then performed; this leaves the action in phase space unaltered. Upon introducing the integrals over the field independent quantities $k_b^i$, $k_w^i$ and $k_\psi$ as in ref. \[12\] $$\overline{K} = \int Dk_b^i\,Dk_w^i \, Dk_\psi \exp - \frac{1}{2} \int dx \left[ (k_b^i)^2 + (k_w^i)^2 + \overline{k}_\psi k_\psi \right]$$ and converting the phase space path integral to a configuration space path integral using the approach of ref. 
\[13\], we are left with the transition amplitude $$\begin{aligned} <\text{out}|\text{in}> = \int Db_\mu^i\, Dw_\mu^i\,& D\psi_\mu \exp i \int dx \bigg[\mathcal{L} - \frac{1}{2}(\partial \cdot w^i)^2 \\ & - \frac{1}{2} (\mathcal{D}^{ij} \cdot b_j)^2 - \frac{1}{2} (\overline{D \cdot \psi})(D \cdot \psi) \bigg] \Delta \nonumber\end{aligned}$$ with $\mathcal{L}$ being given by eq. (1). The functional determinant of eq. (26) can now be exponentiated using the Fermionic ghost fields $(\overline{c}_i, c_j)$ and $(\overline{d}_i, d_j)$ and the Bosonic ghost field $\Gamma$. (The Fermionic ghost fields are all vectors and the Bosonic ghost field is a Dirac spinor in the tangent space.) We have $$\Delta = \int D\overline{c}_i\, Dc_j\, D\overline{d}_i\, Dd_j \,D\Gamma \exp i \int dx \left( \overline{c}_i,\, \overline{d}_k, \,\overline{\Gamma} \right) \mathbf{M} (c_j, d_\ell, \Gamma)^T$$ where $\mathbf{M}$ is the supermatrix appearing in eq. (26). The presence of a Fermionic gauge invariance has generalized the functional Faddeev-Popov determinant to a superdeterminant. Discussion {#discussion .unnumbered} ========== We have examined the canonical structure of $N = 1$ supergravity in $2 + 1$ dimensions and, from the first-class constraints that occur, deduced the gauge symmetries that reside in the theory. The model is quantized using the phase-space form of the path integral, in conjunction with a means of employing a covariant gauge-fixing technique while working in phase space. The form of the path integral appearing in eqs. (27, 28) could have been derived directly by applying the Faddeev-Popov approach to the configuration-space form of the path integral. However, the transition from the phase-space form of the path integral (which follows from canonical quantization \[9\]) to the configuration-space form is not always as straightforward as it is here, and consequently the Faddeev-Popov quantization procedure is not always viable.
This is the case if the model has second-class constraints whose Poisson brackets with each other are field dependent, as in the model of ref. \[10\] or the first-order Einstein-Hilbert action in $D > 2$ dimensions \[14\]. If one were to compute loop corrections to the effective action in eqs. (28, 29), operator regularization as employed with Chern-Simons theory should be a convenient technique \[15\]. The derivation of the gauge generator from the first-class constraints in the theory should provide a useful means of uncovering all gauge symmetries in higher-dimensional supergravity theories, such as $N = 8$ supergravity in $D = 4$ dimensions. In that model, apparently fortuitous cancellations of divergences in higher-loop calculations have been attributed to the presence of symmetries that are not manifest. Acknowledgements {#acknowledgements .unnumbered} ================ We thank R. Macleod for a helpful comment. [99]{} S. Carlip, “Quantum Gravity in $2 + 1$ Dimensions” (Cambridge U. Press, Cambridge, 1998). E. Witten, *Nucl. Phys.* **B323**, 113 (1989); ibid. **B311**, 46 (1988). A.M. Frolov, N. Kiriushcheva and S.V. Kuzmin, *Grav. and Cosmol.* **16**, 181 (2010);\ R. Banerjee and D. Roy, arXiv:1110.1720 \[gr-qc\]. P.S. Howe and R.W. Tucker, *J. Math. Phys.* **19**, 869 (1978);\ A. Achucarro and P.K. Townsend, *Phys. Lett.* **B180**, 89 (1986);\ H.J. Matschull and H. Nicolai, *Nucl. Phys.* **B411**, 609 (1994). P.A.M. Dirac, “Lectures on Quantum Mechanics” (Dover, Mineola, 2001). M. Henneaux, C. Teitelboim and J. Zanelli, *Nucl. Phys.* **B332**, 169 (1990). L. Castellani, *Ann. Phys.* **142**, 357 (1982). D.G.C. McKeon, arXiv:1203.3156 \[hep-th\]; *Can. J. Phys.* **90**, 701 (2012). S. Weinberg, “The Quantum Theory of Fields Vol. I” (Ch. 9) (Cambridge U. Press, Cambridge, 1995). Farrukh Chishtie and D.G.C. McKeon, arXiv:1110.1425 \[hep-th\]; *Int. J. Mod. Phys.* **A27**, 1250077 (2012). D.G.C. McKeon, arXiv:1112.3646 \[hep-th\]; *Can. J. 
Phys.* **90**, 249 (2012). G. ’t Hooft, *Nucl. Phys.* **B33**, 173 (1971). W. Garczynski, *Rep. Math. Phys.* **25**, 73 (1987);\ *Phys. Lett.* **B198**, 367 (1987). Farrukh Chishtie and D.G.C. McKeon, arXiv:1207.2302; *Class. Quantum Grav.* **29**, 235016 (2012). D.G.C. McKeon and C. Wong, *Int. J. Mod. Phys.* **A10**, 2181 (1995). L. Faddeev, *Theor. Math. Phys.* **1**, 1 (1970). P. Senjanovic, *Ann. Phys.* (N.Y.) **100**, 227 (1976). Appendix-Conventions {#appendix-conventions .unnumbered} ==================== We use the metric $\eta^{ij} = \mathrm{diag}(+,-,-)$ with $\epsilon^{012} = +1$. The Dirac matrices $\gamma^i$ are related to the Pauli spin matrices by $\gamma^0 = \sigma_2$, $\gamma^1 = i\sigma_3$, $\gamma^2 = i\sigma_1$, so that $$\gamma^i \gamma^j = \eta^{ij} + i\epsilon^{ijk} \gamma_k \eqno(A.1)$$ and $$\gamma^0 \gamma^i\gamma^0 = -\gamma^{iT} = \gamma^{i\dagger}.\eqno(A.2)$$ Latin indices $(i,j,k \ldots )$ are used for the target space, Greek indices $(\mu , \nu , \lambda \ldots)$ are used for space-time indices $(0, 1, 2)$, while early Greek indices $(\alpha, \beta, \gamma \ldots)$ are used for space indices $(1,2)$. We take $\epsilon^{12} = \epsilon^{012} = \epsilon_{12} = 1$. All spinors $\psi$ are taken to satisfy the Majorana condition $\psi = C\overline{\psi}^T$, where $\overline{\psi} = \psi^\dagger\gamma^0$ and $C = -\gamma^0$, so that $\psi = \psi^*$ is real. We use the relations $$\overline{\chi}\phi = \overline{\phi}\chi,\quad \overline{\chi}\gamma^i\phi = - \overline{\phi}\gamma^i\chi\,. \eqno(A.3a,b)$$ For Grassmann variables $\theta_a$, we use the left derivative, so that $$\frac{d}{d\theta_a} (\theta_b\theta_c) = \delta_{ab} \theta_c - \delta_{ac} \theta_b; \quad \frac{d}{dt} F(\theta(t)) = \dot{\theta}(t) F^\prime (\theta(t)). 
\eqno(A.4a,b)$$ For Bosonic fields $B_i$ and Fermionic fields $F_i$, we define Poisson brackets with respect to Bosonic canonical pairs $(q_i, p_i = \frac{\partial L}{\partial \dot{q}_i})$ and Fermionic canonical pairs $(\psi_i,\pi_i = \frac{\partial L}{\partial \dot{\psi}_i})$ by $$\left\lbrace B_1, B_2 \right\rbrace = \left( B_{1,q} B_{2,p} - B_{2,q} B_{1,p}\right) + \left( B_{1,\psi} B_{2,\pi} - B_{2,\psi} B_{1,\pi}\right) = - \left\lbrace B_2, B_1 \right\rbrace \eqno(A.5a)$$ $$\left\lbrace B, F \right\rbrace = \left( B_{,q} F_{,p} - F_{,q} B_{,p}\right) + \left( B_{,\psi} F_{,\pi} + F_{,\psi} B_{,\pi}\right) = - \left\lbrace F, B \right\rbrace \eqno(A.5b)$$ $$\left\lbrace F, B \right\rbrace = \left( F_{,q} B_{,p} - B_{,q} F_{,p}\right) - \left( B_{,\psi} F_{,\pi} + F_{,\psi} B_{,\pi}\right) = - \left\lbrace B, F \right\rbrace \eqno(A.5c)$$ $$\left\lbrace F_1, F_2 \right\rbrace = \left( F_{1,q} F_{2,p} + F_{2,q} F_{1,p}\right) - \left( F_{1,\psi} F_{2,\pi} + F_{2,\psi} F_{1,\pi}\right) = \left\lbrace F_2, F_1\right\rbrace \eqno(A.5d)$$ where $B_{,q} F_{,p} = \displaystyle{\sum_i} \frac{\partial B}{\partial q_i} \,\frac{\partial F}{\partial p_i}$ etc. It follows that $$\left\lbrace XY, Z \right\rbrace = X\left\lbrace Y, Z \right\rbrace + (-1)^{\epsilon_y\epsilon_z} \left\lbrace X, Z \right\rbrace Y\eqno(A.6a)$$ $$\left\lbrace X,YZ \right\rbrace = (-1)^{\epsilon_x\epsilon_y} Y\left\lbrace X, Z \right\rbrace + \left\lbrace X, Y \right\rbrace Z\eqno(A.6b)$$ where $\epsilon_x = 1$ if $X$ is Fermionic and $\epsilon_x = 0$ if $X$ is Bosonic. The Hamiltonian is given by $$H(q_i, p_i, \psi_i ,\pi_i) = \dot{q}_i p_i + \dot{\psi}_i \pi_i - L (q_i, \dot{q}_i, \psi_i, \dot{\psi}_i) \eqno(A.7)$$
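The matrix conventions above are easy to machine-check. The following stand-alone snippet (ours, not part of the paper) verifies the Clifford algebra implied by the symmetric part of (A.1), $\{\gamma^i, \gamma^j\} = 2\eta^{ij}$, together with the conjugation property (A.2), for the explicit representation $\gamma^0 = \sigma_2$, $\gamma^1 = i\sigma_3$, $\gamma^2 = i\sigma_1$:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac matrices in the conventions of the appendix
g = [s2, 1j * s3, 1j * s1]
eta = np.diag([1.0, -1.0, -1.0])  # metric diag(+,-,-)
I2 = np.eye(2)

# Clifford algebra: {gamma^i, gamma^j} = 2 eta^{ij}
for i in range(3):
    for j in range(3):
        anti = g[i] @ g[j] + g[j] @ g[i]
        assert np.allclose(anti, 2 * eta[i, j] * I2)

# (A.2): gamma^0 gamma^i gamma^0 = -gamma^{iT} = gamma^{i dagger}
for i in range(3):
    lhs = g[0] @ g[i] @ g[0]
    assert np.allclose(lhs, -g[i].T)
    assert np.allclose(lhs, g[i].conj().T)

print("conventions verified")
```

Such a numerical check is a cheap safeguard against sign errors when moving between representations of the $2+1$-dimensional Dirac algebra.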
--- abstract: 'Aluminum hydride (alane), AlH$_3$, is an important material in hydrogen storage applications. AlH$_3$ is known to exist in multiple polymorphic forms, of which $\alpha$-AlH$_3$, with a hexagonal structure, is found to be the most stable. Recent experimental studies on $\gamma$-AlH$_3$ reported an orthorhombic structure with a unique double-bridge bond between certain Al and H atoms, which is not found in $\alpha$-AlH$_3$ or the other polymorphs. Using density functional theory, we have investigated the energetics and the structural, electronic, and phonon vibrational properties of the newly reported $\gamma$-AlH$_3$ structure. The present calculations conclude that $\gamma$-AlH$_3$ is less stable than $\alpha$-AlH$_3$ by 2.1 kJ/mol. Interesting binding features associated with the unique geometry of $\gamma$-AlH$_3$ are discussed in terms of the calculated electronic properties and phonon vibrational modes. The binding of H-$s$ with higher-energy Al-$p,d$ orbitals is enhanced within the double-bridge arrangement, giving rise to a higher electronic energy for the system. Distinguishable new features in the vibrational spectrum of $\gamma$-AlH$_3$ are attributed to the double-bridge and hexagonal-ring structures.' author: - Yan Wang - 'Jia-An Yan' - 'M. Y. Chou' title: 'Electronic and Vibrational Properties of $\gamma$-AlH$_3$ ' --- Introduction ============ Aluminum hydride AlH$_3$ (alane) is among the most promising metal hydrides for hydrogen storage, containing a usable hydrogen fraction of 10.1 wt.$\%$ with a density of 1.48 g/ml. The enthalpy of the decomposition reaction is low, resulting in minimal heat exchange for both charging and discharging. The material is thermodynamically unstable near ambient conditions, yet it is kinetically stable, releasing little hydrogen over years. However, an extremely high hydrogen pressure (exceeding 25 kbar) is required to achieve charging. 
The decomposition of AlH$_3$ occurs in a single step: $$AlH_3 \rightarrow Al + \frac{3}{2}H_2.$$ How to overcome the kinetic barriers and find new routes to synthesize AlH$_3$ reversibly has been the focus of many recent research activities. Early studies identified seven polymorphs of AlH$_3$: $\alpha$, $\alpha'$, $\beta$, $\delta$, $\varepsilon$, $\gamma$, and $\xi$.[@brower76] It was suggested experimentally that the $\alpha$ phase is the most stable, followed by the $\beta$ and $\gamma$ phases.[@reilly1; @reilly2] The decomposition of the $\gamma$ and $\beta$ phases is faster than that of the $\alpha$ phase. Reaction enthalpies of 7.1 kJ/mol-H$_2$ and 11.4 kJ/mol-H$_2$ for the $\gamma$ and $\alpha$ phases, respectively, have been reported,[@reilly1; @reilly2] as has an exothermic transition from the $\gamma$ phase to the $\alpha$ phase. Only the $\alpha$ polymorph of AlH$_3$ has been investigated extensively, including structural characterization,[@alpha] thermodynamic measurements,[@therm1; @therm2] and thermal and photolytic kinetic studies.[@therm2; @kinet1; @kinet2; @kinet3; @kinet4; @kinet5; @kinet6] First-principles calculations[@cal1; @cal2; @cal3] of $\alpha$-AlH$_3$ have been performed to study its structural stability and its electronic and thermodynamic properties. Limited studies have been conducted on the other polymorphs and their properties. Recently, the crystal structure of $\gamma$-AlH$_3$ was reported by two separate groups using synchrotron X-ray powder diffraction[@gamma] and powder neutron diffraction,[@gamma2] respectively. A unique feature of double-bridge bonds involving Al-2H-Al is identified, in addition to the normal bridge bond Al-H-Al found in $\alpha$-AlH$_3$. 
In the present study, using density functional theory and the linear-response approach, we investigate the electronic properties and phonon spectra of the newly published $\gamma$-AlH$_3$ crystal structure.[@gamma; @gamma2] The phase stability and interesting binding characteristics of $\gamma$-AlH$_3$ are presented and compared with those of $\alpha$-AlH$_3$. The origin of the cohesive energy difference between the two phases is discussed. Distinct phonon vibrational modes arising from the double-bridge and hexagonal-ring structures are identified. Computational procedures ========================= The calculations are based on density functional theory.[@dft] The Kohn-Sham equations are solved in a plane-wave basis using the Vienna [*ab initio*]{} simulation package (VASP).[@met1; @met2] For the exchange-correlation functional, the generalized gradient approximation (GGA) of Perdew and Wang (PW91)[@met3] is employed. The electron-ion interaction is described by ultrasoft pseudopotentials (USPP).[@metus] The $k$-space integrals are evaluated using the sampling generated by the Monkhorst-Pack procedure.[@kpoint] The calculations for both the $\alpha$- and $\gamma$-AlH$_3$ structures are performed with a $k$-point mesh of $7\times 7 \times 7$. The relaxations of the cell geometry and atomic positions are carried out using a conjugate gradient algorithm until the Hellmann-Feynman force on each of the unconstrained atoms is less than 0.01 eV/$\AA$. The nuclear coordinates are first relaxed with the cell volume fixed at the experimental value; then simultaneous relaxations of the cell volume, shape, and atomic coordinates are performed. For the total energy calculations, the plane-wave energy cutoff is 600 eV. The self-consistent total energy converges to within $10^{-5}$ eV/cell. 
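The Monkhorst-Pack sampling referenced above has a simple closed form, $u_r = (2r - q - 1)/2q$ with $r = 1,\ldots,q$ along each reciprocal axis. As an illustration (a stand-alone sketch of the construction, not the actual VASP input), the fractional coordinates of the $7\times 7\times 7$ mesh can be generated as:

```python
import itertools
import numpy as np

def monkhorst_pack(nq):
    """Fractional k-point coordinates of an nq[0] x nq[1] x nq[2]
    Monkhorst-Pack grid: u_r = (2r - q - 1) / (2q), r = 1..q, per axis."""
    axes = [[(2 * r - q - 1) / (2 * q) for r in range(1, q + 1)] for q in nq]
    return np.array(list(itertools.product(*axes)))

kpts = monkhorst_pack((7, 7, 7))  # the 7x7x7 mesh used here
```

For odd subdivisions such as 7, the grid contains the zone center $\Gamma$, and all coordinates lie strictly inside $(-1/2, 1/2)$.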
The vibrational properties are studied with density functional perturbation theory within the linear-response approach.[@linear] The dynamical matrices are obtained on uniform grids of $q$ vectors of $4\times 4 \times 4$ and $3\times 3\times 3$ over the Brillouin zones of $\alpha$- and $\gamma$-AlH$_3$, respectively. These dynamical matrices are then Fourier-transformed to real space to construct the force-constant matrices, from which the phonon frequencies at arbitrary $q$ are obtained. The PWSCF numerical code[@pwscf] was used in our calculations of the zero-point energies and phonon densities of states. Results and Discussion ======================== Structure and Energetics ------------------------ The structural characterization from an earlier high-resolution synchrotron X-ray diffraction study[@alpha] concluded that $\alpha$-AlH$_3$ has a rhombohedral lattice of space group $R\bar{3}c$ (No. 167). Recent diffraction experiments[@gamma; @gamma2] determined that $\gamma$-AlH$_3$ has an orthorhombic unit cell with space group $Pnnm$ (No. 58). The unit cell of $\gamma$-AlH$_3$ is shown in Figure \[fig:struct\](a), compared with that of hexagonal $\alpha$-AlH$_3$ in Figure \[fig:struct\](c). The experimentally reported lattice constants are listed in Table I. The building element of both phases is the AlH$_6$ octahedron, in which one Al atom is surrounded by six H atoms. However, the packing scheme in $\gamma$-AlH$_3$ is more complex than that in $\alpha$-AlH$_3$. In $\alpha$-AlH$_3$ the AlH$_6$ octahedra are connected simply by sharing vertices, as illustrated in Figure \[fig:struct\](c). The network of these octahedra produces only one type of Al-H-Al bridge bond, with a bond angle of 142$^\circ$ and a bond length of 1.712 ${\AA}$. In contrast, in $\gamma$-AlH$_3$ the AlH$_6$ octahedra are connected not only by sharing vertices, as in $\alpha$-AlH$_3$, but also by sharing edges, as shown in Figure \[fig:struct\](a). 
As a consequence, two nonequivalent Al atoms, Al1 and Al2, are created. Four nonequivalent H atoms (H1-H4)[@gamma] are also identified, as shown in Figure \[fig:struct\](a), while only one type of H atom exists in $\alpha$-AlH$_3$. Al1 is involved in the normal bridge bond with H, which is similar to that in $\alpha$-AlH$_3$ but with slight differences in bond length and angle. Al2, however, is involved in a new type of double-bridge configuration, Al2-2H3-Al2, shown near the center of the unit cell in Figure \[fig:struct\](a). This double-bridge bond gives a smaller distance of $2.60$ ${\AA}$ between the two Al2 atoms, compared with the Al-Al separation of 3.24 ${\AA}$ in $\alpha$-AlH$_3$ and 2.86 ${\AA}$ in Al metal. In addition to the double-bridge configuration, a hexagonal-ring structure is found, consisting of two Al2, two Al1, and four H4 atoms, as shown in Figure \[fig:struct\](b). The four Al atoms lie in one plane parallel to the c-axis, while the H4 atoms are displaced slightly ($\sim$0.14 $\AA$) out of this plane. These rings are connected to form linear chains along the c-axis. The double bridge in the ab-plane and the hexagonal ring along the c-axis are unique to $\gamma$-AlH$_3$ and have not been found in any other hydrogen-containing aluminum compound. Furthermore, the $\alpha$ phase possesses higher symmetry than the $\gamma$ phase, with a smaller primitive trigonal unit cell. The total number of formula units (f.u.) in a unit cell is six for the $\gamma$ phase and two for the $\alpha$ phase. The molecular volume (density) of the $\gamma$ phase is found to be higher (lower) than that of the $\alpha$ phase by 11$\%$ (10$\%$). ![(Color Online) Crystal structures of (a) orthorhombic $\gamma$-AlH$_3$, (b) the hexagonal-ring structure in $\gamma$-AlH$_3$, and (c) hexagonal $\alpha$-AlH$_3$. The large and small spheres denote Al and H atoms, respectively. 
[]{data-label="fig:struct"}](gAlH3.eps "fig:"){width="5cm"} ![(Color Online) Crystal structures of (a) orthorhombic $\gamma$-AlH$_3$, (b) the hexagonal-ring structure in $\gamma$-AlH$_3$, and (c) hexagonal $\alpha$-AlH$_3$. The large and small spheres denote Al and H atoms, respectively. []{data-label="fig:struct"}](ring.eps "fig:"){width="5cm"} ![(Color Online) Crystal structures of (a) orthorhombic $\gamma$-AlH$_3$, (b) the hexagonal-ring structure in $\gamma$-AlH$_3$, and (c) hexagonal $\alpha$-AlH$_3$. The large and small spheres denote Al and H atoms, respectively. []{data-label="fig:struct"}](aAlH3.eps "fig:"){width="5cm"}

  Configuration          Cohesive energy (eV/f.u.)   a (Å)    b (Å)    c (Å)     $\alpha$ ($^\circ$)   $\beta$ ($^\circ$)   $\gamma$ ($^\circ$)
  ---------------------- --------------------------- -------- -------- --------- --------------------- -------------------- ---------------------
  $\alpha$-AlH$_3$:
  Current work           -14.052                     4.49     4.49     11.80     90.0                  90.0                 120.0
  Previous work (Ref.)                               4.42     4.42     11.80     90.0                  90.0                 120.0
  (Ref.)                                             4.489    4.489    11.820    90.0                  90.0                 120.0
  Experiment (Ref.)                                  4.449    4.449    11.813    90.0                  90.0                 120.0
  $\gamma$-AlH$_3$:
  Current work           -14.028                     5.43     7.40     5.79      90.0                  90.0                 90.0
  Experiment (Ref.)                                  5.3806   7.3555   5.77509   90.00                 90.00                90.00
  (Ref.)                                             5.3672   7.3360   5.7562    90.00                 90.00                90.00
  ---------------------- --------------------------- -------- -------- --------- --------------------- -------------------- ---------------------

  : Calculated cohesive energies and structural parameters for $\gamma$-AlH$_3$ and $\alpha$-AlH$_3$ compared with values from experiments and previous calculations. \[eorder\]

  Bond      Calc.   Ref.16/Ref.17   Bond     Calc.   Ref.16/Ref.17   Angle        Calc.   Ref.16/Ref.17
  --------- ------- --------------- -------- ------- --------------- ------------ ------- ---------------
  Al1-Al2   3.18    3.1679/3.155    Al2-H1   1.69    1.668/1.657     Al1-H4-Al2   134.7   124.0/133.88
  Al2-Al2   2.62    2.602/2.585     Al2-H2   1.70    1.664/1.678     Al1-H2-Al2   168.8   171.0/169.99
  Al1-H2    1.76    1.769/1.764     Al2-H3   1.72    1.70/1.755      Al2-H1-Al2   180.0   180.0/179.99
  Al1-H4    1.72    1.784/1.696     Al2-H4   1.72    1.790/1.733     Al2-H3-Al2   97.3    100.7/97.53
  H3-H3     2.30    2.16/2.266                                       H4-Al1-H4    85.9    76.34/85.00
                                                                     H4-Al2-H4    176.3   170.63/176.84
  --------- ------- --------------- -------- ------- --------------- ------------ ------- ---------------

  : Interatomic distances ($\AA$) and bond angles (deg) obtained from the fully relaxed structure of $\gamma$-AlH$_3$. The values from synchrotron X-ray diffraction[@gamma] and powder neutron diffraction[@gamma2] are also included for comparison. \[gdistance\]

Using the experimentally established space groups and unit cells shown in Figure \[fig:struct\], the total energies and structural parameters of both the $\alpha$ and $\gamma$ phases are obtained from first-principles calculations. The results for the fully relaxed structures are summarized in Table I. The theoretical lattice parameters are in good agreement with the observed values for both $\gamma$- and $\alpha$-AlH$_3$ and are consistent with previous $\alpha$-AlH$_3$ calculations.[@cal1; @cal2] The calculated cohesive energies indicate that $\gamma$-AlH$_3$ is energetically less stable than $\alpha$-AlH$_3$ by 2.3 kJ/mol, in fair agreement with the measured value of 4.3 kJ/mol.[@reilly1; @reilly2] In addition, the various bond lengths for the nonequivalent Al and H atoms in $\gamma$-AlH$_3$ are evaluated and compared in Table II with recent experimental results from synchrotron X-ray powder diffraction[@gamma] and powder neutron diffraction.[@gamma2] Good agreement is found between the calculated and observed values. 
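The per-AlH$_3$ and per-H$_2$ energy scales quoted in this work can be related directly. A short sketch of the conversion (our arithmetic, using the values quoted in the text):

```python
# A stability difference of 2.3 kJ per mole of AlH3 translates, via the
# 3/2 mol of H2 released per mole of AlH3 in eq. (1), into a difference
# between the decomposition enthalpies of the two phases per mole of H2.
delta_per_alh3 = 2.3                    # kJ/mol AlH3 (gamma above alpha)
delta_per_h2 = delta_per_alh3 / 1.5     # kJ/mol H2
print(f"{delta_per_h2:.2f} kJ/mol-H2")  # ~1.5 kJ/mol-H2
```

This is consistent with the $\sim$1.6 kJ/mol-H$_2$ spread between the calculated reaction enthalpies of the two phases.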
Without including the zero-point energy, the calculated reaction enthalpies for equation (1) are 9.2 and 10.8 kJ/mol-H$_2$ for the $\gamma$ and $\alpha$ phases, respectively, which are also consistent with the measured values.[@reilly1; @reilly2] ![Difference charge density plots (see text) for $\gamma$-AlH$_3$ in the planes containing Al and H atoms: (a) the (002) plane and (b) the (001) plane. The solid squares and circles represent Al and H positions, respectively. The density contour interval is 0.001 electrons/[Å]{}$^3$. Charge deficiency is represented by dashed lines, while the density increase near the hydrogen atoms is represented by solid lines. []{data-label="fig:gchg"}](2d002.eps "fig:"){width="3in"} ![Difference charge density plots (see text) for $\gamma$-AlH$_3$ in the planes containing Al and H atoms: (a) the (002) plane and (b) the (001) plane. The solid squares and circles represent Al and H positions, respectively. The density contour interval is 0.001 electrons/[Å]{}$^3$. Charge deficiency is represented by dashed lines, while the density increase near the hydrogen atoms is represented by solid lines. []{data-label="fig:gchg"}](2d001.eps "fig:"){width="3in"} ![Charge-density difference plot for $\alpha$-AlH$_3$ in a plane containing Al and H atoms. The solid square and circle represent Al and H positions, respectively. The density contours in an interval of 0.001 electrons/[Å]{}$^3$ are presented for the (010) plane. Charge deficiency is represented by dashed lines, while the density increase near the hydrogen atoms is represented by solid lines.[]{data-label="fig:achg"}](a2d010.eps){width="5in"} Electronic Structure -------------------- The electronic structure of $\gamma$-AlH$_3$ is first analyzed by examining the charge distribution and charge transfer. 
The difference charge density $\Delta \rho({\bf r})$ is the difference between the total charge density of the solid and a superposition of atomic charge densities placed at the same spatial coordinates as in the solid: $$\Delta \rho({\bf r})=\rho_{solid}({\bf r}) - \sum_{i} \rho_{atom}^{i}({\bf r - R_i}),$$ where the sum is over all the atoms. Figure \[fig:gchg\] shows such plots in the planes containing the two different types of Al-H configurations in $\gamma$-AlH$_3$. The solid squares and circles represent Al and H positions, respectively. The (002) plane shown in Figure \[fig:gchg\](a) contains one double-bridge bond Al2-2H3-Al2 and two other H atoms (H1 and H2), while the (001) plane shown in Figure \[fig:gchg\](b) contains a normal bridge H2-Al1-H2. The difference density plot shows positive values (solid contours) at the H positions, indicating a charge transfer from Al to H. As a result, aluminum is positively charged and hydrogen negatively charged. The maximum contour value is on the order of 0.014 electrons/[Å]{}$^3$ in both (a) and (b). The minimum is about -0.002 electrons/[Å]{}$^3$, and the step size of the contours is 0.001 electrons/[Å]{}$^3$. The zero difference charge density line forms a closed contour around H, leaving a negative charge density in the interstitial regions. In Figure \[fig:gchg\](a), note that the positive contours of the difference charge density are not exactly centered at the two H3 sites. The local maximum near each H3 is slightly shifted toward the other, and the zero difference charge density line encloses both H3 atoms, indicating some interaction within the H3 pair. In contrast, this is not seen for the other H atoms, nor in the normal H-Al-H structure in $\alpha$-AlH$_3$. The separation between the two H3 atoms in a double-bridge configuration is 2.3 [Å]{}, slightly smaller than that of other neighboring H pairs. The distance between the two Al2 atoms (2.62 [Å]{}) is also small compared with that in Al metal (2.86 [Å]{}). 
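Eq. (2) amounts to subtracting a superposition of free-atom densities evaluated on the same real-space grid as the self-consistent density. A minimal numpy sketch with Gaussian model densities (our toy stand-in for the actual DFT densities; positions and widths are illustrative):

```python
import numpy as np

def atom_density(grid, center, n_elec, width=0.5):
    """Spherical model density normalized to carry n_elec electrons."""
    r2 = ((grid - center) ** 2).sum(axis=-1)
    rho = np.exp(-r2 / (2.0 * width ** 2))
    return n_elec * rho / rho.sum()

# uniform grid over a toy cubic cell
ax = np.linspace(0.0, 4.0, 40, endpoint=False)
grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), axis=-1)

al_pos = np.array([1.0, 2.0, 2.0])   # model "Al" site, 3 valence electrons
h_pos = np.array([2.7, 2.0, 2.0])    # model "H" site, 1 electron

# free-atom superposition, and a mock "solid" density in which the Al
# charge cloud is displaced slightly toward H
rho_atoms = atom_density(grid, al_pos, 3) + atom_density(grid, h_pos, 1)
rho_solid = atom_density(grid, al_pos + np.array([0.1, 0.0, 0.0]), 3) \
            + atom_density(grid, h_pos, 1)

delta_rho = rho_solid - rho_atoms
assert abs(delta_rho.sum()) < 1e-8   # Delta rho integrates to zero
```

The last assertion reflects the general property used in the discussion above: since both terms of eq. (2) carry the same total charge, $\Delta\rho$ integrates to zero, so regions of accumulation (solid contours) are exactly balanced by regions of deficiency (dashed contours).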
The contour lines in Figure \[fig:gchg\](a) also show enhanced interactions between H-$s$ and Al-$d$ electrons. For comparison, the difference charge density for $\alpha$-AlH$_3$ is also calculated and plotted in Figure \[fig:achg\] for the (010) plane containing normal Al-H bonds. The normal bridge bond in $\gamma$-AlH$_3$ \[Figure \[fig:gchg\](b)\] is found to have characteristics similar to those in $\alpha$-AlH$_3$ (Figure \[fig:achg\]). The binding between Al and H atoms involves a charge transfer in both $\alpha$- and $\gamma$-AlH$_3$. ![(Color Online) Calculated electronic density of states (DOS) projected onto (a) H and (b) Al in $\alpha$-AlH$_3$, as well as the total DOS for (c) $\alpha$-AlH$_3$ and (d) $\gamma$-AlH$_3$. The Fermi level is at zero.[]{data-label="fig:apdos"}](apdos.eps){width="5in"} ![(Color Online) Calculated electronic density of states projected onto non-equivalent atoms in $\gamma$-AlH$_3$. The Fermi energy is set to zero.[]{data-label="fig:gpdos"}](gpdos.eps){width="5in"} ![Total charge density for the lowest two energy bands projected onto the (002) plane, the same plane as shown in Figure \[fig:gchg\](a). The solid circles represent H positions. The maximum, minimum, and interval of the contours are 0.85, 0.002, and 0.1 electrons/[Å]{}$^3$, respectively.[]{data-label="fig:pchg2d"}](parchg2d.eps){width="5in"} ![(Color Online) Total charge density isosurface for the lowest two energy bands in $\gamma$-AlH$_3$. The cell structure is the same as shown in Figure \[fig:struct\](a). The constant charge density surface shown is for 0.45 electrons/$\AA^3$. The charge accumulation within the hexagonal-ring structure defined in Figure \[fig:struct\](b) is clearly visible.[]{data-label="fig:pchg"}](pchg.eps){width="6in"} The total electronic densities of states (DOS) for both structures are calculated and shown in Figure \[fig:apdos\]. The DOS projected onto H and Al for $\alpha$-AlH$_3$ are also presented in Figure \[fig:apdos\]. 
The angular-momentum-projected DOS is evaluated by integrating over a sphere centered at each atom, with a radius of 1.0 [Å]{} for Al and 0.9 [Å]{} for H. The choice of the H radius is based on the charge distribution around H, in order to capture the charge transfer to H. The radius of Al is then chosen according to the Al-H bond length, so as to cover as much of the interstitial region as possible without overlapping the spheres. The Fermi energy is set to zero in these plots. The current results for $\alpha$-AlH$_3$ are comparable with previous calculations.[@cal1; @cal2; @cal3] The H-$s$ component spans the whole energy range. The lowest peak, from $\sim$ -8.7 to -4.0 eV, corresponds to H-$s$ and Al-$s$ states, while the higher-energy features are composed of H-$s$ and Al-$p$ or -$d$ states. The total band width is similar in $\alpha$- and $\gamma$-AlH$_3$. A difference between the two is clearly seen near -7 eV, where a small band gap opens in the low-energy region of $\gamma$-AlH$_3$. This new gap gives rise to a broad, separated DOS peak from -8.7 to -7.0 eV. In order to understand this new feature in the DOS of $\gamma$-AlH$_3$, the density of states projected onto the non-equivalent Al and H atoms is analyzed in Figure \[fig:gpdos\]. The same radii used for $\alpha$-AlH$_3$ are adopted for integrating over the spheres around the Al and H atoms. The valence states below the Fermi energy can be grouped into three energy regions: low (-8.7 to -7.0 eV), middle (-6.8 to -4.0 eV), and high (-4.0 to 0 eV). The broad peak in the low-energy region of -8.7 to -7.0 eV is composed of considerable H3-$s$ states, with some $s$ contributions from the other H atoms and the two Al atoms. This broad peak consists of the first two energy bands, separated from the other bands at higher energies by a small gap. 
In Figure \[fig:pchg2d\], the total charge density for these two bands is calculated and plotted for the (002) plane, the same plane shown in Figure \[fig:gchg\](a). The maximum, minimum, and interval of the contours are 0.85, 0.002, and 0.1 electrons/$\AA^3$, respectively. The plot shows noticeable charge accumulation around the H3-H3 pair. The three-dimensional isosurface of the total charge density for the same lowest two bands is shown in Figure \[fig:pchg\] for a density value of 0.45 electrons/$\AA^3$. Again, the interaction within the H3-H3 pair is evident. In addition, considerable charge is found around the four H4 atoms within the hexagonal ring defined in Figure \[fig:struct\](b). Therefore, the broad peak in the low-energy region is attributed mainly to the unique double-bridge and hexagonal-ring structures in $\gamma$-AlH$_3$. In fact, the projected DOS associated with H3 is reduced in the middle-energy region (-6.8 to -4.0 eV) compared with the other H atoms. The missing amplitude is shifted to both the low (-8.7 to -7.0 eV) and high (-4.0 to 0 eV) energy regions. The former reflects the increased H3-H3 interaction and the H-$s$-Al2-$s$ coupling, while the latter exhibits an enhanced interaction between H3-$s$ and Al2-$p$ and -$d$ states. This stronger interaction of the H3-$s$ states with the higher-energy Al2-$p$ and -$d$ states, imposed by the geometry, gives rise to a higher electronic energy for the system. Vibrational Properties ---------------------- The high vibrational frequencies of the light hydrogen atoms are expected to result in a significant zero-point energy, which needs to be included in studying the ground-state properties. In order to include the zero-point energy $E_{zpt}$ contributions in the energetics of the two AlH$_3$ phases, the phonon density of states (DOS) is evaluated for both $\alpha$- and $\gamma$-AlH$_3$. 
Although the absolute value of $E_{zpt}$ is appreciable for both phases, the difference turns out to be small: $\Delta E_{zpt} = E_{zpt}(\alpha) - E_{zpt}(\gamma) = 0.2$ kJ/mol. Including this correction in the total energies listed in Table I, the $\gamma$ phase has a slightly higher energy than the $\alpha$ phase, by 2.1 kJ/mol. ![(Color Online) Total and partial phonon density of states (DOS) for $\alpha$-AlH$_3$.[]{data-label="fig:alphonon"}](alphapdos.eps){width="5in"} ![(Color Online) Total and partial phonon density of states (DOS) for $\gamma$-AlH$_3$.[]{data-label="fig:gphonon"}](gammapdos.eps){width="6in"} The partial phonon density of states for atom $\tau$ is defined as $$\rho_{\tau}(\omega)=\sum_{q}\sum_{j=1}^{3N}|{\bf e}_{\tau}({\bf q}, j)|^2 \delta(\omega - \omega({\bf q},j)),$$ where $N$ is the total number of atoms per unit cell, ${\bf q}$ is the phonon momentum, $j$ labels the phonon branch, ${\bf e}_{\tau}({\bf q}, j)$ is the phonon polarization vector for atom $\tau$, and $\omega({\bf q},j)$ is the phonon frequency. The total and projected phonon densities of states for $\alpha$- and $\gamma$-AlH$_3$ are presented in Figures \[fig:alphonon\] and \[fig:gphonon\], respectively. Our calculated phonon DOS for $\alpha$-AlH$_3$ is in good agreement with previous calculations,[@cal1; @cal2] showing three distinct groups over the whole frequency range. The decomposed phonon DOS indicates that the low-frequency modes below 350 cm$^{-1}$ derive mainly from the Al atoms, while the middle- and high-frequency modes are H-dominated, a consequence of the large mass difference between H and Al. The high-frequency phonons above 1550 cm$^{-1}$ are associated with H motion in the Al-H bond-stretching modes. No vibrational modes were found in the frequency region from 1025 to 1550 cm$^{-1}$, yielding a gap of 525 cm$^{-1}$ in $\alpha$-AlH$_3$. 
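The partial-DOS definition above can be implemented directly once the frequencies and polarization vectors are in hand, with the $\delta$ function broadened to a unit-area Gaussian. This is an illustrative stand-alone sketch with toy data (the DOS in this work comes from the PWSCF linear-response machinery, not from this snippet):

```python
import numpy as np

def partial_phonon_dos(freqs, evecs, atom, omega, sigma=20.0):
    """rho_tau(omega) = sum_{q,j} |e_tau(q,j)|^2 delta(omega - omega_qj),
    with delta broadened to a normalized Gaussian of width sigma.
    freqs: (n_modes,) frequencies; evecs: (n_modes, n_atoms, 3)."""
    rho = np.zeros_like(omega)
    norm = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    for w, e in zip(freqs, evecs):
        weight = (np.abs(e[atom]) ** 2).sum()
        rho += weight * norm * np.exp(-(omega - w) ** 2 / (2.0 * sigma ** 2))
    return rho

# toy check: 6 modes of a 2-atom cell with orthonormal eigenvectors,
# so that sum_tau |e_tau|^2 = 1 for each mode
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(6, 6)))
evecs = q.reshape(6, 2, 3)
freqs = np.array([300.0, 350.0, 400.0, 900.0, 1600.0, 1700.0])  # cm^-1
omega = np.linspace(-500.0, 2500.0, 6000)

total = sum(partial_phonon_dos(freqs, evecs, t, omega) for t in range(2))
integral = total.sum() * (omega[1] - omega[0])
assert abs(integral - len(freqs)) < 1e-3  # total DOS integrates to n_modes
```

The final assertion checks the sum rule implicit in the definition: summed over atoms, the partial DOS integrates to the number of phonon modes.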
The phonon DOS for $\gamma$-AlH$_3$ is more complex than that for $\alpha$-AlH$_3$ because of the structural difference. The $\gamma$-phase unit cell contains many more atoms (24) than the $\alpha$-phase cell (8), so the calculated phonon DOS for $\gamma$-AlH$_3$ in Figure \[fig:gphonon\] exhibits more features. Compared with the phonon DOS of $\alpha$-AlH$_3$, the modes located in the three frequency regions remain, but several new peaks appear in the spectrum. In order to understand how the different atomic species contribute to these vibrations, the partial phonon DOS is calculated for the four types of H and two types of Al atoms, as illustrated in Figure \[fig:gphonon\]. The new features labeled in Figure \[fig:gphonon\] can be summarized as follows: (1) Four new vibrational peaks appear within the gap region of the $\alpha$-AlH$_3$ spectrum, in the frequency range of 1200 to 1500 cm$^{-1}$. These are vibrational modes of H3 in the plane of the Al2-2H3-Al2 complex, consisting of both bond-stretching and bending motions of H3 in this plane. (2) A strong, narrow new peak is introduced in the middle-frequency region, around 875 cm$^{-1}$, corresponding to H3 vibrations perpendicular to the Al2-2H3-Al2 plane. (3) An additional broadened peak appears in the low-frequency region from 375 to 475 cm$^{-1}$, dominated by Al2 in-plane vibrational modes, with some contributions from H1 and H2 atoms and little contribution from H3. The displacements of the two Al2 atoms have the same magnitude but opposite directions, indicating a paired motion between them. As a consequence, these modes have higher frequencies than the other Al modes. These first three new features are largely related to the motions of the two H3 and the two Al2 atoms in the double-bridge structure. 
(4) Two peaks around 1550 cm$^{-1}$ are associated with the out-of-plane motion of the four H4 atoms in the hexagonal-ring structure shown in Figure \[fig:struct\](b). (5) The set of high-frequency peaks around 2012 cm$^{-1}$ is associated with the H1 atoms, each of which connects two Al2 atoms in opposite directions; the angle Al2-H1-Al2 is 180$^\circ$. The vibrational modes of H1 lie in the $ab$ plane and along the Al2-H1-Al2 line. These new features in the $\gamma$-AlH$_3$ phonon DOS can serve as indicators of these special Al and H arrangements. Conclusion ========== We have performed pseudopotential density-functional calculations to study the electronic and phonon vibrational properties of the newly reported aluminum hydride structure $\gamma$-AlH$_3$. The calculated structural parameters are in good agreement with results from the diffraction experiments. Our energetic study of the AlH$_3$ system indicates that $\gamma$-AlH$_3$ is less stable than $\alpha$-AlH$_3$ by $\sim$ 2 kJ/mol. The unique double-bridge configuration in $\gamma$-AlH$_3$ was investigated by examining the electronic properties and phonon vibrational modes. It was found that the double-bridge arrangement modifies the binding between the H and Al atoms. The projected DOS indicates that interactions exist between the double-bridge H3 atoms and that the interaction between the H3-$s$ and the higher-energy Al2-$p$ and -$d$ states is enhanced; the latter yields a higher electronic energy in $\gamma$-AlH$_3$. New features in the phonon vibrational spectrum associated with the double-bridge bonds and the hexagonal-ring complex were also identified. Acknowledgment ============== We thank Dr. V. A. Yartys for bringing Ref. to our attention and for stimulating discussions. This work is supported by the Department of Energy under grant No. DE-FG02-05ER46229.
--- abstract: 'We investigate how superpositions of motional coherent states naturally arise in the dynamics of a two-level trapped ion coupled to the quantized field inside a cavity. We extend our considerations to include a more realistic setup in which the cavity is not ideal and photons may leak through its mirrors. We find that the detection of a photon outside the cavity would leave the ion in a pure state. The statistics of the ionic state still retains some interference effects that might be observed in the weak coupling regime.' author: - 'F. L. Semião' - 'A. Vidiella-Barranco' bibliography: - 'cat.bib' title: Coherent states superpositions in cavity quantum electrodynamics with trapped ions --- Introduction ============ There has been a great deal of interest in the coherent manipulation of simple quantum systems [@blatt_rev; @demille; @haroche_rev], mainly due to the high degree of control necessary for the implementation of quantum information processing tasks [@cirac_zoller; @zoller2; @molmer]. In particular, the study of trapped ions interacting with laser beams has attracted much attention due to the significant experimental advances in the generation of quantum states in such systems [@wingen; @blagen]. The interaction of trapped ions with laser beams is well understood in terms of a semiclassical model with the electromagnetic field treated as a c-number, but new features might be revealed by the field quantization. The entanglement between photons and ions is a remarkable consequence of that quantization, and its potential applications have been motivating the experimental work in cavity quantum electrodynamics with trapped particles [@blacav]. For instance, schemes have been reported for the generation of specific entangled states such as Greenberger-Horne-Zeilinger (GHZ) states [@knight] as well as Bell states [@bell].
One of the reasons for the interest in studying and experimentally coupling photons with material particles comes from the fact that, in order for quantum information processing to be used to its full extent, one should be able to inter-convert stationary and flying qubits and also faithfully transmit flying qubits between given positions. Those two statements are part of what are known as DiVincenzo’s requirements for the physical implementation of quantum computation and information [@Divincenzo]. The entanglement present in the system consisting of cavities and trapped ions may be useful in the propagation of information carried by photons between two distant locations [@networks]. It is not just entangled states involving either two-level systems or Fock states of the electromagnetic field that find applications in quantum information. Another interesting class of nonclassical states with high application potential is the one formed by linear superpositions of coherent states. This class of states has been considered for quantum teleportation [@hirota_tele; @xiao_tele], logic gate implementation [@kim1; @kim2], and tests of local realism [@sanders_local], for instance. In this paper, we show that superpositions of motional coherent states may be generated in the framework of cavity electrodynamics with trapped ions by letting the system evolve in the resonant carrier dynamics and by performing a measurement of the internal state of the ion. We apply the quantum-jump formalism to study the non-ideal case including damping in the cavity and show that the detection of photons outside the cavity could be used to generate nonclassical states of the motion of the trapped ion. More precisely, we show that the statistics of the generated state keeps track of the coherence displayed in the oscillatory behavior of the phonon number distribution and in the variation of its width from Poissonian to sub- or super-Poissonian [@phase].
Although this is not a deterministic protocol (it depends on the random event of a photon leaking from the cavity), it might be of interest because it could be implemented in current experimental systems. Experiments involving trapped ions and optical cavity fields have been performed only in the weak coupling regime, in which the cavity damping is stronger than the ion-cavity coupling [@blacav]. Model Hamiltonian ================= In this work we consider a single two-level ion confined in a Paul trap and placed inside an optical cavity. The cavity mode couples to the ionic internal degrees of freedom {$|e\rangle$,$|g\rangle$} and the system Hamiltonian is given by [@zeng] $$\begin{aligned} \hat{H}&=&\hbar\nu \hat{a}^{\dagger}\hat{a} + \hbar\omega\hat{b}^{\dagger}\hat{b} +\hbar\frac{\omega_0}{2}\hat{\sigma}_z \nonumber\\&&+ \hbar g(\hat{\sigma}_+ + \hat{\sigma}_-)(\hat{b}^{\dagger}+ \hat{b})\cos\eta(\hat{a}^{\dagger}+\hat{a}), \label{H}\end{aligned}$$ where $\hat{a}^{\dagger}(\hat{a})$ denotes the creation (annihilation) operator of the center-of-mass vibrational motion of the ion (frequency $\nu$), $\hat{b}^{\dagger}(\hat{b})$ is the creation (annihilation) operator of photons in the field mode (frequency $\omega$), the $\hat{\sigma}$ operators are the usual Pauli matrices for the two internal levels of the ion, $\omega_0$ is the atomic transition frequency, $g$ is the ion-field coupling constant, and $\eta=2\pi a_0/\lambda$ is the Lamb-Dicke parameter, with $a_0$ the amplitude of the harmonic motion and $\lambda$ the wavelength of the cavity field. For our purposes here we may work in the Lamb-Dicke regime ($\eta\ll 1$), i.e., the situation in which the spatial extent of the motion of the trapped ion is much smaller than the wavelength of the cavity field.
In this regime, we may perform an approximation that simplifies the original Hamiltonian (\[H\]) as follows $$\cos\eta(\hat{a}^{\dagger}+\hat{a})\approx 1-\frac{\eta^2(1+ 2\hat{a}^{\dagger}\hat{a})}{2}-\frac{\eta^2(\hat{a}^\dagger{}^2+\hat{a}^2)}{2}. \label{expan}$$ If we tune the light field so that it exactly matches the atomic transition, i.e., $\omega_0-\omega=0$ (carrier transition), we obtain the interaction Hamiltonian in the Lamb-Dicke regime, which, after discarding rapidly oscillating terms, reads $$\hat{H}_I= \hbar g \left[1-\frac{\eta^2(1+2\hat{a}^{\dagger}\hat{a})}{2}\right] (\hat{\sigma}_- \hat{b}^{\dagger} + \hat{\sigma}_+ \hat{b}). \label{hamilint}$$ The resulting Hamiltonian in equation (\[hamilint\]) is similar to the Jaynes-Cummings Hamiltonian, but with an effective coupling constant that in our case depends on the excitation number of the ionic oscillator, $\hat{m}=\hat{a}^{\dagger}\hat{a}$. Such an intensity dependence has already been shown [@inten] to be related to the occurrence of super-revivals (revivals taking place at long times) of the atomic inversion. Results ======= Generation of superpositions of motional states ----------------------------------------------- We now consider the system initially prepared with the ion in its excited internal state $|e\rangle$, the cavity in the vacuum state $|0\rangle_c$, and the vibrational motion in the coherent state $|\alpha\rangle_v$, i.e. $|\psi(0)\rangle=|\alpha\rangle_v|0\rangle_c|e\rangle$.
Under the Hamiltonian (\[hamilint\]), the state $|\psi(0)\rangle$ evolves to $$\begin{aligned} |\psi(t)\rangle&=&\cos\left(gt\left[1-\eta^2(1+2\hat{a}^{\dagger}\hat{a})/2 \right]\right)|\alpha\rangle_v|0\rangle_c|e\rangle\nonumber\\ &&-i\sin\left(gt\left[1-\eta^2(1+2\hat{a}^{\dagger}\hat{a})/2 \right]\right)|\alpha\rangle_v|1\rangle_c|g\rangle.\nonumber\\ \label{ppsi}\end{aligned}$$ We still have to apply the functions of the operator $\hat{a}^{\dagger}\hat{a}$ to the coherent state $|\alpha\rangle_v$. This may easily be done by moving to the Fock basis, and the result is given by $$\begin{aligned} |\psi(t)\rangle&=&[\cos(\omega_{\eta}t)\,|\Phi_+\rangle_v-i\sin(\omega_{\eta}t)\,|\Phi_-\rangle_v]\, |0\rangle_c|e\rangle \nonumber\\ &&+\,[\cos(\omega_{\eta}t)\,|\Phi_-\rangle_v-i\sin(\omega_{\eta}t)\,|\Phi_+\rangle_v]\,|1\rangle_c|g\rangle,\nonumber\\ \label{ent1}\end{aligned}$$ where $\omega_{\eta}\equiv g(1-\eta^2/2)$ and $|\Phi_\pm\rangle_v$ are general superpositions of coherent states given by $$|\Phi_\pm\rangle_v\equiv\frac{|\alpha\:e^{i\phi}\rangle_v\pm|\alpha\:e^{-i\phi}\rangle_v}{2}, \label{ent2}$$ where we have defined the time-dependent real phase $\phi=\eta^2gt$. The state (\[ent1\]) is an entangled state involving superpositions of motional coherent states of the trapped ion, its internal electronic states, and Fock states of the cavity field. It is noteworthy that for interaction times given by $t_k=k\pi/\omega_{\eta}$, with $k$ an integer, the state of the system reduces (up to a global phase) to $$|\psi(t)\rangle=|\Phi_+\rangle_v|0\rangle_c|e\rangle +|\Phi_-\rangle_v|1\rangle_c|g\rangle. \label{ent3}$$ One can then obtain a disentangled motional state by performing a measurement of the internal state of the ion. The experimental discrimination between the two electronic levels may be done using the very efficient electron shelving method [@shelving]. Depending on the measurement outcome, the collapsed motional state may be either $|\Phi_+\rangle_v$ or $|\Phi_-\rangle_v$.
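The phonon statistics of the collapsed states follow directly from the Fock expansion of a coherent state: since $|\alpha e^{\pm i\phi}\rangle$ has Fock coefficients $c_m(\alpha)e^{\pm im\phi}$, one finds $\langle m|\Phi_+\rangle_v \propto c_m(\alpha)\cos(m\phi)$ and $\langle m|\Phi_-\rangle_v \propto c_m(\alpha)\sin(m\phi)$, with $c_m(\alpha)=e^{-|\alpha|^2/2}\alpha^m/\sqrt{m!}$. A minimal numerical sketch (the Fock-space truncation `mmax` is an illustrative choice):

```python
import math

def cat_state_pm(alpha, phi, sign=+1, mmax=40):
    # Normalized phonon distribution P_m for |Phi_+> (sign=+1) or |Phi_-> (sign=-1)
    weights = []
    for m in range(mmax):
        cm2 = math.exp(-alpha**2) * alpha**(2*m) / math.factorial(m)  # |c_m|^2
        trig = math.cos(m*phi) if sign > 0 else math.sin(m*phi)
        weights.append(cm2 * trig*trig)
    norm = sum(weights)
    return [w / norm for w in weights]

def normalized_variance(P):
    # sigma^2 = <m^2>/<m> - <m>  (Poissonian statistics give sigma^2 = 1)
    mbar = sum(m*p for m, p in enumerate(P))
    m2bar = sum(m*m*p for m, p in enumerate(P))
    return m2bar/mbar - mbar
```

For $\phi=0$ the distribution of $|\Phi_+\rangle_v$ reduces to the Poissonian of the initial coherent state, while, e.g., $\phi=\pi/2$ suppresses all odd-$m$ components and yields oscillatory, super-Poissonian statistics.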
One of the most interesting characteristics of these superposition states is that their statistics is strongly sensitive to the value of the phase $\phi$. The trivial case occurs for $\phi=0$, which leads the distribution $P_m=|\langle m|\Phi_\pm\rangle_v|^2$ (phonon statistics) to be Poissonian. However, it is well known that there are domains in which it can be either sub- or super-Poissonian. As pointed out in [@phase], when the statistics is super-Poissonian the distribution $P_m$ displays an oscillatory behavior, a direct consequence of interference in phase space. Such behavior is analogous to the oscillatory photon statistics of highly squeezed states [@osc]. Although similar superposition states may also be generated using classical fields [@gerry], the possibility of entanglement with light is a unique feature related to the quantum nature of the electromagnetic field. The scheme proposed here relies on an undemanding initial preparation of the system. It requires the initial field to be in the vacuum state $|0\rangle_c$, i.e., there is no need to prepare or inject a coherent field state into the cavity. Additionally, the vibrational motion of the ion has to be prepared in a coherent state $|\alpha\rangle_v$, whose experimental realization for a $^{9}\mathrm{Be}^{+}$ ion trapped in an RF (Paul) trap has already been reported [@wingen]. The internal ionic state needs to be prepared in the excited state, which can be achieved by the application of laser pulses, for instance. We would like to point out that the linear dependence of the ion-field coupling constant on the operator $\hat{a}^{\dagger}\hat{a}$ is crucial for the generation of the states $|\Phi_\pm\rangle_v$. Therefore, it is very important to be aware of the limits within which the parameter $\eta$ and the initial amplitude $\alpha$ may be varied while keeping the approximation (\[expan\]) valid.
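This range of validity can be checked numerically against the exact carrier matrix element $\langle m|\cos\eta(\hat{a}^{\dagger}+\hat{a})|m\rangle=e^{-\eta^2/2}L_m(\eta^2)$, a standard result for motional matrix elements, compared with the Lamb-Dicke expression $1-\eta^2(1+2m)/2$. A quick sketch, using the Laguerre three-term recurrence so that no external libraries are needed:

```python
import math

def laguerre(m, x):
    # L_m(x) via the recurrence (n+1) L_{n+1} = (2n+1-x) L_n - n L_{n-1}
    if m == 0:
        return 1.0
    lm2, lm1 = 1.0, 1.0 - x
    for n in range(1, m):
        lm2, lm1 = lm1, ((2*n + 1 - x)*lm1 - n*lm2) / (n + 1)
    return lm1

def coupling_ratio(eta, m):
    # R(eta, m) = exact carrier coupling / Lamb-Dicke approximation
    exact = math.exp(-eta**2 / 2) * laguerre(m, eta**2)
    approx = 1.0 - eta**2 * (1 + 2*m) / 2
    return exact / approx
```

For $\eta=0.05$ the ratio stays within roughly a tenth of a percent of unity up to $m\approx 10$, whereas a larger parameter such as $\eta=0.3$ already produces deviations at the ten-percent level for modest $m$.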
This limit is set by keeping the product $\eta^2\overline{\hat{a}^\dagger\hat{a}}$ small enough, which allows us to neglect higher-order terms in the cosine expansion. If the Lamb-Dicke approximation were not made, it would be necessary to work with the full nonlinear coupling constant $\lambda\equiv\langle m|\cos\eta(\hat{a}^{\dagger}+\hat{a})|m\rangle=e^{-\eta^2/2}L_m^0(\eta^2)$. For suitable values of the product $\eta^2\overline{\hat{a}^\dagger\hat{a}}$ this coupling constant reduces to $\lambda_{LD}=1-\eta^2(1+2m)/2$, the coupling constant we have used so far (Lamb-Dicke regime). In figure \[val\] we show the ratio between the exact and the approximate coupling constants, $R(\eta,m)\equiv\lambda/\lambda_{LD}$. We see that there are ranges of $\eta$ and $m$ for which $R\approx 1$. Under such circumstances, the Lamb-Dicke approximation is valid and the generation protocol proposed here is applicable. ![\[val\]Ratio between the exact coupling constant and the approximate one in the Lamb-Dicke regime. The approximation is valid in the region $R(\eta,m)\approx 1$, where $\eta$ and $m$ are small enough.](valid.eps){width="8.cm"} Phonon statistics and continuous observation -------------------------------------------- We are now interested in the more realistic setting in which the cavity is not ideal and photons leaking through its mirrors can be detected. The setup we have in mind is depicted in Fig.\[cavity\]. We still have a two-level trapped ion interacting resonantly with the cavity field, but we now consider the cavity to be lossy, decaying at a rate $\kappa$. We assume that a detector $D$ is placed outside the cavity in such a way that it monitors the cavity decay. ![\[cavity\]Schematic experimental setup. The system consists of a single trapped ion placed in a lossy cavity having a decay rate $\kappa$.
The detector $D$ continuously monitors this decay channel.](cavity.eps){width="10.cm"} Damping in quantum optical systems is usually described using master equations, whose solutions give the time evolution of the system when the decay is not observed. The time evolution under continuous observation of photon counts, however, may be adequately described by a pure state that evolves according to a non-Hermitian Hamiltonian. This approach is known as the quantum-jump, or quantum-trajectory, formalism [@qjreview]. The idea of continuous observation of decay channels in systems consisting of atoms or ions and cavities has proved useful for performing legitimate information processing tasks such as teleportation [@martin_tele], maximally entangled state generation [@martin1; @martin2], and quantum gates [@qc_cont], for instance. We saw that the time evolution of the system under the Hamiltonian (\[hamilint\]), together with a measurement of the electronic state, may be used to generate the states (\[ent2\]). Now, instead of measuring the atomic state, we will show that the detection of a photon outside the cavity collapses the system into a state that keeps many of the characteristics of the states (\[ent2\]), namely the oscillatory behavior of the distribution $P_m$ as well as its narrowing and broadening [@phase]. For the sake of simplicity, we assume that the detector $D$ is perfect. Otherwise we would simply have to account for a finite probability that the detector fails to register a photon-leak event, which would lead us to a description in terms of density matrices rather than state vectors.
The time evolution of the system conditioned on no photon decay is given by $$i\hbar\frac{d|\psi\rangle}{dt}=\hat{H}_{\rm{eff}}|\psi\rangle,\label{eq}$$ where $$\begin{aligned} \hat{H}_{\rm{eff}}=-i\hbar\frac{\,\kappa\,\hat{b}^{\dagger}\hat{b}}{2}+\hbar g \left[1-\frac{\eta^2(1+2\hat{a}^{\dagger}\hat{a})}{2}\right] (\hat{\sigma}_- \hat{b}^{\dagger} + \hat{\sigma}_+ \hat{b}).\label{heff}\nonumber\\\end{aligned}$$ It is worth noticing that since the Hamiltonian (\[heff\]) is not Hermitian, the norm of $|\psi(t)\rangle$ is not constant in time, so the state must be normalized in order to correctly evaluate any property of the system. It is clear that if the initial state of the system is the same as before, namely $|\psi(0)\rangle=|\alpha\rangle_v|0\rangle_c|e\rangle$, the solution of equation (\[eq\]) may be written as $$|\psi(t)\rangle=\sum_{m=0}^\infty [a_m(t)|m,0,e\rangle+b_m(t)|m,1,g\rangle].\label{ev}$$ Substituting (\[ev\]) and (\[heff\]) into (\[eq\]), one obtains two coupled differential equations that may be easily solved; the result is given by $$\begin{aligned} a_m(\tau)&=& c_m(0)\,e^{-\Gamma \tau/4}\,\left(C(\tau)+\frac{\Gamma }{\sqrt{\Gamma^2-16\lambda_{LD}^2}}\,S(\tau)\right)\nonumber\\ b_m(\tau)&=& -4\,ic_m(0)\,e^{-\Gamma \tau/4}\,\frac{\lambda_{LD}}{\sqrt{\Gamma^2-16\lambda_{LD}^2}}\,S(\tau), \label{coef}\end{aligned}$$ where $c_m(0)$ are the coefficients of the expansion of the initial coherent state in the Fock basis, $\Gamma=\kappa/g$, $\tau=gt$, and $$\begin{aligned} C(\tau)&=&\cosh(\sqrt{\Gamma^2-16\lambda_{LD}^2}\,\tau/4)\\ S(\tau)&=&\sinh(\sqrt{\Gamma^2-16\lambda_{LD}^2}\,\tau/4).\end{aligned}$$ Now suppose that one photon is detected outside the cavity. This event corresponds to the destruction of one photon, leaving the system in the state $\hat{b}|\psi(\tau)\rangle$. Again, since the time evolution is not unitary, the state must be normalized after this jump.
In our case the resulting state is $|\psi(\tau)\rangle_{d}=|\Phi(\tau)\rangle_v|0\rangle_c|e\rangle$, i.e., a disentangled state with a normalized motional part given by $$|\Phi(\tau)\rangle_v=\sum_{m=0}^\infty \frac{b_m(\tau)}{\sqrt{\sum_{p=0}^\infty |b_p(\tau)|^2}} |m\rangle_v.\label{state}$$ Before investigating the statistical properties of this state, it is important to calculate the probability of a photon emission, because it sets the probability of success in generating $|\Phi(\tau)\rangle_v$. The probability that at least one jump occurs between the initial instant $0$ and a subsequent instant $\tau$ is given by $P(\tau)=1-\langle \psi(\tau)|\psi(\tau)\rangle$, where $|\psi(\tau)\rangle$ is the state in equation (\[ev\]) with the coefficients (\[coef\]). Figure \[prob\] shows the behavior of $P(\tau)$ for parameters close to those of a current experimental situation, i.e. the weak coupling regime. ![\[prob\]Probability of detection of one photon outside the cavity. The system parameters are $\Gamma=1$, $\eta=0.05$, and $\alpha=2$. This probability tends to one for higher values of $\tau$.](prob.eps){width="8.cm"} Let us now analyze the statistical properties of the vibrational state $|\Phi(\tau)\rangle_v$. The ion starts in a coherent state, which has a Poissonian distribution. We can describe its narrowing (or widening) via the normalized variance (also known as the [*Fano factor*]{}), defined as $\sigma^2=(\overline{m^2}/\bar{m})-\bar{m}$, where $\bar{m}$ and $\overline{m^2}$ are the first and second moments of the distribution $P_m=|\langle m|\Phi(\tau)\rangle_v|^2$, respectively. Values of $\sigma^2<1$ indicate sub-Poissonian, $\sigma^2>1$ super-Poissonian, and $\sigma^2=1$ Poissonian statistics. The time evolution of $\sigma^2(\tau)$ is shown in figure \[sigma\]. The original Poissonian distribution naturally evolves to either sub- or super-Poissonian values.
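These quantities are straightforward to evaluate numerically from the coefficients $b_m(\tau)$ of the conditional evolution. The sketch below implements the expressions above, using a complex square root so that the oscillatory regime $\Gamma^2<16\lambda_{LD}^2$ is handled automatically; the Fock-space truncation `mmax` is an illustrative choice:

```python
import cmath
import math

def lamb_dicke_coupling(m, eta):
    # lambda_LD = 1 - eta^2 (1 + 2m)/2, the approximate carrier coupling
    return 1.0 - eta**2 * (1.0 + 2.0*m) / 2.0

def b_m(m, tau, alpha, eta, Gamma):
    # Coefficient of |m,1,g> in the no-jump evolution, Eq. (coef)
    c0 = math.exp(-abs(alpha)**2 / 2) * alpha**m / math.sqrt(math.factorial(m))
    lam = lamb_dicke_coupling(m, eta)
    root = cmath.sqrt(Gamma**2 - 16.0*lam**2)   # imaginary in the oscillatory regime
    S = cmath.sinh(root * tau / 4.0)
    return -4j * c0 * cmath.exp(-Gamma*tau/4.0) * lam / root * S

def phonon_stats(tau, alpha=2.0, eta=0.05, Gamma=1.0, mmax=40):
    # Normalized P_m after the photon detection, and its normalized variance
    w = [abs(b_m(m, tau, alpha, eta, Gamma))**2 for m in range(mmax)]
    norm = sum(w)
    P = [x / norm for x in w]
    mbar = sum(m*p for m, p in enumerate(P))
    m2bar = sum(m*m*p for m, p in enumerate(P))
    return P, m2bar/mbar - mbar
```

With the parameters of the figures ($\Gamma=1$, $\eta=0.05$, $\alpha=2$), scanning this function over $\tau$ reproduces the qualitative behavior described in the text.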
These changes in the width of the distribution could be observed even in a bad cavity whose decay rate $\kappa$ is comparable to the coupling constant $g$, as can be seen in figure \[sigma\]. We would also like to show that strong signatures of nonclassical behavior, such as the oscillations in the phonon distribution $P_m$ at times when the statistics is Poissonian, still persist in the weak coupling regime. This may be seen in figure \[osc\], where we show the distribution at time $\tau=3.29$ with $\eta=0.05$. ![\[sigma\]Time evolution of the normalized variance. The system parameters are $\Gamma=1$, $\eta=0.05$, and $\alpha=2$.](sigma.eps){width="8.cm"} ![\[osc\]Phonon distribution at $\tau=3.29$ with system parameters $\Gamma=1$, $\eta=0.05$, and $\alpha=2$.](dist.eps){width="8.cm"} Based on these considerations, we conclude that general properties of coherent state superpositions, which arise in the lossless case, persist in our more realistic setup. This means that our proposal could be useful for the experimental investigation of certain nonclassical features. Conclusions =========== We have investigated several aspects of the dynamics of a trapped ion inside a cavity. First, we considered a situation in which the unitary time evolution leads to a global entangled state involving superposed motional coherent states, Fock states of the field, and the two internal electronic states. A measurement of the internal state of the ion at a specific interaction time then accomplishes the generation of a quantum superposition of coherent states of motion of the ion. Two different states may be generated (either $|\Phi_+\rangle$ or $|\Phi_-\rangle$), depending on the result of the measurement of the internal ionic state. The main requirement for this generation is the strong coupling regime, in which the system may perform Rabi oscillations within the lifetime of the cavity photon.
In the second part of the paper we considered the influence of cavity decay on the ionic dynamics. In fact, this itself constitutes a generation method, since a nonclassical state results from the dissipative evolution even with a photon decay rate of the same order as the ion-cavity coupling (weak coupling regime). The cavity is continuously monitored by a detector, which keeps the state of the system pure at all times. The measurement of the internal electronic state in the former scheme is now replaced by the counting of a photon leaking out of the cavity, which collapses the entangled global state of the system onto a product state. Even though the cavity is not ideal, the ionic motional state still retains (after the photon decay) important nonclassical features that characterize quantum superpositions of coherent states, such as changes in the variance of the phonon distribution (sub- or super-Poissonian statistics) as well as its oscillatory behavior. We would like to thank Martin Plenio for reading the manuscript and giving valuable suggestions. This work is partially supported by CNPq (Conselho Nacional para o Desenvolvimento Científico e Tecnológico) and FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo), grant number 02/02715-2, Brazil.
--- abstract: 'The Carnegie-Chicago Hubble Program seeks to anchor the distance scale of Type Ia supernovae via the Tip of the Red Giant Branch (TRGB). Based on deep *Hubble Space Telescope* ACS/WFC imaging, we present an analysis of the TRGB for the metal-poor halo of [NGC1365]{}, a giant spiral galaxy in the Fornax Cluster that is host to the supernova SN 2012fr. We have measured its extinction-corrected TRGB magnitude to be [$\mathrm{F814W}=\trgbredcorrval\pm {0.03}_{stat}\pm {0.01}_{sys}~\mathrm{mag}$]{}. In advance of future direct calibration by *Gaia*, we set a provisional TRGB luminosity via the Large Magellanic Cloud and find a true distance modulus [$\mu_0 = \truetrgbdmod \pm\dmodcombinedstaterr_{stat} \pm\dmodcombinedsyserr_{sys}~\mathrm{mag}$]{} or [$D = \truetrgbdmodMpc\pm\truetrgbdmodMpcstaterr_{stat}\pm\truetrgbdmodMpcsyserr_{sys}$ Mpc]{}. This high-fidelity measurement shows excellent agreement with recent Cepheid-based distances to [NGC1365]{} and suggests no significant difference in the distances derived from stars of Population I and II. We revisit the error budget for the path to the Hubble Constant based on this analysis of one of our most distant hosts, finding a 2.5% measurement is feasible with our current sample.' author: - In Sung Jang - Dylan Hatt - 'Rachael L. Beaton' - Myung Gyoon Lee - 'Wendy L. Freedman' - 'Barry F. Madore' - 'Taylor J. Hoyt' - 'Andrew J. Monson' - 'Jeffrey A. Rich' - Victoria Scowcroft - Mark Seibert bibliography: - 'ms.bib' title: '*The Carnegie-Chicago Hubble Program.* III. THE DISTANCE TO [NGC1365]{} via the Tip of the Red Giant Branch[^1]' --- Introduction ============ The aim of the Carnegie-Chicago Hubble Program () is a direct route to [$H_0$]{} using Type Ia supernovae ([SNe Ia]{}) calibrated entirely via Population (Pop) II stars. The [SNe Ia]{} zero point is determined using a distance ladder built from RR Lyrae (RRL) and the Tip of the Red Giant Branch (TRGB) distances to Local Group galaxies.
This zero point is then applied to the full sample of [SNe Ia]{} in the smooth Hubble flow to arrive at a local, direct estimate of [$H_0$]{}. Eventually, the TRGB will be calibrated in the Galaxy based on *Gaia* trigonometric parallaxes for a three-step route to the Hubble constant. Since this path is independent of the traditional Pop I Cepheid distance scale that currently sets the [SNe Ia]{} zero point, it has the potential to provide insight into the growing (now >3-$\sigma$) difference in the value of [$H_0$]{} as determined by direct [the distance ladder; e.g. @fre12; @rie16] and indirect methods [via modeling of the Cosmic Microwave Background; e.g. @kom11; @planck16]. Cepheids have long been in use as primary distance indicators: understanding their systematics remains a critical goal. Current uncertainties include the metallicity dependence of the Leavitt Law, the impact of crowding on mean magnitudes, and how best to measure and remove the effects of interstellar extinction [@bea16 Paper I]. Some, but not all, of these problems relate to the physical location of Cepheids as Pop I stars within the spiral arms of their parent galaxy. By moving to distance indicators based on Pop II stars, the aims to bypass these issues of the traditional Cepheid-based extragalactic distance scale by using the intrinsically low-density, metal-poor, and low-extinction regions of galaxies. In , the motivations and full scope of the were explained in detail. Taking into account current and projected calibrations of the Pop II distance scale, estimated a 2.9% measurement of the Hubble Constant was feasible at the conclusion of the assuming $\sim$0.1 mag precision of TRGB-based distance measurements in [SNe Ia]{} host galaxies. With direct calibration of the TRGB with *Gaia*, the end precision in the Hubble Constant will be 2.3% (still assuming 0.1 mag precision of the TRGB).
In @hatt17 [Paper II], the methods for image processing, photometry, and measuring the TRGB for targets were described in detail and applied to the nearby dwarf irregular galaxy, IC1613. In that study, both the TRGB and RRL were used in concert to measure the distance to the galaxy to precisions of 2.3% and 3.2%, respectively. Due to its low surface density and proximity to the Galaxy, IC1613 represents an ideal case for the application of the tools used in the . It is the purpose of this paper to apply the TRGB methodology to one of the most distant [SNe Ia]{}-host galaxies in the sample, [NGC1365]{}. [NGC1365]{} is the brightest spiral galaxy in the Fornax Cluster. Cepheids in this galaxy were discovered for the first time [@sil99; @mad99] as part of the *Hubble Space Telescope* ([*HST*]{}) $H_0$ Key Project. The distance to [NGC1365]{} derived from Cepheids was adopted as the distance to the Fornax Cluster, wherein both the fundamental plane and Tully-Fisher relationships were calibrated [@mad98; @fre01]. In 2012, [NGC1365]{} became even more important for the extragalactic distance scale with the discovery of a [SNe Ia]{}, SN 2012fr [@klotz_2012]. SN 2012fr was discovered and classified sufficiently early to receive extensive follow-up, with 594 photometric and 144 spectroscopic data points included in the Open Supernova Database[^2] [@gui16]. SN 2012fr was also extensively monitored by the optical+NIR Carnegie Supernova Project (CSP), with a detailed analysis of SN 2012fr to be presented by Contreras et al. (in prep). A high-fidelity distance to [NGC1365]{} is therefore a key component of the calibration of the extragalactic distance scale as defined by multiple techniques. The structure of the paper is as follows. In Section \[sec:data\] the observations and data processing are described.
In Section \[sec:TRGB\], we describe the detection of the TRGB in [NGC1365]{}, estimate the uncertainties in our measurement, and determine the distance to [NGC1365]{} by adopting a provisional TRGB luminosity. In Section \[sec:discussion\], we compare our TRGB methodology to other approaches, compare our distance to those derived from Cepheids, and discuss the implications of our measurement in the context of the goals of the . The primary results of this work are summarized in Section \[sec:conclusion\]. Detailed comparisons of the methods used in this paper with those of similar works are given in the Appendix. ![image](f1.eps){width="80.00000%"} [ccccccc]{} 2014-09-17 & F606W & 12 & $03^h 33^m 51.4^s$ & $-36^\circ 12\arcmin 05.0\arcsec$ & $3.37\arcmin\times 3.37\arcmin\xspace$ & $\sim1200$\ 2014-09-21 & F814W & 10 & …& …& …& …\ 2014-09-25 & F814W & 10 & …& …& …& …\ Data {#sec:data} ==== The image processing and photometry are performed identically to that described by and will be summarized in the subsections to follow. A detailed description of the image analysis and photometry pipeline will be presented in a forthcoming work (Beaton et al. in prep). Observations and Image Preparation {#ssec:obs} ---------------------------------- We obtained optical imaging over 16 orbits on 2014 September 17, 21, and 25 using the ACS/WFC instrument aboard [*HST*]{} [PID:GO13691, PI: Freedman; @cchp2proposal]. Six and ten orbits were used for the F606W and F814W filters, respectively. Pointings were centered on $\mathrm{RA}=3^h 33^m 52.4^s$ and $\mathrm{Dec}=-36^\circ 12\arcmin 05.0\arcsec$, which is $5\farcm0$ southeast of the NGC 1365 center. The field was selected to be safely in the stellar halo of [NGC1365]{} and care was taken to place the pointing sufficiently far from the spiral arms by inspection of *WISE* and *GALEX* imaging as described in .
Figure \[fig:f1\]a shows the ACS/WFC pointing relative to the galaxy using a wide-area ($11' \times 11'$) $JHK$ composite image from the FourStar NIR-imager on the Magellan-Baade telescope taken as part of the CSP follow-up campaign for SN 2012fr [Contreras et al. in prep; for a description of the instrument see @persson_2013]. Figure \[fig:f1\]b is a color image of the ACS/WFC observations based on a ‘drizzled’ co-add, and Figure \[fig:f1\]c is a $10\arcsec \times 10\arcsec$ region of the ACS/WFC image where individual RGB stars are circled. Figure \[fig:f1\]c illustrates that the RGB stars in our halo pointing are well isolated from neighboring sources. Exposure times were designed to have a signal-to-noise ratio of 10 in F814W at the anticipated apparent magnitude of the TRGB predicted by previous distance estimates to [NGC1365]{}. The F606W signal-to-noise is lower (typically by a factor of 3), but the color is only used to remove contaminants and the lower quality does not strongly affect the TRGB itself. This strategy provides reliable photometry to a depth of at least one magnitude below the anticipated TRGB, meeting the sampling requirements for robust TRGB identification as defined in @mad95. Individual exposures were $\sim1200$ sec each, for total exposure times of 14,676 sec and 24,396 sec for F606W and F814W, respectively. A summary of these observations, split by the three *HST* visits, is given in Table \[tbl:obs\_sum\]. Individual ACS/WFC images were obtained through the *Mikulski Archive for Space Telescopes*. We use the FLC data products, which are calibrated, flat-fielded, and CTE-corrected in the STScI `CALACS` pipeline. The non-uniform pixel area due to ACS/WFC geometric distortions was corrected using the STScI-provided Pixel Area Maps[^3]. All further analysis is conducted on these pixel-area corrected FLC frames.
Photometry {#sec:phot} ---------- Instrumental magnitudes were derived for individual FLC images via point-spread-function (PSF) fitting in the <span style="font-variant:small-caps;">DAOPHOT</span> software [@1987PASP...99..191S]. We used <span style="font-variant:small-caps;">DAOPHOT</span> to model the PSF for F606W and F814W on synthetic Tiny Tim based star grids (a detailed description of this process will be given in Beaton et al. in prep.). A direct test of the Tiny Tim PSFs against direct frame-by-frame PSF modeling with isolated, bright stars is described in and was found to agree within the photometric uncertainties. Images were aligned using <span style="font-variant:small-caps;">DAOMATCH</span>/ <span style="font-variant:small-caps;">DAOMASTER</span> that operate on preliminary catalogs [@1987PASP...99..191S]. We then use a co-add of all images to determine a ‘master source list’ that is used to simultaneously photometer each individual frame using the <span style="font-variant:small-caps;">ALLFRAME</span> software [@1994PASP..106..250S]. This latter procedure was established to increase the depth of individual frame photometry in the Key Project [@fre01; @allframecookbook]. Calibration of HST photometry {#sssec:hstcal} ----------------------------- We transformed the instrumental magnitudes to the ACS Vega magnitudes following equations 2 and 4 of [@2005PASP..117.1049S]. A correction from the PSF magnitudes to the $0\farcs5$ aperture magnitudes for each CCD chip was determined by comparing the curve of growth generated from aperture magnitudes to the PSF magnitude (also measured at a $0\farcs5$ radius). We find aperture corrections of –0.044 (chip 1) and –0.058 (chip 2) mag for F814W and –0.037 (chip 1) and –0.047 (chip 2) mag for F606W. We used photometric zero-point values of 26.412 mag for F606W and 25.524 mag for F814W, which were provided for a given observation date by the online STScI ACS Zeropoints Calculator[^4].
The $0\farcs5$ to infinite aperture correction values are 0.095 mag for F606W and 0.098 mag for F814W [@boh16]. Intensity mean magnitudes for each filter were computed from the individual frame magnitudes with a median based $\sigma$-clip algorithm setting the clip at 2-$\sigma$. We additionally apply an image-quality cut using the ‘sharpness’ parameter to isolate stellar sources using the average value determined from the individual image <span style="font-variant:small-caps;">ALLFRAME</span> photometry. ![CMD of resolved stars in the HST/ACS field of [NGC1365]{}. An arrow represents the approximate position of the TRGB and the blue shaded region indicates the color range adopted for the red giant branch locus.[]{data-label="fig:f2"}](f2.eps){width="\columnwidth"} Color-Magnitude Diagram ----------------------- The final color-magnitude diagram (CMD) is shown in Figure \[fig:f2\]. A change in the source density between $27.3 \lesssim \mathrm{F814W} \lesssim 27.4$ mag, corresponding to the TRGB, is visible to the eye and highlighted by an arrow. Stars brighter than the TRGB are likely thermally pulsing asymptotic giant branch (TP-AGB) stars or blended RGB stars. We perform an additional step of source filtering by visually inspecting the individual sources in the CMD $\pm$0.5 mag around the TRGB and remove $\sim150$ spurious sources that were components of background galaxies or fringes of bright stars, or $\sim2\%$ of the sources within this magnitude range. To determine the reliability and completeness of our photometry, we used extensive artificial star tests spanning a range of (F606W-F814W) colors and across the full magnitude range of the CMD in Figure \[fig:f2\]. We input stars with uniform sampling over the range $25~\mathrm{mag} < \mathrm{F814W} < 30~\mathrm{mag}$ and having (F606W-F814W) colors of 0.4, 1.2, and 2.0 mag that span the range of the RGB in our data.
We perform photometry in an identical manner to that described previously and compare the input and output star magnitudes. Figure \[fig:f3\]a shows the completeness of the artificial RGB stars as a function of F814W magnitude for F606W-F814W colors of 0.4, 1.2, and 2.0 mag. As anticipated by the F606W signal-to-noise, the photometry is less complete for redder stars, but for $0.4\lesssim \mathrm{F606W}-\mathrm{F814W} \lesssim 1.2$ mag the photometry is 80% complete at an input magnitude of F814W = 28 mag, well fainter than the visually identified TRGB in Figure \[fig:f2\]. In Figures \[fig:f3\]b and \[fig:f3\]c the recovered photometry is compared to the input photometry for $\mathrm{F606W}-\mathrm{F814W}=1.2$ mag for the F814W magnitude and the $\mathrm{F606W}-\mathrm{F814W}$ color, respectively. We find that down to an input magnitude of $\mathrm{F814W} = 28$ mag, the recovered photometry and colors are in strong agreement. We note that the completeness of our artificial stars drops below $\sim70\%$ for $\mathrm{F814W}\gtrsim28.0$ mag. We estimate that the slope of the RGB in the [*HST*]{}flight magnitude system is $-6$ mag color$^{-1}$, which is steeper than the slope of the $VI$ Johnson-Cousins RGB found in from ground-based imaging, but is similar to the slope derived from fields in M31 in the same photometric system (Hatt et al. in prep.). We also note that there is noticeable contamination of the measured RGB in Figure \[fig:f2\], which makes it difficult to determine the width of the RGB empirically. We manually adjust a color-magnitude cut until we have visually maximized the number of RGB stars encompassed by the cut. The resulting region is shaded in Figure \[fig:f2\]. The measured RGB, bounded by this color-magnitude range, consists of $\sim$$4,300$ stars, well above the minimum RGB population limits discussed in @mad95 for a robust detection of the TRGB. In the next section, we make our measurement of the TRGB and estimate its associated uncertainties.
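The completeness calculation described above — the fraction of injected artificial stars that are recovered, as a function of input magnitude — can be sketched as follows. This is an illustrative toy only: the recovery model, sample sizes, and bin grid are invented and are not the paper's actual artificial-star results.

```python
import numpy as np

def completeness_curve(m_input, recovered, bin_edges):
    """Fraction of injected artificial stars recovered, per input-mag bin."""
    frac = np.full(len(bin_edges) - 1, np.nan)
    for i, (lo, hi) in enumerate(zip(bin_edges[:-1], bin_edges[1:])):
        in_bin = (m_input >= lo) & (m_input < hi)
        if in_bin.any():
            frac[i] = recovered[in_bin].mean()
    return frac

# Toy recovery model: stars brighter than ~27 mag are always recovered,
# with completeness declining linearly at fainter magnitudes.
rng = np.random.default_rng(0)
m_in = rng.uniform(25.0, 30.0, 20000)               # input F814W magnitudes
p_recover = np.clip(1.2 - 0.1 * (m_in - 25.0), 0.0, 1.0)
recovered = rng.random(20000) < p_recover           # True if "re-measured"
edges = np.arange(25.0, 30.5, 0.5)
frac = completeness_curve(m_in, recovered, edges)
```

Plotting `frac` against the bin centers reproduces the shape of a completeness curve like that in Figure \[fig:f3\]a.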
![The reliability of the [NGC1365]{}photometric catalogs. (a) Recovery rate versus F814W magnitude for F606W–F814W = 0.4 (dashed line), 1.2 (solid line), and 2.0 (dot-dashed line) derived from artificial star experiments. (b) Difference between input and output F814W magnitudes (input minus output) versus input F814W magnitude. Circles with error bars represent the mean values. (c) Same as in (b) but for the F606W–F814W color. A vertical shaded region in each panel indicates the TRGB level of NGC 1365. []{data-label="fig:f3"}](f3.eps){width="\columnwidth"} The Tip of the Red Giant Branch {#sec:TRGB} =============================== We now estimate a distance to NGC 1365 based on the TRGB method. The TRGB is the discontinuity in the RGB luminosity function (LF) caused by the helium flash, the sudden ignition of He burning that lifts the electron degeneracy in the cores of RGB stars [a theoretical overview of RGB evolution can be found in @iben_1984; @salaris_1997]. The sequence of stars ascending the RGB via H-shell burning is thus truncated at this magnitude, as stars reaching it rapidly evolve away from the RGB sequence. As first shown empirically for a sample of nearby galaxies by @lee93, the TRGB is well-delineated and effectively flat for metal-poor populations in the $I$-band, which is equivalent to the F814W filter in the [*HST*]{}flight magnitude system. The algorithmic approach to measuring the TRGB has been refined and expanded since its initial implementation in [@lee93]. A review of published techniques since that time is given in . In this study, we follow the method outlined in . The general approach to our TRGB measurement is as follows: First, the RGB LF is binned in 0.01 F814W mag bins, where we have isolated stars using color-magnitude and image-quality cuts (Figure \[fig:f2\]).
The finely binned LF is then smoothed using GLOESS (Gaussian-windowed, Locally-Weighted Scatterplot Smoothing), which is a data smoothing technique first introduced in an astrophysical context by @per04 for Cepheid light curves and described in more detail in @monson_2017 for RR Lyrae light curves. The technique uses a smoothing window around a reference point in the input discrete function and applies a Gaussian weighting function based on the distance to neighboring data points, which is set by a scaling parameter, $\sigma_{s}$. This smoothed LF is then convolved with an edge detection kernel and, as in , we use the Sobel filter, $[-1, 0, +1]$, which is derived from finite-difference methods and is a simple approximation to the first derivative of a discrete function. The edge detector will produce the largest response when the change in the LF is the greatest, i.e., at the discontinuity present at the TRGB. As discussed , there are practical considerations for application of this technique that must be statistically modeled for a given dataset. The primary concern is the selection of an optimal size for the Gaussian scaling parameter $\sigma_{s}$, which is determined as the value for which the combination of the statistical and systematic uncertainties associated with the TRGB edge-detection are minimized. describes a procedure using artificial star tests to empirically derive $\sigma_{s}$ and the associated uncertainties, which we apply to [NGC1365]{}in Section \[sec:trgb\_optimize\]. In Section \[sssec:trgb\_meas\] we then measure the [NGC1365]{}TRGB and determine our final distance in Section \[sec:distance\]. Optimizing the TRGB Detection {#sec:trgb_optimize} ----------------------------- In order to make a robust measurement of the TRGB, we seek the optimal level of smoothing in the LF that reduces the statistical and systematic errors.
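The smoothing-plus-edge-detection step can be sketched as follows. Note the hedges: plain Gaussian-kernel smoothing stands in here for the full locally weighted GLOESS regression, and the toy LF (a power-law rise with a hard break at 27.36 mag) is invented for illustration.

```python
import numpy as np

def smooth_lf(bin_centers, counts, sigma_s):
    """Gaussian-kernel smoothing of a binned LF (a simplified stand-in
    for the locally weighted GLOESS regression)."""
    d = bin_centers[:, None] - bin_centers[None, :]
    w = np.exp(-0.5 * (d / sigma_s) ** 2)          # Gaussian weights
    return (w * counts[None, :]).sum(axis=1) / w.sum(axis=1)

def sobel_response(smoothed):
    """Apply the [-1, 0, +1] kernel; the response peaks where the LF jumps."""
    r = np.zeros_like(smoothed)
    r[1:-1] = smoothed[2:] - smoothed[:-2]
    return r

# Toy LF: zero above the tip, power-law rise (slope 0.3 dex/mag) below it.
bins = np.arange(26.0, 28.5, 0.01)
lf = np.where(bins >= 27.36, 10.0 ** (0.3 * (bins - 27.36)), 0.0)
response = sobel_response(smooth_lf(bins, lf, sigma_s=0.17))
tip = bins[np.argmax(response)]                    # lands near 27.36 mag
```

The `argmax` of the Sobel response recovers the input break to within a few hundredths of a magnitude, illustrating why the choice of $\sigma_{s}$ (too small: noise spikes; too large: a smeared, displaced edge) must be tuned.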
Sections \[sssec:aslf\] and \[sssec:edge\_sim\] describe the creation of an artificial star luminosity function (ASLF) and simulations to model the properties of our GLOESS smoothing function and the \[-1,0,+1\] kernel. ### Artificial Stars and Luminosity Functions {#sssec:aslf} We created an artificial star luminosity function (ASLF) to estimate the systematic bias and completeness of our photometry. We assumed that the luminosity function (LF) for the RGB has a slope of $0.3\pm0.04$ dex mag$^{-1}$ [see @men02]. Our ASLF begins at the estimated tip magnitude $\mathrm{F814W}\approx19.33$ mag in the instrumental magnitude system, or $\mathrm{F814W}=27.36$ in the ACS Vega system, and it extends to $\mathrm{F814W}\approx20.33$ mag or $\mathrm{F814W}=28.36$ mag in the instrumental and ACS Vega systems, respectively. We assign a fixed color of $\mathrm{F606W}-\mathrm{F814W}$ = 1. One thousand stars were sampled at random from this ASLF distribution and placed into each individual FLC frame at pixel coordinates uniformly distributed in $X$ and $Y$. These stars were manually added to the ‘master list’ of sources and the <span style="font-variant:small-caps;">ALLFRAME</span> photometry was performed as previously described. The artificial star process was repeated 50 times, producing a total of $50,000$ artificial RGB stars of which $\sim$$42,500$ were successfully measured (85% completeness over the RGB magnitude range). Figure \[fig:art\_stars\]a shows the input and output ASLFs as yellow and blue histograms, respectively. While the input ASLF has a hard bright edge to represent the TRGB, the output ASLF illustrates both incompleteness across the LF and broadening of the TRGB due to measurement uncertainties. ### Simulating TRGB edge detections {#sssec:edge_sim} We now quantify the statistical and systematic errors associated with GLOESS and our $[-1,0,+1]$ edge detection kernel.
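As a concrete setup for these simulations, drawing magnitudes from the ASLF just described — a power-law LF with slope 0.3 dex mag$^{-1}$ and a hard bright edge at the tip — can be done via inverse-CDF sampling. The function name, seed, and sample size below are illustrative, not the paper's implementation.

```python
import numpy as np

def sample_rgb_lf(n, m_tip, m_faint, slope=0.3, rng=None):
    """Inverse-CDF sampling from N(m) ∝ 10**(slope*m), m_tip <= m <= m_faint.

    The CDF of the power law is (10**(s*m) - 10**(s*m_tip)) /
    (10**(s*m_faint) - 10**(s*m_tip)); inverting it maps uniform deviates
    onto magnitudes with a hard bright cutoff at the tip.
    """
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random(n)
    a = 10.0 ** (slope * m_tip)
    b = 10.0 ** (slope * m_faint)
    return np.log10(a + u * (b - a)) / slope

mags = sample_rgb_lf(50_000, m_tip=27.36, m_faint=28.36,
                     rng=np.random.default_rng(1))
```

By construction no draw is brighter than the tip, reproducing the hard bright edge of the input ASLF; faint stars outnumber bright ones because the LF rises as $10^{0.3\,m}$.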
We restrict our ASLF to the color-magnitude constraints visualized in Figure \[fig:f2\] (blue box), as described in the previous section, and randomly select $4,300$ stars with replacement from this sample to simulate the sample size defining the TRGB in our [NGC1365]{}data. We construct a LF using 0.01 mag bins and apply GLOESS with a fixed value for [$\sigma_s$]{}. We apply the Sobel kernel to the smoothed LF and select the bin of greatest response as the TRGB. We repeat this process 10,000 times each for $0.01 <$ [$\sigma_s$]{} $< 0.25$ mag in 0.01 mag increments. We use the distribution of TRGB measurements to estimate the intrinsic uncertainties of the GLOESS smoothing and Sobel kernel. The displacement of the detected edge, $\mu_{\mathrm{TRGB}}$, for a given [$\sigma_s$]{}is defined as the mean offset from the TRGB edge (Figure \[fig:art\_stars\]a) and serves as an estimate of the systematic uncertainty for a given [$\sigma_s$]{}. The dispersion of estimates, $\sigma_{\mathrm{TRGB}}$, is the $\pm1\sigma$ standard deviation of all realizations and serves as our estimate of the random (statistical) uncertainty for a given [$\sigma_s$]{}. Figure \[fig:art\_stars\]b gives the results for all [$\sigma_s$]{}. At [$\sigma_s$]{}$\approx0.17$ mag the combined error (the quadrature sum of $\mu_{\mathrm{TRGB}}$ and $\sigma_{\mathrm{TRGB}}$) is minimized and this represents the ‘optimal’ smoothing scale (i.e., the scale that yields the smallest total uncertainty). Figure \[fig:art\_stars\]c shows the distribution of measured TRGB values for this [$\sigma_s$]{}. To measure the uncertainties with this smoothing scale, we fit a Gaussian to the resulting distribution of TRGB measurements and adopt the offset from the input and the width as the systematic and random uncertainties, [0.01]{} mag and [0.03]{} mag, respectively. These errors associated with the measurement of the idealized [NGC1365]{}LF are remarkably small.
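The bootstrap optimization above can be sketched as follows. The detection function re-implements the Gaussian-smoothing (GLOESS stand-in) plus Sobel step; the toy luminosity function, the 0.05 mag photometric blur, the small grid of smoothing scales, and the 50 realizations (versus the paper's 10,000) are all invented for illustration.

```python
import numpy as np

def detect_tip(mags, sigma_s, bins):
    """Bin, Gaussian-smooth (GLOESS stand-in), and return the magnitude of
    the peak [-1, 0, +1] kernel response."""
    lf, _ = np.histogram(mags, bins=bins)
    centers = 0.5 * (bins[:-1] + bins[1:])
    d = centers[:, None] - centers[None, :]
    w = np.exp(-0.5 * (d / sigma_s) ** 2)
    sm = (w * lf[None, :]).sum(axis=1) / w.sum(axis=1)
    resp = np.zeros_like(sm)
    resp[1:-1] = sm[2:] - sm[:-2]
    return centers[np.argmax(resp)]

# Toy recovered ASLF: hard tip at 27.36 mag, uniform 1-mag LF below it,
# blurred by 0.05 mag photometric errors.
rng = np.random.default_rng(2)
true_tip, n_stars = 27.36, 4300
bins = np.arange(26.5, 28.7, 0.01)

def draw():
    m = true_tip + rng.random(n_stars)         # magnitudes below the tip
    return m + rng.normal(0.0, 0.05, n_stars)  # photometric blurring

combined_error = {}
for sigma_s in (0.05, 0.10, 0.17):
    tips = np.array([detect_tip(draw(), sigma_s, bins) for _ in range(50)])
    mu = tips.mean() - true_tip                # systematic displacement
    sig = tips.std()                           # random scatter
    combined_error[sigma_s] = np.hypot(mu, sig)
```

Scanning `combined_error` over a fine grid of `sigma_s` and taking its minimum mirrors how the optimal smoothing scale is selected.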
In , it was shown that the TRGB magnitude for IC1613 could be constrained to only $\approx0.02$ mag. If the uncertainties were based solely on the photometric errors for its TRGB stars, one might expect that the [NGC1365]{}TRGB measurement would have larger uncertainties since IC1613 lies at $\sim730$ kpc compared to the anticipated [NGC1365]{}distance $\sim 18$ Mpc. However, although the overall uncertainty in the TRGB measurement for IC1613 is comparable to the photometric errors of its individual TRGB stars, the number of stars defining the TRGB also plays a large role in its detectability: the greater the sample of stars contributing to the tip, the more readily it is detected. The [NGC1365]{}RGB in this study is over three times more populated than that of IC1613 in . We are undertaking a series of simulations to further explore and quantify these issues, to be published in the future. ![\[fig:art\_stars\] Artificial star tests are used to determine the optimal LF smoothing scale and to derive the statistical and systematic uncertainties in our TRGB measurement. (a) The input (orange) and recovered (blue) artificial luminosity functions. (b) Displacement of the input TRGB ($\mu_{\mathrm{TRGB}}$, open blue squares), the dispersion in measured TRGB ($\sigma_{\mathrm{TRGB}}$, red plus symbols), and the quadrature sum of these values (purple filled circles). The systematic and random (statistical) uncertainties in our edge-detection are represented by $\mu_{\mathrm{TRGB}}$ and $\sigma_{\mathrm{TRGB}}$, respectively. The optimal [$\sigma_s$]{}yields minimum total uncertainty and is marked by a vertical dashed line. (c) Distribution of maximal Sobel kernel responses for our 10,000 realizations of the ASLF for the optimal smoothing scale of $\sigma_s=0.17$ mag (blue histogram) with the Gaussian model of the distribution overplotted (red line).
A vertical dashed line marks the input TRGB magnitude.](f4.eps){width="\columnwidth"} Measurement of the TRGB {#sssec:trgb_meas} ----------------------- Figure \[fig:n1365trgbmeas\]a presents the final CMD used to determine the distance to [NGC1365]{}. We apply the color-magnitude restrictions, described in the previous sections, that isolate the RGB and are indicated by the blue shading in Figure \[fig:n1365trgbmeas\]a. These limits coincide with the color range over which the TRGB magnitude is known to be flat with color [@lee93; @jan17a]. Figure \[fig:n1365trgbmeas\]b is the resulting LF for stars in the blue shaded region after smoothing using the GLOESS algorithm and our optimal scaling parameter, $\sigma_{s} =$ 0.17 mag, as determined in the previous section. Figure \[fig:n1365trgbmeas\]c is the result of applying the \[-1,0,+1\] Sobel kernel to the LF, which shows a strong peak at [27.371]{} mag (indicated by the dashed lines in Figures \[fig:n1365trgbmeas\]a and \[fig:n1365trgbmeas\]b). Based on the simulations in the previous subsection, we assign a statistical uncertainty of [0.03]{} mag and a systematic uncertainty of [0.01]{} mag. Our final TRGB determination is [$\mathrm{F814W}=\trgbobsvalrounded\pm {0.03}_{stat}\pm {0.01}_{sys}~\mathrm{mag}$]{}, before correcting for line-of-sight reddening. ![image](f5.eps){width="70.00000%"} TRGB Reddening and Distance {#sec:distance} --------------------------- The Milky Way foreground extinction is estimated to be small: [0.051]{} mag for F606W and [0.031]{} mag for F814W, or $E(\mathrm{F606W}-\mathrm{F814W})$ = 0.020 mag [@sch11 retrieved from NED]. Applying these estimates to our TRGB measurement from the previous subsection, we find an extinction-corrected TRGB magnitude of F814W =  mag. Currently, the absolute magnitude of the TRGB has no direct trigonometric calibration, though *Gaia* parallaxes will provide one in the near future.
In the interim, we have chosen to adopt an absolute magnitude for the TRGB for the CCHP analyses. The derivation of this value will be presented in a forthcoming analysis of the main body of the Large Magellanic Cloud (LMC) and is anchored to both Cepheids and eclipsing binaries (Freedman et al. in prep). The zero point is [$M_{I}^\mathrm{TRGB}={-3.95}\pm{0.03}_{stat}\pm{0.05}_{sys}$]{}and was used in for the distance to IC1613. The adoption of a provisional zero-point is a strategy similar to that employed in the mid-stages of the Key Project for the purpose of internal consistency. This provisional calibration is also consistent to within $\pm$1-$\sigma$ with the canonical TRGB calibration based on globular clusters, $M_{I}^\mathrm{TRGB}\approx-4$ mag, and has held up in more detailed calibration efforts [e.g., @riz07; @jan17a among others]. We note that @jan17a find $M_{I}^\mathrm{TRGB}$ = –3.970 $\pm$ 0.102 mag in the LMC using a similar process (i.e., spanning a similar color range). Applying this provisional zero-point to our TRGB apparent magnitude, we find a true distance modulus to [NGC1365]{}of [$\mu_0 = \truetrgbdmod \pm\dmodcombinedstaterr_{stat} \pm\dmodcombinedsyserr_{sys}~\mathrm{mag}$]{}, or a distance of [$D = \truetrgbdmodMpc\pm\truetrgbdmodMpcstaterr_{stat}\pm\truetrgbdmodMpcsyserr_{sys}$ Mpc]{}. Table \[tab\_distance\] summarizes the values for the TRGB magnitude, the distance modulus, its uncertainties, and the adopted reddening value. [lccc]{} TRGB F814W magnitude & & [0.03]{}& [0.01]{}\ $A_{\mathrm{F814W}}$ & [0.031]{}& &\ Provisional $M_I^{\mathrm{TRGB}}$ & [-3.95]{}& [0.03]{}& [0.05]{}\ True distance modulus \[mag\] & & &\ [**Distance**]{} \[Mpc\] & & &\ Discussion {#sec:discussion} ========== In this section we provide context for our TRGB measurement with regard to existing Cepheid-based distances and the goals of the CCHP. First, we compare the methods to recent TRGB studies at a similar distance to [NGC1365]{}in Section \[ssec:trgbcomp\].
Next, we compare the TRGB distance to those determined by Cepheids in Section \[ssec:distcomp\]. Lastly, in Section \[ssec:cchpgoals\] we discuss how the results of this study impact the goals of the CCHP. Comparison to Other TRGB Studies {#ssec:trgbcomp} -------------------------------- The objective of the CCHP is to measure the Hubble constant to high fidelity, minimizing systematics by observing and applying a homogeneous analysis of the TRGB in galaxies spanning 10 magnitudes in distance modulus . We have developed a data-reduction strategy that can be applied to galaxies spanning this wide range in distance. As a result, the data processing, treatment of the sloped TRGB, and edge-detection strategies differ from similar studies using the TRGB at these distances. In the subsections to follow, we compare our methods to those used in other studies. ### Data Processing {#ssec:technique} Previous studies using the TRGB method at the [NGC1365]{}distance [e.g., @jan17a; @jan17b among others] have utilized stacks produced by the STScI `DrizzlePac` software [@drizzle2015], from which photometry is derived using point-spread function fitting to bright stellar sources in the image. These stacks provide image products that can be optimized in resolution and provide higher signal-to-noise than analyses performed on individual frames, but come at the cost of producing image products that vary based on the observing strategy employed. We provide a detailed comparison to photometry derived identically to @jan17b in Appendix \[app:redux\]. We find our photometry to be statistically identical over the magnitude range of interest. Moreover, the same TRGB magnitude is obtained within the statistical uncertainties. Thus, we find no bias due to our reduction strategy.
### Rectification of Sloped TRGB {#ssec:retify} Recent studies applying the TRGB at a similar distance to [NGC1365]{}[e.g., @jan17a; @jan17b] have employed a technique that allows the higher-metallicity stars, for which the tip magnitude becomes fainter with increasing metallicity, to be used in the TRGB detection and thereby have better statistics at the tip magnitude. The form of this correction is a normalization of the tip magnitude as a function of color that effectively rectifies the slanted or curving, metal-rich portion of the TRGB into a sharp edge. The form of the rectification is either linear [@mad09; @riz07] or quadratic [@jan17a] with color. Because the program has specifically designed pointings to target the metal-poor halos of galaxies and the signal-to-noise in our F606W is 1/3 that in the F814W, we opt not to rectify the F814W magnitudes. We do, however, provide a detailed comparison to the application of these methods, and to the body of work summarized in @jan17b, in Appendix \[app:retified\]. We find the results using the rectified magnitudes to be identical to our non-rectified magnitudes within the uncertainties, and find no bias due to our choice to limit the color range used in our LF. ### Edge Detectors {#ssec:edge} We have followed a simple edge detection methodology for the TRGB in this work, modeled after , for ease of estimating the uncertainties associated with our measurement as well as avoiding previous algorithmic complications such as binning and over-smoothing data. As with , we compare results using several of the different approaches in Appendix \[app:edge\]. We find that there is good agreement with the TRGB measurement presented in this study. ### Summary {#ssec:sum} In this subsection, we compare the methods adopted in this work (and in ) to those commonly used in the literature, in particular the body of work encompassed by @jan17b.
This comparison was completed in three phases: (i) the image processing and photometry, (ii) correcting the metal-rich slope of the TRGB, and (iii) testing alternate edge detectors. For all tests, we find our methods to agree within the statistical uncertainties and thus conclude that our techniques are both sufficient for the goals of the CCHP and consistent with techniques used by other authors. Comparison to Cepheid Distances {#ssec:distcomp} ------------------------------- Previously published distance modulus estimates to [NGC1365]{}based on Cepheids, a Type II supernova (SN 2001du), and the Tully-Fisher relation (NED-D) range from $\mu_0=29.52$ mag to 32.09 mag with a mean and median of 31.20 mag and 31.26 mag, respectively. Cepheids are the only fully independent measure of distance to [NGC1365]{}, and we therefore focus our distance comparison on them. There are roughly 30 distance estimates for NGC 1365 based on Cepheids in NED circa 2017, though nearly all of these estimates are based on the same image dataset that was obtained for the Hubble Key Project: 12 epochs of F555W and 4 epochs of F814W taken with the WFPC2 instrument [@fre01; @sil99], with later works updating the calibration of the original results. These updated distance moduli show a large range from $\mu_0=31.18$ to 32.09 mag, resulting primarily from uncertainties in the color and metallicity dependence of the Cepheids. Because of the uncertainty in the Cepheid calibration, and the bias introduced by comparing the results of consecutive publications differing only in zero-points, we have chosen the @fre01 result to represent the results from this ensemble of publications, consistent with the approach of . We have further considered a recent analysis by @rie16, who analyze new NIR photometry for a subset of the Cepheids originally discovered within the Key Project [@fre01]. The final KP distance was $\mu_0$=31.27 $\pm$ 0.05 $\pm$ 0.14 mag [@fre01] and was anchored to the LMC.
@rie16 find $\mu_0$=31.307 $\pm$ 0.057 mag using NIR Cepheids and anchoring the zero-point of the PL to a number of different techniques in the Galaxy, M31, and NGC4258[^5]. Figure \[fig:distance\_estimates\] illustrates the consistency of the two independent Cepheid distances with that derived in this study. The sample error on the mean is only 0.03 mag, and gives no indication of a significant difference in the distances derived from stars of Pop I and II for [NGC1365]{}. The weighted-average of these results suggests a true distance modulus [$\langle\mu_0\rangle=31.30\pm0.03$ mag]{}, which is statistically indistinguishable from the TRGB measurement presented here based on the provisional TRGB luminosity in the LMC (Freedman et al. in prep). ![Comparison of Cepheid distances to [NGC1365]{}and the TRGB distance of this study. Vertical dashed line and dotted lines show the weighted average distance and $\pm\,$1-$\sigma$ confidence intervals, respectively, or [$\langle\mu_0\rangle=31.30\pm0.03$ mag]{}. The results of independent analyses of [NGC1365]{}agree remarkably well.[]{data-label="fig:distance_estimates"}](f6.eps){width="\columnwidth"} Evaluating the CCHP {#ssec:cchpgoals} --------------- ### Comparing the Pop I and Pop II Scales A primary goal of the CCHP is to provide a test of the systematics of the Cepheid-based distance calibration for [SNe Ia]{}. In , we found consistency between the Pop I (Cepheid) and Pop II (RRL and TRGB) based distances to the Local Group dwarf irregular galaxy, IC1613. The Cepheids in IC1613 represent a sample with low crowding, low metallicity [12+$\log$(O/H)=7.90; @bresolin_2007], and low internal reddening. Thus, in addition to being an ideal case for the TRGB, IC1613 is also an ideal galaxy for application of the Leavitt Law.
In contrast, [NGC1365]{}presents more challenges for accurately measuring Cepheids; it has high crowding in the spiral arms, is of solar or super-solar metallicity [$8.33<12+\log(\mathrm{O/H})<8.71$; @bresolin_2005], and the internal reddening is larger than that of IC1613. Thus, we can provide an initial assessment of the impact of these effects on the Cepheid distance scale. As discussed previously, we find broad agreement between the Pop I and Pop II distance indicators for both the simple case of IC1613 and for the more complicated case of [NGC1365]{}(this work). The LMC has an intermediate metallicity, 12+$\log$(O/H) = 8.26 [estimated using identical techniques to those in [NGC1365]{}and IC1613 by @berg_2012]. In the LMC, the Cepheid and (geometric) eclipsing binary distances agree to better than 1%. This early agreement between the Pop I and Pop II scales suggests that the oft-cited concerns regarding Cepheids of crowding, metallicity, and extinction cannot be fully responsible for the current impasse between direct and indirect paths to the Hubble Constant. As described in , over the course of the CCHP we will provide additional direct tests of Cepheids, RRL, and the TRGB in three Local Group galaxies (with RRL and TRGB tested in an additional three galaxies) and between Cepheids and the TRGB for a total of five [SNe Ia]{}hosts. ### The TRGB Error Budget In , we used literature studies to provide an estimate for the error budget. We adopted a TRGB measurement uncertainty of $\sigma$=0.10 mag, which was justified as twice the uncertainty quoted by @riz07 (to account for increased magnitude uncertainties for our more distant objects) and the uncertainty determined by @caldwell_2006 for a sample of Virgo dwarf galaxies. To this we added in quadrature a term for the ‘blurring’ of the TRGB due to multi-metallicity populations of $\sigma_{[Fe/H]}$ = 0.028 mag. With results from [NGC1365]{}and IC1613 in hand, we can evaluate these estimates.
As is demonstrated by our comparison of rectified and non-rectified TRGB magnitudes (Section \[ssec:retify\]), the metallicity term is likely unnecessary if the color range is sufficiently restricted (as is done here). Our total TRGB measurement uncertainty (Table \[tab\_distance\]) is 0.03 mag, a factor of three smaller than that assumed in . This can be understood largely in the context of the larger sample size populating the TRGB, as was originally described by @mad95. With 4300 stars populating the LF below the TRGB, we are able to detect it much more precisely than the @caldwell_2006 study of dwarf galaxies at a similar distance. Moreover, we obtain measurement uncertainties of the same level as @riz07 for their more nearby objects; we are able to cover a larger physical area in the halos of our more distant galaxies and this makes up for the loss in photometric accuracy due to the larger distance. If we assume that measurement uncertainties of 0.05 mag can be obtained for each of our nine [SNe Ia]{}host galaxies (all of which are no more distant than [NGC1365]{}), then this is a 50% reduction in the uncertainty in our initial error budget . The TRGB uncertainty is added in quadrature to the 0.120 mag intrinsic scatter of the [SNe Ia]{}[@folatelli_2010], and this results in a total uncertainty for an individual measurement of the [SNe Ia]{}absolute magnitude of 0.130 mag and an uncertainty of 0.038 mag (1.73%) for that term in the averaged zero point for the 12 [SNe Ia]{}in our sample. Assuming no other changes to the error budget (the analyses of align with the predictions), this suggests that a 2.5% measurement of the Hubble constant via the RRL-TRGB hybrid path will be feasible pre-*Gaia*. A direct calibration of the TRGB skips the RRL rung, and the uncertainty in the Hubble constant then approaches the 2% level.
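The quadrature arithmetic quoted above (0.130 mag per SN, 0.038 mag for the averaged zero point, 1.73% in distance) can be checked numerically, using the small-error approximation $\sigma_D/D \approx (\ln 10/5)\,\sigma_\mu$:

```python
import math

sigma_trgb = 0.05    # assumed per-host TRGB measurement uncertainty [mag]
sigma_snia = 0.120   # adopted SNe Ia intrinsic scatter [mag]
n_snia = 12          # SNe Ia in the calibration sample

single = math.hypot(sigma_trgb, sigma_snia)   # per-SN absolute-magnitude error
zeropoint = single / math.sqrt(n_snia)        # error on the averaged zero point
dist_frac = math.log(10.0) / 5.0 * zeropoint  # fractional distance (H0) error
```

Evaluating these expressions reproduces the quoted 0.130 mag, 0.038 mag, and 1.73% to rounding precision.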
The dominant term in the error budget remains the number of independently calibrated [SNe Ia]{}; efforts to expand that number, which will in turn provide greater insight into the intrinsic scatter of [SNe Ia]{}, will have the greatest impact on the final precision of this route to the Hubble constant. Conclusion {#sec:conclusion} ========== As part of the CCHP, we have measured a TRGB distance to the Fornax Cluster galaxy, [NGC1365]{}. We have resolved old, metal-poor RGB stars in the halo of [NGC1365]{}with photometry obtained from deep F606W and F814W images taken with the ACS/WFC instrument aboard [*HST*]{}. We have undertaken an extensive comparison of the different techniques in use for measuring the TRGB, and find that the technique we have adopted for the CCHP is consistent to within the uncertainties. We have measured an extinction-corrected TRGB [$\mathrm{F814W}=\trgbobsvalrounded\pm {0.03}_{stat}\pm {0.01}_{sys}~\mathrm{mag}$]{}, which, using a provisional value for the TRGB absolute magnitude, corresponds to a true distance modulus of [$\mu_0 = \truetrgbdmod \pm\dmodcombinedstaterr_{stat} \pm\dmodcombinedsyserr_{sys}~\mathrm{mag}$]{}or a physical distance of [$D = \truetrgbdmodMpc\pm\truetrgbdmodMpcstaterr_{stat}\pm\truetrgbdmodMpcsyserr_{sys}$ Mpc]{}(Table \[tab\_distance\]). Our distance estimate is consistent with the existing, independent measurements using the Cepheid Leavitt Law in the optical and near-infrared bands. Taken in the context of similar agreement for IC1613 from , we find broad agreement between the Pop I and Pop II scales over a large span of Cepheid metallicity, crowding, and internal extinction. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIP) (No. 2012R1A4A1028713).
Support for program \#13691 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. We thank the Carnegie Institution for its continued support of this program over the past 30 years. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This paper includes data gathered with the $6.5$m Magellan Telescopes located at Las Campanas Observatory, Chile. Comparison of and Literature Techniques {#app} ======================================= We undertake comparisons at three stages of the data reduction and analysis: (i) image processing and photometry (Section \[app:redux\]), (ii) rectification of the color-sensitivity of the RGB (Section \[app:retified\]), and (iii) testing of other edge-detection techniques (Section \[app:edge\]). Comparison of FLC and drizzled photometry {#app:redux} ----------------------------------------- The RGB stars measured in this study are as faint as F814W $\approx$ 28 mag and F606W $\approx$ 29 mag. In individual frames (20 for F814W and 12 for F606W), these stars are measured at low signal to noise. There are two independent approaches for producing photometry for these sources: 1. Generate a master source list from a high $S/N$ median image and use it as an input to force-photometer individual frames (as is done in the [ALLFRAME]{} software). The photometry is completed on the flc image products and we will refer to this technique as [flc]{}. 2. Directly photometer co-added images, defining an empirical PSF based on high $S/N$ sources in the median image. The photometry is completed on a drc image product and we will refer to this technique as [drc]{}.
The former ([flc]{}) is the technique described in the main text, for which we utilize the theoretical Tiny Tim PSFs [@2011SPIE.8127E..0JK]. It has the disadvantage of the stellar full-width-at-half-maximum ([fwhm]{}) being under-sampled (though we note that because stellar crowding is low, our stellar profile fitting is not limited to the stellar [fwhm]{}). The latter technique ([drc]{}) has been used more broadly in the literature for TRGB-based analyses at these distances [e.g., @jan17b and references therein] and comes with the advantage of producing stellar profiles that are Nyquist sampled within the stellar [fwhm]{}. In this Section, we provide quantitative comparisons between the [flc]{} and [drc]{} methods. We adopt the [flc]{} photometry from the main text and the [drc]{} photometry is produced as follows. Drizzled image stacks are constructed using [DrizzlePac]{} [@fru02]. We carefully selected $\sim$100 relatively bright sources in each CCD chip and used them to refine image alignment with the [Tweakreg]{} task; the mean residual RMS for the $X$ and $Y$ shifts determined with [Tweakreg]{} was smaller than 0.1 pixel. We then used [Astrodrizzle]{} to make a combined drizzled image for each filter with [final\_pixfrac]{} = 0.8 and [final\_scale]{} = 0.03 arcsec pixel$^{-1}$. The output drizzled images have stellar [FWHM]{}s of $\sim$3 pixels, corresponding to $\sim$0.09$\arcsec$. PSF photometry on the drizzled images and standard calibration were performed following the method described in the main text with the exception of the PSF modeling. We generated empirical PSFs with [DAOPHOT]{} that were constructed from $\sim15$ bright isolated stars in each of the F606W and F814W images. The F814W source catalog is used as the ‘master catalog’ and the two frames are simultaneously photometered in [ALLFRAME]{} [@1994PASP..106..250S]. The magnitudes are calibrated in the same manner as described in the main text.
Figures \[fig:drizzle\]a and \[fig:drizzle\]b provide star-by-star comparisons of the [flc]{} and [drc]{} photometry in the F606W and F814W filters, respectively. Bright stars with F606W $\lesssim 24$ mag and F814W $\lesssim 23$ mag are in excellent agreement with median offsets smaller than 0.01 mag for both filters. However, we measure small systematic offsets for the fainter stars. At the TRGB magnitude (F814W $\approx$ 27.4 mag and F606W $\approx$ 28.7 mag), median offsets are measured to be 0.04 mag in F606W and 0.03 mag in F814W. The precise origin of the offsets for fainter sources remains unclear, but could be due to (i) the relatively small number of sources used to determine the empirical PSF[^6] or (ii) documented differences between the Tiny Tim and empirical magnitudes for faint sources that were described by @2011SPIE.8127E..0JK. For 282 sources with 27.34 < F814W < 27.40 mag, the median magnitude uncertainty is 0.068 mag for F814W and 0.13 mag for F606W (the latter measurement is for the same stars in the F814W range). Thus, the differences identified in Figures \[fig:drizzle\]a and \[fig:drizzle\]b for the fainter sources are within the magnitude uncertainties. While some star-to-star differences are demonstrated in Figures \[fig:drizzle\]a and \[fig:drizzle\]b, a more relevant question is whether the TRGB detection is affected. Thus, we employ the same techniques described in the main text to the [drc]{} photometry. The result is given in Figure \[fig:drizzle\]c. The CMD shows a well defined RGB with a visible TRGB discontinuity at F814W $\sim$ 27.4 mag. A visual comparison to Figure \[fig:f2\] reveals that the drizzle-based CMD appears better populated (i.e., more complete), which is consistent with having performed PSF photometry on a higher signal-to-noise image.
We select stars in the shaded region (identical to that of Figure \[fig:f2\] in the main text), construct a luminosity function, apply the GLOESS smoothing, and, lastly, apply the \[-1,0,1\] Sobel filter. The edge detection response is shown in red in Figure \[fig:drizzle\]c and the maximal response is at F814W $=27.39\pm0.03$ mag. This derived TRGB magnitude is statistically consistent with the value from the individual frame photometry, F814W = [27.371]{}  $\pm$ [0.03]{} mag (Table \[tab\_distance\]). From the comparisons given in the panels of Figure \[fig:drizzle\], we conclude that the two reduction procedures are statistically identical in both their output photometry and in their TRGB measurements. Thus, the choice to use individual frame photometry for the project, motivated by the need for a homogeneous image processing strategy for both nearby and distant [SNe Ia]{}hosts, is consistent with the body of work derived from drizzled photometry [e.g., @jan17b]. ![image](f7ab.eps){width="40.00000%"} ![image](f7c.eps){width="40.00000%"} ![[NGC1365]{}color-magnitude diagrams using the rectified TRGB method in the (a) $T_{F814W,F606W}$ and (b) $QT_{F814W,F606W}$ systems described in the text. The edge detection response from the Sobel filter, $[-1, 0, +1]$, applied to the GLOESS smoothed LF is shown by red solid lines in each panel. []{data-label="fig:fb1"}](f8.eps){width="70.00000%"} Rectified TRGB Magnitudes {#app:retified} ------------------------- A great benefit of the TRGB as a distance indicator is that the metallicity sensitivity of the absolute magnitude is projected into the color of the star. Furthermore, for metal-poor stars that populate the ‘blue’ edge of the TRGB, $(V-I)_0\lesssim2$, the $I$ magnitude of the TRGB is relatively insensitive to metallicity [i.e., is flat with color; see @lee93; @riz07; @mad09; @jan17a among others].
Thus, with only an optical color-cut (as is done in Figure \[fig:f2\]), the $I$, and by proxy the F814W, TRGB requires no correction for metallicity to convert to an absolute magnitude system. The color dependence of the $I$ TRGB, however, is not everywhere negligible; in particular, for the color range $(V-I)_0 \gtrsim 2.0$ the $I$ magnitude becomes noticeably fainter. @mad09 presented an empirical technique to rectify or transform the TRGB magnitudes for metal-rich sources to the metal-poor (flat) portion of the TRGB. The general form of a transformation into the $T_{\lambda_{1},\lambda_{2}}$ (or TRGB) magnitude system is defined as $$T_{\lambda_{1},\lambda_{2}}=m_{\lambda_{1}} - \beta_{\lambda_{1},\lambda_{2}} [(m_{\lambda_{1}}-m_{\lambda_{2}})-\gamma_{\lambda_{1},\lambda_{2}}]$$ where $T_{\lambda_{1},\lambda_{2}}$ is the initial magnitude ($m_{\lambda_{1}}$) corrected for the slope of the TRGB ($\beta_{\lambda_{1},\lambda_{2}}$) to a fiducial color $\gamma_{\lambda_{1},\lambda_{2}}$. In the standard Johnson-Cousins system used in @mad09, the slope is $\beta_{I,V} = 0.2$ and the fiducial color is $\gamma_{I,V} = 1.5$, producing $T_{I,V}$ magnitudes from $I$ photometry. The parameter values were determined from a linear approximation to the TRGB predicted by theoretical models described by @bel01 [@bel04]. A standard edge-detection algorithm can be applied to the rectified CMD and the distance modulus is computed as $(m-M)_0=T-M_{TRGB}$, where $M_{TRGB}$ is defined at the fiducial color, $\gamma_{\lambda_{1},\lambda_{2}}$. For use in this work, we convert the @mad09 $T$ magnitude system into the ACS/WFC system for $\lambda_{1}$=F814W with $\lambda_{2}$=F606W and $\lambda_{2}$=F555W utilizing the photometric transformations from the flight magnitude system to the Johnson-Cousins system given in [@2005PASP..117.1049S].
We find $\beta_{F814W,F606W} = 0.27$ mag color$^{-1}$ with fiducial color $\gamma_{F814W,F606W} = 1.18$ mag, and $\beta_{F814W,F555W} = 0.19$ mag color$^{-1}$ with fiducial color $\gamma_{F814W,F555W} = 1.59$ mag. These conversions are approximate only and should be measured directly from color-magnitude diagrams in these filters. @jan17a investigated the color dependence of the TRGB from the HST/ACS photometry of eight nearby galaxies and found that the run of the $I$ TRGB with the $V-I$ color can be described with two components: a flat one for the blue color range ($V-I\lesssim1.9$) and a steep component for the red color range ($V-I\gtrsim1.9$). From this, they introduced the $QT$ magnitude, a quadratic form of the TRGB magnitude corrected for the color dependence of the TRGB. $QT_{\lambda_{1},\lambda_{2}}$ is given by $$QT_{\lambda_{1},\lambda_{2}}=m_{\lambda_{1}} - \beta_{\lambda_{1},\lambda_{2}} [(m_{\lambda_{1}}-m_{\lambda_{2}})-\gamma_{\lambda_{1},\lambda_{2}}] - \alpha_{\lambda_{1},\lambda_{2}} [(m_{\lambda_{1}}-m_{\lambda_{2}})-\gamma_{\lambda_{1},\lambda_{2}}]^2$$ where $\alpha_{F814W,F606W}=0.159\pm0.010$ mag color$^{-2}$, $\beta_{F814W,F606W}=-0.047\pm0.020$ mag color$^{-1}$, and $\gamma_{F814W,F606W}=1.1$ mag. We applied the $T_{F814W,F606W}$ and $QT_{F814W,F606W}$ magnitude transformations to the [NGC1365]{} [flc]{} photometry and the resulting color-magnitude diagrams are shown in Figures \[fig:fb1\]a and \[fig:fb1\]b, respectively. To construct a LF, we apply the color-magnitude restriction indicated by the blue shading in Figures \[fig:fb1\]a and \[fig:fb1\]b, which is identical to that applied in Figures \[fig:f2\] and \[fig:n1365trgbmeas\]a in the main text. We use GLOESS smoothing with our idealized [$\sigma_s$]{}and apply the Sobel filter, $[-1, 0, +1]$. The edge-detection response function is shown in Figures \[fig:fb1\]a and \[fig:fb1\]b and has strong peaks at $T\simeq QT\simeq 27.4$ mag.
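As an illustration, the $T$ and $QT$ transformations can be written as two short functions. This is a sketch only: the color convention follows the equations exactly as printed above, and the coefficient values in the usage note are those quoted in the text.

```python
def rectified_t(m1, m2, beta, gamma):
    """Linear rectified TRGB magnitude: T = m1 - beta*[(m1 - m2) - gamma]."""
    return m1 - beta * ((m1 - m2) - gamma)

def rectified_qt(m1, m2, alpha, beta, gamma):
    """Quadratic (QT) form: adds an alpha*[(m1 - m2) - gamma]**2 term."""
    c = (m1 - m2) - gamma
    return m1 - beta * c - alpha * c**2
```

With the quoted F814W/F606W coefficients ($\beta=0.27$, $\gamma=1.18$ for $T$; $\alpha=0.159$, $\beta=-0.047$, $\gamma=1.1$ for $QT$), a star at the fiducial color is left unchanged, $T = QT = m_{F814W}$, while stars off the fiducial color are shifted toward the flat portion of the TRGB.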
The TRGB magnitude and uncertainties are derived following the procedure outlined in @jan17b, which uses the results of bootstrap re-sampling to define the true TRGB tip and its uncertainty. We obtain TRGB magnitudes of $T_{F814W,F606W} = 27.32\pm0.03$ mag and $QT_{F814W,F606W} = 27.34\pm0.03$ mag, which agree within their mutual uncertainties. Comparing the $T_{F814W,F606W}$ and $QT_{F814W,F606W}$ results to that from the main text, F814W = [27.371]{}  $\pm$ [0.03]{} mag (Table \[tab\_distance\]), we find agreement within the quoted uncertainties. We note that the absolute magnitude of the TRGB is shifted systematically fainter at the $\sim$0.01 mag level for the $T_{F814W,F606W}$ and $QT_{F814W,F606W}$ systems, which also brings the measurements into better agreement [we refer the reader to @jan17a for details]. As was mentioned in the previous section, the median magnitude uncertainty is 0.068 mag for F814W and 0.13 mag for F606W at the TRGB, which means that the $T_{F814W,F606W}$ and $QT_{F814W,F606W}$ magnitudes themselves carry significantly larger uncertainties and will scatter (preferentially brighter, given the form of the transformation). This is visually apparent by comparing Figure \[fig:f2\] to Figures \[fig:fb1\]a and \[fig:fb1\]b; in particular, the visible density of stars near the tip does not appear significantly improved by moving into the $T_{F814W,F606W}$ and $QT_{F814W,F606W}$ systems. We further note that our color-magnitude restrictions largely avoid the regions of F606W-F814W color where the $T_{F814W,F606W}$ and $QT_{F814W,F606W}$ magnitudes are expected to provide the most benefit by bringing the fainter TRGB sources to the same magnitude as the bluer TRGB. Thus, we conclude that our results from the raw F814W magnitudes are fully consistent with those determined with the rectified $T_{F814W,F606W}$ and $QT_{F814W,F606W}$ systems.
![image](f9.eps){width="75.00000%"} Comparison of Edge Detectors {#app:edge} ---------------------------- The detection of the apparent magnitude of the TRGB is one of the most critical steps in the TRGB distance estimation. Broadly, two independent approaches have been developed for identifying the TRGB: 1. Direct edge detection algorithms as in @lee93 [@mad95; @sak96; @men02; @mag08; @mad09] that typically make use of a form of the Sobel filter, an approximation to the first derivative of a discrete function. These can take on discrete ($N$) and continuous ($\phi$) forms based on the smoothing that is applied to the LF before application of the edge detection algorithm. 2. Template fitting as in @cio00 [@men02; @fra03; @mcc04; @mou05; @mak06; @con11]. Based on a review of the literature, we have selected seven forms of TRGB edge detection in addition to that adopted by the . These are similar to those applied in Appendix B of . We have applied the eight Sobel filters to the luminosity functions of the selected stars in NGC 1365 and plot them in Figure \[fig:fc1\]. We used a 0.05 mag bin to construct the LF for those edge-detectors that operate directly on the histogram (e.g., Figures \[fig:fc1\]a, \[fig:fc1\]b, \[fig:fc1\]e, \[fig:fc1\]f, and \[fig:fc1\]g). In the case of the continuous forms of Sobel filters, we used a bin width of 0.001 mag for deriving the Gaussian smoothed luminosity functions (e.g., Figures \[fig:fc1\]c and \[fig:fc1\]d). Figure \[fig:fc1\]h is a re-visualization of the algorithm applied in the main text. The magnitude of the TRGB is determined by choosing the maximum edge-detection response in Figures \[fig:fc1\]c, \[fig:fc1\]d, and \[fig:fc1\]h, and via the @jan17b bootstrap resampling method in Figures \[fig:fc1\]a, \[fig:fc1\]b, \[fig:fc1\]e, \[fig:fc1\]f, and \[fig:fc1\]g. Qualitatively, all eight edge-detection responses in the panels of Figure \[fig:fc1\] have peaks at F814W$\sim27.4$ mag.
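For concreteness, a minimal version of approach (1) — a discrete $[-1, 0, +1]$ Sobel filter applied to a binned luminosity function, with the tip taken at the bin of maximal response — might look as follows. This is an illustrative sketch with hypothetical inputs, not the pipeline used in the main text.

```python
import numpy as np

def sobel_tip(mags, bright=26.5, faint=28.5, bin_width=0.05):
    """Locate the TRGB as the maximal [-1, 0, +1] Sobel response of a binned LF."""
    nbins = int(round((faint - bright) / bin_width))
    edges = bright + bin_width * np.arange(nbins + 1)
    lf, _ = np.histogram(mags, bins=edges)
    # Discrete first derivative: response_i = N_{i+1} - N_{i-1}; the LF count
    # jumps upward just faintward of the tip, so the response peaks there.
    response = np.zeros(len(lf))
    response[1:-1] = lf[2:] - lf[:-2]
    centers = edges[:-1] + bin_width / 2
    return float(centers[np.argmax(response)])
```

Note that the recovered tip is quantized at the bin width, which is the origin of the roughly half-bin ambiguity discussed in the text.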
The results of the quantitative tip detection in the panels of Figure \[fig:fc1\] are as follows: in Figure \[fig:fc1\]a the TRGB = 27.36 mag from the @lee93 algorithm, in Figure \[fig:fc1\]b the TRGB = 27.38 mag from the @mad95 algorithm, in Figure \[fig:fc1\]c the TRGB = 27.49 mag from the @sak96 algorithm, in Figure \[fig:fc1\]d the TRGB = 27.41 mag from the @men02 algorithm, in Figure \[fig:fc1\]e the TRGB = 27.42 mag from the @mag08 algorithm, in Figure \[fig:fc1\]f the TRGB = 27.43 mag from the @mad09 algorithm, in Figure \[fig:fc1\]g the TRGB = 27.42 mag from the @jan17b algorithm, and in Figure \[fig:fc1\]h the TRGB = 27.37 mag from the algorithm adopted in the main text. In each panel of Figure \[fig:fc1\], the TRGB is indicated by a vertical dashed line. While there is qualitative agreement, the techniques produce results that vary over a range of 0.13 mag, which is four times larger than the quoted uncertainty on our measurement of 0.03 mag. In , a thorough discussion of the advantages and disadvantages of the wide range of edge detectors was presented, and for brevity we will only discuss the implications that can be interpreted from the panels of Figure \[fig:fc1\]. First, the use of large bins is problematic since not only the size but also the starting point of the bins has an effect on the quantitative Sobel response. Thus, in our discretely binned LFs, an additional random and systematic uncertainty of $\sim$0.03 mag (50% of a bin) should be added to the algorithmic measurement uncertainty (e.g., the $\sim$0.03 mag uncertainty derived from bootstrap resampling). These two components arise due to the inability to distinguish the location of the peak within the set binning strategy and this must be applied to the results in Figures \[fig:fc1\]a, \[fig:fc1\]b, \[fig:fc1\]e, \[fig:fc1\]f, and \[fig:fc1\]g.
Allowing for these additional uncertainties, all of these values are consistent with our measurement (Figure \[fig:fc1\]h). Second, many of the edge-detection algorithms employ smoothing directly within the algorithm itself. If applied to a ‘raw’ LF, this is not problematic, but many of the algorithms are applied to LFs that have already been smoothed. This is clearly evident in Figure \[fig:fc1\]c, which has not only a heavily smoothed LF, but also a heavily smoothed algorithm. This ‘double smoothing’ results in the most deviant of the TRGB values (27.49 mag) and the response of the edge-detection is not a peak, but a plateau that reduces the precision of the tool. The bias in the @sak96 algorithm is evident from comparing Figures \[fig:fc1\]c and \[fig:fc1\]d, which, while having nearly identical LFs, have very different edge responses. Lastly, there are algorithms that attempt to model the uncertainties in the data (both magnitude uncertainties and completeness), but these rely critically on the ability to assess these values well for a dataset. After being modeled, these uncertainties are folded into the detection algorithm itself, instead of being applied as modifications to the LF directly. The difficulty with this approach is that it is not fully reproducible by an independent team. As has been shown in previous sections of this Appendix, there are quantitative differences at the 0.04 mag level between photometry derived from the same underlying images due to subtle choices in the data processing. We have demonstrated that our algorithms for the LF and for the edge-detection are robust to these differences, but algorithmic approaches that fold one’s own photometric characterizations directly into the detection would not be reproducible by an independent process.
This is particularly concerning for the template-fitting strategies, which rely on (i) the input idealized model of the LF being well matched to the actual intrinsic luminosity function for the field of interest and (ii) a good characterization of the completeness in both bands of the photometry (not just the band used for the LF). In conclusion, we see quantitative differences between our adopted strategy for smoothing the LF and for applying an edge-detection algorithm and the alternatives in the literature (Figure \[fig:fc1\]). As we have shown, these differences can be understood within the true uncertainties of the various techniques. As discussed in depth in , our LF binning, edge-detection algorithm, and our modeling of the uncertainties are explicitly designed to be reproducible by others and to take into account the full-scale photometric uncertainties. [^1]: Based in part on observations made with the NASA/ESA *Hubble Space Telescope*, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program \#13691. [^2]: Data are available at: <https://sne.space/sne/SN2012fr/> [^3]: <http://www.stsci.edu/hst/acs/analysis/PAMS> [^4]: <https://acszero-points.stsci.edu/> [^5]: We refer the reader to that work for the full description of their anchoring process and tests thereof. [^6]: Using a small number of sources limits the ability of the [DAOPHOT]{}-based PSF model to properly account for the PSF variation across the frame due to residual distortion or true variation. Moreover, the PSF is more susceptible to non-stellar contaminants and other non-ideal features in the profile.
--- abstract: 'We discuss three related models of scale-free networks with the same degree distribution but different correlation properties. Starting from the Barabasi-Albert construction based on growth and preferential attachment, we discuss two other networks emerging when randomizing it with respect to links or nodes. We point out that the Barabasi-Albert model displays dissortative behavior with respect to the nodes’ degrees, while the node-randomized network shows assortative mixing. These kinds of correlations are visualized by discussing the shell structure of the networks around an arbitrary node. In spite of different correlation behavior, all three constructions exhibit similar percolation properties.' address: - '$^1$Institut für Physik, Humboldt Universität zu Berlin, Invalidenstraße 110, D-10115 Berlin, Germany' - '$^2$Theoretische Polymerphysik, Universität Freiburg, Hermann Herder Str. 3, D-79104 Freiburg, Germany' author: - 'R. Xulvi-Brunet$^1$, W. Pietsch$^1$, and I.M. Sokolov$^{1,2}$' title: 'Correlations in Scale-Free Networks: Tomography and Percolation' --- Introduction {#introduction .unnumbered} ============ Scale-free networks, i.e. networks with power-law degree distributions, have recently been widely studied (see Refs. [@baretalb; @dorogov] for a review). Such degree distributions have been found in many different contexts, for example in several technological webs like the Internet [@int; @past], the WWW [@www2; @WWW], or electrical power grids [@Wa], in natural networks like the network of chemical reactions in the living cell [@Oltvai; @Fell; @Mason], and also in social networks, like the network of human sexual contacts [@sex], the science [@New1; @New2] and movie actor [@Am; @Alb] collaboration networks, or the network of phone calls [@phonecall].
The topology of networks is essential for the spread of information or infections, as well as for the robustness of networks against intentional attack or random breakdown of elements. Recent studies have focused on a more detailed topological characterization of networks, in particular the degree correlations among nodes [@past; @Newm; @Berg; @Egui; @Bo; @mas; @Vaz; @Goh; @Bog; @NE; @Serr]. For instance, many technological and biological networks show that nodes with high degree connect preferentially to nodes with low degree [@past; @mas], a property referred to as disassortative mixing. On the other hand, social networks show assortative mixing [@Newm; @NE], i.e. highly connected nodes are preferentially connected to nodes with high degree. In this paper we shall study some aspects of this topology, specifically the importance of the degree correlations, in three related models of scale-free networks and concentrate on two important characteristics: the tomography of the shell structure around an arbitrary node, and percolation. The Models {#the-models .unnumbered} ========== Our starting model is that of Barabasi and Albert (BA) [@BA-model], based on a growth algorithm with preferential attachment. Starting from an arbitrary set of initial nodes, at each time step a new node is added to the network. This node brings with it $m$ proper links which are connected to $m$ nodes already present. The latter are chosen according to the preferential attachment prescription: the probability that a new link connects to a certain node is proportional to the degree (number of links) of that node. The resulting degree distribution of such networks tends to [@Redner; @degdis; @Kra]: $$P(k)=\frac{2m(m+1)}{k(k+1)(k+2)} \sim k^{-3}. \label{degdis}$$ Krapivsky and Redner [@Kra] have shown that in the BA-construction correlations develop spontaneously between the degrees of connected nodes. To assess the role of such correlations we shall randomize the BA-network.
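The growth rule just described can be sketched as follows (an illustrative implementation, assuming a complete graph on $m+1$ nodes as the initial set):

```python
import random

def barabasi_albert(n, m=2, seed=0):
    """Grow a BA network of n nodes; each new node brings m proper links
    attached preferentially (probability proportional to degree)."""
    random.seed(seed)
    edges = set()
    for i in range(m + 1):          # seed: complete graph on m+1 nodes
        for j in range(i):
            edges.add((j, i))
    # 'stubs' lists each node once per incident link, so a uniform draw
    # from it realizes the preferential-attachment rule
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(stubs))
        for t in targets:
            edges.add((t, new))
            stubs += [t, new]
    return edges
```

For $m=2$ the degree distribution of the resulting network approaches Eq.(\[degdis\]) for large $n$.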
Recently Maslov and Sneppen [@mas] have suggested an algorithm randomizing a given network while keeping the degree distribution constant. According to this algorithm, at each step two links of the network are chosen at random. Then, one end of each link is selected randomly and the attaching nodes are interchanged. However, in case one or both of these new links already exist in the network, this step is discarded and a new pair of edges is selected. This restriction prevents the appearance of multiple edges connecting the same pair of nodes. A repeated application of the rewiring step leads to a randomized version of the original network. We shall refer to this model as the link-randomized (LR) model. The LR model can be compared with another model which is widely studied in the context of scale-free networks, namely the configuration model introduced by Bender and Canfield [@cand; @mollreed]. It starts with a given number $N$ of nodes, assigning to each node a number $k_i$ of “edge stubs” equal to its desired connectivity. The stubs of different nodes are then connected randomly to each other; two connected stubs form a link. One of the limitations of this “stub reconnection” algorithm is that for a broad distribution of connectivities, which is usually the case in complex networks, the algorithm generates multiple edges joining the same pair of hub nodes and loops connecting a node to itself. However, the configuration model and the LR model become equivalent as $ N\rightarrow \infty $. One can also consider a node-randomized (NR) counterpart of the LR randomization procedure. The only difference to the link-randomized algorithm is that instead of choosing randomly two links we choose randomly two nodes in the network. Then the procedure is the same as in the LR model. As we proceed to show, the three models have different properties with respect to the correlations between the degrees of connected nodes.
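The rewiring step, with the rejection of self-loops and multiple edges described above, can be sketched as follows (an illustrative version for an undirected edge list):

```python
import random

def link_randomize(edges, n_swaps, seed=0):
    """Maslov-Sneppen rewiring: degree-preserving randomization of a network.

    'edges' is a list of (u, v) tuples; swaps that would create a self-loop
    or a duplicate edge are discarded, as described in the text."""
    random.seed(seed)
    edges = [tuple(sorted(e)) for e in edges]
    present = set(edges)
    done = 0
    while done < n_swaps:
        i, j = random.randrange(len(edges)), random.randrange(len(edges))
        if i == j:
            continue
        (a, b), (c, d) = edges[i], edges[j]
        # Randomly choose which ends of the two links to interchange
        if random.random() < 0.5:
            e1, e2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        else:
            e1, e2 = tuple(sorted((a, c))), tuple(sorted((b, d)))
        if e1[0] == e1[1] or e2[0] == e2[1]:   # would create a self-loop
            continue
        if e1 in present or e2 in present:     # would create a multiple edge
            continue
        present -= {edges[i], edges[j]}
        present |= {e1, e2}
        edges[i], edges[j] = e1, e2
        done += 1
    return edges
```

Each accepted swap preserves the degree of all four nodes involved, so the degree distribution is conserved exactly.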
While the LR (configuration) model is random, the genuine BA prescription leads to a network which is dissortative with respect to the degrees of connected nodes, and the NR model leads to an assortative network. This fact leads to considerable differences in the shell structure of the networks and also to some (not extremely large) differences in their percolation characteristics. We hasten to note that our simple models neglect many important aspects of real networks like geography [@Soki; @geog], but stress the importance of considering the higher correlations in the degrees of connected nodes. Tomography of the Networks {#shell structure .unnumbered} ========================== Referring to the spreading of computer viruses or human diseases, it is necessary to know how many sites get infected at each step of the infection propagation. Thus, we examine the local structure in the network. Cohen et al. [@tomography] examined the shells around the node with the highest degree in the network. In our study we start from a node chosen at random. This initial node (the root) is assigned to shell number 0. Then all links starting at this node are followed. All nodes reached are assigned to shell number 1. Then all links leaving a node in shell 1 are followed and all nodes reached that do not belong to previous shells are labelled as nodes of shell 2. The same is carried out for shell 2 etc., until the whole network is exhausted. We then get $N_{l,r}$, the number of nodes in shell $l$ for root $r$. The whole procedure is repeated starting at all $N$ nodes in the network, giving $P_{l}(k)$, the degree distribution in shell $l$. We define $P_{l}(k)$ as: $$P_{l}(k)=\frac{\sum_{r} N_{l,r}(k)}{\sum_{k,r} N_{l,r}(k)}. \label{aa}$$ We are most interested in the average degree $\langle k\rangle _{l}=\sum_{k}kP_{l}(k)$ of nodes of the shell $l$. In the epidemiological context, this quantity can be interpreted as a disease multiplication factor after $l$ steps of propagation.
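The shell decomposition described above is a breadth-first search from the root; a minimal sketch returning the mean degree $\langle k\rangle_{l}$ per shell is given below (averaging the shell populations over all roots $r$ then yields $P_{l}(k)$ as in Eq.(\[aa\])).

```python
from collections import defaultdict, deque

def shell_mean_degrees(adj, root):
    """Return {shell l: mean degree <k>_l} around 'root' via breadth-first search.

    'adj' maps each node to the list of its neighbours."""
    shell = {root: 0}
    queue = deque([root])
    per_shell = defaultdict(list)
    per_shell[0].append(len(adj[root]))
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in shell:               # node not seen in a previous shell
                shell[v] = shell[u] + 1
                per_shell[shell[v]].append(len(adj[v]))
                queue.append(v)
    return {l: sum(ks) / len(ks) for l, ks in per_shell.items()}
```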
It describes how many neighbors a node can infect on average. Note that this definition of $P_{l}(k)$ gives us for the degree distribution in the first shell: $$P_{1}(k)=\frac{\sum_{r} N_{1,r}(k)}{\sum_{k,r} N_{1,r}(k)}= \frac{kN_k}{\sum_k kN_k}=\frac{kP(k)}{\langle k \rangle}, \label{bb}$$ where $P(k)$ and $N_k$ are the degree distribution and the number of nodes with degree $k$ in the network, respectively. We bear in mind that every link in the network is followed exactly once in each direction. Hence, we find that every node with degree $k$ is counted exactly $k$ times. From Eq.($\ref{bb}$) it follows that $\langle k\rangle _{1}=\langle k^{2}\rangle / \langle k\rangle$. This quantity, which plays a very important role in the percolation theory of networks [@cohetal], depends only on the first and second moments of the degree distribution, but not on the correlations. Of course $P_0(k)=P(k)$. Note that as $N\rightarrow \infty $ we have $\langle k^{2}\rangle \rightarrow \infty $: for our scale-free constructions the mean degree in shell 1 depends significantly on the network size determining the cutoff in the degree distribution. However, the values of $\langle k\rangle _{1}$ are the same for all three models: the first two shells are determined only by the degree distributions. In all other shells the three models differ. For the LR (configuration) model one finds for all shells in the thermodynamic limit $P_{l}(k)=P_{1}(k)$. However, since these distributions do not possess finite means, the values of $\langle k\rangle _{l}$ are governed by the finite-size cutoff, which is different in different shells, since the network is practically exhausted within the first few steps, see Fig.1. In what follows we compare the shell structure of the BA, the LR, and the NR models. We discuss in detail the networks based on the BA-construction with $m=2$. For larger $m$ the same qualitative results were observed.
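The relation $\langle k\rangle_{1}=\langle k^{2}\rangle / \langle k\rangle$ derived above can be evaluated from a degree sequence alone; for illustration:

```python
def mean_shell1_degree(degrees):
    """<k>_1 = <k^2>/<k>: the mean degree reached by following a random link."""
    n = len(degrees)
    k1 = sum(degrees) / n                    # <k>
    k2 = sum(k * k for k in degrees) / n     # <k^2>
    return k2 / k1
```

For a star graph with degree sequence $[1,1,1,3]$ this gives $3/1.5 = 2$, matching a direct count: of the six link-ends, three land on the hub of degree 3 and three on leaves of degree 1.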
In the present work we refrain from discussing the peculiar case $m=1$. For $m=1$ the topology of the BA-model is distinct from that for $m\geq 2$, since in this case the network is a tree. This connected tree is destroyed by the randomization procedure and is transformed into a set of disconnected clusters. On the other hand, for $m\geq 2$ the creation of large separate clusters under randomization is rather improbable, so that most of the nodes stay connected. Fig. \[fig1\] shows $\langle k\rangle $ as a function of the shell number $l$. Panel (a) corresponds to the BA model, panel (b) to the LR model, and panel (c) to the NR model. The different curves show simulations for different network sizes: $N=3,000$; $N=10,000$; $N=30,000$; and $N=100,000$. All points are averaged over ten different realizations, except for those for networks of 100,000 nodes, for which only one simulation was performed. In panel (d) we compare the shell structure for all three models at $N=30,000$. The most significant feature of the graphs is the difference in $\langle k \rangle _{2}$. In the BA and LR models the maximum is reached in the first shell, while for the NR model the maximum is reached only in the second shell: $\langle k\rangle_{2,BA}<\langle k\rangle _{2,LR}<\langle k\rangle _{2,NR}$. This effect becomes more pronounced with increasing network size. In shells with large $l$, for all networks, mostly nodes with the lowest degree $2$ are found. The inset in graph (a) of Fig. \[fig1\] shows the average age $\eta$ of nodes in the network as a function of their degree $k$ for the BA model. The age of a node $n$ and of any of its proper links is defined as $\eta (n)=(N-t_{n})/N$, where $t_{n}$ denotes the time of birth of the node. For the randomized LR and NR models age has no meaning. The figure shows a strong correlation between the age and the degree of a node.
The reasons for these strong correlations are as follows: First, older nodes have experienced more time steps than younger ones and thus have a larger probability to acquire non-proper bonds. Second, at earlier times there are fewer nodes in the network, so that the probability of acquiring a new link per time step for an individual node is even higher. Third, at later time steps older nodes already tend to have higher degrees than younger ones, so the probability for them to acquire new links is considerably larger due to preferential attachment. The correlations between the age and the degree bring some nontrivial aspects into the BA model based on growth, which are erased when randomizing the network. Let us discuss the degree distribution in the second shell. In this case we find that every link leaving a node of degree $k$ is counted $k-1$ times. Let $P(l|k)$ be the probability that a link leaving a node of degree $k$ enters a node with degree $l$. Neglecting the possibility of short loops (which is always appropriate in the thermodynamic limit $N \rightarrow \infty$) and the inherent direction of links (which may not be entirely appropriate for the BA model) we have: $$P_{2}(l)=\frac{\sum_k kP(k)(k-1)P(l|k)}{\sum_{k}kP(k)(k-1)}. \label{P2}$$ The value of $\langle k\rangle _{2}$ gives important information about the type of mixing in the network. To study mixing in networks one needs to divide the nodes into groups with identical properties. The only relevant characteristic of the nodes that is present in all three models is their degree. Thus, we can examine the degree correlations between neighboring nodes, which we compare with the uncorrelated LR model, where the probability that a link connects to a node with a certain degree is independent of whatever is attached to the other end of the link: $P(k | l)=kP(k)/\langle k\rangle =kP(k)/2m$.
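Deviations from this uncorrelated baseline can be quantified directly from an edge list; a minimal sketch (our illustration, a simplified version of the standard degree-correlation coefficient, using full rather than excess degrees):

```python
import numpy as np

def degree_assortativity(edges, degree):
    """Pearson correlation between the degrees at the two ends of a link
    (each edge counted in both directions).  r > 0 indicates assortative
    mixing, r < 0 dissortative, r = 0 uncorrelated (as in the LR model)."""
    x, y = [], []
    for u, v in edges:
        x += [degree[u], degree[v]]
        y += [degree[v], degree[u]]
    return np.corrcoef(x, y)[0, 1]

# A star is maximally dissortative: the hub connects only to leaves.
star_edges = [(0, i) for i in range(1, 5)]
star_degree = {0: 4, 1: 1, 2: 1, 3: 1, 4: 1}
print(degree_assortativity(star_edges, star_degree))  # -> -1.0
```

Note that the mean degree at a random link end always equals $\langle k^2\rangle/\langle k\rangle$ regardless of correlations; only the joint distribution of the two end degrees distinguishes the mixing types.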
All other relations would correspond to assortative or dissortative mixing. Qualitatively, assortativity means that nodes attach to nodes with similar degree more likely than in the LR model: $P(k | l)>P(k | l)_{LR}=kP(k) / \langle k\rangle$ for $k\approx l$. Dissortativity means that nodes attach to nodes with very different degree more likely than in the LR model: $P(k | l)> kP(k) / \langle k\rangle $ for $k\gg l$ or $l\gg k$. Inserting this in Eq. (\[P2\]) and calculating the mean, one finds qualitatively that $\langle k \rangle _{1}=\langle k\rangle _{2,LR}<\langle k\rangle _{2}$ for assortativity, and $\langle k\rangle _{1}>\langle k\rangle _{2}$ for dissortativity. In the following we show where the correlations of the BA and NR models originate. A consequence of the BA algorithm is that there are two different types of ends for the links. Each node has exactly $m$ proper links attached to it at the moment of its birth and a certain number of links that are attached later. Since each node receives the same number of links at its birth, in the direction towards the proper node a link encounters a node of degree $k$ with probability $P(k)$. To compensate for this, in the other direction a node of degree $k$ is encountered with probability $\frac{(k-m)P(k)}{m} =2\frac{kP(k)}{\langle k\rangle }-P(k)$, so that both distributions together yield $kP(k)/\langle k\rangle $. On one end of the link nodes with small degree are predominant: $P(k)<kP(k)/\langle k\rangle $ for small $k$. On the other end nodes with high degree are predominant: $(k-m)P(k)/m>kP(k)/2m$ for large $k$. This corresponds to dissortativity. Actually the situation is somewhat more complex, since in the BA model these probability distributions also depend on the age of the link. Assortativity of the NR model is a result of the node-randomizing process.
Since nodes with smaller degree are predominant in the node population, the randomization preferentially picks links whose randomly chosen end node has a smaller degree ($P(k)>kP(k)/\langle k\rangle $ for small $k$). The randomization algorithm then exchanges the links and connects those nodes to each other. This leads to assortativity for nodes with small degree, which is compensated by assortativity for nodes with high degree.

Percolation {#percolation .unnumbered}
===========

Percolation properties of networks are relevant when discussing their vulnerability to attacks or immunization strategies which remove nodes or links from the network. For scale-free networks random percolation as well as vulnerability to a deliberate attack have been studied by several groups [@cohetal; @ben; @je; @cohetal2; @callnew]. One considers the removal of a certain fraction of the edges or nodes of a network. Our simulations correspond to the node removal model; $q$ is the fraction of removed nodes. Below the percolation threshold, $q<q_{c}$, a giant component (infinite cluster) exists; it ceases to exist above the threshold. A giant component, and consequently $q_{c}$, is exactly defined only in the thermodynamic limit $N\rightarrow \infty $: it is a cluster to which a nonzero fraction of all nodes belongs. In [@mollreed] and [@cohetal] a condition for the percolation transition in random networks has been discussed: every node already connected to the spanning cluster is connected to at least one new node. Ref. [@cohetal] gives the following percolation criterion for the configuration model: $$1-q_{c}=\frac{\langle k\rangle }{\langle k^{2}\rangle -\langle k\rangle }, \label{condperc}$$ where the means correspond to the unperturbed network ($q=0$). For networks with the degree distribution Eq. (\ref{degdis}), $\langle k^{2}\rangle $ diverges as $N \rightarrow \infty$.
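Eq. (\ref{condperc}) turns a degree sequence into a percolation threshold in one line; a minimal sketch (our illustration, checked here against the classical Erdős–Rényi result $q_c=1-1/\langle k\rangle$):

```python
import numpy as np

def percolation_threshold(degrees):
    """Critical removed-node fraction q_c from the criterion
    1 - q_c = <k> / (<k^2> - <k>), Eq. (condperc)."""
    k = np.asarray(degrees, dtype=float)
    return 1.0 - k.mean() / ((k**2).mean() - k.mean())

# Poisson degrees with mean 4 (Erdos-Renyi-like graph):
# <k^2> = <k>^2 + <k> = 20, hence q_c = 1 - 4/16 = 0.75.
rng = np.random.default_rng(0)
deg = rng.poisson(4, 1_000_000)
print(percolation_threshold(deg))  # close to 0.75
```

For a scale-free degree sequence with exponent $\leq 3$ the sample $\langle k^2\rangle$ grows with the cutoff, so the estimated $q_c$ drifts towards 1 as $N$ increases, in line with the discussion below.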
This yields for random networks with such a degree distribution a percolation threshold $q_{c}=1$ in the thermodynamic limit, independent of the minimal degree $m$; in epidemiological terms this corresponds to the absence of herd immunity in such systems. Crucial for this threshold is the power-law tail of the degree distribution with an exponent $\leq 3$. Moreover, Ref. [@ben] shows that the critical exponent $\beta $ governing the fraction of nodes $M_{\infty }$ in the giant component, $M_{\infty }\propto (q_{c}-q)^{\beta }$, diverges as the exponent of the degree distribution approaches $-3$. Therefore $M_{\infty }$ approaches zero with zero slope as $q\rightarrow 1$. In Fig. \[fig2\] we plot, for the three models discussed, $M_{\infty }$ as a function of $q$. The behavior of all three models for a network size of $300,000$ nodes is presented in panel (a). In the inset the size of the giant component is measured relative to the number of nodes remaining in the network, $(1-q)N$, and not to their initial number $N$. The other panels show the percolation behavior of each of the models at different network sizes: panel (b) corresponds to the BA model, (c) to the LR model, and (d) to the NR model. For the largest networks with $N=300,000$ nodes we calculated 5 realizations for each model; for those with $30,000$; $10,000$; and $3,000$ nodes averaging over 10 realizations was performed. For all three models the curves at different network sizes coincide within the error bars. This shows that even the smallest network is already close to the thermodynamic limit. R. Albert et al. found a similar behavior in a study of BA networks [@je]. They analyzed networks of sizes $N=1000, 5000$ and $20000$, concluding “that the overall clustering scenario and the value of the critical point is independent of the size of the system”.
In the simulations we find two regimes: for moderate $q$ the sizes of the giant components of the BA, LR, and NR models obey the inequalities $M_{\infty ,BA}>M_{\infty ,LR}>M_{\infty ,NR}$, while for $q$ close to unity the inequalities are reversed: $M_{\infty ,BA}<M_{\infty ,LR}<M_{\infty ,NR}$. However, in this regime the differences between $M_{\infty ,BA}$, $M_{\infty ,LR}$ and $M_{\infty ,NR}$ are subtle and hardly resolved on the scales of Fig. 2. We note that a similar situation was observed in Ref. [@Newm]; there, however, the size of the giant cluster was measured not as a function of $q$ but of a scaling parameter in the degree distribution. The observed effects can be explained by the correlations in the network. For $q=0$ one has $M_{\infty,BA}=M_{\infty ,LR}=M_{\infty ,NR}$. Now, the probability that single nodes lose their connection to the giant cluster depends only on the degree distribution, and not on the correlations. The difference in $M_{\infty}$ must therefore be explained by the break-off of clusters containing more than one node. The probability for such an event is smaller in the BA than in the LR model, since dissortativity implies that one finds fewer ’regions’ where only nodes with low degree are present. In the region of large $q$, however, the situation changes: since nodes with low degree act as ’bridges’ between the nodes with high degree, the connections between the high-degree nodes are weaker in the BA model than in the LR model. Thus, the probability that nodes with high degree break off is higher for the BA model than for the LR model: there is no robust core of high-degree nodes in the network [@Newm]. The correlation effects for the NR model, when compared with the LR model, are opposite to those for the BA model.
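The node-removal experiment itself is simple to reproduce: remove a random fraction $q$ of the nodes and measure the largest remaining connected component. A minimal sketch (our illustration, using union-find rather than the authors' code):

```python
import random

def giant_component_fraction(adj, q, seed=0):
    """Remove a random fraction q of the nodes of the graph `adj`
    (node -> neighbor list) and return the size of the largest remaining
    connected component relative to the original number of nodes N."""
    rng = random.Random(seed)
    nodes = list(adj)
    kept = set(rng.sample(nodes, int(round((1 - q) * len(nodes)))))
    parent = {u: u for u in kept}

    def find(u):  # union-find root with path compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    for u in kept:                      # union the surviving edges
        for v in adj[u]:
            if v in kept:
                parent[find(u)] = find(v)
    sizes = {}
    for u in kept:
        r = find(u)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0) / len(nodes)

# A ring of 100 nodes: with no removal the whole ring is one component.
ring = {i: [(i - 1) % 100, (i + 1) % 100] for i in range(100)}
print(giant_component_fraction(ring, q=0.0))  # -> 1.0
```

Averaging this quantity over realizations of the removal (and of the network) for a grid of $q$ values yields curves of the kind shown in Fig. 2.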
Conclusion {#conclusion .unnumbered}
==========

We consider three different models of scale-free networks: the genuine Barabási-Albert construction based on growth and preferential attachment, and the two networks emerging when randomizing it with respect to links or nodes. We point out that the BA model shows dissortative behavior with respect to the nodes’ degrees, while the node-randomized network shows assortative mixing. However, these strong differences in the shell structure lead only to moderate quantitative differences in the percolation behavior of the networks.

Acknowledgment {#acknowledgment .unnumbered}
==============

Partial financial support of the Fonds der Chemischen Industrie is gratefully acknowledged.

R. Albert and A.-L. Barabási, Rev. Mod. Phys. **74**, 47 (2002). S.N. Dorogovtsev and J.F.F. Mendes, Adv. Phys. **51**, 1079 (2002). M. Faloutsos, P. Faloutsos, and C. Faloutsos, Comput. Commun. Rev. **29**, 251 (1999). R. Pastor-Satorras, A. Vazquez, and A. Vespignani, Phys. Rev. Lett. **87**, 258701 (2001). R. Albert, H. Jeong, and A.-L. Barabási, Nature (London) **401**, 130 (1999). A. Broder, R. Kumar, F. Maghoul, P. Raghavan, S. Rajagopalan, R. Stata, A. Tomkins, and J. Wiener, Comput. Netw. **33**, 309 (2000). D. J. Watts, and S. H. Strogatz, Nature (London) **393**, 440 (1998). H. Jeong, B. Tombor, R. Albert, Z.N. Oltvai, and A.-L. Barabási, Nature [**407**]{}, 651 (2000). D.A. Fell, and A. Wagner, Nat. Biotechnol. [**18**]{}, 1121 (2000). H. Jeong, S.P. Mason, A.-L. Barabási, and Z.N. Oltvai, Nature [**411**]{}, 41 (2001). F. Liljeros, C. Edling, L.A.N. Amaral, H.E. Stanley, and Y. [Å]{}berg, Nature **411**, 907 (2001). M.E.J. Newman, Proc. Natl. Acad. Sci. [**98**]{}, 404 (2001). M.E.J. Newman, Phys. Rev. E [**64**]{}, 016131 (2001). A. L. N. Amaral, M. Barthélémy, and H. E. Stanley, Proc. Natl. Acad. Sci. [**97**]{}, 11149 (2000). R. Albert, and A.-L. Barabási, Phys. Rev. Lett. [**85**]{}, 5234 (2000). J. Abello, A. Buchsbaum, and J.
Westbrook, Lect. Notes Comput. Sci. **1461**, 332 (1998). M. E. J. Newman, Phys. Rev. Lett. [**89**]{}, 208701 (2002). J. Berg, and M. Lässig, Phys. Rev. Lett. [**89**]{}, 228701 (2002). V. M. Egu[í]{}luz, and K. Klemm, Phys. Rev. Lett. [**89**]{}, 108701 (2002). M. Boguñá, and R. Pastor-Satorras, Phys. Rev. E [**66**]{}, 047104 (2002). S. Maslov, and K. Sneppen, Science **296**, 910 (2002). A. Vázquez, and Y. Moreno, Phys. Rev. E [**67**]{}, 015101 (2003). K.-I. Goh, E. Oh, B. Kahng, and D. Kim, Phys. Rev. E [**67**]{}, 017101 (2003). M. Boguñá, R. Pastor-Satorras, and A. Vespignani, Phys. Rev. Lett. [**90**]{}, 028701 (2003). M. E. J. Newman, Phys. Rev. E [**67**]{}, 026126 (2003). M. A. Serrano, and M. Boguñá, e-print cond-mat/0301015. A.-L. Barabási, and R. Albert, Science **286**, 509 (1999). P. L. Krapivsky, S. Redner, and F. Leyvraz, Phys. Rev. Lett. [**85**]{}, 4629 (2000). S.N. Dorogovtsev, J.F.F. Mendes, and A.N. Samukhin, Phys. Rev. Lett. **85**, 4633 (2000). P. L. Krapivsky, and S. Redner, Phys. Rev. E [**63**]{}, 066123 (2001). E. A. Bender, and E. R. Canfield, J. Comb. Theory Ser. A [**24**]{}, 296 (1978). M. Molloy, and B. Reed, Random Struct. Algorithms **6**, 161 (1995). L.M. Sander, C.P. Warren, and I.M. Sokolov, Phys. Rev. E **66**, 056105 (2002). D. ben-Avraham, A.F. Rozenfeld, R. Cohen, and S. Havlin, e-print cond-mat/0301504. R. Cohen, D. Dolev, S. Havlin, T. Kalisky, O. Mokryn, and Y. Shavitt, Leibniz Center Technical Report 2002-49. R. Cohen, K. Erez, D. ben-Avraham, and S. Havlin, Phys. Rev. Lett. **85**, 4626 (2000). R. Cohen, D. ben-Avraham, and S. Havlin, Phys. Rev. E **66**, 036113 (2002). R. Albert, H. Jeong, and A.-L. Barabási, Nature [**406**]{}, 378 (2000). R. Cohen, K. Erez, D. ben-Avraham, and S. Havlin, Phys. Rev. Lett. **86**, 3682 (2001). D. Callaway, M. Newman, S. Strogatz, and D. Watts, Phys. Rev. Lett. **85**, 5468 (2000).
[**Eclipsing binaries in the open cluster NGC 2243 - II. Absolute properties of NV CMa [^1]**]{}\ $^3$Carnegie Institution of Washington, 813 Santa Barbara Street, Pasadena, CA 91101, USA\ e-mail: (ian@ociw.edu)\ [Stars: binaries: eclipsing, binaries – stars: individual: NV CMa – open clusters and associations: individual: NGC 2243]{} Introduction ============ The field of the intermediate-age open cluster NGC 2243 contains 5 known detached eclipsing binaries. Four of these are likely cluster members based on their photometric properties (Kaluzny et al. 2006; hereafter KKTS). The components of these systems should have the same age, metallicity and heliocentric distance and the determination of their absolute parameters can provide an interesting test of evolutionary models of low mass stars. Moreover, through the use of the surface brightness method, one may obtain a direct measure of the cluster distance. This paper is focused on the determination of the absolute properties of NV CMa, an eclipsing binary located in the central area of NGC 2243. In addition we use spectroscopic observations to check on the membership status of 4 other binaries in the cluster field. Spectroscopic Observations and Reductions ========================================= Spectroscopic observations of NGC 2243 stars were carried out with the MIKE echelle spectrograph (Bernstein et al. 2003) on the Magellan II (Clay) telescope of the Las Campanas Observatory. The data were collected during observing runs in 2004 October and 2005 September. For this analysis we use data obtained with the blue channel of MIKE covering the range from 400 to 500 nm with a resolving power of $\lambda / \Delta \lambda \approx 38,000$. All of the observations were obtained with a $0.7\times 5.0$ arcsec slit and with $2\times 2$ pixel binning. At 4380 Å the resolution was $\sim$2.7 pixels at a scale of 0.043 Å/pixel. The seeing ranged from 0.7 to 1.1 arcsec. 
The spectra were first processed using a pipeline developed by Dan Kelson following the formalism of Kelson (2003, 2006) and then analysed further using standard tasks in the IRAF/Echelle package[^2]. Each of the final individual spectra consisted of two 600 s exposures interlaced with an exposure of a thorium-argon lamp. We obtained 12 spectra of NV CMa. The average signal-to-noise ratios range from 25 at shorter wavelengths to 50 at longer wavelengths. In addition to the observations of NV CMa we also obtained single spectra of the eclipsing binaries V4=NS CMa, V5=NX CMa, V7 and V9 (names of variables as in KKTS), and of the red giants H3110 and H4115 (Hawarden 1975) located in the central part of the cluster field. Spectroscopic Orbit of NV CMa ----------------------------- Radial velocities of the components of NV CMa were measured by cross-correlation with the FXCOR task in IRAF, using observations of HD 33256 as a template. According to Nordström et al. (2004), HD 33256 has $V_{rad}=10.1\pm 0.2~km~s^{-1}$ and a projected rotational velocity $V \sin i=10~km~s^{-1}$. With $B-V=0.45$ and $[{\rm Fe/H}]=-0.5$, it provides a good match to the color index and metallicity of the binary. The template was observed with the same instrumental configuration as the variable. The correlation peaks were measured to have a FWHM of about $70~km~s^{-1}$. All spectra showed the peaks to be separated by more than 1.4 FWHM, and the peaks were measured simultaneously with the FXCOR package. The correlation was measured only from the metal lines, excluding the H$\beta$, H$\gamma$, and H$\delta$ hydrogen lines. Our velocity measurements for NV CMa are listed in Table 1 and shown in Fig. 1. A Keplerian orbit was fitted to the observations by fixing the period and epoch at the precise ephemeris established by KKTS: $$Min~I = {\rm HJD}~2448663.70748(5) + 1.18851590(2)\times E$$ We assumed a circular orbit based on the photometric data.
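For a circular orbit the two radial-velocity curves reduce to sinusoids, $RV_{1}=\gamma-K_{1}\sin 2\pi\phi$ and $RV_{2}=\gamma+K_{2}\sin 2\pi\phi$, so the fit is linear in $(\gamma, K_1, K_2)$. A minimal sketch (our illustration, not the GAUSSFIT code used in this work) which also reproduces the derived quantities of Table 2:

```python
import math
import numpy as np

G = 6.674e-20                              # km^3 kg^-1 s^-2
DAY, RSUN, MSUN = 86400.0, 6.955e5, 1.989e30

def fit_circular_orbit(phase, rv1, rv2):
    """Least-squares fit of a circular double-lined orbit:
    RV1 = gamma - K1 sin(2 pi phase), RV2 = gamma + K2 sin(2 pi phase)."""
    s = np.sin(2 * np.pi * np.asarray(phase))
    one, zero = np.ones_like(s), np.zeros_like(s)
    A = np.block([[one[:, None], -s[:, None], zero[:, None]],
                  [one[:, None], zero[:, None], s[:, None]]])
    y = np.concatenate([rv1, rv2])
    return np.linalg.lstsq(A, y, rcond=None)[0]    # gamma, K1, K2

def derived(P_days, K1, K2):
    """a sin i (R_sun) and M sin^3 i (M_sun) for a circular orbit."""
    P, Ktot = P_days * DAY, K1 + K2
    a = Ktot * P / (2 * math.pi) / RSUN
    m1 = P * K2 * Ktot**2 / (2 * math.pi * G) / MSUN
    m2 = P * K1 * Ktot**2 / (2 * math.pi * G) / MSUN
    return a, m1, m2

# Synthetic noiseless curves with the Table 2 amplitudes:
ph = np.linspace(0.05, 0.95, 12)
g, k1, k2 = fit_circular_orbit(ph, 61.7 - 128.55 * np.sin(2 * np.pi * ph),
                               61.7 + 130.87 * np.sin(2 * np.pi * ph))
print(derived(1.18851590, k1, k2))  # ~ (6.096, 1.084, 1.065)
```

The recovered derived quantities agree with the $A\sin i$ and $M\sin^{3} i$ values listed in Table 2.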
This assumption is further supported by the relatively short orbital period of the variable compared to the cluster age of about 3.8 Gyr (Anthony-Twarog, Atwell & Twarog 2005). For an age of 3.8 Gyr all NGC 2243 binaries with periods shorter than a few days are expected to have circularized orbits (Mathieu 2005). The adjustable parameters in the orbital solution were the velocity semi-amplitudes ($K_{1}$ and $K_{2}$) and the systemic velocity ($\gamma$). The fit was performed using the GAUSSFIT task within IRAF/STSDAS. The derived parameters of the spectroscopic orbit are listed in Table 2[^3]. The systemic velocity of NV CMa agrees within the measurement errors with the radial velocity of the cluster as estimated in the next subsection.

  Phase    HJD-2453000   $RV_{1}$   $\sigma_{\rm RV1}$   $RV_{2}$   $\sigma_{\rm RV2}$   ${\rm (O-C)_{A}}$   ${\rm (O-C)_{B}}$
  -------- ------------- ---------- -------------------- ---------- -------------------- ------------------- -------------------
  0.2634   724.7208      -67.20     2.77                 192.23     2.69                 -0.81               0.12
  0.3178   636.8353      -56.61     1.50                 182.27     1.51                 -1.25               1.40
  0.3375   636.8587      -45.65     1.51                 173.65     2.04                 2.26                0.36
  0.3557   636.8803      -40.76     1.31                 161.51     1.80                 -1.23               -3.24
  0.4344   282.7961      12.29      1.26                 114.13     1.18                 2.09                0.00
  0.5930   281.7961      134.08     1.06                 -11.81     1.10                 1.47                -1.32
  0.6100   281.8163      143.47     1.02                 -22.17     1.10                 -0.17               -0.45
  0.6259   281.8353      152.98     1.18                 -31.36     1.14                 -0.13               0.00
  0.6422   281.8546      160.28     1.22                 -40.58     1.26                 -1.59               -0.30
  0.6582   281.8736      170.66     1.31                 -49.45     1.39                 1.21                -1.45
  0.8310   633.8797      173.11     1.26                 -51.55     1.55                 -0.85               1.03
  0.8401   633.8905      170.83     1.35                 -46.98     1.47                 0.63                1.77

  : Radial velocities of NV CMa and residuals from the adopted spectroscopic orbit

  Parameter                          Value
  ---------------------------------- -------------------
  $P$ (days)                         1.18851590 (fixed)
  $T_{0}$ (HJD-244 0000)             8663.70748 (fixed)
  $\gamma~(km~s^{-1})$               61.70 $\pm$ 0.30
  $e$                                0 (fixed)
  $K_{1}~(km~s^{-1})$                128.55 $\pm$ 0.52
  $K_{2}~(km~s^{-1})$                130.87 $\pm$ 0.55
  Derived quantities:
  $A~\sin i$ ($R_{\odot}$)           6.096 $\pm$ 0.018
  $M_{1}~\sin^{3} i$ ($M_{\odot}$)   1.084 $\pm$ 0.010
  $M_{2}~\sin^{3} i$ ($M_{\odot}$)   1.065 $\pm$ 0.010
  Other quantities:
  $\sigma_{1}~(km~s^{-1})$           1.37
  $\sigma_{2}~(km~s^{-1})$           1.37

  : Orbital parameters of NV CMa.

Analysis of Broadening Functions -------------------------------- We have analysed the spectra of NV CMa using a code based on the broadening function (BF) formalism (Rucinski 2002). The BF analysis lets us study the effects of spectral line broadening and orbital splitting even for relatively complicated spectral line profiles. This macroscopic velocity information is obtained regardless of the parameters of the photosphere such as the temperature, pressure, etc. The method retains strict linearity in the reproduction of the individual contributions to the broadening profile, i.e. the individual luminosities of the components can be simply estimated from the strengths of the respective peaks in the broadening profile. In the case of detached binary systems the dominant contribution to the observed broadening of the spectral lines comes from the rotation of the components induced by the orbital motion. If we make the approximation of rigid rotation of perfectly spherical stars, we can integrate the light over the visible hemispheres analytically. The resulting theoretical rotational BF has a shape described by: $$\begin{aligned} BF_{rot}(v) & = & A [(1-\beta)\sqrt{1-a^2} + {\frac{\pi}{4}} \beta (1 - a^2)] + C \\ \nonumber \\ a & = & {\frac{v-v_{rad}}{v_{rot}\sin{i}}} \nonumber\end{aligned}$$ where $A$ is a normalization constant, $\beta$ is the linear limb darkening coefficient, $C$ is the continuum level, $v_{rad}$ is the radial velocity of the center of mass of the star, $v_{rot}$ is the linear velocity at the equator of the rotating star, and $i$ is the inclination angle between the axis of rotation and the direction to the observer.
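The rotational profile above is straightforward to evaluate numerically; the sketch below (our illustration only; the full model used in this work is the sum of two such profiles convolved with a Gaussian, as described next) computes the profile of one component:

```python
import numpy as np

def bf_rot(v, A, C, v_rad, vrot_sini, beta=0.63):
    """Theoretical rotational broadening function of a rigidly rotating
    spherical star; identically zero for |v - v_rad| > vrot sin i."""
    a = (v - v_rad) / vrot_sini
    inside = np.abs(a) < 1.0
    prof = np.zeros_like(v, dtype=float)
    s = np.sqrt(1.0 - a[inside] ** 2)          # sqrt(1 - a^2)
    prof[inside] = A * ((1 - beta) * s + (np.pi / 4) * beta * s**2)
    return prof + C

# Profile of one component near quadrature (illustrative numbers):
v = np.linspace(-200.0, 300.0, 501)
profile = bf_rot(v, A=1.0, C=0.0, v_rad=143.0, vrot_sini=52.0)
print(v[np.argmax(profile)])  # peak at the radial velocity, 143.0
```

The profile is symmetric about $v_{rad}$ and its half-width directly encodes $v_{rot}\sin i$, which is why the fit can recover both velocities simultaneously.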
We have extracted BFs from all of the spectra of NV CMa and have conducted nonlinear least squares fitting of the model profile to the observed BFs to measure the radial and projected rotational velocities of the components of NV CMa. In both steps of the calculation we used our own programs based on procedures from the GNU Scientific Library. As will be shown in Section 3, the components of NV CMa are almost spherical and there are no signatures of spot activity. Our model profile is therefore the sum of two theoretical rotational BFs convolved with a Gaussian with a standard deviation of $15~km~s^{-1}$. The Gaussian represents the effects of the instrumental resolution and is a part of the BF method, in which modest smoothing (adjusted to the width of the spectrograph slit image) is applied to the BFs. We used the spectra in the wavelength range from $400~nm$ to $495~nm$. This range roughly corresponds to the B passband of the UBV photometric system. Since the shape of the rotational BF depends only weakly on the limb darkening coefficient, we adopted a constant $\beta = 0.63$ as derived from the $(B-V,~\beta)$ relation of van Hamme (1993). The four parameters $A$, $C$, $v_{rad}$, and $v_{rot}\sin{i}$ were adjusted simultaneously in the fitting procedure. Figure 2 presents an example of fitting the model to the BF calculated for a spectrum taken at orbital phase 0.61. A linear least squares fit to the measured radial velocities gives the following orbital solution: $\gamma= 61.47 \pm 0.14~km~s^{-1}$, $K_1= 130.93 \pm 0.59~km~s^{-1}$, $K_2= 128.62 \pm 0.42~km~s^{-1}$, which agrees within the formal errors with the result obtained using the cross-correlation analysis. The RMS error of the fit is $0.93~km~s^{-1}$ for both components. We measure the projected rotational velocities to be $V_1\sin{i}= 51.7 \pm 1.4~km~s^{-1}$ and $V_2\sin{i}= 52.4 \pm 2.5~km~s^{-1}$.
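These measured values can be compared with the velocities expected for synchronous rotation, $V\sin i = 2\pi \langle r\rangle\, a\sin i / P$; a rough check (our illustration, using $a\sin i$ from Table 2 and the relative radii $\langle r_1\rangle = 0.200$, $\langle r_2\rangle = 0.193$ from the constrained light-curve solution):

```python
import math

RSUN_KM = 6.955e5
P_DAYS, A_SINI = 1.18851590, 6.096   # orbital period and a sin i (R_sun)

def synchronous_vsini(rel_radius):
    """Equatorial velocity for rotation synchronized with the orbit,
    V sin i = 2 pi <r> a sin i / P, in km/s."""
    circumference = 2 * math.pi * rel_radius * A_SINI * RSUN_KM
    return circumference / (P_DAYS * 86400.0)

print(round(synchronous_vsini(0.200), 1))  # -> 51.9 km/s (primary)
print(round(synchronous_vsini(0.193), 1))  # -> 50.1 km/s (secondary)
```

These values agree with the synchronous velocities quoted in the text to within their uncertainties.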
Note that these rotational velocities are very close to those expected if the components are in synchronous rotation on a circular orbit, adopting the period from Table 2 and the stellar radii from the light curve solution of Section 3 ($V_1\sin{i}= 52.0 \pm 1.3~km~s^{-1}$ and $V_2\sin{i}= 50.1 \pm 1.6~km~s^{-1}$). Strictly speaking, the Gaussian smoothing with $\sigma = 15~km~s^{-1}$ used in the BF extraction should be subtracted quadratically from the measured values; this gives $49.5$ and $50.2~km~s^{-1}$, in almost perfect agreement with the expectations. The integral of a BF profile is proportional to the total flux of a star. Since an absolute measurement of this flux requires a perfect match between the template and object spectra, we did not attempt such an analysis. However, given that the effective temperatures of the components of NV CMa are very similar, the integrals of the BF profiles are proportional to the monochromatic luminosities of the two components. An analytical integration of the profiles leads to the conclusion that the ratio of the luminosities of stars with identical spectral types is proportional to the ratio of the products $A \cdot v_{rot}\sin{i}$ for the two system components. From a total of 11 spectra with well defined BF profiles we obtained $L_{1B}/L_{2B}=1.087\pm 0.011$ ($rms=0.034$), in good agreement with the values measured from the photometric observations (see Section 3). Velocities of other objects --------------------------- Table 3 lists the radial velocities derived for two red giants and four other eclipsing binaries from the cluster field. The velocities of H3110 and H4115 are close to the velocities of two other cluster red giants observed by Gratton (1982), who measured $V_{rad}=+62~km~s^{-1}$ and $V_{rad}=+60~km~s^{-1}$ for H4110 and H4209, respectively. Based on these four stars we estimate the radial velocity of the cluster to be $V_{rad}=+60.4\pm 0.6~km~s^{-1}$. Two velocity peaks are seen in the cross-correlation functions of V4, V5, and V7.
The mean velocity of the two peaks for V4 is $210.7~km~s^{-1}$, well above the cluster velocity. We conclude that V4 is a field binary star not related to the cluster. In the case of V5, two peaks of very similar shape and height are seen in the cross-correlation function. The average velocity of the two components is $59.1~km~s^{-1}$. V5 is thus a very likely member of the cluster, with a mass ratio close to unity. Three peaks show up in the cross-correlation function obtained for V7: one at $22.8~km~s^{-1}$, one at $96.5~km~s^{-1}$, and a wing to that component at $136.8~km~s^{-1}$. The mean of the first two is $59.6~km~s^{-1}$, very similar to the cluster average velocity. It is possible that the component with the highest velocity is caused by contaminating light from the close visual companion of V7, whose light leaked through the slit during the observations (see the finding chart in KKTS). For V9 only one strong peak was detected in the cross-correlation function, at a velocity of $65.4~km~s^{-1}$, consistent with cluster membership. It is possible that this binary was observed close to conjunction, and further observations are needed to see if the secondary component can be detected in the cross-correlation function. In conclusion, variables V5, V7 and V9 are likely members of NGC 2243. These three binaries are promising targets for detailed observations aimed at the determination of the parameters of their components.

  Name    HJD-2450000   ${\rm RV_{1}}$   ${\rm RV_{2}}$   member?
  ------- ------------- ---------------- ---------------- ---------
  H3110   3281.9127     59.4(1.0)                         yes
  H4115   3281.9285     60.1(1.1)                         yes
  V4      3635.9006     117.5(2.4)       285.9(9.6)       no
  V5      3638.8430     23.50(0.58)      94.61(0.56)      yes
  V7      3638.8217     96.88(0.80)      22.8(5.3)        yes
  V9      3638.8711     65.45(0.68)                       yes

  : Radial velocities of stars from the field of NGC 2243

Light Curve Solution of NV CMa
==============================

We have analysed the $BV$ light curves of NV CMa obtained by KKTS using the Wilson-Devinney model (Wilson & Devinney 1971) as implemented in the light-curve analysis program MINGA[^4] (Plewa 1988). The mass ratio of the binary was fixed at the spectroscopic value of $m_2/m_1 = 0.9825 \pm 0.0018$. The gravity darkening exponents and bolometric albedos were fixed at 0.32 and 0.5, respectively. The linear limb darkening coefficients for the $B$ and $V$ filters were adopted from van Hamme (1993) for an assumed metallicity of ${\rm [Fe/H]=-0.49}$ (Gratton & Contarini 1994; Friel et al. 2002). An interpolation routine in the PHOEBE package (Prša & Zwitter 2005) was used to obtain values corresponding to the adopted effective temperatures of the two components. The light curves used in the analysis are shown in Fig. 3. They contain 121 points in the $B$ band and 446 points in the $V$ band. Outside of the eclipses we used normal points formed by averaging 3 to 7 individual observations. There is no evidence for totality in any of the eclipses. The primary and secondary eclipses have very similar depths, implying similar surface brightnesses and, in turn, similar effective temperatures of the two components. The light curve is symmetric, indicating that its shape is not noticeably affected by spot activity. The average color index near quadrature is $<B-V>_{max}=0.439\pm 0.020$. The quoted uncertainty includes an external error arising from the photometric calibration. Adopting $E(B-V)=0.055\pm 0.004$ (Anthony-Twarog et al.
2005) we obtain an unreddened color index at maximum light of $(B-V)_{0}=0.384$. Using the empirical calibration of Ramírez & Meléndez (2005) we obtain an effective temperature of the primary component of NV CMa of $T_{1}=6522\pm 129$ K. The uncertainty includes the formal uncertainty in the temperature calibration as well as the uncertainty of the color index. As shown below, the effective temperatures of the components do not differ by more than about 30 K, so the color index at maximum light can be safely adopted as the color index of the primary component. The following parameters were adjustable in the light curve solution: the orbital inclination $i$, the non-dimensional potentials $\Omega_{1}$ and $\Omega_{2}$, the effective temperature of the secondary $T_{2}$, and the relative luminosity of the primary $L_{1}(V;B)$. For a fixed value of the mass ratio $q$ the potentials $\Omega_{1}$ and $\Omega_{2}$ directly determine the relative radii of the components $r_{1}$ and $r_{2}$. In the following discussion we list “equal volume” mean radii of the components. Our final adopted solution (see below) implies that for both components of NV CMa the difference between the “polar” and “point” radii amounts to about 2% [^5]. An unconstrained light curve solution obtained with MINGA is listed in Table 4. It is worth noting at this point that the errors returned by MINGA take into account correlations between all fitted parameters. One should note the rather large uncertainties of the derived relative radii and luminosities. This is not an unexpected result given the partial eclipses (Irwin 1962). An additional complication is that the effective temperatures of the components of NV CMa are very similar. The accuracy of the fit can be significantly improved by using information about the light ratio of the two components derived from the spectroscopic data.
A grid of solutions was calculated with a fixed value of $\Omega_{1}$ (this is equivalent to fixing the value of the radius $<r_{1}>$). The result is presented graphically in Fig. 4, which shows the calculated values of $L_{1B}/L_{2B}$ and $<r_{2}>$ as functions of the assumed value of $<r_{1}>$. There is a strong anti-correlation between the calculated value of $<r_{2}>$ and the assumed value of $<r_{1}>$. The solutions fulfill the condition $L_{1B}/L_{2B}=1.087\pm 0.011$ only for a very narrow range of radii, from $<r_{1}>=0.2001$ to $<r_{1}>=0.2014$. However, the uncertainty in $<r_{1}>$ obtained this way is severely underestimated. To get a realistic estimate of the errors in the fitted parameters one has to take into account the correlations between them. To attain that goal we derived solutions for 3 fixed values of $L_{1B}/L_{2B}$ spanning the range $1.087\pm 0.011$. The adjustable parameters were $i$, $\Omega_{1}$, $\Omega_{2}$, $T_{2}$ and $(L_{1V}/L_{2V})$. The final light curve solution, along with the uncertainties of all fitted parameters, is listed in Table 5. Figure 5 shows the residuals corresponding to this solution.

  Parameter           Value
  ------------------- -------------------
  $i$ (deg)           87.08 $\pm$ 0.95
  $\Omega_{1}$        5.959 $\pm$ 0.332
  $\Omega_{2}$        6.141 $\pm$ 0.392
  $T_{1}$ (K)         6522 (fixed)
  $T_{2}$ (K)         6512 $\pm$ 123
  $(L_{1V}/L_{2V})$   1.110 $\pm$ 0.017
  $(L_{1B}/L_{2B})$   1.112 $\pm$ 0.024
  $<r_{1}>$           0.203 $\pm$ 0.013
  $<r_{2}>$           0.193 $\pm$ 0.014
  rms (V) (mag)       0.008
  rms (B) (mag)       0.006

  : An unconstrained light curve solution.
  Parameter           Value
  ------------------- -------------------
  $i$ (deg)           87.09 $\pm$ 0.66
  $\Omega_{1}$        6.010 $\pm$ 0.136
  $\Omega_{2}$        6.128 $\pm$ 0.155
  $T_{1}$ (K)         6522 (fixed)
  $T_{2}$ (K)         6506 $\pm$ 75
  $(L_{1V}/L_{2V})$   1.088 $\pm$ 0.024
  $<r_{1}>$           0.200 $\pm$ 0.005
  $<r_{2}>$           0.193 $\pm$ 0.006
  rms (B) (mag)       0.008
  rms (V) (mag)       0.006

  : Constrained light curve solution with $(L_{1B}/L_{2B})=1.087 \pm 0.011$

Absolute properties
===================

The absolute parameters of NV CMa obtained from our spectroscopic and photometric analysis are given in Table 6. The errors in the temperatures include all sources of uncertainty. The absolute visual magnitudes $M_{\rm V}$ were calculated using bolometric corrections derived from the relations presented by VandenBerg & Clem (2003). The observed visual magnitudes derived from the light curve solutions are $V_{1}=17.107\pm 0.023$ and $V_{2}=17.199\pm 0.024$, where the uncertainties include the errors of the photometric zero point. For the $B$ band we obtain $B_{1}=17.549\pm 0.021$ and $B_{2}=17.640\pm 0.021$. Figure 6 shows the location of the individual components of NV CMa on a color-magnitude diagram of the turnoff region of NGC 2243. Note that NV CMa lies on the binary sequence in this cluster (Bonifazi et al. 1990), while the individual components lie on the sequence of single stars, a further confirmation of the cluster membership of NV CMa. Using these values for the observed and absolute visual magnitudes one obtains apparent distance moduli of $(m-M)_{\rm V}=13.25\pm0.10$ and $(m-M)_{\rm V}=13.25\pm0.15$ for the primary and secondary components of the binary, respectively, with an average value of $(m-M)_{\rm V}=13.25\pm0.08$. This value is in good agreement with the recent determinations of $(m-M)_{\rm V}=13.15\pm0.1$ obtained with the isochrone fitting method by both Anthony-Twarog et al. (2005; they adopted ${\rm [Fe/H]}=-0.57$) and VandenBerg et al. (2006; they adopted ${\rm [Fe/H]}=-0.61$).
The age
-------

It is possible to estimate the ages of the components of the binary by using theoretical age-luminosity relations. It is worth noting that age-luminosity relations based on stellar models are unaffected by uncertainties associated with model isochrones relating $T_{eff}$ and $L_{bol}$ to color index and absolute magnitude in a selected band. In particular, they are unaffected by the way in which models treat sub-photospheric convection in low mass stars. In Fig. 7 we show age versus luminosity relations based on evolutionary tracks recently published by VandenBerg et al. (2006). The tracks were derived from a set of isochrones calculated with the program $vriso$ distributed with the model grids. Models for ${\rm [Fe/H]}=-0.525$ (see the following section) and ${\rm [\alpha/Fe]}=0.3$ were used. Horizontal lines in Fig. 7 mark the $L\pm \sigma_{L}$ ranges for a given component. The intersections of the age-luminosity relations with the lines marking the $1~\sigma$ limits of the luminosities give limits on the age for a given mass. The $1~\sigma$ age limits are $4.0<t_{1}<5.8$ Gyr and $4.1<t_{2}<6.3$ Gyr for the primary and secondary components of NV CMa, respectively. The large ranges result mainly from the fact that both components are still in a relatively slow phase of their evolution: as can be seen in Fig. 6 they are located about 1 mag below the turn-off point on the cluster color-magnitude diagram. One may also note from Fig. 7 that the errors of the luminosities and masses each contribute about 0.6 Gyr to the total uncertainty in the estimated ages. Figure 8 shows the time dependence of the radius, also a sensitive diagnostic for the age, using the same set of evolutionary models. As before, the solid lines correspond to evolutionary tracks for the masses of the components of NV CMa. The measured radii are indicated by the horizontal lines spanning the range $\pm 1~\sigma$. From Fig. 8 we derive $1~\sigma$ limits on the ages of the components of NV CMa of $3.2<t_{1}<4.6$ Gyr and $3.2<t_{2}<4.9$ Gyr. These limits are consistent with those derived from the age-luminosity relations. The age-radius relations suggest slightly lower ages in comparison with the age-luminosity relations for both components. The overlap of the age estimates implies an age for NV CMa of approximately 4.1 to 4.6 Gyr.

Metallicity
-----------

The age estimates presented in the previous section suffer from a potential systematic error arising from the adopted metallicity of NV CMa. There are three modern determinations of the cluster metallicity. Gratton & Contarini (1994) obtained high resolution spectra of two cluster giants and derived ${\rm [Fe/H]}=-0.48\pm 0.15$. Friel et al. (2002) used medium resolution spectra of 9 stars to derive ${\rm [Fe/H]}=-0.49\pm 0.05$. Finally, Anthony-Twarog et al. (2005) employed $uvbyCaH\beta$ photometry to obtain ${\rm [Fe/H]}=-0.57\pm 0.03$. The weighted average of these three determinations gives ${\rm [Fe/H]}=-0.547\pm 0.025$. This value is very close to ${\rm [Fe/H]=-0.525}$, for which we extracted the model relations used above. However, the estimated age of NV CMa is very sensitive to the adopted metallicity. For example, using models for ${\rm [Fe/H]}=-0.397$ and ${\rm [Fe/H]}=-0.606$ we obtain ages $t=5.425\pm 0.025$ Gyr and $t=3.85\pm 0.45$ Gyr, respectively, where the very small formal errors just indicate the marginal overlap between the age ranges obtained from the age-luminosity and age-radius relations. In summary, for the masses and ages relevant to the present discussion, the metallicity of the analysed stars has to be known with a relative accuracy of a few percent to allow a meaningful comparison of observational data with the models. This is illustrated in Fig. 9, which shows age-luminosity and age-radius relations for a star with $m=1.089 m_{\odot}$ and for 3 values of metallicity.
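The adopted age range and the metallicity sensitivity can be checked with a short sketch. The interval bounds and model ages are taken from the text; the linear sensitivity estimate is our own simplification of the two bracketing models.

```python
# 1-sigma age intervals (Gyr): age-luminosity (primary, secondary)
# and age-radius (primary, secondary), as read off Figs. 7 and 8
intervals = [(4.0, 5.8), (4.1, 6.3), (3.2, 4.6), (3.2, 4.9)]
low = max(lo for lo, hi in intervals)
high = min(hi for lo, hi in intervals)
print(f"common age range: {low:.1f}-{high:.1f} Gyr")  # -> 4.1-4.6 Gyr

# Sensitivity of the derived age to the adopted metallicity,
# estimated from the two bracketing models quoted above
age_hi, feh_hi = 5.425, -0.397   # Gyr, dex
age_lo, feh_lo = 3.85, -0.606
slope = (age_hi - age_lo) / (feh_hi - feh_lo)   # ~7.5 Gyr per dex
print(f"0.1 dex in [Fe/H] -> ~{0.1 * slope:.1f} Gyr in age")
```

The intersection of the four intervals reproduces the 4.1-4.6 Gyr range, and the linearized sensitivity (about 0.8 Gyr per 0.1 dex) matches the figure quoted in the discussion.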
The plotted relations are based on models by VandenBerg et al. (2006) for ${\rm [\alpha /Fe]}=+0.3$. At a given age the separation between the relations for close, but different, metallicities exceeds the uncertainties of the luminosities and radii of the components of NV CMa obtained in our analysis presented above.

  Parameter                  Value
  -------------------------- -------------------
  $A$ ($R_{\odot}$)          6.104 $\pm$ 0.018
  $M_{1}$ ($M_{\odot}$)      1.089 $\pm$ 0.010
  $M_{2}$ ($M_{\odot}$)      1.069 $\pm$ 0.010
  $R_{1}$ ($R_{\odot}$)      1.221 $\pm$ 0.031
  $R_{2}$ ($R_{\odot}$)      1.178 $\pm$ 0.037
  $T_{1}$ (K)                6522 $\pm$ 129
  $T_{2}$ (K)                6506 $\pm$ 149
  $Lbol_{1}$ ($L_{\odot}$)   2.42 $\pm$ 0.23
  $Lbol_{2}$ ($L_{\odot}$)   2.23 $\pm$ 0.25
  $M_{\rm V1}$ (mag)         3.86 $\pm$ 0.10
  $M_{\rm V2}$ (mag)         3.95 $\pm$ 0.15

  : Absolute parameters of NV CMa

Discussion and summary
======================

The analysis of photometric and spectroscopic observations of the eclipsing binary NV CMa has allowed us to determine the orbital parameters and physical properties of the component stars. Our determinations have formal uncertainties of 1% in the masses and 3% in the radii. The uncomfortably large uncertainties in the radii result from the degeneracy of the light curve solution for this partially eclipsing system. Comparison with model tracks for $\rm{[Fe/H]}=-0.525$ gives an age of 4.1 to 4.6 Gyr for the binary, and consequently for the cluster. This determination suffers from a substantial systematic error related to the uncertain metallicity of the cluster. For the relevant range of stellar masses and ages an uncertainty in the metallicity of 0.1 dex leads to an uncertainty in the estimated age of about 0.8 Gyr. There is still room for obtaining a better determination of the parameters of NV CMa, including its age. First of all, it is straightforward to derive masses of the components with an accuracy of 0.5% or even better.
We used only 12 spectra, and additional radial velocity data obtained near quadrature will lead to an improvement in the estimates of the masses of the components. It is also possible to obtain better light curves than those used in the present analysis; an improvement is possible in both the quality and the phase coverage. In particular, our photometry for the $B$ band had poor coverage inside the primary eclipse. Improved light curves would better constrain the parameters of the components obtained in the light curve solution. And last, but not least, the age determination of NV CMa and of other cluster binaries would benefit enormously from accurate metallicity determinations, preferably based on high resolution spectroscopy of an extended sample of cluster member stars. The analysis of single spectra obtained for four other eclipsing binaries in the cluster field indicates that three of them are radial velocity members of NGC 2243. The fourth star is definitely a non-member. Further observations of member binaries would allow a better constraint on the cluster age as well as a test of evolutionary models of low-mass stars with $\rm{[Fe/H]}\approx -0.5$.

![image](fig1.eps){height="80mm" width="120mm"}

Fig. 1. Spectroscopic observations and adopted orbit for NV CMa.

![image](fig2.eps){height="90mm" width="120mm"}

Fig. 2. Broadening function extracted from a spectrum of NV CMa obtained at orbital phase 0.61 (solid line). The dashed line shows the fit of a model BF to the observed one.

![image](fig3.eps){height="80mm" width="120mm"}

Fig. 3. Phased $BV$ light curves of NV CMa.

![image](fig4.eps){height="80mm" width="120mm"}

Fig. 4. Dependence of the luminosity ratio $(L_{1B}/L_{2B})$ and relative radius $<r_{2}>$ on the assumed radius $<r_{1}>$.

![image](fig5.eps){height="50mm" width="120mm"}

Fig. 5. The residuals for the fit corresponding to the light curve solution listed in Table 5.

![image](fig6.eps){height="100mm" width="120mm"}

Fig. 6.
Color-magnitude diagram for the turnoff region of NGC 2243. The location of NV CMa is marked with an open triangle. The squares denote the locations of the individual components of the binary.

![image](fig7.eps){height="90mm" width="120mm"}

Fig. 7. Theoretical age-luminosity relations for masses $M_{1}=1.089\pm 0.010 M_{\odot}$ (lower) and $M_{2}=1.069\pm 0.010 M_{\odot}$ (upper). Horizontal dashed lines mark $1\sigma$ ranges for the observed luminosities of the components of NV CMa. The solid lines represent the mass $\pm~1~\sigma$.

![image](fig8.eps){height="90mm" width="120mm"}

Fig. 8. Theoretical age-radius relations for masses $M_{1}=1.089\pm 0.010 M_{\odot}$ (lower) and $M_{2}=1.069\pm 0.010 M_{\odot}$ (upper). Horizontal dashed lines mark $1\sigma$ ranges for the observed radii of the components of NV CMa. The solid lines represent the mass $\pm~1~\sigma$.

![image](fig9.eps){height="80mm" width="120mm"}

Fig. 9. Theoretical age-luminosity and age-radius relations for a star with mass $M=1.089 M_{\odot}$ and for metallicity ${\rm [Fe/H]=-0.606}$ (upper curve), ${\rm [Fe/H]=-0.525}$ (middle curve) and ${\rm [Fe/H]=-0.397}$ (lower curve).

JK and WP were supported by the grant 1 P03D 001 28 from the Ministry of Science and Information Society Technologies, Poland. Research of SMR is supported by a grant from the Natural Sciences and Engineering Research Council of Canada, while IBT is supported by NSF grant AST-0507325. We thank Alexis Brandeker, who obtained one of the spectra of NV CMa used here. It is also a pleasure to thank Tomek Plewa for enlightening hints on MINGA usage.

[Anthony-Twarog, B. J., Atwell, J., Twarog, B. A.]{} 2005, [**]{}, [**129**]{}, [872]{}.
[Bergbusch, P. A., Vandenberg, D. A., Infante, L.]{} 1991, [**]{}, [**101**]{}, [2102]{}.
[Bernstein, R., Shectman, S. A., Gunnels, S. M., Mochnacki, S., Athey, A. E.]{} 2003, [Design and Performance for Optical/Infrared Ground-based Telescopes. Edited by Iye, Masanori; Moorwood, Alan F. M.
Proceedings of the SPIE]{}, [**4841**]{}, [1694]{}.
[Bonifazi, F., Fusi Pecci, F., Romeo, G., & Tosi, M.]{} 1990, [**]{}, [**245**]{}, [15]{}.
[Friel, E. D., Janes, K. A., Tavarez, M., Scott, J., Hong, L., & Miller, N.]{} 2002, [**]{}, [**124**]{}, [2693]{}.
[Gratton, R. G.]{} 1982, [**]{}, [**257**]{}, [640]{}.
[Gratton, R. G., & Contarini, G.]{} 1994, [*A&A*]{}, [**283**]{}, [911]{}.
[Hawarden, T. G.]{} 1975, [**]{}, [**173**]{}, [801]{}.
[Irwin, J. B.]{} 1962, [Astronomical Techniques, ed. W. A. Hiltner, The University of Chicago Press]{}, [****]{}, [584]{}.
[Kaluzny, J., Krzeminski, W., & Mazur, B.]{} 1996, [**]{}, [**118**]{}, [303 (KKM)]{}.
[Kaluzny, J., Krzeminski, W., Thompson, I. B., & Stachowski, G.]{} 2006, [**]{}, [**56**]{}, [51 (KKTS)]{}.
[Kelson, D. D.]{} 2003, [**]{}, [**115**]{}, [688]{}.
[Kelson, D. D.]{} 2006, [**]{}, [****]{}, [submitted]{}.
[Mathieu, R. D.]{} 2005, [ASP Conf. Ser., Tidal Evolution and Oscillations in Binary Stars, eds. A. Claret, A. Gimenez and J.-P. Zahn (San Francisco: ASP)]{}, [**333**]{}, [26]{}.
[Muterspaugh, M. W., Lane, B. F., Konacki, M., Burke, B. F., Colavita, M. M., Kulkarni, S. R., Shao, M.]{} 2005, [**]{}, [**130**]{}, [2866]{}.
[Nordstrom, B., et al.]{} 2004, [*A&A*]{}, [**419**]{}, [989]{}.
[Plewa, T.]{} 1988, [**]{}, [**38**]{}, [415]{}.
[Prša, A., Zwitter, T.]{} 2005, [**]{}, [**628**]{}, [426]{}.
[Ramírez, I., Meléndez, J.]{} 2005, [**]{}, [**626**]{}, [465]{}.
[Rucinski, S. M.]{} 2002, [**]{}, [**124**]{}, [1746]{}.
[VandenBerg, D. A., Clem, J. L.]{} 2003, [**]{}, [**126**]{}, [778]{}.
[VandenBerg, D. A., Bergbusch, P. A., Dowler, P. D.]{} 2006, [**]{}, [**162**]{}, [375]{}.
[van Hamme, W.]{} 1993, [**]{}, [**106**]{}, [2096]{}.
[Wilson, R. E., Devinney, E. J.]{} 1971, [**]{}, [**166**]{}, [605]{}.

[^1]: This paper includes data obtained with the 6.5-meter Magellan Telescopes located at Las Campanas Observatory, Chile.
[^2]: IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the NSF.

[^3]: Throughout this paper we adopt the following values of the constants: $R_{\odot}=6.95508\times 10^{5}$ km, $M_{\odot}=1.9891\times 10^{30}$ kg, $G=6.67259\times 10^{-11}$ ${\rm m^{3}\,kg^{-1}\,s^{-2}}$.

[^4]: MINGA is available at http://ftp.camk.edu.pl/camk/tomek/Minga/

[^5]: The “polar radius” is the radius toward the stellar pole, while the “point radius” is the radius toward the Lagrangian point L1 of the binary orbit.
--- abstract: 'In this short note we prove that a matrix $A\in{\mathbb{R}}^{n,n}$ is self-adjoint if and only if it is equivariant with respect to the action of a group $\Gamma\subset {\bf O}(n)$ which is isomorphic to $\otimes_{k=1}^n\mathbf{Z}_2$. Moreover we discuss potential applications of this result, and we use it in particular for the approximation of higher order derivatives for smooth real valued functions of several variables.' author: - Michael Dellnitz bibliography: - 'SAEQ.bib' title: 'Self-adjoint Matrices are Equivariant' ---

[*Key words:*]{} self-adjoint matrix, equivariance, symmetry, Taylor expansion

[*AMS subject classifications.*]{} 15B57, 15A24, 37G40, 41A58

Introduction
============

Within this short note we prove a characterization of a matrix being *symmetric* – in the sense of $A = A^T$ – by using the notion of *equivariance*. The proof of this fact is not difficult at all, but to the best of the author's knowledge the related result cannot be found explicitly in the literature. However, in several articles concerning the development of dynamical systems for the solution of certain optimization problems this underlying equivariance structure is implicitly present (e.g. [@Scho68; @Bro88; @Bro89]), and one would expect that this is also the case in other applications. The point of this note is to state this characterization of $A = A^T$ explicitly, and this is done in Section \[sec:mr\]. In Section \[sec:impli\] we discuss potential applications in equivariant bifurcation theory, and we illustrate concretely how this result can be used for the construction of simple approximations of derivatives of higher order for real valued functions.
Main Result {#sec:mr}
===========

Let $\Sigma \subset {\bf O}(n)$ be the abelian group consisting of the $2^n$ matrices $$\begin{pmatrix} \pm 1 & 0 & 0 & \cdots & 0 \\ 0 & \pm 1 & 0 &\cdots & 0 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ 0 & \cdots & 0 & 0 & \pm 1 \end{pmatrix}.$$ Obviously for any diagonal matrix $$D = \begin{pmatrix} \lambda_1 & 0 & 0 & \cdots & 0 \\ 0 & \lambda_2 & 0 &\cdots & 0 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ 0 & \cdots & 0 & 0 & \lambda_n \end{pmatrix},\quad \lambda_j \in {\mathbb{R}}, \quad j=1,2,\ldots,n,$$ we have $$\sigma D = D \sigma \quad \forall \sigma \in \Sigma.$$ In fact, it is easy to verify that for an arbitrary matrix $B\in{\mathbb{R}}^{n,n}$ one has $$\label{eq:DSigma} \sigma B = B \sigma \quad \forall \sigma \in \Sigma \quad \Longleftrightarrow \quad \mbox{$B$ is a diagonal matrix.}$$ In this note we prove the following characterization:

\[prop:main\] A matrix $A\in {\mathbb{R}}^{n,n}$ is self-adjoint (i.e. $A = A^T$) if and only if there is an orthogonal matrix $V\in {\bf O}(n)$ such that $$\label{eq:equiv} \gamma A = A \gamma\quad \forall \gamma \in \Gamma,$$ where the group $\Gamma \subset {\bf O}(n)$ is defined by $$\Gamma = \{ V^T \sigma V : \sigma \in \Sigma \}.$$

Suppose that $A = A^T$. Then there is $V\in {\bf O}(n)$ such that $$D = V A V^T$$ is a diagonal matrix. By (\[eq:DSigma\]) we have for all $\sigma \in \Sigma$ $$\sigma VAV^T = VAV^T \sigma \quad \Longleftrightarrow \quad V ^T\sigma VA = AV^T \sigma V.$$ Therefore $A$ satisfies the equivariance condition (\[eq:equiv\]). Now suppose that (\[eq:equiv\]) is satisfied for some $V\in {\bf O}(n)$. Then the matrix $V A V^T$ commutes with every $\sigma\in\Sigma$, and by (\[eq:DSigma\]) it follows that $D=VAV^T$ is a diagonal matrix. Therefore $$A^T = (V^T D V)^T = V^T D^T V = V^T D V = A$$ as desired.
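The characterization is easy to check numerically. The following sketch (assuming NumPy; the random test matrix is purely illustrative) builds $\Gamma$ from the eigenvectors of a symmetric matrix $A$ and verifies the equivariance condition for all $2^n$ group elements:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random self-adjoint matrix and its spectral decomposition A = V^T D V
M = rng.standard_normal((n, n))
A = 0.5 * (M + M.T)
_, Q = np.linalg.eigh(A)   # columns of Q are eigenvectors, so V = Q^T
V = Q.T

# Gamma = { V^T sigma V : sigma in Sigma }, Sigma the 2^n sign matrices
for signs in itertools.product([1.0, -1.0], repeat=n):
    gamma = V.T @ np.diag(signs) @ V
    assert np.allclose(gamma @ A, A @ gamma)      # gamma A = A gamma
    assert np.allclose(gamma @ gamma, np.eye(n))  # gamma^2 = I
print("A commutes with all 2**n elements of Gamma")
```

Each $\gamma = V^T\sigma V$ is an involutive orthogonal matrix sharing the eigenvectors of $A$, exactly as stated in the remarks below.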
(a) Observe that the implication “$\Longrightarrow$” could also be proved by using the well-known fact that two matrices $A$ and $B$ commute if there is an orthogonal transformation $V$ such that both $V^T A V$ and $V^T B V$ are diagonal.

(b) By construction all the eigenvalues of every $\gamma \in \Gamma$ are $1$ or $-1$. In particular $\gamma^2 = I$ for all $\gamma \in \Gamma$. Moreover, by (a) the matrix $A$ and all $\gamma\in \Gamma$ possess the same set of eigenvectors.

(c) Obviously analogous results can be obtained for Hermitian or normal matrices: Using essentially the same proof as in Proposition \[prop:main\] one can show that a matrix $A\in {\mathbb{C}}^{n,n}$ is normal (i.e. $AA^* = A^* A$) if and only if there is a unitary matrix $W\in {\bf U}(n)$ such that $$\gamma A = A \gamma\quad \forall \gamma \in \Gamma,$$ where the group $\Gamma \subset {\bf U}(n)$ is defined by $$\Gamma = \{ W^* \sigma W : \sigma \in \Sigma \}.$$

On Applications {#sec:impli}
===============

Proposition \[prop:main\] could be used to look at results for symmetric matrices in the light of the equivariance condition (\[eq:equiv\]). For instance a result from [@GSS88] on the genericity of the structure of eigenspaces would imply the well-known fact that generically the eigenspaces of self-adjoint matrices are one-dimensional. (Simply observe that $\Gamma \cong \otimes_{k=1}^n\mathbf{Z}_2 $ possesses only one-dimensional (absolutely) irreducible representations.) A potentially more interesting application may be the analysis of symmetry breaking bifurcations for gradient systems, since in this case the Jacobian would be equivariant according to (\[eq:equiv\]). This could be particularly useful for bifurcation problems where the (symmetric) steady state solution does not depend on the bifurcation parameter.
In fact, some time ago the author himself co-authored an article on “equivariant (and) self-adjoint matrices” [@DeMel94], and it could be interesting to reconsider those results by taking the insight provided by Proposition \[prop:main\] into account. However, within this note let us focus concretely on one implication involving Taylor expansions. In this context the following immediate consequence of Proposition \[prop:main\] strongly indicates that the result could, for instance, be used to develop a novel general approach for the construction of higher order stencils for real valued functions of several variables. Suppose that $f:{\mathbb{R}}^n \to {\mathbb{R}}$ is smooth in a neighborhood of $\bar x\in{\mathbb{R}}^n$. In the following we use Proposition \[prop:main\] to construct a four-point stencil which provides a second order approximation of evaluations of the fourth order derivative at $\bar x$. For convenience we write the Taylor expansion of $f$ at $\bar x$ as $$f(\bar x+h) = f(\bar x) + \nabla f(\bar x)^T h + \frac{1}{2} h^T H(\bar x) h + \sum_{j=3}^\infty g_j(\bar x,h),$$ where $g_j(\bar x,h) = O(\| h\|^j)$, $j=3,4,\ldots$, and $H(\bar x)$ is the Hessian matrix of $f$ at $\bar x$. Denote by $\Gamma(\bar x)$ the group in Proposition \[prop:main\] corresponding to the Hessian matrix $H(\bar x)$. Then for all $\gamma \in\Gamma(\bar x)$ we have $$\label{eq:Taylor1} f(\bar x+\gamma h) -2f(\bar x)+f(\bar x-\gamma h) = h^T H(\bar x) h + 2 g_4(\bar x,\gamma h) + O(\| h\|^6),$$ and therefore for all $\gamma_1, \gamma_2\in\Gamma(\bar x)$ $$\label{eq:Taylor2} \begin{array}{ll} & f(\bar x+\gamma_1 h) + f(\bar x-\gamma_1 h) - f(\bar x+\gamma_2 h) - f(\bar x-\gamma_2 h) = \\ = & 2(g_4(\bar x,\gamma_1 h) - g_4(\bar x,\gamma_2 h)) + O(\| h\|^6). \end{array}$$ In particular, $f(\bar x+\gamma_1 h) + f(\bar x-\gamma_1 h) - f(\bar x+\gamma_2 h) - f(\bar x-\gamma_2 h) = O(\| h\|^4)$.
For $h\in{\mathbb{R}}^n$ and $\gamma_j \in\Gamma(\bar x)$ $(j=1,2)$ we compute, using the Taylor expansion above and the fact that $\Gamma(\bar x) \subset {\bf O}(n)$, $$f(\bar x\pm \gamma_j h) = f(\bar x) \pm \nabla f(\bar x)^T \gamma_j h + \frac{1}{2} h^T H(\bar x) h \pm g_3(\bar x,\gamma_j h) + g_4(\bar x,\gamma_j h) \pm g_5(\bar x,\gamma_j h) + \cdots$$ Therefore $$\begin{aligned} f(\bar x+\gamma_1 h) +f(\bar x-\gamma_1 h) & = & 2 \left( f(\bar x) + \frac{1}{2} h^T H(\bar x) h + g_4(\bar x,\gamma_1 h) + O(\| h\|^6)\right) \\ f(\bar x+\gamma_2 h) +f(\bar x-\gamma_2 h) & = & 2 \left( f(\bar x) + \frac{1}{2} h^T H(\bar x) h + g_4(\bar x,\gamma_2 h) + O(\| h\|^6)\right), \end{aligned}$$ and (\[eq:Taylor1\]) and (\[eq:Taylor2\]) immediately follow. Obviously, if $\gamma_1 = \pm \gamma_2$ then this result is not useful. However, for all other choices of $\gamma_j$ this leads to interesting approximations of the fourth order derivative as long as $h$ is not an eigenvector of $\gamma_j$ ($j=1,2$). Let $f:{\mathbb{R}}^3 \to {\mathbb{R}}$ be defined by $$f(x_1,x_2,x_3) = x_1 x_2 x_3^2 +x_1^2 - 3x_2^2 + x_2 \sin(x_1) - x_2^2 x_3^2.$$ We choose $\bar x = (1,1,1)^T$ and compute $$H(\bar x) = \begin{pmatrix} 2- \sin(1) & 1 + \cos(1) & 2 \\ 1+\cos(1) & -8 & -2 \\ 2 & -2 & 0 \end{pmatrix}.$$ The choice of $$\sigma_1 = I\quad \mbox{and}\quad \sigma_2 = \begin{pmatrix} -1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$ leads to $$\gamma_1 = I\quad \mbox{and}\quad \gamma_2 = \begin{pmatrix} 0.9225 & 0.3723 & 0.1015\\ 0.3723 & -0.7896 & -0.4877 \\ 0.1015 & -0.4877 & 0.8671 \end{pmatrix}.$$ For $h=(0.2,0.05,0.1)^T$ we obtain $$f(\bar x+h) + f(\bar x-h) - f(\bar x+\gamma_2 h) - f(\bar x-\gamma_2 h) \approx 6.40\cdot 10^{-5},$$ and for $h=\frac{1}{10}(0.2,0.05,0.1)^T$ one computes $$f(\bar x+h) + f(\bar x-h) - f(\bar x+\gamma_2 h) - f(\bar x-\gamma_2 h) \approx 6.38\cdot 10^{-9}$$ as expected.
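The worked example can be reproduced numerically. The sketch below (assuming NumPy) rebuilds $\gamma_2$ from the eigendecomposition of $H(\bar x)$; the sign and ordering conventions of the eigensolver could in principle yield a different element of $\Gamma(\bar x)$, but the $O(\|h\|^4)$ behavior of the stencil does not depend on that choice.

```python
import numpy as np

def f(x):
    x1, x2, x3 = x
    return x1*x2*x3**2 + x1**2 - 3*x2**2 + x2*np.sin(x1) - x2**2*x3**2

xb = np.array([1.0, 1.0, 1.0])

# Hessian of f at xb, computed analytically as in the example
H = np.array([[2 - np.sin(1), 1 + np.cos(1),  2.0],
              [1 + np.cos(1), -8.0,          -2.0],
              [2.0,           -2.0,           0.0]])

_, Q = np.linalg.eigh(H)            # V = Q^T diagonalizes H
gamma2 = Q @ np.diag([-1.0, 1.0, 1.0]) @ Q.T   # gamma2 = V^T sigma2 V

def stencil(h):
    """f(x+h)+f(x-h)-f(x+g2 h)-f(x-g2 h) = O(||h||^4)."""
    return (f(xb + h) + f(xb - h)
            - f(xb + gamma2 @ h) - f(xb - gamma2 @ h))

h = np.array([0.2, 0.05, 0.1])
d1, d2 = stencil(h), stencil(h / 10)
print(d1, d2, d1 / d2)   # d1/d2 close to 10**4, confirming O(||h||^4)
```

Shrinking $h$ by a factor of 10 shrinks the stencil value by roughly $10^4$, in agreement with the two quoted evaluations.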
--- author: - 'K. Nilsson' - 'E. Lindfors' - 'L. O. Takalo' - 'R. Reinthal' - 'A. Berdyugin' - 'A. Sillanpää' - 'S. Ciprini' - 'A. Halkola' - 'P. Heinämäki' - 'T. Hovatta' - 'V. Kadenius' - 'P. Nurmi' - 'L. Ostorero' - 'M. Pasanen' - 'R. Rekola' - 'J. Saarinen' - 'J. Sainio' - 'T. Tuominen' - 'C. Villforth' - 'T. Vornanen' - 'B. Zaprudin' bibliography: - 'monitoringv7.bib' date: 'Received; accepted' subtitle: 'I. Data analysis.' title: 'Long-term optical monitoring of TeV emitting Blazars' ---

Introduction
============

Blazars are active galactic nuclei (AGN) with a relativistic jet which is pointing close to our line of sight. The blazar family consists of flat-spectrum radio quasars (FSRQs) and BL Lac objects. Blazars are the most numerous objects in the extragalactic gamma-ray sky. The spectral energy distribution of blazars shows two humps; one in the infra-red to X-ray range and the second in the X-rays to gamma-rays. The first hump is ascribed to synchrotron emission and the second is typically attributed to inverse Compton (IC) emission. The peak frequency $\nu_{peak}$ of the synchrotron hump is commonly used to further divide the BL Lacs into low-, intermediate- and high frequency peaked BL Lacs (LBL, IBL and HBL, respectively), with $\log{\nu_{peak}} < 14$ defining the LBL, $14 < \log{\nu_{peak}} < 15$ the IBL and $\log{\nu_{peak}} > 15$ the HBL classes [@2010ApJ...716...30A]. Blazars show variability in all bands from radio to Very High Energy (VHE) gamma-rays and on time scales ranging from years to only a few minutes. Sometimes there is correlated variability between two bands [e.g @2016MNRAS.456..171R and references therein], but not always. The long-term variability has been most extensively studied in the radio and optical bands, where long time series have been collected during decades.
Blazar light curves are typically characterized by a power-law power spectral density (PSD), lacking clear and persistent periodicities and/or breaks in the spectrum, which would signify upper and lower limits for the variability time scales. The PSD is notoriously difficult to determine reliably due to uneven sampling and instrument noise [@1982ApJ...263..835S; @1989AJ.....97..720H]. In spite of these challenges, there have been several claims of periodicities in both radio and optical light curves of single sources, although other studies have found no periodic changes in a large sample of radio light curves. In recent years such searches have also become feasible in the gamma-ray band and, interestingly, for several sources common periodicities in the optical and gamma-ray bands have been reported. In this paper we present a detailed analysis of optical light curves of 31 blazars extending over 10 years. The data originate from the Tuorla blazar monitoring program, which is introduced in Section 2 along with the sample selection. The observations and reduction processes are explained in Section 3. A detailed analysis of the variability, in particular the intrinsic power spectral density, and a search for periodicities in the light curves are presented in Section 4. The entire flux data set is also published in electronic form for the first time.

Sample
======

The Tuorla Blazar Monitoring Program [^1] [@2008AIPC.1085..705T] is an optical monitoring program that was started in autumn 2002. The monitoring program aims to support the VHE gamma-ray observations of the MAGIC telescopes, and therefore the original sample consisted of 24 BL Lac objects with $\delta>+20^{\circ}$. These targets were predicted to emit VHE gamma-rays and they are observable from Tuorla Observatory over a large portion of the year. The sample has been gradually extended to include also other types of gamma-ray emitting blazars and targets in the southern sky.
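The periodicity searches mentioned above have to cope with uneven sampling, which is why Lomb-Scargle-type estimators are used instead of the plain FFT periodogram. The following self-contained sketch implements a simple least-squares sine-fit periodogram, which is equivalent in spirit; all numbers are synthetic and are not taken from the monitoring data.

```python
import numpy as np

def ls_periodogram(t, y, freqs):
    """Least-squares sine-fit power at each trial frequency, usable on
    unevenly sampled light curves (similar in spirit to Lomb-Scargle)."""
    y = y - y.mean()
    power = []
    for f in freqs:
        # Regress y on sin and cos terms at the trial frequency f
        X = np.column_stack([np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        power.append(1.0 - np.sum((y - X @ coef)**2) / np.sum(y**2))
    return np.array(power)

# Synthetic, unevenly sampled "light curve" with an injected 50-day period
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1000.0, 300))
y = np.sin(2*np.pi*t/50.0) + 0.1*rng.standard_normal(t.size)

freqs = np.linspace(1/500.0, 1/5.0, 2000)
best_period = 1.0 / freqs[np.argmax(ls_periodogram(t, y, freqs))]
print(best_period)  # peak near the injected 50-day period
```

The sampling pattern itself imprints structure on such periodograms, which is one reason why claimed blazar periodicities need careful significance testing.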
Starting from 2004 most of the observations have been performed with the KVA (Kungliga Vetenskapsakademien) telescope on La Palma (see Section 3). The sample discussed here consists of the original sample of 24 blazars along with seven additional well-sampled blazars. The targets are listed in Table 1 together with their most relevant properties. The sample covers all blazar classes, even though, due to the selection criteria, the HBLs are the most numerous sources in the sample. The large majority of the sources have been detected at VHE gamma-ray energies, some after triggers from this monitoring program about a high optical state (e.g. 1ES 1011+496, Mrk 180, ON 325, S5 0716+714). This paper presents photometric data of these 31 blazars from September 2002 to September 2012. Part of these data have been previously presented as light curves in papers reporting results of multiwavelength campaigns of individual blazars (see the complete list in Table 1), in searches for recurrent timescales and periodicities in the optical band and for common periodicities between the optical and gamma-ray bands, as well as in studies looking for correlations between different wavebands. However, only a small portion of the data has been published in numerical form before [@2010MNRAS.402.2087V].

  Target         z       Type      TeV det?   A$_{\rm R}$ \[mag\]   N$_{\rm frms}$   N$_{\rm obs}$   ref.
  -------------- ------- --------- ---------- --------------------- ---------------- --------------- ----------
  1ES 0033+595   -       HBL       y          1.911                 1501             387             1
  1ES 0120+340   0.272   HBL       -          0.125                 1183             300
  RGB 0136+391   -       HBL       y          0.168                 1556             393
  RGB 0214+517   0.049   HBL       -          0.381                 1183             309
  3C 66A         0.444   IBL       y          0.182                 2726             644             2,3
  1ES 0647+250   0.41    HBL       y          0.214                 1134             303
  1ES 0806+524   0.138   HBL       y          0.096                 1188             328             4
  OJ 287         0.306   LBL       y          0.062                 3308             699             5,6,7,8
  1ES 1011+496   0.212   HBL       y          0.027                 1509             426             9,10,11
  1ES 1028+511   0.360   HBL       -          0.027                 1040             273
  Mkn 421        0.031   HBL       y          0.033                 2797             683             12-20
  RGB 1117+202   0.139   -         -          0.043                 780              230
  Mkn 180        0.045   HBL       y          0.029                 1323             379             21
  RGB 1136+676   0.135   HBL       y          0.019                 908              244
  ON 325         0.130   IBL/HBL   y          0.052                 1031             272             22
  1ES 1218+304   0.182   HBL       y          0.045                 941              273
  RGB 1417+257   0.237   HBL       -          0.041                 907              246
  1ES 1426+428   0.129   HBL       y          0.027                 825              219
  1ES 1544+820   -       HBL       -          0.108                 169              46
  Mkn 501        0.034   HBL       y          0.042                 3958             749             23-28
  OT 546         0.055   HBL       y          0.064                 1496             401             29
  1ES 1959+650   0.047   HBL       y          0.384                 2784             734             30,31
  BL Lac         0.069   LBL       y          0.714                 3122             771             32-34
  1ES 2344+514   0.044   HBL       y          0.468                 1584             451             35,36
  S5 0716+714    0.31    LBL       y          0.067                 2789             511             37-41
  ON 231         0.102   IBL       y          0.049                 757              196             42
  3C 279         0.536   FSRQ      y          0.062                 1198             316             43-50
  PG 1424+240    0.604   IBL/HBL   y          0.127                 408              141             51
  PKS 1510-089   0.360   FSRQ      y          0.209                 994              272             52-54
  PG 1553+113    -       HBL       y          0.113                 1610             444             55-60
  PKS 2155-304   0.116   HBL       y          0.047                 1097             190
  -------------- ------- --------- ---------- --------------------- ---------------- --------------- ----------

Observations and data reduction
===============================

The observations were made at two different telescopes using three different CCD cameras, whose details are given in Table \[ccdcameras\]. The Tuorla 1.03 m Dall-Kirkham telescope is located at Tuorla Observatory, Piikkiö, Finland, at an altitude of 53 m above sea level. The focal length of the telescope is 8.45 m, which results in a field of view (FOV) of 10 $\times$ 10 arcmin with the ST-1001E chip. Typical seeing at the telescope is 3-6 arcsec and hence the CCD was binned by 2$\times$2 pixels to obtain the pixel scale in Table \[ccdcameras\].
Depending on target brightness, 3 to 8 exposures of 60 s were obtained through the R-band filter. In addition to the science frames, five bias, dark and dome-flat frames were obtained. The CCD frames were reduced by first subtracting the bias and dark and then dividing by the flat-field.

  Telescope       Camera            Pixel format       Pixel scale \[arcsec/pix.\]   Gain \[e$^-$/ADU\]   Readout noise \[e$^-$\]   Color term $\zeta$   N$_{\rm frms}$
  --------------- ----------------- ------------------ ----------------------------- -------------------- ------------------------- -------------------- ----------------
  Tuorla 1.03 m   SBIG-ST1001E      1024$\times$1024   1.17                          2.3                  17                        -0.05                7941
  KVA 35 cm       SBIG-ST8          1530$\times$1020   0.94                          2.3                  14                         0.11                35268
  KVA 35 cm       Apogee Alta U47   1024$\times$1024   0.68                          1.6                  10                         0.01                4597
  --------------- ----------------- ------------------ ----------------------------- -------------------- ------------------------- -------------------- ----------------

  Target         Comp. star   R-band mag           V - R               Control star   Ref.   r$_{ap}$ (Tuorla / KVA) \[arcsec\]   Host flux \[mJy\]
  -------------- ------------ -------------------- ------------------- -------------- ------ ------------------------------------ --------------------
  1ES 0033+595   D            13.66 $\pm$ 0.03     1.46 $\pm$ 0.04     F              1      5.0 / 5.0                            0.22 $\pm$ 0.03
  1ES 0120+340   C            13.12 $\pm$ 0.03     0.38 $\pm$ 0.05     G              1      4.0 / 4.0                            0.17 $\pm$ 0.01
  RGB 0136+391   B            13.82 $\pm$ 0.02     0.42 $\pm$ 0.04     A              1      7.5 / 5.0                            -
  RGB 0214+517   A            13.85 $\pm$ 0.05     0.51 $\pm$ 0.06     B              1      7.5 / 7.5                            2.83 $\pm$ 0.09
  3C 66A         A            13.38 $\pm$ 0.04     0.22 $\pm$ 0.06     B              2      7.5 / 5.0                            0.08 $\pm$ 0.01
  1ES 0647+250   E            13.03 $\pm$ 0.04     0.59 $\pm$ 0.05     B              1      7.5 / 5.0                            0.033 $\pm$ 0.005
  1ES 0806+524   C2           14.22 $\pm$ 0.04     0.39 $\pm$ 0.07     C4             3      7.5 / 7.5                            0.69 $\pm$ 0.04
  OJ 287         4            13.74 $\pm$ 0.04     0.44 $\pm$ 0.06     10             2      7.5 / 7.5                            0.077 $\pm$ 0.013
  1ES 1011+496   E            14.04 $\pm$ 0.03     0.39 $\pm$ 0.03     B              1      7.5 / 7.5                            0.49 $\pm$ 0.02
  1ES 1028+511   1            12.93 $\pm$ 0.03     0.27 $\pm$ 0.04     5              5      7.5 / 7.5                            0.10 $\pm$ 0.02
  Mkn 421        1            14.04 $\pm$ 0.02     0.32 $\pm$ 0.03     2              5      7.5 / 7.5                            8.1 $\pm$ 0.4
  RGB 1117+202   E            13.56 $\pm$ 0.04     0.42 $\pm$ 0.04     F              1      7.5 / 7.5                            0.66 $\pm$ 0.04
  Mkn 180        1            13.73 $\pm$ 0.02     0.25 $\pm$ 0.03     2              5      5.0 / 5.0                            3.2 $\pm$ 0.2
  RGB 1136+676   D            14.58 $\pm$ 0.04     0.46 $\pm$ 0.05     E              1      7.5 / 7.5                            0.85 $\pm$ 0.04
  ON 325         B            14.59 $\pm$ 0.04     0.37 $\pm$ 0.06     C1             2      7.5 / 7.5                            1.0 $\pm$ 0.1
  1ES 1218+304   B            13.61 $\pm$ 0.01     0.40 $\pm$ 0.02     C              4      7.5 / 7.5                            0.40 $\pm$ 0.02
  RGB 1417+257   A            13.78 $\pm$ 0.04     0.57 $\pm$ 0.06     C2             3      7.5 / 7.5                            0.52 $\pm$ 0.06
  1ES 1426+428   A            13.23 $\pm$ 0.02     0.93 $\pm$ 0.03     B              4      7.5 / 7.5                            0.89 $\pm$ 0.03
  1ES 1544+820   A            14.59 $\pm$ 0.03     0.37 $\pm$ 0.04     B              1      7.5 / 7.5                            0.21 $\pm$ 0.01
  Mkn 501        4            14.96 $\pm$ 0.02     0.34 $\pm$ 0.03     1              5      7.5 / 7.5                            12.0 $\pm$ 0.3
                 6            14.99 $\pm$ 0.04     0.68 $\pm$ 0.06                    5
  OT 546         B            12.81 $\pm$ 0.06     0.33 $\pm$ 0.09     H              2      7.5 / 7.5                            1.25 $\pm$ 0.06
  1ES 1959+650   4            14.08 $\pm$ 0.03     0.45 $\pm$ 0.05     7              5      7.5 / 7.5                            1.70 $\pm$ 0.04
                 6            14.78 $\pm$ 0.03     0.42 $\pm$ 0.05                    5
  BL Lac         C            13.79 $\pm$ 0.05     0.47 $\pm$ 0.08     H              2      7.5 / 7.5                            1.38 $\pm$ 0.03
  1ES 2344+514   C1           12.25 $\pm$ 0.04     0.36 $\pm$ 0.06     C3             3      7.5 / 7.5                            3.71 $\pm$ 0.05
  S5 0716+714    5            13.18 $\pm$ 0.01     0.37 $\pm$ 0.03     6              5      7.5 / 5.0                            0.10 $\pm$ 0.05
  ON 231         D            13.86 $\pm$ 0.04     0.95 $\pm$ 0.06     C1             2      7.5 / 7.5                            0.58 $\pm$ 0.08
  3C 279         5            15.47 $\pm$ 0.04     0.51 $\pm$ 0.03     4              6      7.5 / 7.5                            0.033 $\pm$ 0.0017
  PG 1424+240    C1           13.20 $\pm$ 0.04     0.39 $\pm$ 0.06     C2             2      7.5 / 7.5                            -
  PKS 1510-089   A            14.25 $\pm$ 0.05     0.37 $\pm$ 0.08     B              7      5.0 / 5.0                            -
  PG 1553+113    1            13.2 $\pm$ 0.3       0.5 $\pm$ 0.3       4              8      7.5 / 7.5                            -
  PKS 2155-304   2            11.67 $\pm$ 0.01     0.38 $\pm$ 0.02     3              9      7.5 / 7.5                            1.17 $\pm$ 0.12
  -------------- ------------ -------------------- ------------------- -------------- ------ ------------------------------------ --------------------

The KVA (Kungliga Vetenskapsakademien) telescope is located at the Observatorio del Roque de los Muchachos (ORM) on La Palma, Spain, at 2396 m above sea level. The KVA system consists of two telescopes, a 60 cm telescope on a fork mount and a 35 cm Celestron-14 telescope bolted to the underbelly of the 60 cm telescope. All “KVA” data in this paper were obtained with the latter telescope, remotely operated from Finland. The 3.91 m focal length of the 35 cm telescope gave a FOV of 12$\times$8 arcmin with the ST-8 chip and 11.6$\times$11.6 arcmin with the U47 chip. Typical seeing during the observations was 1.5 - 3.5 arcsec, which required binning of the ST-8 chip by 2$\times$2 pixels. Typical exposure times were 3-8$\times$180 s, depending on object brightness. Calibration and image reduction were similar to the Tuorla data, except that the flat-fields were obtained from the twilight sky.

Photometry
----------

Photometry of the targets was made in differential mode, i.e. by comparing the object brightness to the brightness of calibrated comparison stars near the target. Using multiple comparison stars improves the signal to noise (S/N) of the photometry, but in a long-term project it is not guaranteed that all comparison stars are always within the FOV. Since the tabulated comparison star magnitudes always have errors, the derived zero point of the image depends on the stars chosen to calibrate the image.
This effect is likely to be small, since the above errors are usually small, a few percent, but we nevertheless used only one comparison star, sufficiently bright to obtain good S/N. The observers were then instructed to always include this star within the FOV. Exceptions to this rule are Mkn 501 and 1ES 1959+650, for which only relatively weak calibrated comparison stars are available close to the target. For these targets two comparison stars were used. In addition to the comparison star, each field has a “control star”, whose photometry is performed identically to the target and which is used to identify possible problems during image reduction. Table \[cstars\] lists the comparison and control stars and their properties. Photometry was performed with the semiautomatic [Diffphot]{} software developed at Tuorla Observatory. In short, [Diffphot]{} reduces each image in turn as described above, displays the image on the screen and waits for the user to point at the target. The software then finds the comparison and control stars on the image using an internal database and computes accurate positions of the targets from the center of gravity of the light distribution. Aperture photometry is then performed at these positions. We used aperture radii r$_{ap}$ between 4.0 and 7.5 arcsec, depending on the object brightness (Table \[cstars\]). To facilitate accurate host galaxy subtraction, the aperture was held constant for each target, except when the host galaxy contributed less than 3% to the total flux, in which case we used a smaller aperture for the KVA to take advantage of the better seeing. The chosen aperture sizes correspond roughly to the optimal aperture r$_{ap} \approx 1-1.5$ FWHM [@1989PASP..101..616H], except during the best seeing conditions at the KVA. However, this telescope sometimes suffered from bad tracking, resulting in elongated stars, and the larger-than-optimal aperture size helped to compensate for this.
The sky background was determined from a circular annulus, sufficiently far from the target not to contaminate the sky region with target flux and devoid of any bright background/foreground objects. The sky pixel distribution was first sigma-clipped and the mode of the distribution was computed from the formula $$\mathrm{mode} = 2.5 \times \mathrm{median} - 1.5 \times \mathrm{mean}\ .$$ Using both sigma clipping and the mode for sky estimation improves immunity against contamination of the sky annulus by background/foreground objects. The sky level was subtracted from the pixel values inside the aperture and the net counts $N$ inside the aperture were computed, taking into account that some pixels are only partially inside the aperture. During this and the aperture-centering phase we also checked for and eliminated highly deviant pixels inside the aperture by comparing each pixel value to the median of the six adjacent pixels. This check was inhibited within two pixels of the stellar core so as not to wrongly correct the central pixel when the seeing was good. To calibrate the photometry we computed the scaling factor $c$ from ADUs to flux (Jy s ADU$^{-1}$) for each image. The comparison star magnitude $R_{comp}$ was first transformed into flux $F_{comp}$ via $$\label{fkaava} F_{comp} = F_0\ 10^{-0.4*R_{comp}}$$ with $F_0 = 3080.0$ Jy, and then $c$ was computed from $$\label{scalekaava} c = \frac{F_{comp}\ T_{exp}}{N_{comp}}\ 10^{-0.4*\zeta*(V-R)_{comp}}$$ where $N_{comp}$ are the comparison star net counts in ADUs, $\zeta$ is the color term listed in Table \[ccdcameras\] and $T_{exp}$ is the exposure time.
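The sky estimation described above, sigma clipping followed by the mode formula, can be sketched as follows. The function name, the 3$\sigma$ clipping threshold and the iteration count are our assumptions; the text does not specify them:

```python
import numpy as np

def sky_mode(sky_pixels, nsigma=3.0, max_iter=5):
    """Estimate the sky level: sigma-clip the sky-annulus pixel
    distribution, then return mode = 2.5 * median - 1.5 * mean."""
    pix = np.asarray(sky_pixels, dtype=float)
    for _ in range(max_iter):
        med, std = np.median(pix), np.std(pix)
        keep = np.abs(pix - med) < nsigma * std
        if keep.all():
            break
        pix = pix[keep]
    return 2.5 * np.median(pix) - 1.5 * np.mean(pix)
```

The combination is robust: a few bright contaminating pixels in the annulus are removed by the clipping, and any residual skew of the distribution is suppressed by using the mode rather than the mean.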
The R-band fluxes of the target and the control star, $F$ and $F_{ctrl}$ respectively, were then computed from $$\label{fluxkaava2} F = \frac{c\ N}{T_{exp}}\ 10^{0.4*\zeta*(V-R)}$$ and $$\label{fluxkaava3} F_{ctrl} = \frac{c\ N_{ctrl}}{T_{exp}}\ 10^{0.4*\zeta*(V-R)_{ctrl}}\ .$$ For the BL Lac nuclei we used $V-R = 0.5$, which corresponds to a power-law index $\alpha = 1.78$ ($F_\nu \propto \nu^{-\alpha}$). Finally, the data were averaged into one-hour bins to improve the signal-to-noise ratio (formulae given below). These averaged fluxes $F_a$ were then converted into R-band magnitudes by inverting Eq. \[fkaava\].

Error analysis
--------------

The averaged fluxes $F_a$ derived above are affected by (i) statistical noise arising from photon, dark and readout noise and image processing, and (ii) systematic errors arising from assumptions about target and detector properties. The latter produce a systematic shift of the whole light curve but do not change the flux differences between the data points, and thus they are not included in the error bars. Below we discuss these errors in the order they appear in the error analysis. Statistical variations of the fluxes in Eqs. \[fluxkaava2\] and \[fluxkaava3\] arise from the noise in the observed counts $N$ and from the statistical noise in the scale factor $c$, the latter of which originates from the statistical noise in $N_{comp}$ via Eq. \[scalekaava\]. The statistical errors of $c$, $F$ and $F_{ctrl}$ were determined by first computing the statistical errors of the corresponding observed counts $N_{comp}$, $N$ and $N_{ctrl}$ from $$\label{staterrkaava} \sigma_{N} = \frac{\sqrt{G N + G^2 n_{ap}\ \sigma_{sky}^2(1+\frac{n_{ap}}{n_{sky}})}}{G}\ ,$$ where $G$ is the gain factor ($e^-$/ADU), $\sigma_{sky}$ is the standard deviation of the sky pixels, $n_{ap}$ is the number of pixels in the aperture and $n_{sky}$ is the number of pixels in the sky annulus.
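The calibration chain of Eqs. \[fkaava\]-\[fluxkaava3\] is a straightforward sequence of transcriptions; a minimal sketch (function names are ours, fluxes in Jy):

```python
F0 = 3080.0  # Jy, the R-band zero point used in the text

def mag_to_flux(R):
    # Eq. (fkaava): F = F0 * 10^(-0.4 R)
    return F0 * 10 ** (-0.4 * R)

def scale_factor(R_comp, VR_comp, N_comp, T_exp, zeta):
    # Eq. (scalekaava): per-image ADU -> Jy conversion from the comparison star
    F_comp = mag_to_flux(R_comp)
    return F_comp * T_exp / N_comp * 10 ** (-0.4 * zeta * VR_comp)

def target_flux(c, N, T_exp, zeta, VR=0.5):
    # Eq. (fluxkaava2); V-R = 0.5 is the value assumed for BL Lac nuclei
    return c * N / T_exp * 10 ** (0.4 * zeta * VR)
```

Note that for $\zeta = 0$ the exposure time cancels and the target flux reduces to $F_{comp}\,N/N_{comp}$, i.e. plain differential photometry against the comparison star.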
Note that $\sigma_{sky}$ is measured empirically from the image, so it includes the photon noise of the sky, dark noise, readout noise and any residual noise from image processing. The statistical errors of the target fluxes $F$ are then obtained from $$\label{ferr} \sigma_{F} = F \sqrt{ \left(\frac{\sigma_{N_{comp}}}{N_{comp}}\right)^2 + \left(\frac{\sigma_N}{N}\right)^2 }\ .$$ These errors were then used to compute the weighted average of the one-hour bin, $F_{a}$, and its error $\sigma_{a}$ from $$\label{weightavg} F_{a} = \sum_i \frac{F_i}{\sigma_F^2(i)} \Biggm/ \sum_i \frac{1}{\sigma_F^2(i)}$$ and $$\label{weighterr} \sigma_{a} = \sqrt { \frac{1}{ \sum_i 1 / \sigma_F^2(i)}}\ .$$ Systematic flux errors arise in several ways from the color correction term $10^{0.4*\zeta*(V-R)}$ in Eqs. \[fluxkaava2\] and \[fluxkaava3\]. Firstly, since $\zeta$ varies from one instrument to another, small offsets between the three instruments are expected. We checked this by extracting the light curves of 31 control stars and measuring the systematic offsets between data obtained by the different cameras. We found offsets between -0.051 and 0.050 mag, with 67% of the offsets between -0.011 and 0.019 mag. The target and control star data obtained by the KVA were shifted to the Tuorla data using these offsets, thereby suppressing the systematic differences between the cameras to a level undetectable in our data. Secondly, our assumption of the same color $V - R = 0.5$ mag for all the targets is clearly too simple; in any case, a color correction derived from stars is not an accurate model for blazars, whose spectral energy distributions (SEDs) differ from those of stars. Thirdly, blazars display color variations, e.g. a “bluer when brighter” type of behavior [e.g. @2011PASJ...63..639I], which produces small brightness-dependent errors in our data.
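Before turning to the systematic terms, the statistical part of the error analysis (Eqs. \[staterrkaava\]-\[weighterr\]) can be sketched as follows; function names are ours:

```python
import numpy as np

def count_error(N, G, sigma_sky, n_ap, n_sky):
    # Eq. (staterrkaava): photon + sky + readout noise, in ADU
    return np.sqrt(G * N + G**2 * n_ap * sigma_sky**2 * (1.0 + n_ap / n_sky)) / G

def flux_error(F, N, sigma_N, N_comp, sigma_Ncomp):
    # Eq. (ferr): combine target and comparison-star count errors in quadrature
    return F * np.sqrt((sigma_Ncomp / N_comp) ** 2 + (sigma_N / N) ** 2)

def weighted_bin(F, sigma):
    # Eqs. (weightavg)-(weighterr): inverse-variance weighted one-hour bin
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    F_a = np.sum(w * F) / np.sum(w)
    sigma_a = np.sqrt(1.0 / np.sum(w))
    return F_a, sigma_a
```

For equal per-point errors the weighted mean reduces to the plain mean and the bin error shrinks as $1/\sqrt{n}$, as expected.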
Given the $\zeta$ values in Table \[ccdcameras\] and the range of $(V-R)$ color variations ($\sim$ 0.1 mag), this error is negligible compared to the error bars. We also checked whether the error bars $\sigma_a$ obtained by the above procedure could be underestimated. We tested the control star light curves for variability using the chi squared test with the null hypothesis that the stars are intrinsically non-variable. The chi squared statistic was computed from the formula $$\label{chikaava} \chi^2 = \sum_{i=1}^N \frac{(\langle F \rangle - F_{a}(i))^2}{\sigma_a^2(i)}\ ,$$ where $\langle F \rangle$ is the average flux of the light curve. We also computed the probability $p$ that the null hypothesis can be rejected and adopted the limit $p < 0.05$ for a target to be considered variable.

![\[sigmaplot\] The dependence of the additional error term $\sigma_s$ on the average flux level. The solid line shows the relationship in Eq. \[extrakaava\], i.e. the relationship that makes 95% of the control stars non-variable. This relationship is applied to our data. ](sigmaplot.pdf){width="8cm"}

Applying this procedure to the control stars, we found $p < 0.01$ for every control star. Rather than classifying all control stars as variable, we assumed that the error bars derived from Eqs. \[staterrkaava\] - \[weighterr\] were too small. We thus added in quadrature an additional error term $\sigma_s$ to the error bars $\sigma_a$ and determined for each star the smallest $\sigma_s$ which made the star non-variable at the 5% level. Plotting the smallest $\sigma_s$ against the average flux $\langle F \rangle$ (Fig. \[sigmaplot\]), we found $\sigma_s$ to increase linearly with $\langle F \rangle$, with a slope of 0.0078 $\pm$ 0.0014 and an intercept of (9 $\pm$ 4) $\mu$Jy. The linear dependence indicates that $\sigma_s$ is roughly a constant fraction of the total flux, leading us to attribute this behavior to flat-fielding errors, which are multiplicative in nature.
The intercept is barely significant, but we nevertheless included it in our noise model, since such a noise floor is expected, and without this term the noise of faint targets would be systematically underestimated. Since the relation above is an average dependence, adding $\sigma_s$ from this relation makes only $\sim$ 50% of the control stars non-variable. To be consistent with the 5% variability limit we thus scaled this relation up until only 2 of the control stars (6%) remained variable, resulting in $$\label{extrakaava} \sigma_s = 13\,\mu {\rm Jy} + 0.011 \times F_a\ .$$ The final error bars $\sigma$ for the binned averages $F_a$ are then obtained from $$\label{finalerr} \sigma = \sqrt{ \sigma_a^2 + \sigma_s^2 }\ .$$ A small random error remains in the light curves of those blazars where the host galaxy component is relatively strong. Variable seeing causes different fractions of host galaxy and comparison star light to be included inside the aperture, due to the difference in the surface brightness profiles [@1991AJ....101.1196C; @2000AJ....119.1534C]. However, for most of our targets this effect is very small. For instance, Mkn 501 has one of the strongest host galaxies in our sample, and the effect of the FWHM changing from 2 to 5 arcsec is only $\sim$ 0.02 mag. Targets with a nearby companion galaxy or a foreground star are most affected by variable seeing. These targets are discussed in more detail in Section \[analyysi\]. In the Tables in Appendix \[fluxtables\] the errors have been converted into magnitude errors $\sigma_m$ via $$\label{magekaava} \sigma_{m} = \frac{2.5 \log{(F_a+\sigma)} - 2.5 \log{(F_a-\sigma)}}{2}\ ,$$ i.e. the asymmetric magnitude errors have been made symmetric by taking the average of the upward and downward magnitude errors.
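The final error bars and the flux-to-magnitude error conversion, together with its inverse given next in the text, can be sketched as follows (function names are ours, fluxes in Jy):

```python
import math

def final_error(F_a, sigma_a):
    # Eq. (finalerr) with the empirical floor of Eq. (extrakaava)
    sigma_s = 13e-6 + 0.011 * F_a
    return math.sqrt(sigma_a ** 2 + sigma_s ** 2)

def flux_err_to_mag_err(F_a, sigma):
    # Eq. (magekaava): symmetrized magnitude error
    return (2.5 * math.log10(F_a + sigma) - 2.5 * math.log10(F_a - sigma)) / 2.0

def mag_err_to_flux_err(F_a, sigma_m):
    # Eqs. (palautus1)-(palautus2): recover the flux error
    k = 10 ** (sigma_m / (0.5 * 2.5))
    return (k - 1.0) / (k + 1.0) * F_a
```

The two conversions are exact inverses of each other: since $k = (F_a+\sigma)/(F_a-\sigma)$, the ratio $(k-1)/(k+1)$ equals $\sigma/F_a$, so the round trip returns the original flux error.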
The flux errors $\sigma$ can be recovered from the magnitude errors $\sigma_m$ by defining $$\label{palautus1} k=10^{\sigma_m/(0.5*2.5)}$$ and using $$\label{palautus2} \sigma = \frac{k-1}{k+1}\ F_a\ .$$ To summarize our procedure: we first obtain the counts of the target, comparison star and control star, $N$, $N_{comp}$ and $N_{ctrl}$, respectively, via aperture photometry. Then we determine $c$ for each CCD frame from Eq. \[scalekaava\], the target and control star fluxes from Eqs. \[fluxkaava2\] and \[fluxkaava3\], and their errors from Eqs. \[staterrkaava\] and \[ferr\]. We compute one-hour averages using Eq. \[weightavg\] and their errors from Eqs. \[weighterr\], \[extrakaava\] and \[finalerr\]. Finally we convert the fluxes to magnitudes via Eqs. \[fkaava\] and \[magekaava\].

Analysis methods\[analyysi\]
============================

As a first step in the analysis we subtracted the host galaxy contribution from the observed fluxes, corrected the light curves for galactic extinction and applied the K-correction. As mentioned above, the presence of a host galaxy makes the fluxes depend on both the aperture and the seeing. By using a constant aperture per target we have eliminated the aperture dependence, but an additional step was needed to account for the seeing effect. The host galaxy fluxes for different apertures and seeing conditions for the topmost 24 targets in Table \[cstars\] are tabulated in . That work used observed, high-resolution (FWHM 0.5-1.0 arcsec) R-band images of our targets, convolved to a range of seeing values and measured with different aperture radii. We extracted from the tables in the host galaxy fluxes for each target using the corresponding measurement aperture and a seeing value of 2.0 arcsec for the KVA data and 5.0 arcsec for the Tuorla data. These values represent average seeing conditions at the two sites.
Using different seeing values for the KVA and Tuorla data effectively reduces the shift between the two data sets, especially for 1ES 0120+340, Mkn 180 and 1ES 1544+820, all of which have a relatively strong nearby object leaking light into the measurement aperture. These targets are also the ones most strongly affected by varying seeing conditions, which increase their apparent variability. For the 7 targets not included in we used the analytical formulae in [@2005PASA...22..118G] and literature data to integrate the host galaxy light inside the aperture. These formulae do not take into account the smoothing by seeing, whose effect on the host galaxy fluxes is complicated due to the differential mode used. We thus applied the analytical formulae to the topmost 24 targets in Table \[cstars\] and checked the results against the more rigorously obtained values given in . This comparison indicated that the analytical expression overestimates the host galaxy fluxes by only 3%. We thus divided the analytical host galaxy fluxes by 1.03 to be consistent with the other targets. The galactic extinction was corrected by extracting the R-band extinction value $A_R$ from the NED[^2] and applying the correction. These values are based on the results in [@1998ApJ...500..525S]. Finally, the light curves were corrected for the cosmological expansion by dividing the time scales by $1+z$ and applying the K-correction by multiplying the fluxes by $(1+z)^{3+\alpha}$ with $\alpha = 1.1$ $(F_{\nu} \propto \nu^{-\alpha})$. The spectral slope chosen here corresponds to the mode of the $\alpha$ distribution of HBL in . The LBL have generally steeper spectra ($\alpha_{\rm mode}$ $\sim$ 1.5), so the transformed fluxes of LBL are likely to be underestimated. We note that this transform does not correct the light curves for the beaming effect caused by bulk relativistic motion in the jet.
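The rest-frame transformation described above is compact enough to state as code; the function name is ours, and the default $\alpha = 1.1$ is the HBL value adopted in the text:

```python
def rest_frame(jd, flux, z, alpha=1.1):
    """Transform observed epochs and fluxes to the source rest frame:
    time scales shrink by (1+z), fluxes are multiplied by (1+z)^(3+alpha)."""
    jd_rest = [t / (1.0 + z) for t in jd]
    flux_rest = [f * (1.0 + z) ** (3.0 + alpha) for f in flux]
    return jd_rest, flux_rest
```

At $z = 0$ the transformation is the identity, and for the LBL in the sample a steeper $\alpha$ would give a larger flux factor, which is why their transformed fluxes are likely underestimated.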
Variability strength
--------------------

As a general indicator of how variable our targets are, we use the chi squared obtained by fitting a constant flux model to the data (Eq. \[chikaava\]). This also provides us with the significance of the variations. Only significantly variable targets are submitted to further tests. As discussed above, the error bars include a noise term scaled in such a way that the light curves of the control stars are consistent with a non-variable target.

Synchrotron peak frequencies
----------------------------

In order to determine the peak frequency $\nu_{peak}$ of the synchrotron component, we extracted the archival broadband flux data for all 31 targets from the Roma-BZCAT using the SED builder at the ASI Science Data Center [^3]. In cases where there were few data points in the optical, we augmented the data with our host-galaxy-subtracted R-band monitoring data. We fit simultaneously two log-parabolic spectra, one for the synchrotron hump and another for the inverse Compton (IC) hump, to the broadband spectral energy distribution (SED) of the targets, including only data with $\log \nu / ({\rm Hz}) > 8.5$. Since the archival data are non-simultaneous and $\nu_{peak}$ is known to change with the activity state in blazars [e.g. @2009ApJ...705.1624A], we can expect the fitted $\nu_{peak}$ to depend on the frequencies covered and on the number of observing epochs. To roughly estimate how much this could affect $\nu_{peak}$ we binned the data starting from $\log \nu / ({\rm Hz}) = 8.5$. The first bin had a width of 0.25 in log space, followed by bins increasing by a factor of two in width. We computed the mean flux in each bin and assigned an error bar equal to the standard error of the mean in each bin. The two humps require 8 parameters, two of which, the pivot energies, were held constant, while the remaining 6 were left free.
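A log-parabola can be parametrized directly by its peak, which makes reading off $\nu_{peak}$ trivial. The sketch below uses this peak parametrization, which is an assumption of ours; the actual fit uses a pivot-energy form, but the two are equivalent reparametrizations of the same curvature model:

```python
def log_parabola(log_nu, log_nu_p, log_F_p, b):
    """Log-parabola in log nu - log flux space, parametrized by its
    peak location (log_nu_p), peak flux (log_F_p) and curvature b > 0."""
    return log_F_p - b * (log_nu - log_nu_p) ** 2

def two_hump_model(log_nu, sync_pars, ic_pars):
    # Sum of the synchrotron and inverse-Compton humps, in linear flux
    return 10 ** log_parabola(log_nu, *sync_pars) + 10 ** log_parabola(log_nu, *ic_pars)
```

With this parametrization each MCMC sample carries $\log \nu_{peak}$ explicitly, so the posterior distribution of the peak frequency is obtained directly from the chain.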
The fit was made using a Bayesian approach, sampling the posterior distribution of the six free parameters with a Markov Chain Monte Carlo (MCMC) ensemble sampler with 30 walkers. At each iteration $i$, the synchrotron peak frequency $\nu_{\rm peak}^i$ was computed from the current parameters. The distribution of $\nu_{\rm peak}^i$ was then used to determine $\nu_{\rm peak}$ and its uncertainty via a Gaussian fit to this distribution. The values of $\nu_{peak}$ are tabulated in Table \[betatulos\] and all the SEDs, together with the best-fitting curves, are shown in the Appendix (Figure \[sed1\]). The radio part is clearly poorly fitted, but this does not seem to introduce a large shift in the fitted synchrotron component with respect to the data. However, it may add a small systematic error not accounted for by our error estimate. In some cases the IC peak fit can be considered questionable, but since the contribution of the IC peak is negligible over most of the synchrotron spectrum, no large errors in $\nu_{peak}$ are expected.

\[slopesection\]PSD Power-law slope
-----------------------------------

Next we proceeded to estimate the slope of the intrinsic power spectral density (PSD) $P(f)$ of the targets under the assumption that the PSD has a power-law form, i.e. $P(f) \propto f^{\beta}$, where $f$ is the temporal frequency in units of day$^{-1}$ and $\beta$ is the power-law slope. The PSD is equal to the squared amplitude of the Fourier transform of the underlying time series. In practice we can only produce an estimate $p(f)$ of $P(f)$ by computing the discrete Fourier transform (DFT) of the observed time series. Inferring $P(f)$ from $p(f)$ is notoriously difficult due to instrumental noise and sampling effects [see e.g. @2002MNRAS.332..231U]. The observed Fourier transform is a convolution of the true underlying Fourier transform and the window function $W(x)$.
The latter can be a very complicated function of $f$, resulting in a distorted PSD $p(f)$. Furthermore, due to the limited length of the time series and the discrete sampling, the PSD can be estimated only within a limited window between $f_{min}$ and $f_{max}$. If the true PSD contains significant power outside this window, the limited data length and sampling cause this power to leak into the window, further distorting $p(f)$. Especially in the case of a power-law PSD, power from frequencies below $f_{min}$, where the PSD is strongest, leaks into the frequency window (the so-called “red noise leak”). Many different approaches have been developed over the years to overcome the problems associated with time series dominated by power-law noise [e.g. @2010MNRAS.404..931E; @2014MNRAS.445..437M; @2016MNRAS.461.3145V]. The most recent methods use a “forward casting” approach: starting from a model $P(f)$, a large number of time series are generated with the same sampling and noise properties as the observed data. The simulated sets are then used to derive an estimate of the statistical properties of $P(f)$, and the observed data are tested against these distributions. By varying the model parameters, the best-fitting parameters can then be found using a suitable statistic. The distortions of $P(f)$ are imprinted into the probability distributions and are thus naturally taken into account. The sampling patterns of our light curves are highly irregular and contain large gaps due to the targets being close to the Sun. We therefore rejected any method relying on binning or interpolating in the time domain. We estimated $P(f)$ using the multiple fractions variance function (MFVF) presented in . The method studies the variance of the time series as a function of the time window length.
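The MFVF itself, described step by step in the next paragraph, amounts to a recursive halving of the time series; a minimal sketch follows. Splitting fragments at the midpoint in time and the function name are our assumptions, while the 10-point minimum per fragment follows the description in the text:

```python
import numpy as np

def mfvf(t, x, min_points=10):
    """Multiple fractions variance function: recursively halve the time
    series and record (1/fragment_length, fragment_variance) pairs."""
    out = []

    def recurse(tt, xx):
        if len(xx) < min_points:
            return  # stop when a fragment has fewer than 10 points
        out.append((1.0 / (tt[-1] - tt[0]), np.var(xx)))
        mid = tt[0] + (tt[-1] - tt[0]) / 2.0  # split at the time midpoint
        left = tt <= mid
        recurse(tt[left], xx[left])
        recurse(tt[~left], xx[~left])

    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    recurse(t, x)
    return np.array(out)
```

Each returned pair is a pseudo-frequency $1/\Delta t_i$ and the corresponding variance, which can then be binned and compared against simulated distributions in the same way as a periodogram.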
The algorithm works as follows: first, it computes the variance $\sigma_0^2$ of the whole time series and the corresponding “frequency” $1/\Delta t_0$, where $\Delta t_0$ is the length of the data train. Next, it divides the time series into two “fragments” at the middle and computes the variances $\sigma_1^2$ and $\sigma_2^2$ of the two subsets, together with their corresponding “frequencies”. This halving is repeated until there are fewer than 10 data points in a fragment. The process results in a set of variances $\sigma_i^2$ over a number of frequencies $f_i = 1/\Delta t_i$, which can be analyzed with the same tools as Fourier spectra. Our procedure to estimate the PSD slope $\beta$ is thus the following:

1\. Let $\beta$ vary from -2.8 to -1.0 with step 0.1. At each $\beta$ repeat steps 2–8:

2\. Generate 5000 evenly sampled light curves, each $\sim$ 100 times longer than the observed curve and sampled 10 times per day, by inverse Fourier transform from the assumed model PSD $$\label{psdeq} P(f) \propto f^{\beta}$$ (Fig. \[likekuva\], upper left). The dense sampling ensures that the high frequencies of the power-law noise are properly represented in the data, and the long simulation length incorporates the red noise leak into the simulation. In our case the number of data points was $2^{22} = 4~194~304$. The time series are generated using the prescription of . Note that our model includes no flattening of the spectrum at low frequencies and the probability density function (PDF) of the time series is assumed to be Gaussian. Furthermore, our model does not explicitly include a white noise component. These points are discussed in more detail below.

![image](raw.pdf){width="9.5cm"} ![image](likeplot.pdf){width="9.8cm"}

3\. Resample the simulated light curves onto the observing epochs (Fig. \[likekuva\], lower left).

4\.
Scale the light curve to have the same variance as the observed data and add to each point a Gaussian random number with $\sigma$ equal to the observational error of that point, to simulate observational errors. The observed variance cannot be used directly to scale the simulated curve, because it contains instrumental noise, which increases the variance. We use the normalized excess variance [NXV; @1997ApJ...476...70N] to estimate the intrinsic variance $\sigma_I^2$ via the equation $$\sigma_I^2 = \frac{1}{N} \sum_{k=1}^N \left[ (x(k) - \overline{x})^2 - \sigma_k^2 \right]\ ,$$ where $\overline{x}$ is the average of the data series and $\sigma_k$ is the error of the $k$th data point. We then scale the simulated and resampled curve to have a variance equal to $\sigma_I^2$ and add Gaussian random noise to each data point.

5\. Compute the MFVF of the simulated curves.

6\. Bin the MFVF data into frequency bins $f_i$ with roughly a factor-of-two increase in frequency per bin.

7\. At each frequency bin $f_i$, estimate the probability density function (PDF) $p(f_i)$ of the MFVF variance from the 5000 simulated values using Gaussian kernel density estimation.

8\. Compute the log likelihood of $\beta$ from $$\ln{p} = \sum_{i=1}^{N_f} \ln{p(f_i)}\ ,$$ where $p(f_i)$ is the value of the PDF at $f_i$ and the summation is over all $N_f$ frequency bins. The MFVF uses variances in time windows of various lengths, so each point in the MFVF “spectrum” is distributed like chi squared $\chi^2_{n-1}$, where $n$ is the number of points in each time window. However, due to the possible effects of uneven sampling and the power-law nature of the PSD, we do not use an analytical formula for $p(f_i)$. As explained in step 7, $p(f_i)$ was derived from the simulated spectra using Gaussian kernel smoothing of the simulated points.
The resulting $p(f_i)$ distributions do visually correspond to a chi squared distribution with the appropriate number of degrees of freedom, giving us further confidence that the simulations produce correct results.

9\. After scanning through the whole range in $\beta$, find the $\beta$ corresponding to the maximum likelihood. The maximum was found by fitting a 3rd degree polynomial to the 7 points straddling the highest likelihood found, and taking the maximum of this polynomial. Figure \[likekuva\] (right) shows a typical example of the likelihood curve and the fit.

We tested through Monte Carlo simulations the capability of the MFVF to recover the correct power-law slope $\beta$. We generated 200 light curves with $\beta_{in}$ between -1.0 and -2.3 and ran the MFVF analysis on each of them. For the temporal sampling and instrumental noise we used the light curve of 3C 66A with 644 data points. The results are summarized in Fig. \[betatest\]. Two things are readily apparent from this figure: a) the capability to recover the correct power-law slope becomes increasingly worse as the input slope steepens, and b) there is a small bias towards underestimating the slope, which is statistically significant in some cases but nevertheless at least a factor of $\sim$ 2 smaller than the internal scatter. We note that the MFVF method is applied here in its simplest form, i.e. the results are computed directly from the observed points without binning or interpolating the data or applying any filtering technique. The performance of the MFVF could probably be improved for steep power-law spectra with suitable filtering, but this is beyond the scope of this paper. In any case, all derived PSD slopes are $>-1.9$, indicating that the most troublesome $\beta$ range is mostly avoided in our study. We also note that although our PSD model (Eq. \[psdeq\]) does not specify a white noise component, it is taken into account in step 4, where we add Gaussian noise to the simulated data points.
When the simulated light curves are then transformed by the MFVF, this white noise is imprinted into the probability density distribution at each frequency.

![\[betatest\] Distributions of power-law slopes $\beta_{out}$ for three different input slopes $\beta_{in}$: -1.0 ([*left*]{}), -1.5 ([*middle*]{}) and -2.3 ([*right*]{}), using the MFVF function. The rms scatters of the distributions are also indicated. ](testsim.pdf){width="9cm"}

Errors on the derived $\beta$ values were estimated by Monte Carlo simulations of artificial light curves, generated in the same way as in steps 2–4 above. We generated 100 such curves, computed their PSDs and MFVF data, ran the likelihood analysis for each of the 100 curves and recorded the rms scatter of the obtained $\beta$ values.

Search for periodicities
------------------------

The difficulty of reliably identifying a periodic signal in a red noise background has been discussed in detail e.g. by . We estimated the PSD by computing the periodogram, i.e. the squared amplitude of the discrete Fourier transform of the light curve, in the case of uneven sampling. Before computing the periodogram, the data were binned into bins of 3.0 days in order to avoid dependencies between different frequencies. As in , we denote the true periodogram at frequencies $f_j = 1/\Delta t_j$ with $\mathcal{P}(f_j)$, the observed periodogram with $I(f_j)$ and the true probability density function (PDF) of $\mathcal{P}(f_j)$ with $p(f_j)$. We created 35 000 simulated light curves per target, again with similar mean, variance and sampling as the observed data and with the power-law slope derived in the previous step (Sect. \[slopesection\]). We then computed the periodogram $I(f_j)$ for each simulation and an estimate of the PDF $p(f_j)$, denoted here $\hat{p}(f_j)$, from the ensemble of 35 000 points at each frequency $f_j$ via Gaussian kernel estimation. The high number of simulations was needed to sample the high end, $p(f_j) > 0.99$, properly.
The probability that the power $x$ at a single frequency $f_j$ exceeds the observed value $I(f_j)$ was computed from $$P = Pr \left\{ x > I(f_j) \right\} = \int_{I(f_j)}^{\infty} \hat{p}(f_j)\,d x\ .$$ Since a possible periodic signal lies on top of a power-law background, it does not necessarily appear as the highest peak in the PSD. For this reason we chose the frequency with the highest significance (lowest $P = P_{\rm min}$) as a candidate for a periodic signal and computed the probability $P_N$ of finding such a peak in the absence of a periodic signal, when $N$ frequencies are examined, from $$P_N = 1 - (1 - P_{\rm min})^N\ .$$ Finally, we set $P_N < 5$% as the limit for a significant detection. In a sample of 31 targets we would then expect $\sim$ 2 targets to show significant periodicity by chance alone.

Results\[resu\]
===============

Here we briefly list the main results and discuss them further in the next section. Table \[esimtaulu\] gives a sample of the photometric tables, available for all 31 targets through Vizier[^4]. A conversion from magnitudes to fluxes can be made through Eqs. \[fkaava\], \[palautus1\], and \[palautus2\]. Note that the presented magnitudes have not been corrected for the galactic extinction or the host galaxy component.

Target   JD              R-mag    err
-------- --------------- -------- -------
3C 66A   2452528.40571   14.311   0.015
3C 66A   2452529.43809   14.392   0.015
3C 66A   2452550.38235   14.872   0.017
3C 66A   2452556.31446   14.919   0.018
3C 66A   2452567.38406   14.881   0.019
3C 66A   2452577.38589   14.821   0.017
3C 66A   2452590.41486   14.711   0.017
3C 66A   2452613.45107   15.008   0.019
3C 66A   2452613.51023   14.998   0.018
3C 66A   2452615.45196   15.087   0.017
...      ...             ...      ...

: \[esimtaulu\]Sample of the light curve data available electronically at Vizier. The target is 3C 66A. Only the first 10 lines of the table are shown.
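For reference, the trial-factor correction used in the periodicity search above reduces to a single expression (the function name is ours):

```python
def trial_corrected_prob(P_min, N):
    """Probability of finding at least one peak as significant as P_min
    by chance when N independent frequencies are examined."""
    return 1.0 - (1.0 - P_min) ** N
```

For small $P_{\rm min}$ this is approximately $N P_{\rm min}$, so examining many frequencies quickly dilutes the significance of any single peak.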
Figures \[ekavalo\]-\[vikavalo\], available only electronically, show on the left the light curves after subtracting the host galaxy and correcting for galactic extinction. The next panel shows the MFVF spectrum and the rightmost panel the periodogram. Figures \[sed1\]-\[vikased\] show the SEDs and their corresponding fits. Table \[betatulos\] summarizes the main results of our analysis. We show the reduced $\chi^2$ obtained by fitting a constant flux model to the target light curve (Col. 2) and the synchrotron peak frequency $\nu_{peak}$ from our fits (Col. 3). The BL Lac subclass division (Col. 4; LBL/IBL/HBL) in Table \[bllist\] is based on the value in Col. (3). The PSD slope $\beta$ is shown in Col. (5), and the period with the highest significance (Col. 6) is given together with its probability $P_N$ (Col. 7).

![\[nuubeta\] Best-fitting PSD slope against synchrotron peak frequency. Filled symbols are BL Lacs, open symbols FSRQs. ](nuubeta.pdf){width="9cm"}

Using the chi squared test, we find that the null hypothesis that the target flux does not vary with time can be rejected for all of our targets with $p < 0.0001$. As discussed above, the control stars are by design non-variable by the same test. The 30 targets we analyzed therefore exhibit significant variability, so we apply our variability analysis to all of them except 1ES 1544+820, which has a significantly lower number of data points than the other targets. In Fig. \[nuubeta\] we plot the power-law slope $\beta$ vs. $\nu_{peak}$. A weak correlation seems to be present, so we tested its significance with a chi squared test under the null hypothesis that the $\beta$ values are drawn from a distribution $\beta = \beta_0$. PKS 1510-089 was excluded from this analysis, since its light curve is dominated by a single huge flare and our assumption of a power-law PSD with a Gaussian PDF is clearly not valid. Fitting a constant $\beta$ to the data we obtain $\beta_{avg} = -1.42$, which we use as a surrogate for the population $\beta_0$.
Applying the chi squared test yields $\chi^2_{red} = 1.36$, corresponding to a probability $p = 0.098$ of rejecting the null hypothesis of a constant $\beta$. Thus we do not find any significant deviation from a single PSD slope for our sample.

Our periodicity search finds a significant PSD peak in one target, Mkn 421, with a rest-frame period of 477 days. Finding one periodicity in 31 targets is consistent with the expected false alarm rate. Our result is thus consistent with no significant periodicities in any of our targets, but see the discussion below on Mkn 421.

  -------------- ---------- -------------------------------- --------- ------------------ ---------- -------
  (1)            (2)        (3)                              (4)       (5)                (6)        (7)
  Target         $\chi^2$   $\log[{\nu_{peak}({\rm Hz})]}$   Class     $\beta$            $f_P(d)$   $P_N$
  1ES 0033+595   5.9        18.17 $\pm$ 0.14                 HBL       -1.34 $\pm$ 0.15   7          48.3
  1ES 0120+340   2.4        17.66 $\pm$ 0.13                 HBL       -1.46 $\pm$ 0.30   61         44.0
  RGB 0136+391   60.5       16.00 $\pm$ 0.30                 HBL       -1.57 $\pm$ 0.15   10         6.8
  RGB 0214+517   2.4        16.08 $\pm$ 0.07                 HBL       -1.27 $\pm$ 0.23   130        99.1
  3C 66A         899.0      14.15 $\pm$ 0.09                 IBL       -1.40 $\pm$ 0.11   15         77.5
  1ES 0647+250   80.8       16.41 $\pm$ 0.23                 HBL       -1.86 $\pm$ 0.20   13         88.6
  1ES 0806+524   185.2      15.84 $\pm$ 0.14                 HBL       -1.79 $\pm$ 0.19   38         54.0
  OJ 287         1393.6     13.27 $\pm$ 0.07                 LBL       -1.30 $\pm$ 0.10   40         18.5
  1ES 1011+496   155.0      15.63 $\pm$ 0.26                 HBL       -1.50 $\pm$ 0.14   16         30.6
  1ES 1028+511   33.1       16.70 $\pm$ 0.26                 HBL       -1.57 $\pm$ 0.15   6          96.4
  Mkn 421        355.2      17.03 $\pm$ 0.19                 HBL       -1.38 $\pm$ 0.09   477        0.1
  RGB 1117+202   21.0       15.98 $\pm$ 0.12                 HBL       -1.36 $\pm$ 0.14   40         84.8
  Mkn 180        48.2       16.47 $\pm$ 0.25                 HBL       -1.57 $\pm$ 0.15   29         99.9
  RGB 1136+676   3.1        17.90 $\pm$ 0.29                 HBL       -1.81 $\pm$ 0.49   40         51.7
  ON 325         137.2      14.85 $\pm$ 0.18                 IBL/HBL   -1.25 $\pm$ 0.12   28         99.5
  1ES 1218+304   93.7       17.17 $\pm$ 0.23                 HBL       -1.72 $\pm$ 0.17   33         34.3
  RGB 1417+257   2.7        17.62 $\pm$ 0.10                 HBL       -1.41 $\pm$ 0.31   20         88.8
  1ES 1426+428   4.5        18.02 $\pm$ 0.26                 HBL       -1.25 $\pm$ 0.15   12         90.0
  1ES 1544+820   -          16.04 $\pm$ 0.21                 HBL       -                  -          -
  Mkn 501        8.3        16.47 $\pm$ 0.06                 HBL       -1.65 $\pm$ 0.16   15         97.9
  OT 546         10.3       16.35 $\pm$ 0.20                 HBL       -1.40 $\pm$ 0.15   18         64.8
  1ES 1959+650   249.1      16.70 $\pm$ 0.04                 HBL       -1.70 $\pm$ 0.17   1050       84.7
  BL Lac         849.6      13.99 $\pm$ 0.12                 LBL       -1.27 $\pm$ 0.10   197        94.5
  1ES 2344+514   5.7        16.35 $\pm$ 0.12                 HBL       -1.47 $\pm$ 0.17   14         10.0
  S5 0716+714    2761.4     14.24 $\pm$ 0.13                 IBL       -1.18 $\pm$ 0.09   163        20.8
  ON 231         354.6      14.32 $\pm$ 0.08                 IBL       -1.38 $\pm$ 0.15   18         98.8
  3C 279         1597.3     12.69 $\pm$ 0.05                 FSRQ      -1.54 $\pm$ 0.14   202        80.0
  PG 1424+240    162.5      15.14 $\pm$ 0.07                 IBL/HBL   -1.54 $\pm$ 0.19   17         88.8
  PKS 1510-089   248.3      13.75 $\pm$ 0.15                 FSRQ      -0.97 $\pm$ 0.14   155        40.1
  PG 1553+113    323.6      15.90 $\pm$ 0.16                 HBL       -1.49 $\pm$ 0.15   174        31.5
  PKS 2155-304   1980.8     16.01 $\pm$ 0.28                 HBL       -1.55 $\pm$ 0.15   99         81.1
  -------------- ---------- -------------------------------- --------- ------------------ ---------- -------

Discussion
==========

PSD slopes
----------

In Table \[betavertailu\] and Fig. \[slopevsband\] we compare our average PSD slope, $-1.42 \pm 0.12$, to values obtained recently in the literature at radio, optical, and gamma-ray frequencies for samples comparable in size to ours, analyzed with similar methodology. In particular, these studies carefully considered the distortions caused by uneven sampling and noise.

  Band        $\log({\rm Freq.})$   $\beta \pm {\rm err}$   N    ref.
  ----------- --------------------- ----------------------- ---- ------
  R-band      14.67                 $1.46 \pm 0.18$         26   1
  15 GHz      10.18                 $2.19 \pm 0.17$         11   2
  Fermi LAT   24.38                 $1.34 \pm 0.55$         11   2
  37 GHz      10.57                 $2.00 \pm 0.27$         13   3
  Fermi LAT   24.38                 $1.12 \pm 0.36$         12   3
  Fermi LAT   24.38                 $0.87 \pm 0.16$         5    4

  : \[betavertailu\] PSD slopes of BL Lacs in recent studies

![\[slopevsband\] PSD slope vs. observing frequency for samples of BL Lacs and a few individual targets. Filled red symbols: samples in Table \[betavertailu\]; open purple square: FSRQ 3C 279 (this work); open purple symbol: 3C 279 [@2008ApJ...689...79C]; filled purple square: PKS 2155-304 [@2016arXiv161003311H]. 
](slopevsband.pdf){width="9cm"}

The number of results is still small and one cannot draw firm conclusions, but a trend of decreasing $\beta$ with increasing frequency is apparent; at the very least, the radio slopes appear significantly different from the rest. The same trend seems to continue in the FSRQ 3C 279, although the results are noisier for a single target than for samples of targets. This result, if confirmed, would mean that the regions emitting at radio frequencies vary preferentially over long time scales rather than short ones, i.e. the radio-emitting regions have a longer “memory” of their previous state than the optical and gamma-ray emitting regions. This could simply be due to a larger emitting volume in the radio than in the gamma-rays and the optical.

We emphasize, however, that although we fit a power-law PSD to the data, this does not imply that the underlying process is indeed a power-law process, or even that the light curves at different wavebands result from the same process. For instance, the 22 and 37 GHz light curves of blazars can apparently be decomposed into a series of exponential flares [e.g. @1999ApJS..120...95V] with some regular features, such as the decay time scale always being 1.3 times the rise time scale. A visual inspection of our optical light curves gives the impression that such a decomposition might be possible in some cases (e.g. 1ES 1959+650), but in most cases not. The apparent regularity in the radio suggests that the steeper PSD slope could simply be a result of fitting a noise process to a light curve that does not arise from such a process.

![\[mkn421period\] The folded light curve of Mkn 421 using a rest-frame period of 477 days (grey symbols). The black line shows the harmonic function corresponding to the phase and amplitude in the periodogram. 
](mkn421folded.pdf){width="9.5cm"}

In the optical, we do not find a significant correlation between the synchrotron peak frequency and the PSD slope. Such a correlation would not be unexpected. In Low-Peaked BL Lacs (LBL) the synchrotron peak lies below the observation frequency, so we are observing electrons in the high-energy tail of the energy distribution. In contrast, the optical emission from High-Peaked BL Lacs (HBL) originates from electrons radiating below the peak energy. The cooling time scales of high- and low-energy electrons are very different, so one might expect differences in the variability characteristics of LBL and HBL. However, our sample is not complete and contains only a few LBL; any such correlation, even if found, could therefore be biased.

Periodicities
-------------

Looking at the sample as a whole, we did not find any evidence of periodic variations over the 10-year time span studied here. Our analysis takes into account the power-law background and is expected to be less sensitive to spurious peaks in the periodogram than many previous studies. We found significant periodicity ($P_N < 0.05$) in one target only, a 477-day rest-frame period in Mkn 421. Finding one significant period among 31 targets is just what we would expect from chance alone. However, the PSD peak in Mkn 421 is very strong, which warrants further consideration. Figure \[mkn421period\] shows the folded light curve of Mkn 421 over 7 cycles. The variations seem consistently sinusoidal, except during the first $\sim$ 150 days of the cycle. There is thus an intriguing possibility of periodic variations in this source, with extra activity triggered at a certain phase of the cycle. Considering that we have tested the periodicity at $>$100 frequencies over 31 targets, a chance coincidence cannot be completely ruled out, however. 
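A folded light curve such as Fig. \[mkn421period\] is produced by phase-folding the data at the trial period and binning in phase. The sketch below is illustrative only: the function name, binning scheme, and toy data are ours, not the pipeline actually used for the figures.

```python
import numpy as np

def fold_light_curve(t, flux, period, n_bins=20):
    """Phase-fold a light curve at a trial period and bin it.

    t, flux : arrays of observation times (days) and fluxes.
    Returns bin centres (phase 0..1) and the mean flux per phase bin.
    """
    phase = (t % period) / period
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    mean_flux = np.array(
        [flux[idx == b].mean() if np.any(idx == b) else np.nan
         for b in range(n_bins)]
    )
    return centres, mean_flux

# Toy usage: unevenly sampled sinusoid with the 477 d period found for Mkn 421
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3650, 500))          # ~10 yr of uneven sampling
flux = 10 + np.sin(2 * np.pi * t / 477.0) + 0.1 * rng.normal(size=t.size)
phase, prof = fold_light_curve(t, flux, 477.0)
```

Folding at the wrong period smears the profile flat, so the peak-to-trough amplitude of the binned profile is itself a quick diagnostic of a candidate period.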
[@2016PASP..128g4101L] found periods of 280-310 days in radio, X-ray, and gamma-ray light curves of Mkn 421 in data spanning 6 to 10 years. The period found here is longer, but since we do not interpolate the spectrum, in order to retain independence between the frequencies, our frequency resolution is quite low. Indeed, the frequencies adjacent to the peak in our PSD correspond to periods of 830 and 330 days. The difference cannot be completely explained by resolution alone, but the actual difference cannot be well determined given the differences between the analyses.

We finally comment on some periods claimed to have been found in our sample objects. Periodicities or quasi-periodicities have been claimed for S5 0716+714, but on much shorter time scales, e.g. 25-73 minutes [@2009ApJ...690..216G] or 15 min [@2010ApJ...719L.153R]; our sampling is too sparse to investigate these. In [@2013MNRAS.434.3122P] a 50-day period was found in OJ 287 from a densely sampled 2-year data set taken in 2004-06. That study partly used the same data as here, but the number of common data points is very small: out of the 3991 data points in [@2013MNRAS.434.3122P], only about 140 originate from the data presented here. Our data also cover a time span longer than that of [@2013MNRAS.434.3122P] by a factor of $\sim$ 5, and hence the two data sets are largely independent. We find a very similar period of 52 days in the observed frame, but its significance is below our detection threshold. The folded light curve in Fig. \[oj287period\] gives an indication of why the results could differ between different authors. There seems to be a stable periodic signal at low flux levels, intermixed with a few high-flux points at random phases. These high points are due to the double flares that occur in this source at $\sim$ 12-year intervals [e.g. @2006ApJ...646...36V], which are very likely caused by a process completely unrelated to the periodic variations [@2016ApJ...819L..37V]. 
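The coarse period resolution invoked above for Mkn 421 follows from the spacing of independent Fourier frequencies, $\Delta f = 1/T$ for a time span $T$. The sketch below assumes a uniform grid $f_k = k/T$; the grid used in our analysis differs, so the 830 and 330 day values quoted above are not reproduced exactly, but the qualitative point stands: near a $\sim$ 500 d period the neighbouring independent periods differ by many weeks.

```python
def neighbouring_periods(span_days, trial_period):
    """Periods of the Fourier-grid frequencies adjacent to a trial period.

    Assumes independent frequencies f_k = k / span_days, k = 1, 2, ...
    Returns (shorter, nearest-grid, longer) periods in days.
    """
    k = round(span_days / trial_period)
    return span_days / (k + 1), span_days / k, span_days / (k - 1)

# ~10-year span, 477 d trial period
lo, mid, hi = neighbouring_periods(3650.0, 477.0)
```

With a 3650 d span the nearest grid period to 477 d is $3650/8 \approx 456$ d, with neighbours at roughly 406 and 521 d, i.e. a resolution of order $\pm 50$ d even on the densest possible grid.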
The inclusion or exclusion of these flares will certainly affect the Fourier analysis, in our case pushing the significance below the detection threshold.

![\[oj287period\] The folded light curve of OJ 287 using the 52-day observed-frame period (grey symbols). The black line shows the harmonic function corresponding to the phase and amplitude in the periodogram. ](OJfolded.pdf){width="9.5cm"}

Another significant periodicity reported recently is the $798\pm30$ day period found in Fermi gamma-ray data of PG 1553+113, further supported by optical data with a period of $754\pm20$ days [@2015ApJ...813L..41A]. The significance of the optical PSD peak was reported to be $<$5%. We do not detect this period in our data, although our data set is almost entirely included in [@2015ApJ...813L..41A], where it forms about half of the optical data. However, our frequency resolution is again very poor at periods of $\sim$ 800 days owing to the relatively short time span compared to this period. The fact that a similar time scale was found in gamma-rays strengthens the case for significant periodic variations in PG 1553+113.

There are many other reports of detected periodicities which we did not find here, such as the optical 65-day period of 3C 66A [@1999ApJ...521..561L], or the $\sim$ 1 year optical periods tentatively, but not conclusively, detected in OJ 287, PKS 1510-089 and PKS 2155-304 by [@2016AJ....151...54S]. Our analysis, and these examples, illustrate the difficulty of finding a weak periodic signal in a red-noise background using data suffering from unknown systematic errors and sparse, uneven sampling [@2016MNRAS.461.3145V].

If persistent or recurrent periods were actually found, their time scale could shed some light on their origin. The optical emission in BL Lacs is dominated by synchrotron emission from the jet, so periodic variations could result from precession of the jet. This model has been used to explain e.g.
the trajectories of the parsec-scale Very Long Baseline Interferometry (VLBI) components in BL Lac [@2013MNRAS.428..280C], although in this particular case no optical variations have been detected at the derived precession period of 12.1 years. Other possibilities exist, such as a helical structure of the jet, which can form as a consequence of current-driven instabilities [@2004ApJ...617..123N]. Regular changes in the accretion mechanism that feeds the jet could also lead to periodic or quasi-periodic changes in the jet. [@2013MNRAS.434.3122P] attributed the 50-day period found in OJ 287 to a spiral density wave in the accretion disk and performed N-body particle simulations to show that a spiral-wave configuration results in a periodic influx of material with approximately the same period as observed in OJ 287. Spiral density waves seem to be naturally generated around single [@2001ApJ...551..874L] and binary [@2010ApJ...708..485H] black hole systems. In the former study, high-pressure vortices formed in the accretion disk, providing a natural source for increased accretion. In the latter study, too, the spiral waves exhibited oscillations, which could lead to episodes of periodic variations in the matter influx.

Caveats and future work
-----------------------

Our results and conclusions come with some caveats: firstly, we assume a Gaussian probability density function (PDF) in the simulations, and secondly, our simulated spectra have no low- or high-frequency cutoffs. The assumption of a Gaussian PDF is clearly not always valid, and a log-normal distribution would in many cases represent the PDF better, especially in targets whose light curves are dominated by a single or a few strong flares with apparently exponential growth and decay. Recently, a method has been presented to generate non-Gaussian light curves [@2013MNRAS.433..907E], but we leave the application of this procedure to future studies. 
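Gaussian light curves with a prescribed power-law PSD, of the kind used in simulations like ours, are commonly generated with the recipe of Timmer & König (1995): draw independent Gaussian Fourier coefficients with variance proportional to $f^{\beta}$ and inverse-transform. The sketch below assumes this standard recipe; the simulation code actually used for this paper may differ in detail (resampling onto the observed epochs, noise injection, etc.).

```python
import numpy as np

def simulate_power_law_lc(n, beta, dt=1.0, seed=None):
    """Evenly sampled Gaussian light curve with PSD ~ f**beta
    (beta < 0 for red noise), after Timmer & Koenig (1995)."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=dt)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (beta / 2.0)   # |F(f)| ~ f^(beta/2) => |F|^2 ~ f^beta
    re = rng.normal(size=freqs.size) * amp
    im = rng.normal(size=freqs.size) * amp
    im[0] = 0.0                            # DC bin is real
    if n % 2 == 0:
        im[-1] = 0.0                       # Nyquist bin must be real
    lc = np.fft.irfft(re + 1j * im, n=n)
    return (lc - lc.mean()) / lc.std()     # normalise to zero mean, unit rms

# One realisation with our average slope
lc = simulate_power_law_lc(4096, beta=-1.42, seed=1)
```

Resampling such a curve at the actual observation epochs and adding the measured photometric errors then reproduces the sampling distortions that the significance calibration must account for.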
In any case, [@2015ApJ...798...27I] found that their simulated X-ray PSDs of Mkn 421 depend only weakly on the assumed PDF.

Since a power-law spectrum extending all the way to $f = 0$ would imply infinite power output, the PSDs of BL Lacs are expected to level off at some long time scale $t_b$, which would reveal itself as a break in the spectrum at $f_b = 1 / t_b$. Many of our PSDs show this kind of break, but these could be a result of the finite data length rather than true breaks. Our simulations do not include such a break as an input, but this is not necessarily a problem, since the break time scale could be far longer than the 10-year interval studied here. In order to look for true breaks in blazar PSDs, long-term historical data need to be collected and analyzed.

Conclusions
===========

We have presented R-band monitoring data of 31 blazars (29 BL Lacs and 2 FSRQs) observed over a time span of 10 years. In addition to presenting the light curves and describing the data reduction process in detail, we have analyzed the light curves by determining their PSD slopes and by searching for periodic variations. These analyses were augmented by a substantial number of simulations to take into account the effects of uneven sampling and detector noise and to calibrate the false alarm rate of the periodicity search. Our results can be summarized as follows:

- We present for the first time all our R-band monitoring data in tabular form, altogether 11820 photometric data points.

- By applying a chi squared test we find that all 31 targets show significant variability with respect to the comparison stars.

- The average PSD slope of the 29 targets in our sample is -1.42 $\pm$ 0.12 (1$\sigma$ standard deviation). The PSD slope is not significantly ($p = 9.8$%) correlated with the synchrotron peak frequency.

- Our average PSD slope $\overline{\beta}$ is consistent with values found in the literature. 
- Comparing our average PSD slope with values in the literature, we find that the radio slopes tend to be steeper than those in the optical and gamma-ray bands.

- The periodicity search returned one target, Mkn 421, with a significant ($p<5$%) peak in the periodogram. This is consistent with the expected false alarm rate, but the signal in Mkn 421 is very strong ($p=0.1$%) and warrants further study over a longer time span.

- We recover a signal consistent with the 52-day period previously reported for OJ 287, albeit below our detection threshold, and note that flaring states caused by an unrelated emission process may complicate the analysis.

This paper is dedicated to the memory of our colleague and dear friend Leo Takalo (1952-2018), who played a crucial role in starting this monitoring effort and contributed significantly to the data acquisition. We would like to thank the Instituto de Astrofísica de Canarias for the excellent working conditions at the Observatorio del Roque de los Muchachos in La Palma. Part of this work is based on archival data, software, and online services provided by the ASI Space Science Data Center (ASI-SSDC). 
Light curves ============ ![image](1ES_0033+595_summary.pdf){width="\textwidth"} ![image](1ES_0120+340_summary.pdf){width="\textwidth"} ![image](RGB_0136+391_summary.pdf){width="\textwidth"} ![image](RGB_0214+517_summary.pdf){width="\textwidth"} ![image](3C_66A_summary.pdf){width="\textwidth"} ![image](1ES_0647+250_summary.pdf){width="\textwidth"} ![image](1ES_0806+524_summary.pdf){width="\textwidth"} ![image](OJ_287_summary.pdf){width="\textwidth"} ![image](1ES_1011+496_summary.pdf){width="\textwidth"} ![image](1ES_1028+511_summary.pdf){width="\textwidth"} ![image](Mkn_421_summary.pdf){width="\textwidth"} ![image](RGB_1117+202_summary.pdf){width="\textwidth"} ![image](Mkn_180_summary.pdf){width="\textwidth"} ![image](RGB_1136+676_summary.pdf){width="\textwidth"} ![image](ON_325_summary.pdf){width="\textwidth"} ![image](1ES_1218+304_summary.pdf){width="\textwidth"} ![image](RGB_1417+257_summary.pdf){width="\textwidth"} ![image](1ES_1426+428_summary.pdf){width="\textwidth"} ![image](1ES_1544+820_summary.pdf){width="\textwidth"} ![image](Mkn_501_summary.pdf){width="\textwidth"} ![image](OT_546_summary.pdf){width="\textwidth"} ![image](1ES_1959+650_summary.pdf){width="\textwidth"} ![image](BL_Lac_summary.pdf){width="\textwidth"} ![image](1ES_2344+514_summary.pdf){width="\textwidth"} ![image](S5_0716+714_summary.pdf){width="\textwidth"} ![image](ON_231_summary.pdf){width="\textwidth"} ![image](3C_279_summary.pdf){width="\textwidth"} ![image](PG_1424+240_summary.pdf){width="\textwidth"} ![image](PKS_1510-089_summary.pdf){width="\textwidth"} ![image](PG_1553+113_summary.pdf){width="\textwidth"} ![image](PKS_2155-304_summary.pdf){width="\textwidth"} \[seds\]Spectral energy distributions ===================================== ![image](sed-0035.pdf){width="45.00000%"} ![image](sed-0123.pdf){width="45.00000%"} ![image](sed-0136.pdf){width="45.00000%"} ![image](sed-0214.pdf){width="45.00000%"} ![image](sed-0222.pdf){width="45.00000%"} ![image](sed-0650.pdf){width="45.00000%"} 
![image](sed-0806.pdf){width="45.00000%"} ![image](sed-0854.pdf){width="45.00000%"} ![image](sed-1015.pdf){width="45.00000%"} ![image](sed-1031.pdf){width="45.00000%"} ![image](sed-1104.pdf){width="45.00000%"} ![image](sed-1117.pdf){width="45.00000%"} ![image](sed-1136_2.pdf){width="45.00000%"} ![image](sed-1136.pdf){width="45.00000%"} ![image](sed-1217.pdf){width="45.00000%"} ![image](sed-1221p3010.pdf){width="45.00000%"} ![image](sed-1417.pdf){width="45.00000%"} ![image](sed-1428.pdf){width="45.00000%"} ![image](sed-1544.pdf){width="45.00000%"} ![image](sed-1653.pdf){width="45.00000%"} ![image](sed-1728.pdf){width="45.00000%"} ![image](sed-1959.pdf){width="45.00000%"} ![image](sed-2202.pdf){width="45.00000%"} ![image](sed-2347.pdf){width="45.00000%"} ![image](sed-0721.pdf){width="45.00000%"} ![image](sed-1221.pdf){width="45.00000%"} ![image](sed-1256.pdf){width="45.00000%"} ![image](sed-1424.pdf){width="45.00000%"} ![image](sed-1510.pdf){width="45.00000%"} ![image](sed-1553.pdf){width="45.00000%"} ![image](sed-2155.pdf){width="45.00000%"} \[fluxtables\]Tables ==================== In electronic form only. [^1]: http://users.utu.fi/kani/1m/index.html [^2]: NASA/IPAC Extragalactic Database (NED) is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. [^3]: http://www.asdc.asi.it/bzcat/ [^4]: http://vizier.u-strasbg.fr/viz-bin/VizieR
---
abstract: 'The $k$-plane transform is a bounded operator from $L^p$ to $L^q$ of the Grassmann manifold of all affine $k$-planes in $\R^n$ for certain exponents depending on $k$ and $n$. In the endpoint case $q=n+1$, we identify all extremizers of the associated inequality for the general $k$-plane transform.'
author:
- 'Taryn C. Flock[^1]'
bibliography:
- 'KPbib.bib'
title: 'Uniqueness of extremizers for an endpoint inequality of the $k$-plane transform'
---

[Acknowledgments]{} The author would like to thank Michael Christ for suggesting the problem and his guidance, and Alexis Drouot for insightful discussion.

[^1]: The author was supported in part by NSF grant DMS-0901569
--- abstract: | There is a trend towards increased specialization of data management software for performance reasons. In this paper, we study the automatic specialization and optimization of database application programs – sequences of queries and updates, augmented with control flow constructs as they appear in database scripts, UDFs, transactional workloads and triggers in languages such as PL/SQL. We show how to build an optimizing compiler for database application programs using generative programming and state-of-the-art compiler technology. We evaluate a hand-optimized low-level implementation of TPC-C, and identify the key optimization techniques that account for its good performance. Our compiler fully automates these optimizations and, applied to this benchmark, outperforms the manually optimized baseline by a factor of two. By selectively disabling some of the optimizations in the compiler, we derive a clinical and precise way of obtaining insight into their individual performance contributions. author: - | Mohammad Dashti, Sachin Basil John, Thierry Coppey,\ Amir Shaikhha, Vojin Jovanovic, and Christoph Koch\ \ EPFL DATA Lab {firstname}.{lastname}@epfl.ch\ bibliography: - 'refs.bib' title: Compiling Database Application Programs ---
--- abstract: 'We report the self-energy associated with RPA magnetic susceptibility in the hole-doped Bi$_2$Sr$_2$CuO$_{6}$ (Bi2201) and the electron-doped Nd$_{2-x}$Ce$_x$CuO$_4$ (NCCO) in the overdoped regime within the framework of a one-band Hubbard model. Strong weight is found in the magnetic spectrum around $(\pi ,0)$ at about 360 meV in Bi2201 and 640 meV in NCCO, which yields dispersion anomalies in accord with the recently observed ‘waterfall’ effects in the cuprates.' author: - 'R.S. Markiewicz, S. Sahrakorpi, and A. Bansil' title: 'Paramagnon-induced dispersion anomalies in the cuprates' --- Very recent angle-resolved photoemission (ARPES) experiments in the cuprates have revealed the presence of an intermediate energy scale in the 300-800 meV range where spectral peaks disperse and broaden rapidly with momentum, giving this anomalous dispersion the appearance of a ‘waterfall’[@RonK; @Ale; @Non; @Feng; @Valla; @PanDing]. Similar self-energies have also been adduced from optical data[@TimCar]. This new energy scale is to be contrasted with the well-known low energy ‘kinks’ in the 50-70 meV range, which have been discussed frequently in the cuprates as arising from the bosonic coupling of the electronic system with either phonons[@pkink] and/or magnetic modes[@mkink]. Although low energy plasmons[@HedLee; @WZD] are an obvious choice for the new boson, analysis indicates that the plasmons lie at too high an energy of $\sim$1 eV to constitute a viable candidate[@MBII]. Here we demonstrate that paramagnons provide not only an explanation of the energy scale but also of the other observed characteristics of the waterfall effect in both hole and electron doped cuprates. 
For this purpose, we have evaluated the self-energy associated with the RPA magnetic susceptibility in the hole-doped Bi$_2$Sr$_2$CuO$_{6}$ (Bi2201) and the electron-doped Nd$_{2-x}$Ce$_x$CuO$_4$ (NCCO).[@foot6] In order to keep the computations manageable, the treatment is restricted to the overdoped systems where magnetic instabilities are not expected to present a complication. Our analysis proceeds within the framework of the one-band Hubbard Hamiltonian, where the bare band is fit to the tight-binding LDA dispersion[@Arun3; @foot3]. We incorporate self-consistency by calculating the self energy and susceptibility using an approximate renormalized one-particle Green function $$G=\bar Z/(\omega -\bar\xi_k+i \delta ), \label{eq:1}$$ where $\bar\xi_k=\bar Z(\epsilon_k-\mu)$. Here, $\epsilon_k$ are bare energies and $\mu$ is the chemical potential, and the renormalization factor is $\bar Z\sim (1-\partial\Sigma'/\partial\omega )^{-1}<1$. The associated magnetic susceptibility is $$\chi_0(\vec q,\omega) =-\bar Z^2\sum_{\vec k}{\bar f_{\vec k}-\bar f_{\vec k+ \vec q}\over\bar\epsilon_{\vec k}-\bar\epsilon_{\vec k+\vec q}+\omega+i\delta}, \label{eq:2}$$ where $\delta$ is a positive infinitesimal, $\bar f_{\vec k}\equiv f(\bar\epsilon_{\vec k})$ is the Fermi function. The RPA susceptibility is given by $$\chi (\vec q,\omega )={\chi_0(\vec q,\omega )\over 1-U\chi_0(\vec q,\omega )}, \label{eq:3}$$ with $U$ denoting the Hubbard parameter. The self-energy can be obtained straightforwardly from the susceptibility via the expression[@BrEng] (at $T=0$) $$\begin{aligned} \Sigma (\vec k,\omega )={3\over 2}\bar ZU^2\sum_{\vec q}\int_0^{\infty}{d\omega '\over\pi}Im\chi (\vec q,\omega ') \nonumber \\ \times\Bigl[{\bar f_{\vec k-\vec q}\over\omega -\bar\xi_{\vec k-\vec q}+\omega '}+ {1-\bar f_{\vec k-\vec q}\over\omega -\bar\xi_{\vec k-\vec q}-\omega '} \Bigr]. 
\label{eq:4}\end{aligned}$$ Concerning technical details, we note that for the generic purposes of this study, all computations in this article employ a fixed value $\bar Z$ =0.5, which is representative of the band dispersions observed experimentally in hole as well as electron doped cuprates.[@foot2] Self-consistency is then achieved approximately by determining values of the chemical potential $\mu$ and the Hubbard parameter $U$ to keep a fixed doping level and to ensure that the bands are indeed renormalized by the average factor $\bar Z =0.5$. The procedure is relatively simple, but it should capture the essential physics of the electron-paramagnon interaction, although our treatment neglects the energy[@foot4] and momentum dependencies of $\bar Z$. Note also that in the overdoped regime considered, the effective $U$ values in Bi2201 and NCCO are small enough that the system remains paramagnetic and the complications of the antiferromagnetic instability are circumvented. Specifically, the presented results on Bi2201 are for $x=0.27$ with $\mu=-0.43$ eV and $U=3.2t$, while for NCCO, $x=-0.25$ with $\mu=0.18$ eV and $U=4t$. Figure \[fig:9\] summarizes the results for Bi2201. We consider Figs. 1(a) and (b) first, which give the real and imaginary parts of the self-energy at several different momenta as a function of frequency. The theoretical self-energies, which refer to Bi2201, should be compared directly with the corresponding experimental data (gold squares[@Non]), although available experimental points for Bi2212[@Feng] and La$_{2-x}$Sr$_x$CuO$_4$ (LSCO)[@Valla] are also included for completeness. The agreement between theory and experiment is seen to be quite good for the real part of the self-energy in (a), while theory underestimates the imaginary part of the self-energy by a factor of $\sim$ 2. 
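Equations \[eq:2\] and \[eq:3\] can be evaluated numerically on a coarse momentum grid. The sketch below computes the static ($\omega = 0$) Lindhard function and its RPA enhancement for a nearest-neighbour 2D tight-binding band; the band parameters, grid size, and $U$ are illustrative choices of ours, not the fitted values used in the paper, and the true calculation is of course performed at finite $\omega$ with a proper $i\delta$ prescription.

```python
import numpy as np

def chi0_static(q, t=1.0, mu=-0.4, zbar=0.5, nk=64, delta=1e-3):
    """Static Lindhard function (cf. Eq. 2) at T = 0 for
    eps_k = -2t(cos kx + cos ky), with xi_k = zbar*(eps_k - mu)."""
    k = 2 * np.pi * np.arange(nk) / nk
    kx, ky = np.meshgrid(k, k, indexing="ij")
    xi = zbar * (-2 * t * (np.cos(kx) + np.cos(ky)) - mu)
    xiq = zbar * (-2 * t * (np.cos(kx + q[0]) + np.cos(ky + q[1])) - mu)
    f, fq = (xi < 0).astype(float), (xiq < 0).astype(float)  # T=0 Fermi factors
    num, den = f - fq, xi - xiq
    chi = np.zeros_like(xi)
    mask = np.abs(den) > delta          # skip (0/0) terms on the Fermi surface
    chi[mask] = -num[mask] / den[mask]
    return zbar**2 * chi.mean()         # Zbar^2 prefactor of Eq. 2

def chi_rpa(q, U, **kw):
    """RPA susceptibility, Eq. 3."""
    c0 = chi0_static(q, **kw)
    return c0 / (1 - U * c0)

c0 = chi0_static(np.array([np.pi, np.pi]))
c_rpa = chi_rpa(np.array([np.pi, np.pi]), U=2.0)
```

As long as the Stoner factor $1/(1-U\chi_0)$ stays positive, the RPA value exceeds the bare one, which is the enhancement that drives the paramagnon scattering discussed in the text.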
That the computed $\Sigma''$ is smaller than the experimental one is to be generally expected, since our calculations do not account for scattering effects beyond those of the paramagnons. Here, we should keep in mind that there are uncertainties inherent in the experimental self-energies due to the different assumptions invoked by various authors concerning the bare dispersions in analyzing the data. In particular, Feng [*et al.*]{}[@Feng] extract the bare dispersion by assuming that $\Sigma '$ is always positive and goes to zero at large energies. Other groups[@Non; @Ale2] compare their results to LDA calculations and argue that $\Sigma '$ must become negative at higher energies. Our computed $\Sigma '$ in Fig. 1(a) becomes negative over the range 0.35-0.9 eV in certain $\vec k$-directions. Interestingly, the various computed colored lines in (a) and (b) more or less fall on top of one another, indicating that the self-energy is relatively insensitive to momentum, especially below the Fermi level, consistent with experimental findings[@Valla], even though $\Sigma$ possesses a fairly strong frequency dependence.

Fig. 1(c) gives further insight into the nature of the spectral intensity obtained from the self-energy of Eq. \[eq:4\]. The spectral intensity shown in the color plot of the figure is representative of the ARPES spectrum, matrix element effects[@Sep] notwithstanding. The peak of the spectral density function defined by taking momentum distribution curves (MDCs), shown by yellow dots, follows the renormalized dispersion (orange dashed line) up to a binding energy of about 200 meV. It then disperses rapidly to higher energies (waterfall effect) as it catches up with the bare dispersion (red solid line) around $\Gamma$. In fact, near $\Gamma$, the dressed spectral peak lies slightly below the bare band. The width of the spectral function is largest in the intermediate energy range of 200-600 meV, where its slope is also largest. 
This behavior of the spectral function results from the presence of peaks in the real and imaginary parts of the self-energy in the 200-500 meV energy range discussed in connection with Figs. 1(a) and (b) above. It is also in accord with the waterfall effect observed in ARPES experiments, although the theoretically predicted waterfall in Fig. 1(c) is less sharp than in experiments, which may be due to limitations of our model, including the approximations underlying our treatment of the susceptibility.

Fig. 2 considers the case of electron-doped (overdoped) NCCO. The peak in $\Sigma'$ in Fig. 2(a) lies at binding energies of 0.5-0.6 eV (in different $\vec k$-directions) with a height of 0.55-0.7 eV. Correspondingly, the peak in $\Sigma''$ in Fig. 2(b) lies at a binding energy of 0.7-1.1 eV with a height of 1-1.4 eV. Comparing these with the results of Fig. 1, we see that the self-energy effects in NCCO are much larger than in Bi2201. Our computed shift of $\sim$300 meV in the position of the peak in $\Sigma'$ to higher binding energy in going from Bi2201 to NCCO is in good accord with the experimentally reported shift of $\sim$300 meV[@PanDing]. The dispersion underlying the dressed Green function, which may be tracked through the yellow dots, is highly anomalous and presents a kink-like feature quite reminiscent of the more familiar low energy kinks in the 50 meV range around the $(\pi ,0)$-direction[@LEK], which have been discussed frequently in the cuprates. This strong bosonic coupling is also reflected in the fact that the band bottom in NCCO lies several hundred meV below the bare LDA band in Fig. 2(c). It is interesting to note that the self-energies of Figs. 1 and 2 display a ‘mirror-like’ symmetry: the peaks below the Fermi energy in $\Sigma'$ and $\Sigma''$ for Bi2201 in Fig. 1 are smaller than those above the Fermi energy, but the situation reverses itself for NCCO in Fig. 
2 in that now the peaks below the Fermi energy become larger than those above the Fermi energy.

The aforementioned shift of the peak in $\Sigma'$ to higher energy in NCCO can be understood in terms of the characteristics of the magnetic susceptibility. Figure \[fig:7\] compares the imaginary part $\chi''$ in Bi2201 and NCCO, which Eq. \[eq:4\] shows to be directly related to both the real and imaginary parts of the self-energy. $\chi''$ is quite similar in shape along the $\Gamma$ to $(\pi ,0)$ line in Bi2201 and NCCO, except that in NCCO the band of high intensity (the yellowish trace) extends to a significantly higher energy scale. In contrast, $\chi''$ in the two systems differs sharply around $(\pi,\pi)$. These differences reflect those in the low-energy magnetic response of the two cuprates. NCCO, with its strong magnetic response around $(\pi,\pi)$, exhibits a nearly commensurate AFM order, while Bi2201 is very incommensurate, with peaks shifted toward $(\pi ,0)$. In fact, the high energy peaks in the self-energy in Figs. 1 and 2 are tied to the flat-tops near $(\pi ,0)$ at $\omega_1\sim 0.36$ eV in Bi2201 (solid arrow), and near both $(\pi ,0)$ at $\omega_2=0.62$ eV (solid arrow) and $(\pi /2,\pi /2)$ at $\omega_3 = 0.9$ eV (dashed arrow) in NCCO. Above these energies the weight in $\chi''$ falls rapidly, going to zero near an energy $8\bar t$.

A reference to Figs. 1(a) and 2(a), where the energy $\omega_1$ in Bi2201 and the energies $\omega_2$ and $\omega_3$ in NCCO are marked by arrows, indicates that the peaks in $\Sigma'$ are correlated with these features in the magnetic susceptibility. In this spirit, the shift of the peak in $\Sigma'$ to higher energy in going from Bi2201 to NCCO reflects the fact that feature $\omega_3$ in $\chi''$ at $(\pi /2,\pi /2)$ in NCCO (dashed arrow in Fig. 2(a)) lies at a higher energy than the $(\pi ,0)$ feature $\omega_1$ in Bi2201 (arrow in Fig. 1(a)). 
Notably, when the Stoner factor $S=1/ (1-U\chi_0)$ is large, a peak in $\chi''$ arises from a peak in $\chi_0'(\omega )$, which in turn is associated with nesting of features separated by $\omega$ in energy. In the present case, the nesting is from unoccupied states near the Van Hove singularity (VHS) at $(\pi ,0)$ to the vicinity of the band bottom at $\Gamma$, so $\omega_1\sim 2(t+2t')\sim 0.32$ eV in Bi2201. The larger value of $\omega_2$ in NCCO reflects the shift of the Fermi energy to higher energies in an electron-doped material. A notable difference between electron and hole doping is the low-$\omega$ behavior of $\Sigma''$, which is quadratic in $\omega$ for electron-doping in Fig. 2(b), but nearly linear for hole-doping in Fig. 1(b). The linearity for hole-doping, reminiscent of marginal Fermi liquid physics, is associated here with the proximity of the chemical potential to the VHS. This point is considered further in Fig. 4, where $\Sigma''$ is shown in Bi2201 at the $(\pi,0)$ point for three different values of the chemical potential. When the chemical potential lies at the VHS (red line), $\Sigma''$ varies linearly, but when it is shifted by $75$ meV above or below the VHS, the behavior changes rapidly to become parabolic. The strong magnetic scattering discussed in this study for the overdoped cuprates should persist into the underdoped regime, where the Stoner factor is expected to become larger. In fact, this scattering is a [*precursor*]{} to the magnetically ordered state near half-filling and it is responsible for opening the magnetic gap. In contrast, a number of authors have related the presence of waterfall-like effects near half-filling to ‘Mott’ physics associated with $(\pi ,\pi )$ AFM fluctuations[@KFul; @Mano; @WTW], but have difficulty explaining why these effects persist into the overdoped regime. The possible doping dependence of $U$ has been an important issue in connection with electron-doped cuprates.
A doping-dependent $U$ is suggested by a number of studies in the hole-doped cuprates as well. These include: Optical evidence of Mott gap decrease[@optU]; ARPES observation of very LDA-like bands in optimally and overdoped materials; models of the magnetic resonance peak[@magresU]; and, a strongly doping-dependent gap derived from Hall effect studies[@Ando]. The $\bar Z^2$-renormalization of $\chi_0$ in Eq. \[eq:2\] bears on this question and gives insight into how the value of $U$ enters into the magnetic response of the system. Recall that the susceptibility is often evaluated in the literature via Eq. \[eq:3\] using experimental band parameters, but without the $\bar Z^2$ factor of Eq. \[eq:2\] in $\chi_0$, which yields a $\chi$ scaling $\sim \bar Z^{-1}$ rather than the correct scaling of $\chi\sim\bar Z$. This can be corrected by replacing the $U$ in the Stoner factor by $$U_{eff}=\bar Z^2U. \label{eq:5}$$ Indeed, our Hubbard parameter for NCCO of $U=4t$ is closer to the value at half-filling than is generally found.[@foot5] In conclusion, we have shown that the higher energy magnetic susceptibility in the cuprates has considerable weight near $(\pi ,0)$ and that this leads to a high energy kink or waterfall-like effect in dispersion in both electron and hole-doped cuprates, providing an explanation of such effects observed recently in ARPES. Although our analysis is limited to the overdoped regime, we expect strong magnetic scattering to persist into the underdoped regime. This point however bears further study. This work is supported by the US Department of Energy contract DE-AC03-76SF00098 and benefited from the allocation of supercomputer time at NERSC and Northeastern University’s Advanced Scientific Computation Center (ASCC). [99]{} F. Ronning [*et al.,*]{} Phys. Rev. B[**71**]{}, 094518 (2005). J. Graf [*et al.,*]{} to be published, Phys. Rev. Lett. W. Meevasana [*et al.,*]{} unpublished. B.P. Xie, [*et al.,*]{} cond-mat/0607450. T. 
Valla [*et al.*]{}, cond-mat/0610271. Z.-H. Pan[*et al.*]{}, cond-mat/0610442. J. Hwang [*et al.*]{}, cond-mat/0610488. A. Lanzara [*et al.,*]{} Nature (London) [**412**]{}, 510 (2001); X.J. Zhou, [*et al.,*]{} Phys. Rev. Lett. [**95**]{}, 117001 (2005). A. Kaminski [*et al.*]{}, Phys. Rev. Lett. [**86**]{}, 1070 (2001); P.D. Johnson [*et al.,*]{} Phys. Rev. Lett. [**87**]{}, 177007 (2001); S.V. Borisenko [*et al.*]{}, Phys. Rev. Lett. [**90**]{}, 207001 (2003); A.D. Gromko [*et al.*]{}, Phys. Rev. B[**68**]{}, 174520 (2003). N. Nücker [*et al.,*]{} Phys. Rev. B[**39**]{}, 12379 (1989). Y.Y. Wang [*et al.,*]{} Phys. Rev. Lett. [**77**]{}, 1809 (1996). R.S. Markiewicz and A. Bansil, to be published, Phys. Rev. B. Although not discussed here, we find similar results in LSCO. R.S. Markiewicz [*et al.,*]{} Phys. Rev. B[**72**]{}, 054519 (2005). For Bi2201, we use the parameters of Bi$_2$Sr$_2 $CaCu$_2$O$_{8}$ (Bi2212), but neglect the bilayer splitting. Following Ref. , the hopping parameters are ($t,t',t'',t'''$) = (360,–100,35,10) meV for Bi2201 and (420,–100,65,7.5) meV for NCCO. W.F. Brinkman and S. Engelsberg, Phys. Rev. [**169**]{}, 417 (1968). The specific values of $Z$ are 0.28 in Bi2212 and 0.55 in NCCO.[@Arun3] Specifically, $\bar Z$ is the renormalization of the coherent part of the dispersion. A. Lanzara, personal communication. S. Sahrakorpi [*et al.,*]{} Phys. Rev. Lett. [**95**]{}, 157601 (2005). A. Kaminski [*et al.,*]{} Phys. Rev. Lett. [**86**]{}, 1070 (2001). Y. Kakehashi and P. Fulde, J. Phys. Soc. Japan [**74**]{}, 2397 (2005). E. Manousakis, cond-mat/0608467. Q.-H. Wang, F. Tan, and Y. Wan, cond-mat/0610491. S. Uchida [*et al.*]{}, Phys. Rev. B[**43**]{}, 7942 (1991). H. Woo [*et al.*]{}, Nature Physics, [**2**]{}, 600 (2006), in supplementary materials. S. Ono, S. Komiya, and Y. Ando, cond-mat/0610361. The stronger doping dependence may be due to the high measurement temperatures. 
The smaller $U$ for Bi2201 may be due to neglect of $k_z$-dispersion, which broadens features in $\chi$ near the VHS.
--- author: - 'Hsiu-Hui Huang' - Werner Becker date: 'Received October 13, 2006; accepted January 9, 2007' title: 'XMM-Newton Observations of the Black Widow Pulsar PSR B1957+20' --- Introduction ============ To date, more than 1700 rotation-powered radio pulsars have been detected. About 10% of them are millisecond pulsars (MSPs) (Manchester et al. 2005), which form a separate population; the majority of them reside in globular clusters (cf. Bogdanov et al. 2006). MSPs are presumed to have been spun up in a past accretion phase by mass and angular momentum transfer from a binary companion (Alpar et al. 1982). Only about one third of them are solitary; these are believed to have lost their companion, e.g. in a violent supernova event. All MSPs possess very short spin periods of less than 20 ms and show a high spin stability with period derivatives in the range $\approx 10^{-18}-10^{-21}$. MSPs are generally very old neutron stars with spin-down ages $\tau = P/2\dot{P}$ of $\sim 10^{9}-10^{10}$ years and low surface magnetic fields in the range $B ~\propto \sqrt{(P \dot{P})} \sim 10^{8} - 10^{10}$ G. At present, about 50% of all X-ray detected rotation-powered radio pulsars are MSPs (cf. Bogdanov et al. 2006 and references therein). Among them is an extraordinarily rich astrophysical binary system formed by the millisecond pulsar PSR B1957+20 and its 0.025 $M_{\odot}$ low-mass white dwarf companion (Fruchter, Stinebring, $\&$ Taylor 1988). The binary period of the system is 9.16 hours. The spin period of the pulsar is 1.6 ms, the third shortest among all known MSPs. Its period derivative of $\dot{P} = 1.69 \times 10^{-20} ~s~s^{-1}$ implies a spin-down energy loss rate of $\dot{E} = 10^{35} ~erg~s^{-1}$, a characteristic spin-down age of $ > 2 \times 10^{9}$ years, and a dipole surface magnetic field of $B_\perp = 1.4 \times 10^{8}$ Gauss. Optical observations by Fruchter et al. (1988) and van Paradijs et al.
(1988) revealed that the pulsar wind, consisting of electromagnetic radiation and high-energy particles, is ablating and evaporating its white dwarf companion star. This rarely observed property gave the pulsar the name [*black widow pulsar*]{}. Interestingly, the radio emission from the pulsar is eclipsed for approximately 10% of each orbit by material expelled from the white dwarf companion. For a distance of 1.5 kpc, inferred from the radio dispersion measure (Taylor $\&$ Cordes 1993), the pulsar moves through the sky with a supersonic velocity of 220 km/sec. The interaction of the relativistic wind flowing away from the pulsar with the interstellar medium (ISM) produces an H$_\alpha$ bow shock, which was the first one seen around a “recycled" pulsar (Kulkarni & Hester 1988). In 1992 Kulkarni et al. published a contour map of the X-ray emission of PSR B1957+20 which was derived from ROSAT PSPC observations. Although these ROSAT data were very sparse in statistics, they led the authors to predict faint diffuse X-ray emission with constant surface brightness to be present along a cylindrical trail formed when the relativistic pulsar wind expands into pressure equilibrium with the interstellar medium behind the nebula. The much improved sensitivity of the Chandra and XMM-Newton observatories made it possible to probe and investigate the structure and properties of this unique binary system in much greater detail than was possible with ROSAT, ASCA or BeppoSAX. A narrow X-ray tail with an extent of 16 arcsec, oriented toward the north-east, was detected in deep Chandra observations by Stappers et al. (2003). Searching the ROSAT data for a modulation of the pulsar’s X-ray emission as a function of its orbital phase revealed suggestive but insignificant increases in flux before (at phase $\phi \sim 0.17$) and after (at phase $\phi \sim 0.4 - 0.5$) the pulsar radio eclipse ($\phi = 0.25$) (Kulkarni et al. 1992).
Taking Chandra data into account revealed a hint that the lowest and highest fluxes occur during and immediately after the radio eclipse, respectively. The statistical significance of this modulation observed by Chandra, though, is only 98% and thus precludes any firm conclusion (Stappers et al. 2003). In this paper we report on XMM-Newton observations of PSR B1957+20 and its white dwarf companion. The paper is organized as follows. In §2 we describe the observations and data analysis. §3 gives a summary and discussion. Observations and Data Analysis ============================== PSR B1957+20 was observed with XMM-Newton on October 31, 2004 for a 30 ksec effective exposure. In this observation, the EPIC-MOS1 and MOS2 instruments were operated in full-frame mode using the thin filter to block optical stray light. The EPIC-PN detector was set up to operate in the fast timing mode. Because of the reduced spatial information provided by the PN in timing mode, we use the MOS1/2 data for imaging and spectral analysis of the pulsar and its diffuse X-ray nebula, while the PN data, which have a temporal resolution of 0.02956 ms, allowed us to search for X-ray pulsations from the pulsar. All the data were processed with the XMM-Newton Science Analysis Software (SAS) package (Version 6.5.0). Spatial and spectral analyses were restricted to the $0.3-10.0$ keV energy band, while for the timing analysis events were selected in the energy range $0.3-3.0$ keV. Spatial Analysis ---------------- Figure 1 shows the combined EPIC-MOS1/2 image of the PSR B1957+20 system. The image was created with a binning factor of 6 arcsec and by using an adaptive smoothing algorithm with a Gaussian kernel of $\sigma < 4$ pixels in order to make faint diffuse emission better visible. The extent of the diffuse emission, with its orientation to the north-east, is about 16 arcsec, which is consistent with the previous result derived from the Chandra observations.
However, the detailed structure of the X-ray emission from XMM-Newton cannot be seen as clearly as with Chandra due to the 10 times wider point spread function (PSF) of XMM-Newton. Inspecting the XMM-Newton MOS1/2 image, two faint features (denoted A and B) which contribute only about 3% of the total X-ray flux are apparent. In order to investigate whether these faint features are associated with nearby stars, we inspected the Digitized Sky Survey data (DSS) and the USNO-B1.0 Catalogue for possible sources. These catalogues, which are limited to 22 mag (Krongold, Dultzin-Hacyan & Marziani 2001) and 21 mag (Monet et al. 2003), respectively, do not reveal possible counterparts. These features are not seen in the Chandra image, though (Stappers et al. 2003). Spectral Analysis ----------------- Combined EPIC-MOS1/2 data of PSR B1957+20 were extracted from a circle of 30 arcsec radius centered at the pulsar position (RA (J2000) $= 19^{h}59^{m}36^{s}.77$, Dec $= 20^{\circ}48'15".12$). The selection region contains about 85% of all source counts. Background photons were selected from a source-free region near the pulsar position. Response files were derived by using the XMM-Newton SAS tasks RMFGEN and ARFGEN. After subtracting background photons, in total 338 source counts were available for a spectral analysis. The extracted spectra were binned with at least 30 source counts per bin. Assuming that the emission originates from the interaction of the pulsar wind with the ISM or with the stellar wind, we expect synchrotron radiation to be the emission mechanism of the detected X-rays. To test this hypothesis we fitted the spectrum with a power law model. Indeed, this model describes the observed spectrum with a reduced $\chi^{2}_{\nu}$ of 1.09 (for 8 D.O.F.). The photon index is found to be $\alpha = 2.03^{+0.51}_{-0.36}$. The absorbing column density $N_{H}$ is $8.0 \times 10^{20} ~cm^{-2}$.
For the normalization at 1 keV we find $1.5^{+0.9}_{-0.3} \times 10^{-5}\,\mbox{photons keV}^{-1} \mbox{cm}^{-2} \mbox{sec}^{-1}$ (1-$\sigma$ confidence for 1 parameter of interest). The spectrum (data and model) and the fit residuals are shown in Figure 2. The unabsorbed X-ray fluxes derived from the best fitting model parameters are $f_{x}=8.35 \times 10^{-14} ~erg ~s^{-1}~cm^{-2}$ and $f_{x}=7.87 \times 10^{-14} ~erg ~s^{-1}~cm^{-2}$ in the $0.3-10$ keV and $0.1-2.4$ keV energy bands, respectively. The X-ray luminosities in these energy bands – calculated for a pulsar distance of 1.5 kpc – are $L_{x}(0.3 - 10.0\mbox{keV}) = 2.24 \times 10^{31}~erg~s^{-1}$ and $L_{x}(0.1 - 2.4 \mbox{keV}) = 2.12 \times 10^{31}~erg~s^{-1}$, respectively. The conversion efficiency $L_{x}/\dot{E}$ in the $0.1-2.4$ keV band is found to be $\sim 2.12 \times 10^{-4}$. In order to check whether the spectral emission characteristics change for photons detected in a smaller compact region of 10 arcsec radius at the pulsar position, we applied a spectral fit to these events only. The encircled energy within 10 arcsec is 60 $\%$. In total, 186 source counts were available for spectral fits. We did not find any significant change in the spectral parameters relative to those reported above. Timing Analysis --------------- The EPIC-PN camera observed the pulsar in the fast timing mode. In this mode the spatial and spectral information from a $64 \times 199$ CCD pixel array is condensed into a one dimensional $64 \times 1$ pixel array (1D-image), i.e. the spatial information in the Y-direction is lost due to the continuous read-out of the CCD. The complete photon flux (source plus DC emission from foreground or background sources located along the read-out direction) is accumulated and collapsed in the final 1D-image, severely reducing the signal-to-noise ratio of pulsed emission and preventing the detection of weak X-ray pulsations from the target of interest.
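As a quick consistency check on the luminosities quoted in the spectral analysis above, the conversion $L_x = 4\pi d^2 f_x$ for the adopted 1.5 kpc distance can be reproduced in a few lines. This is an illustration only, not part of the analysis pipeline; the distance and $\dot{E}$ are the values assumed in the text.

```python
from math import pi

KPC_IN_CM = 3.0857e21        # 1 kpc in cm
d = 1.5 * KPC_IN_CM          # adopted dispersion-measure distance

def luminosity(flux):
    """Isotropic X-ray luminosity (erg/s) from an unabsorbed flux (erg/s/cm^2)."""
    return 4.0 * pi * d**2 * flux

L_broad = luminosity(8.35e-14)   # 0.3-10 keV band
L_soft = luminosity(7.87e-14)    # 0.1-2.4 keV band
efficiency = L_soft / 1e35       # L_x / Edot for Edot = 1e35 erg/s
```

Both values reproduce the quoted $2.24 \times 10^{31}$ and $2.12 \times 10^{31} ~erg~s^{-1}$ to better than 1%, as does the conversion efficiency of $\sim 2.1 \times 10^{-4}$.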
In order to search for X-ray pulsations from PSR B1957+20 we extracted 1639 counts from the CCD columns $33- 41$ in which the pulsar is located. In order to increase the signal-to-noise ratio we restricted the analysis to the energy range $0.3-3.0$ keV. Below and above this energy band the cumulative sky and instrument background noise exceeds the contribution from the pulsar itself (cf. Becker & Aschenbach 2002). Nevertheless, an estimated 80 $\%$ of the counts still originate from the background. The photon arrival times were corrected to the solar system barycentre with the BARYCEN tool (version: 1.17.3, JPL DE200 Earth ephemeris) of the SAS package. As the pulsar is in a binary, we corrected for its orbital motion by using the method of Blandford & Teukolsky (1976). As millisecond pulsars are known to be extremely stable clocks, we used the pulsar ephemeris from the ATNF Catalogue, $f = 622.122030511927$  Hz and $\dot{f} = -6.5221 \times 10^{-15}\mbox{sec}^{-2}$ (at MJD = 48196.0), to perform a period folding. Using the $Z^{2}_{n}$ statistics (Buccheri et al. 1983) with the number of harmonics (n) ranging from one to ten, no significant signal was detected at the radio spin period extrapolated to the epoch of the XMM-Newton observation. Restricting the period search to various smaller energy bands did not change the result. A pulsed fraction upper limit of 9% (1-$\sigma$) is deduced by assuming a sinusoidal pulse profile. Arons $\&$ Tavani (1993) predicted that, depending on the flow speed and the degree of absorption and/or scattering by the companion wind, the X-ray emission from PSR B1957+20 increases by up to a factor of 2.2 at the orbital phases before and after the radio eclipse. In order to test this prediction we created a lightcurve by binning all events in bins of 1.5 ksec width.
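The light-curve construction just described can be sketched as follows. Synthetic, uniformly distributed event times stand in here for the barycentre-corrected arrival times, and the phase-zero epoch `t0` is a hypothetical placeholder, not the pulsar's actual ephemeris value.

```python
import numpy as np

P_ORB = 9.16 * 3600.0      # orbital period in seconds
BIN_WIDTH = 1500.0         # 1.5 ksec bins

def lightcurve(times, t0=0.0):
    """Bin event arrival times (s) into 1.5 ks bins and attach mid-bin orbital phases."""
    edges = np.arange(times.min(), times.max() + BIN_WIDTH, BIN_WIDTH)
    counts, edges = np.histogram(times, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    phases = ((centers - t0) / P_ORB) % 1.0
    return counts, phases

# Synthetic example: 1639 events spread over a 30 ks exposure.
rng = np.random.default_rng(0)
events = np.sort(rng.uniform(0.0, 30000.0, 1639))
counts, phases = lightcurve(events)
```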
With an effective exposure time of about 30 ksec and an orbital period of 9.16 h, the XMM-Newton MOS1/2 and PN data cover roughly 83 $\%$ and 92 $\%$ of one binary orbit, respectively. Table 1 lists the first and the last photon arrival times recorded by the MOS1/2 and EPIC-PN detectors and the corresponding pulsar binary orbital phase. The lightcurves resulting from the EPIC-PN and MOS1/2 data are shown in Figure 3. In the lightcurve deduced from the PN data (upper solid curve) it is clearly seen that the X-ray emission increases before the radio eclipse, i.e. between orbital phases 0.1 and 0.25. The emission in the highest bin is about a factor of 3.0 stronger than at other orbital phase angles before and after the radio eclipse. This is in agreement with the predictions made by Arons $\&$ Tavani (1993). Although this is the first time that a significant X-ray flux modulation from PSR B1957+20 has been observed, the flux increase near the phase of the radio eclipse is virtually only seen in the PN data. Owing to the short observation time of 30 ksec, which is less than the time of one full binary orbit, the orbital phases $0.30 - 0.38$ and $0.18 - 0.36$ are not covered by the EPIC-PN and MOS1/2 data at all. It was thus fortunate that the EPIC-PN, more or less by chance, covered the orbital phase range of the radio eclipse and provided us with evidence for the flux enhancement while the MOS1/2 detectors were already switched off.
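The $Z^{2}_{n}$ statistic used in the pulsation search above has a compact closed form (Buccheri et al. 1983): for $N$ pulse phases $\phi_i$ and $n$ harmonics, $Z^{2}_{n} = \frac{2}{N}\sum_{k=1}^{n}\left[\left(\sum_i \cos 2\pi k\phi_i\right)^2 + \left(\sum_i \sin 2\pi k\phi_i\right)^2\right]$, which for random phases is distributed as $\chi^2$ with $2n$ degrees of freedom. A minimal sketch:

```python
import numpy as np

def z2n(phases, n=2):
    """Buccheri et al. (1983) Z^2_n statistic for pulse phases in [0, 1)."""
    N = len(phases)
    z = 0.0
    for k in range(1, n + 1):
        angles = 2.0 * np.pi * k * np.asarray(phases)
        z += np.cos(angles).sum() ** 2 + np.sin(angles).sum() ** 2
    return 2.0 * z / N
```

A perfectly pulsed signal (all phases equal) gives $Z^{2}_{1} = 2N$, while phases spread evenly over the cycle give $Z^{2}_{n} \approx 0$; the search described above scanned harmonics $n = 1, \ldots, 10$.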
  Data set   Start time (MJD)   Orbital phase   End time (MJD)   Orbital phase   Duration (s)
  ---------- ------------------ --------------- ---------------- --------------- --------------
  MOS1       53309.9688         0.3605          53310.2830       0.1830          27143.4
  MOS2       53309.9666         0.3548          53310.2846       0.1871          27467.1
  PN         53309.9791         0.3873          53310.3301       0.3063          30327.4

Summary and Discussion ====================== The interaction between relativistic pulsar winds, which carry away the rotational energy of pulsars, and the surrounding medium is expected to create detectable X-ray emission. Indeed, there are about 30 pulsar wind nebulae (PWNe) currently detected in the X-ray band (e.g. Kaspi, Roberts & Harding 2006, Gaensler $\&$ Slane 2006, Kargaltsev $\&$ Pavlov 2006). However, these PWNe are all powered by young and powerful pulsars with spin-down energies of more than $\sim 3.6 \times 10^{36}~erg~s^{-1}$. Until now, only two MSPs are known to have X-ray nebulae: PSR B1957+20 (Stappers et al. 2003) and PSR J2124-3358 (Hui $\&$ Becker 2006). Both of them have tail-like structures behind the moving pulsars. These trails could be associated with the shocked relativistic wind confined by the ram pressure of the ambient ISM. The XMM-Newton data of PSR B1957+20 have provided observational evidence for a strong dependence of the pulsar’s X-ray emission on its binary orbital phase. It is the first time that a significant X-ray flux modulation from PSR B1957+20 is observed. The emission near the radio eclipse is expected to be beamed in a forward cone because the shocked fluid is accelerated by the pressure gradient as it flows around the eclipse region. Relativistic beaming would tend to give the maximum flux just before and after the pulsar eclipse, which for PSR B1957+20 is at orbital phase 0.25.
Arons $\&$ Tavani (1993) predicted that the immediate downstream flow velocity of the shocked pulsar wind, from the shock area along the line between the pulsar and the ablating companion, is about c/3 and is even higher behind the relativistic shock when it passes around the companion. The tenuous relativistic plasma accelerates as it flows around the companion, possibly passing through a sonic transition to leave the binary system with a velocity larger than $c / \sqrt{3}$. Due to Doppler boosting, the probable post-shock velocities of the relativistic wind then suggest an X-ray emission variation over the orbital phase by a factor between 1.3 ($v= c/3$) and 2.2 ($v \cong c/\sqrt{3}$). These numbers were estimated without considering the effect of absorption and/or scattering within the binary. Conversely, the X-ray emission at the eclipse may be reduced because of the obscuration of the shock by the companion. However, given the limited photon statistics of the XMM-Newton data, it is not possible to perform binary-phase-resolved imaging, to study spectral variations as a function of orbital phase, or to determine the exact geometry of the peak emission. Such an analysis would allow us to investigate with higher accuracy than currently possible whether the X-ray emission from PSR B1957+20 is present at all orbital angles or virtually only near the radio eclipse, while diffuse X-ray emission from the PWN is present at any orbital angle. Indeed, the latter is suggested by the current data, and a confirmation would have a profound impact on our understanding of the pulsar’s X-ray emission properties. As the present XMM observation covers barely one binary orbit, we stress that it cannot be fully excluded that the increase in photon flux near the orbital angle of the radio eclipse is due to a single “burst like" event or that the peak flux emission varies from orbit to orbit or on longer time scales.
Clearly, a repeated coverage of the binary orbit in a longer XMM-Newton observation and a comparison with the 2004 data would answer this question immediately. This, in addition, would not only provide better photon statistics but would also allow us to determine the emission geometry with a much higher accuracy than currently possible. This work made use of the XMM-Newton data archive. The first author acknowledges the receipt of funding provided by the Max-Planck Society in the frame of the International Max-Planck Research School (IMPRS). Alpar, M. A., Cheng, A. F., Ruderman, M. A., Shaham, J., 1982, Nature, 300, 728 Arons, J. & Tavani, M., 1993, ApJ, 403, 249 Becker, W. & Aschenbach, B., 2002, in Proceedings of the WE-Heraeus Seminar on Neutron Stars, Pulsars and Supernova remnants, eds. W. Becker, H. Lesch & J. Trümper, MPE-Report 278, 64, (arxiv:astro-ph/0208466) Blandford, R., Teukolsky, S. A., 1976, ApJ, 205, 580 Buccheri, R., Bennett, K., Bignami, G. F., Bloemen, J. B. G. M., Boriakoff, V., Caraveo, P. A., Hermsen, W., Kanbach, G., Manchester, R. N., Masnou, J. L., Mayer-Hasselwander, H. A., Ozel, M. E., Paul, J. A., Sacco, B., Scarsi, L., Strong, A. W., 1983, A&A, 128, 245 Bogdanov, S., Grindlay, J. E., Heinke, C. O., Camilo, F., Freire, P. C. C., Becker, W., 2006, ApJ, 646, 1104 Fruchter, A. S., Gunn, J. E., Lauer, T. R., Dressler, A., 1988, Nature, 334, 686 Fruchter, A. S., Stinebring, D. R., Taylor, J. H., 1988, Nature, 333, 237 Gaensler, B. M. & Slane, P. O., 2006, Ann. Rev. Astron. Astrophys., 44, 17 Hui, C. Y. & Becker, W., 2006, A$\&$A, 448, L13 Kargaltsev, O. Y. & Pavlov, G. G., in preparation (2006) Kaspi, V. M., Roberts, M. S. E., Harding, A. K., In: Compact Stellar X-ray Sources, ed. W. H. G. Lewin & M. van der Klis, Cambridge University Press, p. 279, 2006 Krongold, Y., Dultzin-Hacyan, D., Marziani, P., 2001, AJ, 121, 702 Kulkarni, S. R., Hester, J. J., 1988, Nature, 335, 801 Kulkarni, S. R., Phinney, E. S., Evans, C.
R., & Hasinger, G., 1992, Nature, 359, 300 Manchester, R. N., Hobbs, G. B., Teoh, A., Hobbs, M., 2005, AJ, 129, 1993 Monet, D. G., et al., 2003, AJ, 125, 984 Stappers, B. W., Gaensler, B. M., Kaspi, V. M., van der Klis, M., & Lewin, W. H. G., 2003, Science, 299, 1372 Taylor, J.H. & Cordes, J.M., 1993, ApJ, 411, 674 van Paradijs, J., Allington-Smith, J., Callanan, P., Hassall, B. J. M., Charles, P. A, 1988, Nature, 334, 684
--- abstract: 'Given an arithmetic function $a: {\mathbb{N}}\rightarrow {\mathbb{R}}$, one can associate a naturally defined, doubly infinite family of Jensen polynomials. Recent work of Griffin, Ono, Rolen, and Zagier shows that for certain families of functions $a: {\mathbb{N}}\rightarrow {\mathbb{R}}$, the associated Jensen polynomials are eventually hyperbolic (i.e., eventually all of their roots are real). This work proves Chen, Jia, and Wang’s conjecture that the partition Jensen polynomials are eventually hyperbolic as a special case. Here, we make this result explicit. Let $N(d)$ be the minimal number such that for all $n \geq N(d)$, the partition Jensen polynomial of degree $d$ and shift $n$ is hyperbolic. We prove that $N(3)=94$, $N(4)=206$, and $N(5)=381$, and in general, that $N(d) \leq (3d)^{24d} (50d)^{3d^{2}}$.' author: - Hannah Larson - Ian Wagner title: Hyperbolicity of the partition Jensen polynomials --- Introduction ============ Given a function $a: {\mathbb{N}}{\longrightarrow}{\mathbb{R}}$ and positive integers $d$ and $n$, the associated *Jensen polynomial of degree $d$ and shift $n$* is defined by $$J^{d,n}_a(X) := \sum_{j=0}^d {d \choose j} a(n+j) X^j.$$ A polynomial is said to be *hyperbolic* if all of its zeros are real. Given an entire real function $\varphi(x)$ with Taylor expansion $\varphi(x) = \sum_{n \geq 0} \frac{\alpha(n)x^n}{n!}$, it is a theorem of Jensen [@J] that $\varphi(x)$ is in the Laguerre-Pólya class if and only if all of the associated Jensen polynomials $J_\alpha^{d,0}(X)$ are hyperbolic. Pólya proved [@P] that the Riemann Hypothesis is equivalent to the hyperbolicity of all Jensen polynomials associated to Riemann’s $\xi(s)$. In this paper, we study the hyperbolicity of Jensen polynomials $J_p^{d,n}(X)$ associated to the partition function $p(n)$, which counts the number of integer partitions of $n$. Chen, Jia, and Wang conjectured that for each positive integer $d$, $J_{p}^{d,n}(X)$ is eventually hyperbolic [@CJW]. 
For example, hyperbolicity of $J^{2,n}_p(X)$ is equivalent to $p(n+2)p(n) \leq p(n+1)^2$, a condition known as log concavity. Nicolas originally proved that this condition holds for all $n \geq 25$ in [@Ni]. This result was reproved by DeSalvo and Pak in [@DP]. Recent results of Griffin, Ono, Rolen, and Zagier [@GORZ] show that Jensen polynomials for a large family of functions, including those associated to $\xi(s)$ and the partition function, are eventually hyperbolic. Their proof relates the polynomials $J^{d,n}_p(X)$ to the *Hermite polynomials* $H_d(X)$, defined by the generating function $$e^{tX-t^2} = \sum_{d=0}^\infty H_d(X) \cdot \frac{t^d}{d!} = 1 + X \cdot t + (X^2 - 2) \cdot \frac{t^2}{2} + (X^3 - 6 X) \cdot \frac{t^3}{6} + \ldots .$$ More precisely, if $$c := \frac{2}{3}\pi^2, \qquad w(n) := \frac{1}{\sqrt{c(n-\frac{1}{24})}}, \qquad \delta(n) := \frac{c w(n)^{\frac{3}{2}}}{\sqrt{2}},$$ the authors prove that $$\label{grozmain} \lim_{n \rightarrow \infty} \frac{2^d}{p(n)\delta(n)^d} \cdot J^{d,n}_p\left(\delta(n) X - e^{-cw(n)/2}\right) = H_d(X).$$ Since the Hermite polynomials have distinct real roots, it follows that the polynomial on the left-hand side above, and hence $J^{d,n}_p(X)$, is eventually hyperbolic. In other words, for each $d$ there exists some $N$ such that for all $n \geq N$, the polynomial $J^{d,n}_p(X)$ is hyperbolic. Define $N(d)$ to be the minimal such $N$. For example, the results of Nicolas and of DeSalvo and Pak show $N(2) = 25$. We determine the following further values of $N(d)$. \[lowdegrees\] Let $N(d)$ be defined as above. Then $N(3) = 94, N(4) = 206,$ and $N(5) =381$. During the preparation of this paper, the authors were notified that Chen, Jia, and Wang [@CJW] independently proved $N(3)=94$ using different methods. The proof of Theorem \[lowdegrees\] relies on obtaining functions that closely approximate the ratios $p(n+j)/p(n)$ and bounding the error of these approximations for large $n$.
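The threshold values above can be probed directly. The sketch below is an independent illustration (the authors' own computations use the Mathematica and Sage code in the appendix): it computes $p(n)$ by Euler's pentagonal-number recurrence in exact integer arithmetic and tests hyperbolicity of $J^{2,n}_p$ and $J^{3,n}_p$ through the sign of the discriminant, which for real quadratics and cubics is equivalent to having all roots real.

```python
from math import comb

def partitions_upto(N):
    """p(0..N) via Euler's pentagonal-number recurrence (exact integers)."""
    p = [0] * (N + 1)
    p[0] = 1
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            sign = 1 if k % 2 else -1
            total += sign * p[n - k * (3 * k - 1) // 2]
            if n - k * (3 * k + 1) // 2 >= 0:
                total += sign * p[n - k * (3 * k + 1) // 2]
            k += 1
        p[n] = total
    return p

P = partitions_upto(260)

def disc2(n):
    # J^{2,n}_p hyperbolic  <=>  p(n+1)^2 - p(n) p(n+2) >= 0  (log concavity).
    return P[n + 1] ** 2 - P[n] * P[n + 2]

def disc3(n):
    # Discriminant of the cubic J^{3,n}_p(X) with coefficients a_j = C(3,j) p(n+j).
    a0, a1, a2, a3 = (comb(3, j) * P[n + j] for j in range(4))
    return (18 * a3 * a2 * a1 * a0 - 4 * a2**3 * a0 + a2**2 * a1**2
            - 4 * a3 * a1**3 - 27 * a3**2 * a0**2)
```

Consistent with $N(2) = 25$ and $N(3) = 94$, `disc2` is negative at $n = 24$ and nonnegative from $n = 25$ on (over the tested range), while `disc3` is negative at $n = 93$ and nonnegative from $n = 94$.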
For $d=3,4,5$, direct computation gives rise to good bounds, allowing us to reduce Theorem \[lowdegrees\] to checking a reasonably small finite number of cases. As an illustration of these techniques, we also prove a recent conjecture of Chen which involves an inequality of polynomials in ratios of close partition numbers. \[con\] Let $u_n = p(n+1)p(n-1)/p(n)^2$. Then for all $n \geq 2$, we have $$\label{chen} 4(1-u_n)(1-u_{n+1}) < \left(1 + \frac{\pi}{\sqrt{24}n^{3/2}}\right)(1 - u_n u_{n+1})^2. $$ For arbitrary $d$, similar techniques, along with the convergence of $J^{d,n}_p(X)$ to the Hermite polynomials $H_d(X)$ after a change of variable, give rise to an upper bound for $N(d)$. However, without the benefit of direct computation we rely on rather rough estimates for the errors mentioned above. This yields the following. \[general\] For every positive integer $d$, we have $N(d) \leq (3d)^{24d} (50d)^{3d^2}$. This paper is organized as follows. In Section $2$ we describe an equivalent condition for a polynomial to have all real roots and prove two lemmas that bound higher order terms discarded by the methods in [@GORZ]. In Section $3$ we prove Theorem \[lowdegrees\] and in Section $4$ we prove Theorem \[general\] through a series of estimates on accumulating error terms. The appendix contains the Mathematica and Sage code used in the proof of Theorem \[lowdegrees\]. Acknowledgements {#acknowledgements .unnumbered} ---------------- The authors thank Ken Ono for suggesting this problem and providing advice. The authors are also grateful to Jesse Thorner for his help implementing Mathematica code that was used in the proof of Theorem \[lowdegrees\]. This research was supported by the National Science Foundation under Grant 1557960.
Hankel determinants and ratios of close partition numbers ========================================================= The hyperbolicity of a polynomial $P(X) = a_d X^d + a_{d-1}X^{d-1} + \ldots + a_0$ is equivalent to certain polynomial conditions in the coefficients $a_i$, which we now describe. If $\lambda_1, \ldots, \lambda_d$ are the roots of $P(X)$, let $S_k = \lambda_1^k + \ldots + \lambda_d^k$ denote the sum of $k$th powers of the roots. The $m \times m$ *Hankel determinant associated to $P(X)$* is defined by $$\label{hmat} \Delta_m (P(X)) := \left| \begin{matrix} S_0 & S_1 & \cdots &S_{m-1} \\ S_1 & S_2 & \cdots & S_{m} \\ \vdots & \vdots & & \vdots \\ S_{m-1} & S_m & \cdots & S_{2m-2} \end{matrix}\right| = \sum_{i_1 < \cdots < i_m} \prod_{a<b} (\lambda_{i_a} - \lambda_{i_b})^2.$$ In addition, let $$D_{d,m}(P(X)) = D_{d,m}(a_0, \ldots, a_{d}) := a_d^{2m-2} \cdot \Delta_m(P(X))$$ so that $D_{d,d}(a_0, \ldots, a_{d})$ is the discriminant of $P(X)$ and $D_{d,m}(a_0, \ldots, a_d)$ is a homogeneous polynomial of degree $2m-2$ in the coefficients $a_i$. A theorem of Hermite [@Her] says the hyperbolicity of $P(X)$ is equivalent to the condition $D_{d,m}(P(X)) \geq 0$ for all $m=2, \ldots, d$. We will prove Theorems \[lowdegrees\] and \[general\] by showing that $$\mathcal{D}_{d,m}(n) := D_{d,m}\left(\frac{J^{d,n}_p(X)}{p(n)}\right) = D_{d,m}\left(1, {d \choose 1}\frac{p(n+1)}{p(n)}, {d \choose 2} \frac{p(n+2)}{p(n)}, \ldots, \frac{p(n+d)}{p(n)}\right) > 0$$ for each $m=2, \ldots, d$ and all $n$ greater than the claimed quantities. Note that $\mathcal{D}_{d,m}(n)$ approaches $0$ in the limit as $n \rightarrow \infty$, since $\lim_{n\rightarrow \infty} J_p^{d,n}(X)/p(n) = (X+1)^d$. This fact is true because the partition ratios $\frac{p(n+j)}{p(n)} \rightarrow 1$ as $n \rightarrow \infty$ for any fixed $j$. A priori, this makes the sign of $\mathcal{D}_{d,m}(n)$ difficult to ascertain. 
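The criterion of Hermite is straightforward to implement: the power sums $S_k$ follow from the coefficients via Newton's identities, and $\Delta_m$ is then an $m \times m$ determinant, computable exactly in rational arithmetic. A small sketch (helper names are illustrative, not from the paper):

```python
from fractions import Fraction

def power_sums(coeffs, kmax):
    """S_0..S_kmax for the roots of sum_i coeffs[i] X^i, via Newton's identities."""
    d = len(coeffs) - 1
    c = [Fraction(a) / Fraction(coeffs[-1]) for a in coeffs[:-1]]  # monic form
    S = [Fraction(d)]
    for k in range(1, kmax + 1):
        s = Fraction(0)
        for i in range(1, min(k, d) + 1):
            s += k * c[d - k] if i == k else c[d - i] * S[k - i]
        S.append(-s)
    return S

def det(M):
    """Fraction-exact determinant by Gaussian elimination."""
    M = [row[:] for row in M]
    n, sign, prod = len(M), 1, Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            sign = -sign
        prod *= M[col][col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for j in range(col, n):
                M[r][j] -= f * M[col][j]
    return sign * prod

def hankel(coeffs, m):
    """Delta_m of the polynomial with ascending coefficient list `coeffs`."""
    S = power_sums(coeffs, 2 * m - 2)
    return det([[S[i + j] for j in range(m)] for i in range(m)])
```

For $P(X)=(X-1)(X-2)(X-3)$, i.e. coefficients $(-6, 11, -6, 1)$, this gives $\Delta_2 = 6$ and $\Delta_3 = 4$, matching $\sum_{i<j}(\lambda_i-\lambda_j)^2$ and the product formula above; both are positive, so Hermite's criterion certifies hyperbolicity.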
However, the results in [@GORZ] determine the rate at which $\mathcal{D}_{d,m}(n)$ approaches $0$ and the coefficient of the leading term. More precisely, by the behavior of $\Delta_m$ under change of variable and the results of [@GORZ], we know that $$\lim_{n \rightarrow \infty} \frac{1}{\delta(n)^{m(m-1)}}\Delta_m\left(\frac{J_p^{d,n}(X)}{p(n)}\right) = \lim_{n\rightarrow \infty}\Delta_m\left(\frac{J_p^{d,n}(\delta(n) X - e^{-cw(n)/2})}{p(n)}\right) = \Delta_m(H_d(X)).$$ Equivalently in terms of $w=w(n)=1/\sqrt{c(n-1/24)}$ and $\mathcal{D}_{d,m}(n)$, we have $$\label{key} \lim_{w\rightarrow 0} \frac{1}{w^{\frac{3}{2}m(m-1)}}\mathcal{D}_{d,m}(n) = \left(\frac{c}{\sqrt{2}}\right)^{m(m-1)} \Delta_m(H_d(X)).$$ Because the Hermite polynomials have distinct, real roots, the term on the right is a positive constant. Our strategy is to expand $\mathcal{D}_{d,m}(n)$ in powers of $w$ around zero, up to $w^{\frac{3}{2}m(m-1)}$. Because the above limit exists, we are guaranteed that all lower powers of $w$ cancel, and the coefficient of the $w^{\frac{3}{2}m(m-1)}$ term is the specified positive multiple of $\Delta_m(H_d(X))$. We must then find explicit bounds for the remaining terms, which tend to zero. To do this, we need to study ratios of close partition numbers. In terms of $w$, the Hardy-Ramanujan asymptotic formula for the partition numbers [@HR] takes the form $$p(n) \sim F(w) := \frac{\pi^2}{6\sqrt{3}}(w^2 - w^3) e^{1/w}.$$ As observed in [@GORZ], $w(n+j) = \frac{w(n)}{\sqrt{1+cjw(n)^2}}$, so the function $$\label{R} R(j,w) := \frac{F\left(\frac{w}{\sqrt{1+cjw^2}}\right)}{F(w)}=\frac{e^{\frac{cjw}{1+\sqrt{1+cjw^{2}}}}(\sqrt{1+cjw^2} -w)}{(1-w)(1+cjw^2)^{3/2}}$$ closely approximates $p(n+j)/p(n)$. To bound the error of this approximation, we use Lehmer’s error bound for Rademacher’s convergent series for the partition function, in which $F(w)$ is the leading term. In what follows, $A_k(n)$ is a Kloosterman sum.
The only property we need is $|A_1(n)|=|A_2(n)|=1$, so we do not define it here, instead referring the reader to [@Leh]. Let $w=w(n) = 1/\sqrt{c(n-1/24)}$. For all $n \geq 1$, we have $$\label{rad} p(n) = \frac{\pi^2}{6\sqrt{3}} w^2 \sum_{k=1}^N \frac{A_k(n)}{\sqrt{k}} \left((1-w)e^{1/kw} + (1+w)e^{-1/kw}\right)+B(n,N),$$ where $$|B(n,N)|<\frac{\pi^2N^{-2/3}}{\sqrt{3}}\left( N^3w^3\sinh\left(\frac{1}{Nw}\right) + \frac{1}{6} - N^2w^2\right) < \frac{\pi^2N^{-2/3}}{\sqrt{3}}\left( N^3w^3\frac{e^{1/Nw}}{2} + \frac{1}{6}\right) .$$ In order to state precisely how well $R(j,w)$ approximates $p(n+j)/p(n)$, let $$L(w) := \frac{1+21w}{1-w} \cdot e^{-1/2w} + \frac{e^{-1/w}}{w^2-w^3}.$$ \[bound1\] For all $n \geq 1$, we have $$\left| \frac{p(n+j)}{p(n)} - R(j,w)\right| \leq R(j,w) \frac{2L(w)}{1-L(w)} \sim 2 e^{-1/2w}.$$ Let $E(w(n)) = p(n) - F(w(n))$. The function $F(w)$ appears in the $k=1$ term of (\[rad\]). Gathering the rest of that term, the $k=2$ term, and Lehmer’s bound on $|B(n,2)|$, we find $$\begin{aligned} |E(w)| &\leq \frac{\pi^2}{6\sqrt{3}}\left((w^2+w^3)e^{-1/2w} + (w^2 - w^3+12\cdot 2^{5/6} w^3)e^{1/2w} + 2^{-7/6}\right) \\ &\leq \frac{\pi^2}{6\sqrt{3}}\left((w^2+21w^3)e^{1/2w}+1\right),\end{aligned}$$ where in the last line we have used that $w \leq 1/\sqrt{c}$. Hence, $|E(w)/F(w)| \leq L(w)$. Noting that the function $L(w)$ is increasing in $w$ for $0 < w \leq 1/\sqrt{c}$, it follows that $$\begin{aligned} \left | \frac{p(n+j)}{p(n)} - \frac{F(w(n+j))}{F(w(n))} \right |&=\frac{F(w(n+j))}{F(w(n))} \left| \frac{1 + \frac{E(w(n+j))}{F(w(n+j))}}{1 + \frac{E(w)}{F(w)}} - 1 \right| \\ &=R(j,w)\left|\frac{\frac{E(w(n+j))}{F(w(n+j))}-\frac{E(w(n))}{F(w(n))}}{1 + \frac{E(w)}{F(w)}} \right | \leq R(j,w) \frac{2L(w)}{1-L(w)}. \qedhere\end{aligned}$$ To study the behavior of $p(n+j)/p(n)$ for large $n$, we want to study $R(j,w)$ near $w=0$. To this end, let $A_{s}(j,w)$ be the degree $s-1$ Taylor polynomial of $R(j, w)$.
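As a numerical sanity check on Lemma \[bound1\] (a sketch, not part of the proof: the exact pentagonal-number recurrence for $p(n)$ and the sample point $n=500$ are our choices), one can compare the true ratio $p(n+1)/p(n)$ with $R(1,w)$ and confirm that the discrepancy lies under the envelope $R \cdot 2L/(1-L)$:

```python
import math

C = 2 * math.pi ** 2 / 3  # the constant c = 2*pi^2/3

def partition_numbers(N):
    """p(0), ..., p(N) via Euler's pentagonal-number recurrence (exact)."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[n - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= n:
                total += sign * p[n - k * (3 * k + 1) // 2]
            k += 1
        p[n] = total
    return p

def w_of(n):
    return 1.0 / math.sqrt(C * (n - 1.0 / 24))

def R(j, w):
    """The closed-form approximation to p(n+j)/p(n) from equation (R)."""
    t = C * j
    return (math.exp(t * w / (1 + math.sqrt(1 + t * w * w)))
            * (math.sqrt(1 + t * w * w) - w)
            / ((1 - w) * (1 + t * w * w) ** 1.5))

def L(w):
    """The error envelope of Lemma (bound1)."""
    return ((1 + 21 * w) / (1 - w) * math.exp(-1 / (2 * w))
            + math.exp(-1 / w) / (w ** 2 - w ** 3))

p = partition_numbers(510)
n = 500
w = w_of(n)
ratio = p[n + 1] / p[n]
gap = abs(ratio - R(1, w))
```

At $n=500$ the `gap` is many orders of magnitude below unity and sits comfortably inside the bound $R(1,w)\cdot 2L(w)/(1-L(w))$.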
Applying Lemma \[bound1\] and Taylor’s theorem, we immediately obtain the following. \[bound\] Let $n \geq 1$ and suppose $w=1/\sqrt{c(n-1/24)} \in [0, {\varepsilon}]$ for some $0 < {\varepsilon}\leq 1/\sqrt{c}$. Then we have $$\frac{p(n+j)}{p(n)} = A_{s}(j,w) + E_{s}(j,w) w^s,$$ where $$|E_{s}(j,w)| \leq \frac{1}{s!} \cdot \sup_{x \in [0, {\varepsilon}]} \left |R^{(s)}(j,x) \right| + \sup_{x \in [0,{\varepsilon}]} \left|R(j,x) \frac{2L(x)}{x^s(1-L(x))} \right|.$$ Proof of Theorems \[lowdegrees\] and \[con\] ============================================ We now prove Theorem \[lowdegrees\] by bounding the error terms that accumulate from approximating $p(n+j)/p(n)$ by the Taylor polynomials $A_s(j,w)$ in the polynomial expression for $\mathcal{D}_{d,m}(n)$. This allows us to reduce to checking finitely many cases. Using the Newton-Girard identities to write the power sums of the roots in terms of the elementary symmetric functions, one can generate symbolic expressions for the polynomials $D_{d,m}(a_0, \ldots, a_d)$ in terms of $a_0, \ldots, a_d$. To obtain $\mathcal{D}_{d,m}(n)$, we substitute $${d \choose j}(A_{10}(j,w) + E_j w^{s})$$ for $a_j$ in these polynomials, introducing $E_j$ as a variable. This gives rise to a polynomial expression in $w$ whose coefficients are polynomials in $E_j$. It turns out that all coefficients of $w^i$ for $i < k= \frac{3}{2}m(m-1)$ vanish in this expression. In addition, dividing through by $w^{k}$ gives rise to an expression of the form $$\mathcal{D}_{d,m}(w) = c_0 + c_1 w + c_2(E_1,\ldots, E_d) w^2 + \ldots + c_{(2m-2)s-k}(E_1, \ldots, E_d)w^{(2m-2)s-k},$$ where $c_0$ and $c_1$ are positive constants.
We then use Mathematica to calculate the upper bound on $E_j=E_{10}(j,w)$ for $w \in [0, {\varepsilon}]$ given in Lemma \[bound\], where we choose $${\varepsilon}= 0.021,0.0163,0.0081 \qquad \text{ for $d=3,4,5$ respectively.}$$ From these, we can obtain a lower bound $-c_i' \leq c_i(E_1, \ldots, E_d)$ for each $i\geq 2$, giving rise to an expression of the form $$\mathcal{D}_{d,m}(w) \geq c_0 + c_1 w - c_2'w^2 - \ldots - c_{(2m-2)s-k}' w^{(2m-2)s-k}.$$ Moreover, we can arrange for each of the $c_i'$ above to be nonnegative so that the function on the right crosses zero at most once in the interval $[0, {\varepsilon}]$. For our chosen values of ${\varepsilon}$, the right-hand side evaluated at $w={\varepsilon}$ is positive, so $\mathcal{D}_{d,m}(w) > 0$ for all $1 \leq m \leq d$ and $w \leq {\varepsilon}$. Equivalently, $J_{p}^{d,n}(X)$ is hyperbolic for all $n \geq \frac{1}{c {\varepsilon}^2}+\frac{1}{24}$. Using the values of ${\varepsilon}$ listed above, this shows $J_p^{3,n}(X)$ is hyperbolic for all $n > 344$, $J_p^{4,n}(X)$ is hyperbolic for all $n > 572$ and $J_p^{5,n}$ is hyperbolic for all $n > 2316$. Checking the finite number of remaining possible counterexamples directly now proves the theorem. Annotated Sage and Mathematica code to implement the full procedure described above appears in the appendix. With our chosen parameters, the total run time of this procedure is about $15$ minutes. We note that by increasing the number of terms $s$ that we take in the Taylor expansion of $R(j, w)$, the number of cases one needs to check directly can be brought down. However, this increases total run time, as checking more particular cases directly is faster than carrying out the more complex symbolic manipulations.
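The finite verification can also be reproduced without Mathematica. The sketch below (plain Python with exact integer arithmetic; our code, using the standard fact that a real cubic is hyperbolic exactly when its discriminant is nonnegative) covers the remaining $d=3$ cases, and the same partition table lets one spot-check the inequality of Theorem \[con\] over its finite range:

```python
import math
from fractions import Fraction

def partition_numbers(N):
    """p(0), ..., p(N) by Euler's pentagonal-number recurrence (exact)."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[n - k * (3 * k - 1) // 2]
            if k * (3 * k + 1) // 2 <= n:
                total += sign * p[n - k * (3 * k + 1) // 2]
            k += 1
        p[n] = total
    return p

p = partition_numbers(905)

def disc3(n):
    """Discriminant of J_p^{3,n}(X) = p(n+3)X^3 + 3p(n+2)X^2 + 3p(n+1)X + p(n);
    the cubic has three real roots iff this is nonnegative."""
    a, b, c, d = p[n + 3], 3 * p[n + 2], 3 * p[n + 1], p[n]
    return (18 * a * b * c * d - 4 * b ** 3 * d + b ** 2 * c ** 2
            - 4 * a * c ** 3 - 27 * a ** 2 * d ** 2)

def chen_holds(n):
    """The inequality (chen) at n, with u_n exact and a floating-point pi term."""
    u0 = Fraction(p[n + 1] * p[n - 1], p[n] ** 2)
    u1 = Fraction(p[n + 2] * p[n], p[n + 1] ** 2)
    lhs = float(4 * (1 - u0) * (1 - u1))
    rhs = (1 + math.pi / (math.sqrt(24) * n ** 1.5)) * float((1 - u0 * u1) ** 2)
    return lhs < rhs
```

Running `all(disc3(n) >= 0 for n in range(94, 345))` confirms the checked window for $d=3$ (the discriminant is negative for various smaller $n$), while `all(chen_holds(n) for n in range(2, 901))` covers the finite range arising in the proof of Theorem \[con\].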
For $d \geq 6$ one would need to keep more than $s=10$ terms in order to see the cancellation of lower order terms in $w$ take place. The main obstruction to applying this method in higher degrees is tracking the increasing number of error terms in the increasingly complex symbolic expressions for $\mathcal{D}_{d,m}(n)$. An implementation for $d=6$ with $s = 16$ did not finish within $36$ hours when run on a laptop. Taylor expanding $R(j,w)$ and symbolically keeping track of errors can be used to prove other polynomial inequalities involving ratios of close partition numbers. We now prove Theorem \[con\] using this idea. Setting $a_i = p(n+i)/p(n)$, we can rewrite (\[chen\]) as $$0 < \left(1+\frac{\pi^4}{9}w^3\right)(a_1^2-a_{-1}a_{1}a_{2})^2 - 4a_1^2(1-a_{-1}a_{1})(a_1^2-a_2).$$ We follow the same procedure and notation as in the proof of Theorem \[lowdegrees\], taking $s = 6$ and ${\varepsilon}= 0.013$. Substituting $a_i = A_{6}(i,w) + E_i w^{6}$ into the right-hand side above gives rise to a polynomial expression in $w$ with coefficients that are polynomials in the $E_i$, where the first term is a positive constant times $w^{10}$. We then minimize all the coefficients as before, using the bounds on $|E_i|$ from Lemma \[bound\]. This leaves us with an expression of the form $$w^{10}\left(\frac{25}{729} \pi^{12} - x(w)\right) \leq \left(1+\frac{\pi^4}{9}w^3\right)(a_1^2-a_{-1}a_{1}a_{2})^2 - 4a_1^2(1-a_{-1}a_{1})(a_1^2-a_2),$$ where $x(w)$ is a strictly increasing polynomial in $w$. Evaluating the left-hand side at $w={\varepsilon}$ yields a positive number, so the right-hand side is positive for all $w \in [0, {\varepsilon}]$. Equivalently, Theorem \[con\] holds for all $n > 900$. Checking all $n \leq 900$ directly completes the proof. Bounds for general $d$ ====================== The polynomial $\mathcal{D}_{d,m}(n)$ we wish to study is homogeneous of degree $2m-2$ in the coefficients of $J^{d,n}_p(X)/p(n)$ and homogeneous of degree $m(m-1)$ in its roots.
That is, it has the form $$\label{mon} \mathcal{D}_{d,m}(n) = \sum_{i_1 + \ldots + i_{2m-2}=m(m-1)} A_{i_1, \ldots, i_{2m-2}} \cdot \prod_{k=1}^{2m-2} {d \choose i_k} \frac{p(n+d-i_k)}{p(n)},$$ where the $A_{i_1, \ldots, i_{2m-2}}$ are constants. To bound errors when we expand in terms of $w$, we find bounds on the derivatives $R^{(s)}(j, w)$ for $w$ in the interval $[0, {\varepsilon}]$, where ${\varepsilon}:=(3d)^{-12d}(50d)^{-\frac{3}{2}d^2}$, corresponding to our eventual bound on $N(d)$. For convenience, let $t = t(j) :=cj$. \[Deriv\] Assume that $w \in [0, {\varepsilon}]$ with ${\varepsilon}$ as above. Then $$\left | R^{(m)}(j,w) \right| \leq m! \binom{m+3}{3} e^{g({\varepsilon})} (4 e^{2t{\varepsilon}} t)^{m},$$ where $g({\varepsilon}) = \frac{t{\varepsilon}}{1 + \sqrt{1+t{\varepsilon}^2}}$. The idea of the proof is to use the product rule to split up $R(j,w)$ into four more manageable parts and use Faà di Bruno’s formula for iterated applications of the chain rule to evaluate each part as needed. This formula says that for differentiable functions $f(x)$ and $g(x)$, we have $$\label{Bruno} \frac{d^{n}}{dx^{n}} f \left(g(x) \right) = \!\!\!\!\! \sum_{m_{1} + 2 \cdot m_{2} + \cdots + n \cdot m_{n} = n} \frac{n!}{m_{1}! \cdots m_{n}!} f^{(m_{1} + m_{2} + \cdots + m_{n})} \left(g(x) \right) \prod_{j=1}^{n} \left( \frac{g^{(j)}(x)}{j!} \right)^{m_{j}}.$$ Let $$\begin{aligned} A &= A(t,w) := e^{\frac{tw}{1 + \sqrt{1+tw^2}}} & B &= B(t,w) := \sqrt{1+tw^2} - w, \\ C &= C(t,w) := \frac{1}{1-w} & D &= D(t,w) := \frac{1}{(1+tw^2)^{3/2}},\end{aligned}$$ so that $$\label{Rderiv} R^{(m)}(j,w) = \!\!\!\! \sum_{m_{1} + m_{2} + m_{3} + m_{4} = m} \frac{m!}{m_{1}! \cdots m_{4}!} \left(\frac{d^{m_{1}}A}{dw^{m_{1}}} \right) \cdot \left(\frac{d^{m_{2}} B}{dw^{m_{2}}} \right) \cdot \left(\frac{d^{m_{3}} C}{dw^{m_{3}}} \right) \cdot \left(\frac{d^{m_{4}} D}{dw^{m_{4}}} \right).$$ We will focus on $A$ first. Let $f(w) = e^w$ and $g(w) = \frac{tw}{1 + \sqrt{1+tw^2}}$. 
By (\[Bruno\]), we have $$\label{dA} \frac{d^nA}{dw^n}=\frac{d^n}{dw^n} f(g(w)) = \sum_{m_{1} + 2\cdot m_{2} + \cdots + n \cdot m_{n} =n} \frac{n!}{m_{1}! \cdots m_{n}!} e^{g(w)} \prod_{i=1}^{n} \left( \frac{g^{(i)}(w)}{i!} \right)^{m_{i}}.$$ By the product rule, it is easy to see that $$\label{g} g^{(i)}(w) = tw \left(\frac{d}{dw} \right)^{i} \frac{1}{1+\sqrt{1+tw^2}} + it \left(\frac{d}{dw} \right)^{i-1} \frac{1}{1 + \sqrt{1+tw^2}}.$$ Next, let $g_{*}(w) := \frac{1}{1+\sqrt{1+tw^2}}$ and let $\alpha(k) := \left(\frac{d}{dw} \right)^{k} \sqrt{1+tw^2}$. We use (\[Bruno\]) again to show $$\label{g*} g_{*}^{(i)}(w) = \sum_{r_{1} + \cdots + i \cdot r_{i} =i} \frac{i!}{r_{1}!\cdots r_{i}!} \frac{(-1)^{r_{1} + \cdots + r_{i}} (r_{1} + \cdots + r_{i})!}{(1+ \sqrt{1+tw^2})^{r_{1} + \cdots + r_{i} +1}} \prod_{k=1}^{i} \left( \frac{\alpha(k)}{k!} \right)^{r_{k}}.$$ Using (\[Bruno\]) once more we have $$\begin{aligned} \alpha(k) &= \sum_{s_{1} + 2 s_{2} = k} \frac{k!}{s_{1}! s_{2}!} \binom{\frac{1}{2}}{s_{1} + s_{2}} \frac{(2tw)^{s_{1}} t^{s_{2}}}{(1+tw^2)^{s_{1} + s_{2} - \frac{1}{2}}} \leq k! e^{2tw} t^{k}.\end{aligned}$$ We can plug this back into (\[g\*\]) to find that $$g_*^{(i)}(w) \leq i! (e^{2tw} t)^i \sum_{r_{1} + \cdots + i \cdot r_{i} =i}\frac{(r_{1} + \cdots + r_{i})!}{r_{1}! \cdots r_{i}!} \leq i!(2e^{2tw}t)^{i},$$ where we used the fact that the sum is counting the number of ordered partitions of $i$. Next, we plug this into (\[g\]) and use the fact that $tw \leq 1$ to find $\left| g^{(i)}(w) \right| \leq i! \cdot 2(2 e^{2tw} t)^{i}$. Finally, we are able to plug this into (\[dA\]) to find that $$\label{A} \left | \frac{d^nA}{dw^n} \right | \leq n! e^{g(w)} (2 e^{2tw} t)^{n} \cdot \!\!\!\!\sum_{m_{1} + \cdots + n \cdot m_{n} =n} \frac{2^{m_{1} + \cdots + m_{n}}}{m_{1}! \cdots m_{n}!} \leq n! e^{g(w)} (4 e^{2tw} t)^{n}.$$ Next, it is easy to show that $$\label{B} \left | \frac{d^nB}{dw^n} \right| \leq \left| \alpha(n) \right| \leq n!
(e^{2tw} t)^{n},$$ and $$\label{C} \left|\frac{d^nC}{dw^n}\right| = \frac{n!}{(1-w)^{n+1}} \leq n! (e^{2tw} t)^{n}.$$ Lastly, we have $$\label{D} \left|\frac{d^nD}{dw^n}\right| \leq \sum_{r_{1} + \cdots + n \cdot r_{n} =n} \frac{n!}{r_{1}! \cdots r_{n}!} \frac{ (\frac{3}{2})_{r_{1} + \cdots + r_{n}}}{(1+tw^2)^{\frac{3}{2}+ r_{1} + \cdots + r_{n}}} \prod_{k=1}^{n} \left( \frac{|\alpha(k)|}{k!} \right)^{r_{k}} \leq n! (2 e^{2tw} t)^{n},$$ where $(x)_{n} := x(x+1) \cdots (x+n-1)$ is the rising factorial. Finally, we substitute the bounds in equations (\[A\]), (\[B\]), (\[C\]), and (\[D\]) back into (\[Rderiv\]) and use the fact that the sum over $m_1 + \ldots + m_4 = m$ contains $\binom{m+3}{3}$ terms. Given some $\underline{i} = (i_1, \ldots, i_{2m-2})$ with $i_1+\ldots + i_{2m-2} = m(m-1)$, let $T_{d,m}(\underline{i}; w)$ be the degree $\frac{3}{2}m(m-1)$ Taylor polynomial of $\prod_{k=1}^{2m-2} R(d-i_k, w)$. \[TE\] Suppose $w \in [0,{\varepsilon}]$. Then $$\label{tay} \prod_{k=1}^{2m-2} \frac{p(n+d-i_k)}{p(n)} = T_{d,m}(\underline{i};w) + E_{d,m}(\underline{i};w)w^{\frac{3}{2}m(m-1)+1}$$ where $$|E_{d,m}(\underline{i};w)| \leq e^2(3d)^{10d-10}(4cd)^{\frac{3}{2}d^2} + 8m \cdot 6^{2m} \leq 2e^2(3d)^{10d-10}(4cd)^{\frac{3}{2}d^2}.$$ By Lemma \[bound1\], we can write $$\prod_{k=1}^{2m-2} \frac{p(n+d-i_k)}{p(n)} = \prod_{k=1}^{2m-2} R(d-i_k,w)(1 + U_k(w)) = \prod_{k=1}^{2m-2}R(d-i_k,w) + U(w),$$ where $$\begin{aligned} |U(w)| &\leq \prod_{k=1}^{2m-2}R(d-i_k,w) \left(\left(1 + \frac{2L(w)}{1-L(w)}\right)^{2m-2} - 1\right) \\ &\leq 2^{2m-2} \cdot (2m-2)\cdot 3^{2m-2} \cdot \frac{2L(w)}{1-L(w)} \leq 8m \cdot 6^{2m} \cdot e^{-1/2w}.\end{aligned}$$ Let $s = \frac{3}{2}m(m-1)+1$.
Note also that we can easily bound $$\frac{e^{-1/2w}}{w^s} \leq \frac{e^{-1/2{\varepsilon}}}{{\varepsilon}^s} \leq \exp\left(\frac{3}{2}d^2 \left(2d \log(3d) + \frac{3}{2}d^2\log(50d)\right) - \frac{1}{2}(3d)^{12d}(50d)^{\frac{3}{2}d^2}\right) < 1.$$ Meanwhile, from Lemma \[Deriv\] and the product rule, we know that $$\begin{aligned} &\frac{1}{s!} \left | \frac{d^s}{dw^s} \prod_{k=1}^{2m-2}R(d-i_{k},w) \right | \leq e^{(2m-2)g({\varepsilon})}(4 e^{2cd{\varepsilon}} cd)^{s} \\ &\qquad\qquad \qquad\qquad\qquad\qquad\qquad \times \!\!\!\!\!\sum_{n_{1} + \cdots + n_{2m-2} = \frac{3}{2}m(m-1) +1} \binom{n_{1} +3}{3} \cdots \binom{n_{2m-2}+3}{3}.\end{aligned}$$ The largest term in the sum on the right-hand side occurs if each $n_{i}$ is equal, which is in turn bounded by replacing each $n_i$ with $m \geq \frac{\frac{3}{2}m(m-1) +1}{2m-2}$. Counting the number of terms, we see that the sum is bounded above by $$\begin{aligned} \binom{\frac{3}{2}m(m-1)+ 2m-2}{2m-3} \cdot \binom{m+3}{3}^{2m-2} \leq (2m^2)^{2m-2} \cdot \left(\frac{3}{2} m^3\right)^{2m-2} = (3m)^{10m-10}. \end{aligned}$$ This shows that $$\begin{aligned} \left|\prod_{k=1}^{2m-2}R(d-i_k,w) - T_{d,m}(\underline{i};w) \right| &\leq e^{(2m-2)g({\varepsilon})}(4e^{2cd{\varepsilon}}cd)^s(3m)^{10m-10} \cdot w^s \\ &\leq e^2(3d)^{10d-10}(4cd)^{\frac{3}{2}d^2} \cdot w^s. \qedhere\end{aligned}$$ In order to finish bounding the monomials in equation (\[mon\]) we need the following result. We include the extra factor out front because of how it enters in equation (\[key\]). \[binomial\] Suppose $0 \leq m \leq d$ and $i_1 + \ldots + i_{2m-2}= m(m-1)$ for positive integers $i_k$. Then we have $$\left | \left( \frac{\sqrt{2}}{c} \right)^{m(m-1)} \prod_{k=1}^{2m-2} \binom{d}{i_{k}} \right | \leq \left( e^{\frac{4e}{c^{2}}} \right)^{d^{2}}.$$ The product $\prod_{k=1}^{2m-2} \binom{d}{i_{k}}$ is maximized when all $i_{k}$ are equal (i.e. $i_{k} = \frac{m}{2}$).
Using standard bounds on binomial coefficients, we therefore have $\prod_{k=1}^{2m-2} \binom{d}{i_{k}} \leq \left( \frac{2ed}{m} \right)^{m(m-1)}$. For $0 \leq m \leq d$, the function $\left( \frac{2 \sqrt{2} ed}{cm} \right)^{m^2}$ achieves its maximum at $m=\frac{2 \sqrt{2e} d}{c}$. Thus $$\left | \left( \frac{\sqrt{2}}{c} \right)^{m(m-1)} \prod_{k=1}^{2m-2} \binom{d}{i_{k}} \right | \leq \left | \left( \frac{2 \sqrt{2}ed}{cm} \right)^{m^2} \right | \leq \left( e^{\frac{4e}{c^{2}}} \right)^{d^2}. \qedhere$$ We now have bounds on the errors of our approximations of each monomial in (\[mon\]). We also must bound the number of such terms that appear in this equation for $\mathcal{D}_{d,m}(n)$. \[monomials\] Suppose $n > (3d)^{24d}(50d)^{3d^2}$ and let $A_{i_1, \ldots, i_{2m-2}}$ be as in (\[mon\]). Then $$\sum_{i_1, \ldots, i_{2m-2}} |A_{i_1, \ldots,i_{2m-2}}| \leq m! (m-1)^{m} 2^{m^2-2} \leq d^{2d} \cdot 2^{d^{2}}.$$ By the Newton-Girard identities, the power sums $S_{k}$ in the matrix in (\[hmat\]) can be written as a sum of at most $$k \sum_{r_{1} + \cdots + k \cdot r_{k} = k} \frac{(r_{1} + \cdots + r_{k} -1)!}{r_{1}! \cdots r_{k}!} \leq k 2^{k-1}$$ monomials in the coefficients of our polynomial. The determinant of the matrix in (\[hmat\]) is made up of a sum of at most $m!$ monomials of the form $$\prod_{\ell=1}^{m} S_{i_{\ell}} \qquad \text{where $i_{1} + \cdots + i_{m} = m(m-1)$.}$$ Plugging in the elementary symmetric functions for each $S_{i_\ell}$ in this product and expanding will express each of these “$S$-monomials” as a sum of at most $$\prod_{\ell=1}^{m} i_{\ell} 2^{i_{\ell} -1} \leq (m-1)^{m} 2^{m(m-2)}$$ monomials in the coefficients. To obtain $\mathcal{D}_{d,m}(n)$ from this, we must multiply by $(\frac{p(n+d)}{p(n)})^{2m-2}$. Since $n$ is so large, we easily have $p(n+d)/p(n) \leq 2$, for example by using Lemma \[bound1\] with $s=1$. Multiplying together the factors discussed above gives the result.
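The intermediate estimate in the proof of Lemma \[binomial\], $\prod_k \binom{d}{i_k} \leq (2ed/m)^{m(m-1)}$, can be brute-forced for small $d$ (a quick independent check, not part of the argument; the enumeration below is ours):

```python
import math
from itertools import product

def check_binomial_bound(d):
    """For each 2 <= m <= d, enumerate all compositions i_1 + ... + i_{2m-2}
    = m(m-1) with 1 <= i_k <= d and verify that the product of binomial
    coefficients never exceeds (2ed/m)^{m(m-1)}."""
    for m in range(2, d + 1):
        target = m * (m - 1)
        bound = (2 * math.e * d / m) ** target
        for comp in product(range(1, d + 1), repeat=2 * m - 2):
            if sum(comp) != target:
                continue
            prod = 1
            for i in comp:
                prod *= math.comb(d, i)
            assert prod <= bound, (d, m, comp, prod, bound)
    return True
```

`check_binomial_bound(5)` exhausts every composition relevant to $d \leq 5$; the largest products occur at the balanced compositions, consistent with the proof's claim.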
The last ingredient we need to prove Theorem \[general\] is a lower bound on the Hankel determinants of Hermite polynomials. \[discriminant\] For each $m \leq d$, we have $ \Delta_{m}(H_{d}(X)) \geq 1$. We know $\Delta_{m}(H_{d}(X)) = \sum_{i_{1}< \cdots < i_{m}} \prod_{a<b} ( \lambda_{i_{a}} - \lambda_{i_{b}})^{2}$ so by the inequality of the arithmetic and geometric mean $$\begin{aligned} \Delta_{m}(H_{d}(X)) &\geq \binom{d}{m} \prod_{i_{1}< \cdots < i_{m}} \left( \prod_{a<b} ( \lambda_{i_{a}} - \lambda_{i_{b}})^{2} \right)^{\frac{1}{\binom{d}{m}}} = \binom{d}{m} \left( \prod_{j<k} ( \lambda_{j} - \lambda_{k})^{2 \binom{d-2}{m-2}} \right)^{\frac{1}{ \binom{d}{m}}} \\ &= \binom{d}{m} \Delta_{d}(H_{d}(X))^{\frac{m(m-1)}{d(d-1)}}. \end{aligned}$$ By Theorem 6.71 of [@Szego], and the fact that $a_{d}(H_{d}(X)) = 2^{d}$, we have $$\Delta_{d}(H_{d}(X)) = \frac{\mathrm{Disc}(H_d(X))}{2^{2d(d-1)}} = 2^{-\frac{d(d-1)}{2}} \prod_{\nu=1}^{d} \nu^{\nu} \geq 1,$$ so the result follows. Proving Theorem \[general\] is now just a matter of collecting and bounding all of the higher order terms from expanding $\mathcal{D}_{d,m}(n)$ in terms of $w$. Suppose $n > (3d)^{24d}(50d)^{3d^2}$ so that $w(n) \in [0, {\varepsilon}]$. 
By (\[mon\]) and (\[tay\]), we have $$\begin{aligned} \frac{\mathcal{D}_{d,m}(n)}{w^{\frac{3}{2}m(m-1)}} &=\sum_{i_1 + \ldots + i_{2m-2}=m(m-1)} \frac{A_{i_1, \ldots, i_{2m-2}}}{w^{\frac{3}{2}m(m-1)}} \cdot \prod_{k=1}^{2m-2}{d \choose i_k}\left(T_{d,m}(\underline{i};w)+E_{d,m}(\underline{i};w)w^{\frac{3}{2}m(m-1)+1}\right) \\ &= \left(\frac{c}{\sqrt{2}}\right)^{m(m-1)}\Delta_m(H_d(X)) + w \cdot \mathcal{E}_{d,m}(w),\end{aligned}$$ where by Lemmas \[monomials\], \[binomial\], and \[TE\], $$\begin{aligned} \left(\frac{\sqrt{2}}{c}\right)^{m(m-1)} \cdot |\mathcal{E}_{d,m}(w)| \cdot w &\leq d^{2d} \cdot 2^{d^2} \cdot \left(e^{\frac{4e}{c^2}}\right)^{d^2} \cdot 2e^2 (3d)^{10d-10}(4cd)^{\frac{3}{2}d^2} \cdot w \\ &< (3d)^{12d}(50d)^{\frac{3}{2}d^2} \cdot w \leq 1.\end{aligned}$$ Since $\Delta_m(H_d(X)) \geq 1$, it follows that $\mathcal{D}_{d,m}(n) > 0$ and therefore $J_{p}^{d,n}(X)$ is hyperbolic.

Appendix {#appendix .unnumbered}
========

The Sage and Mathematica code below implements the procedure described in the proof of Theorem \[lowdegrees\].

Sage code

``` {.python language="Python"}
epsilon_list=[0,0,0.0295,0.021,0.0163,0.0081,0.001] #list of our epsilon choices
error_list=[0,0,
    [0,12719.9+1.59552*10^8,328255+1.7476*10^8],
    [0,10559.2+4.30607*10^6,328255+4.60022*10^6,3.77919*10^6+4.91402*10^6],
    [0,9026.37+51727.4,328255+54478.9,3.77919*10^6+57374.2,1.75707*10^7+60420.8],
    [0,5893.44+1.54878*10^-6,328255+1.58991*10^-6,3.77919*10^6+1.63212*10^-6,
     1.75708*10^7+1.67544*10^-6,5.37043*10^7+1.71991*10^-6]] #from Mathematica and Lemma 2.3

#build symbolic expressions for Hankel determinants in terms of power sums s_i
S.<s0,s1,s2,s3,s4,s5,s6,s7,s8>=PolynomialRing(QQ)
ss=[s0,s1,s2,s3,s4,s5,s6,s7,s8]
Matrices=[matrix([ [ss[k] for k in [j..j+i-1]] for j in [0..i-1] ]) for i in [0..5] ]
MM=[M.determinant() for M in Matrices] #Hankel determinant in terms of S_i

AA.<a0,a1,a2,a3,a4,a5>=PolynomialRing(QQ) #the coefficients aj of a polynomial
aa=[a0,a1,a2,a3,a4,a5]
var('w,p,j') #p=pi
c=2*p^2/3
s=10 #point of bounding errors -- see Remark following the proof of Theorem 1.1

#define the function R(j,w) that approximates p(n+j)/p(n)
R=-exp(j*c*w/(1 + sqrt(1 + j*c*w^2)))*(sqrt(1 + j*c*w^2) - w)/((w - 1)*(1 + j*c*w^2)^(3/2))
A=R.series(w,s).truncate() #degree s-1 Taylor polynomial

T.<E1,E2,E3,E4,E5,w,p,j>=PolynomialRing(QQ)
EE=[0,E1,E2,E3,E4,E5]
A=T(A) #put A in the polynomial ring

def collect_errors(c,err): #minimizes c, given list of bounds on |E_i|
    M=c.monomials()
    C=[a.n() for a in c.coefficients()]
    l=len(C)
    to_sub= dict((EE[i],err[i]) for i in [1..len(err)-1])
    new=[]
    for i in [0..l-1]:
        if M[i].degree(E1)==M[i].degree(E2)==M[i].degree(E3)==M[i].degree(E4)==M[i].degree(E5)==0:
            #a monomial with no error terms will stay the same
            new.append(C[i]*M[i].subs(p=RR(pi)).n())
        else:
            new.append(-abs(C[i]*M[i].subs(to_sub).subs(p=RR(pi)).n()))
    return min(0,sum(new))

for d in [2,3,4,5]:
    epsilon=epsilon_list[d]
    elem=[(-1)^i*aa[d-i]/aa[d] for i in [0..d]] #elem sym functs in roots of sum(a_iX^i)
    for i in [d+1..2*d-2]:
        elem.append(0)
    power_sums=[d] #list of power sums
    for k in [1..2*d-2]: #builds power sums recursively using Newton-Girard formulae
        power_sums.append((-1)^(k-1)*k*elem[k]+sum([(-1)^(k-1+i)*elem[k-i]*power_sums[i] for i in [1..k-1]]))
    hankel_list=[0,0] #polynomial expression for Hankel det in terms of coefficients aj
    for m in [2..d]:
        to_sub = dict( (ss[i],power_sums[i]) for i in [0..2*m-2] )
        D=MM[m].subs(to_sub)*aa[d]^(2*m-2)
        D=AA(D) #put D back in polynomial ring
        hankel_list.append(D)
    err=error_list[d]
    to_sub = dict((aa[i],binomial(d,i)*(A.subs(j=i)+EE[i]*w^s)) for i in [0..d])
    Delta_is_positive=[]
    for m in [2..d]:
        D=hankel_list[m] #D is D_{d,m}
        Delta=T(D.subs(to_sub)) #with A_s and symbolic errors plugged in
        k=3*m*(m-1)/2
        w=T(w)
        minimized_Delta = sum([Delta.coefficient({w:i}).subs(p=RR(pi)).n()*w^(i-k) for i in [0..k+1]]) \
            + sum([collect_errors(Delta.coefficient({w:i}),err).n()*w^(i-k) for i in [k+2..(2*m-2)*s] ])
        if minimized_Delta.subs(w=epsilon).n() > 0:
            Delta_is_positive.append(m)
        else:
            print d,m,'choose smaller epsilon'
    if len(Delta_is_positive)==d-1:
        print 'For d =', d, 'J^{n,d} is hyperbolic for all n > ', floor(1/(c.subs(p=RR(pi))*epsilon^2)+1/24)
    else:
        print 'choose smaller epsilon'
```

Mathematica code

``` {.mathematica language="Mathematica"}
c = 2/3*Pi^2;
R[j_, w_]:=-Exp[c*j*w/(1+Sqrt[1+c*j*w^2])](Sqrt[1+c*j*w^2]-w)/((w-1)(1+c*j*w^2)^(3/2))
L[w_]:=(1+21*w)/(1-w)*Exp[-1/(2*w)]+Exp[-1/w]/(w^2-w^3)
Do[Print[N[Maximize[{R[i, w]*L[w]/(w^10*(1 - L[w])),0<=w<=0.0295},w],30]],{i,1,2}]
Do[Print[N[Maximize[{R[i, w]*L[w]/(w^10*(1 - L[w])),0<=w<=0.021},w],30]],{i,1,3}]
Do[Print[N[Maximize[{R[i, w]*L[w]/(w^10*(1 - L[w])),0<=w<=0.0163},w],30]],{i,1,4}]
```
``` {.mathematica language="Mathematica"}
Do[Print[N[Maximize[{R[i, w]*L[w]/(w^10*(1 - L[w])),0<=w<=0.0081},w],30]],{i,1,5}]
Do[Print[N[Maximize[{Abs[D[R[i,w],{w,10}]]/Factorial[10],0<=w<=0.0295},w],30]],{i,1,2}]
Do[Print[N[Maximize[{Abs[D[R[i,w],{w,10}]]/Factorial[10],0<=w<=0.021},w],30]],{i,1,3}]
Do[Print[N[Maximize[{Abs[D[R[i,w],{w,10}]]/Factorial[10],0<=w<=0.0163},w],30]],{i,1,4}]
Do[Print[N[Maximize[{Abs[D[R[i,w],{w,10}]]/Factorial[10],0<=w<=0.0081},w],30]],{i,1,5}]
Do[If[CountRoots[PartitionsP[i+3]*x^3+3*PartitionsP[i+2]*x^2+3*PartitionsP[i+1]*x+PartitionsP[i],x]<3,Print[i]],{i,94,344}]
Do[If[CountRoots[PartitionsP[i+4]*x^4+4*PartitionsP[i+3]*x^3+6*PartitionsP[i+2]*x^2+4*PartitionsP[i+1]*x+PartitionsP[i],x]<4,Print[i]],{i,206,572}]
Do[If[CountRoots[PartitionsP[i+5]*x^5+5*PartitionsP[i+4]*x^4+10*PartitionsP[i+3]*x^3+10*PartitionsP[i+2]*x^2+5*PartitionsP[i+1]*x+PartitionsP[i],x]<5,Print[i]],{i,381,2105}]
```

[9]{}

W.Y.C. Chen. *The spt-function of Andrews*. arXiv:1707.04369, 2017.

W.Y.C. Chen, D. Jia, and L. Wang. *Higher order Turán inequalities for the partition function.* arXiv:1706.10245, 2017.

S. DeSalvo and I. Pak. *Log-concavity of the partition function.* The Ramanujan Journal, 38:61-73, 2015.

M. Griffin, K. Ono, L. Rolen, and D. Zagier. *Jensen polynomials for the Riemann zeta function and other sequences.* Preprint.

G.H. Hardy and S. Ramanujan. *Asymptotic formulae in combinatory analysis*. Proc. London Math. Soc. 17:75-115, 1918.

J.L.W.V. Jensen. *Recherches sur la théorie des équations*. Acta Math. 36:181-195, 1913.

D.H. Lehmer. *On the series for the partition function*. Trans. Amer. Math. Soc. 43:271-295, 1938.

J.-L. Nicolas. *Sur les entiers $N$ pour lesquels il y a beaucoup de groupes abéliens d'ordre $N$.* Annales de l'Institut Fourier, Volume 28, no. 4, 1-16, 1978.

N. Obrechkoff. *Zeros of polynomials.* Publ. Bulg. Acad. Sci., Sofia, 1963 (in Bulgarian), English translation (by I. Dimovski and P. Rusev), The Marin Drinov Acad. Publ. House, Sofia, 2003.

G.
Pólya, *Über die algebraisch-funktionentheoretischen Untersuchungen von J. L. W. V. Jensen*. Kgl. Danske Vid. Sel. Math.-Fys. Medd. 7:3-33, 1927.

G. Szegő. *Orthogonal polynomials.* Colloq. Publ. XXIII, Amer. Math. Soc., Providence, 1939.
--- abstract: 'We investigate the merging rates of compact binaries in galaxies, and the related detection rate of gravitational wave (GW) events with AdvLIGO/Virgo and with the Einstein Telescope. To this purpose, we rely on three basic ingredients: (i) the redshift-dependent galaxy statistics provided by the latest determination of the star formation rate functions from UV+far-IR/(sub)millimeter/radio data; (ii) star formation and chemical enrichment histories for individual galaxies, modeled on the basis of observations; (iii) compact remnant mass distribution and prescriptions for merging of compact binaries from stellar evolution simulations. We present results for the intrinsic birthrate of compact remnants, the merging rates of compact binaries, GW detection rates and GW counts, attempting to differentiate the outcomes among BH-BH, NS-NS, and BH-NS mergers, and to estimate their occurrence in disk and spheroidal host galaxies. We compare our approach with the one based on cosmic SFR density and cosmic metallicity, exploited by many literature studies; the merging rates from the two approaches are in agreement within the overall astrophysical uncertainties. We also investigate the effects of galaxy-scale strong gravitational lensing of GW in enhancing the rate of detectable events toward high-redshift. Finally, we discuss the contribution of undetected GW emission from compact binary mergers to the stochastic background.' author: - | L. Boco, A. Lapi,\ S. Goswami, F. Perrotta, C. Baccigalupi, L. Danese title: | Merging Rates of Compact Binaries in Galaxies:\ Perspectives for Gravitational Wave Detections --- Introduction {#sec|introduction} ============ The recent detections of several gravitational wave (GW) events by the LIGO/Virgo collaborations (Abbott et al. 
2016a,b,c; 2017a,b,c,d,e; 2019; also `https://www.ligo.org/`), and the many more expected with the advent of the upcoming advanced configurations and detectors like the Einstein Telescope (ET; see Sathyaprakash et al. 2012; Regimbau et al. 2012; also `http://www.et-gw.eu/`), will provide tremendous breakthroughs in astrophysics, cosmology and fundamental physics (e.g., Taylor et al. 2012; Barack et al. 2018). The GW events in the LIGO/Virgo operating frequency band are consistently interpreted as mergers of binary compact star remnants, e.g., neutron stars (NS) and/or black holes (BHs). On the one hand, the analysis of the individual GW signal waveforms can provide useful information about the properties and evolution of the progenitor binary systems (remnant masses, spins, orbital parameters; e.g., Weinstein 2012; Abbott et al. 2016a,b,c). On the other hand, the statistics of GW events can yield astrophysical constraints on stellar binary evolution (SN kicks, common envelope effects, mass transfers; e.g., Belczynski et al. 2016; Dvorkin et al. 2018; Mapelli & Giacobbo 2018), on the average properties of the host galaxies (chemical evolution, star formation histories, initial mass function; e.g., O’Shaughnessy et al. 2010; de Mink & Belczynski 2015; Vitale & Farr 2018), and even on cosmology at large (e.g., Taylor & Gair 2012; Nissanke et al. 2013; Liao et al. 2017; Fishbach et al. 2019). In the present paper we will focus on forecasting the GW detection rate from merging compact binaries as a function of redshift, in the perspective of the next AdvLIGO/Virgo observing runs and of the future ET[^1]. The issue is complex because it involves numerous astrophysical processes occurring on vastly different time and spatial scales: from stellar astrophysics, to galaxy formation, to GW physics.
A number of previous studies have approached the issue based on population-synthesis simulations, which follow stellar and binary evolution so as to provide estimates of the remnant masses and merging timescales (e.g., Dominik et al. 2013, 2015; de Mink & Belczynski 2015; Spera et al. 2015, 2019; Spera & Mapelli 2017; Giacobbo & Mapelli 2018). The compact binary merging rate has generally been derived by combining the above results with the cosmic star formation rate density and with a distribution of metallicity around the mean cosmic value, either inferred from observations (e.g., Belczynski et al. 2016; Lamberts et al. 2016; Cao et al. 2018; Elbert et al. 2018; Li et al. 2018) or derived from cosmological simulations (e.g., O’Shaughnessy et al. 2017; Mapelli et al. 2017; Lamberts et al. 2018; Mapelli & Giacobbo 2018). On the other hand, in the last decade a wealth of observations (e.g., UV+far-IR/sub-mm/radio luminosity functions and stellar/gas/dust mass functions, broadband spectral energy distribution, mass-metallicity relationships, size/kinematic evolution, etc.) have made it possible to estimate the statistics of different galaxy populations as a function of their main physical properties across cosmic time; in addition, these observations have shed light on the age-dependent star formation and chemical enrichment histories of individual galaxies. In this paper we will exploit these ingredients, in combination with the remnant mass distribution from single stellar evolution simulations (specifically, we rely on the `SEVN` code by Spera & Mapelli 2017 based on the delayed SN engine and including pair-instability and pair-instability pulsation SNe, hereafter (P)PSNe) to compute GW detection rates in the perspective of the next AdvLIGO/Virgo observing runs and of the future ET detector. We also provide a tentative separation among the signals expected from BH-BH, NS-NS, and BH-NS merger events in disk and spheroidal galaxies.
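To make the cosmic-SFR-based approach just described concrete, here is a toy numerical sketch (ours, for illustration only; it is not the galaxy-resolved method adopted in this paper). It convolves the Madau & Dickinson (2014) fit to the cosmic SFR density with a $P(t_d) \propto 1/t_d$ delay-time distribution, a common simplification; the minimum delay, redshift grids, and overall normalization are arbitrary assumptions:

```python
import math

H0 = 67.0 / 3.086e19        # Hubble constant for h = 0.67, in s^-1
OM, OL = 0.32, 0.68         # flat LCDM densities adopted in the text
GYR = 3.156e16              # seconds per Gyr

def sfr_density(z):
    """Madau & Dickinson (2014) cosmic SFR density fit, in Msun/yr/Mpc^3."""
    return 0.015 * (1 + z) ** 2.7 / (1 + ((1 + z) / 2.9) ** 5.6)

def cosmic_time(z, zmax=100.0, steps=2000):
    """Age of the universe at redshift z, in Gyr (trapezoidal integration)."""
    h = (zmax - z) / steps
    total = 0.0
    for i in range(steps + 1):
        zp = z + i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight / ((1 + zp) * H0 * math.sqrt(OM * (1 + zp) ** 3 + OL))
    return total * h / GYR

# coarse z <-> t lookup table out to z = 10 (grid resolution is arbitrary)
ZS = [0.02 * i for i in range(501)]
TS = [cosmic_time(z) for z in ZS]

def sfrd_at_time(t):
    """SFR density at cosmic time t (Gyr); zero before z = 10 in this toy."""
    for z, tz in zip(ZS, TS):
        if tz <= t:
            return sfr_density(z)
    return 0.0

def merger_rate_density(z, tmin=0.05, nsteps=200):
    """Toy merger-rate density (arbitrary units): the SFR density convolved
    with a normalized P(td) ~ 1/td delay-time distribution."""
    t = cosmic_time(z)
    if t - TS[-1] <= tmin:
        return 0.0
    la, lb = math.log(tmin), math.log(t - TS[-1])
    h = (lb - la) / nsteps
    total = 0.0
    for i in range(nsteps + 1):
        td = math.exp(la + i * h)  # log-spaced delays: d(ln td) absorbs 1/td
        weight = 0.5 if i in (0, nsteps) else 1.0
        total += weight * sfrd_at_time(t - td)
    return total * h / (lb - la)
```

With these choices the SFR density peaks near $z \approx 1.9$ and the delayed merger rate roughly tracks it toward lower redshift; resolving which galaxies (and metallicities) host the progenitors, as done in the following sections, is precisely what this shortcut leaves out.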
The approach based on galaxy SFR and metallicity histories pursued here can provide the joint probability distribution of chirp masses and host galaxy properties (SFR, stellar mass, metallicity, etc.) as a function of redshift (see also Belczynski et al. 2010a,b; O’Shaughnessy et al. 2010, 2017). We then predict how the detected GW event rates from high-redshift galaxies can be enhanced by strong galaxy-scale gravitational lensing. Finally, we investigate the contribution to the GW background expected from the incoherent superposition of undetected signals from compact binary mergers in galaxies. The paper is organized as follows: in Sect. \[sec|basics\] we introduce the basic ingredients of our computation, including redshift-dependent galaxy statistics (see Sect. \[sec|SFR\_func\]), star formation and chemical enrichment histories for individual galaxies (see Sect. \[sec|SFR\_hist\]), and compact remnant mass distribution from stellar evolution simulations (see Sect. \[sec|stellarevo\]); then we compute compact remnant birthrates in Sect. \[sec|birthrates\] and intrinsic merging rates in Sect. \[sec|mergerrates\]; in Sect. \[sec|GWdetection\] we calculate the GW event detection rates expected in the next AdvLIGO/Virgo observing runs and the future ET detector, and in Sect. \[sec|lensing\] we discuss how these rates are affected by galaxy-scale gravitational lensing of GW; in Sect. \[sec|GWback\] we investigate the GW background from undetected events; finally, in Sect. \[sec|summary\] we summarize our findings. Throughout this work, we adopt the standard flat $\Lambda$CDM cosmology (Planck Collaboration 2019) with rounded parameter values: matter density $\Omega_M = 0.32$, baryon density $\Omega_b = 0.05$, Hubble constant $H_0 = 100\,h$ km s$^{-1}$ Mpc$^{-1}$ with $h = 0.67$, and mass variance $\sigma_8 = 0.81$ on a scale of $8\, h^{-1}$ Mpc. In addition, we use the widely adopted Chabrier (2003, 2005; see also Mo et al. 
2010) initial mass function (IMF) with shape $\phi(\log m_\star)\propto \exp[-(\log m_\star-\log 0.2)^2/(2\times 0.55^2)]$ for $m_\star\la 1\, M_\odot$ and $\phi(\log m_\star)\propto m_\star^{-1.35}$ for $m_\star\ga 1\, M_\odot$, normalized as $\int_{0.08\, M_\odot}^{350\, M_\odot}{\rm d}m_\star\, m_\star\, \phi(m_\star)=1\, M_\odot$; the impact of the IMF choice on our results will be discussed in Sect. \[sec|GWdetection\]. Finally, a value $Z_\odot\approx 0.015$ for the solar metallicity (Caffau et al. 2011) is adopted. Basic ingredients {#sec|basics} ================= Our analysis is based on three main ingredients: (i) an observational determination of the SFR function at different redshifts; (ii) average star formation and chemical enrichment histories of individual galaxies; (iii) outcomes from single stellar evolution simulations specifying the remnant masses for a given zero-age main sequence star. We now briefly present and discuss these in turn. SFR functions and cosmic SFR density {#sec|SFR_func} ------------------------------------ The first ingredient is constituted by the SFR function ${\rm d}N/{\rm d}\log \psi\,{\rm d}V$, namely the number density of galaxies per comoving volume and per logarithmic bin of SFR $\psi$ at given cosmic time $t$ (corresponding to redshift $z$). The SFR of a galaxy is directly proportional to the intrinsic UV luminosity; however, the latter can be significantly absorbed by even a modest amount of dust and re-radiated mostly at far-IR/(sub)mm wavelengths (e.g., Kennicutt & Evans 2012). For galaxies with relatively low SFRs $\psi\la 30-50\, M_{\odot}$ yr$^{-1}$ dust attenuation is mild and the intrinsic SFR can be soundly estimated from UV data alone via standard corrections based on the UV slope (see Meurer 1999; Calzetti et al. 2000; Bouwens et al. 2015).
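The IMF normalization above can be rendered numerically. The following Python sketch is illustrative only (it is not part of the actual computation pipeline): it makes the two branches continuous at $1\, M_\odot$, normalizes to one solar mass of stars formed, and counts the compact-remnant progenitors above the $7\, M_\odot$ threshold quoted later in Sect. \[sec|birthrates\].

```python
import numpy as np
from scipy.integrate import quad

def chabrier_logphi(m):
    """Chabrier IMF phi(log m) per dex, unnormalized; continuous at m = 1 Msun."""
    if m <= 1.0:
        return np.exp(-(np.log10(m) - np.log10(0.2))**2 / (2 * 0.55**2))
    # amplitude matching the log-normal branch at m = 1 Msun
    a_hi = np.exp(-(0.0 - np.log10(0.2))**2 / (2 * 0.55**2))
    return a_hi * m**(-1.35)

# Normalize so that int_{0.08}^{350} dm m*phi(m) = 1 Msun, using
# phi(m) = phi(log m)/(m ln10), hence m*phi(m) = phi(log m)/ln10.
I, _ = quad(lambda m: chabrier_logphi(m) / np.log(10), 0.08, 350.0,
            points=[0.2, 1.0], limit=200)
A = 1.0 / I

# Number of compact-remnant progenitors (m >= 7 Msun) per Msun of stars formed.
n_prog, _ = quad(lambda m: A * chabrier_logphi(m) / (m * np.log(10)), 7.0, 350.0)
```

With this bottom-heavy shape, of order one NS/BH progenitor is formed per hundred solar masses of stars, which is why NS births dominate over BH births in the rates computed below.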
As a consequence, the SFR functions for SFRs $\psi\la 30-50\, M_{\odot}$ yr$^{-1}$ are rather well established by deep surveys in the rest-frame UV band, in some instances eased by gravitational lensing from foreground galaxy clusters, up to very high redshift $z\la 7-10$ (see Wyder et al. 2005; Oesch et al. 2010; van der Burg et al. 2010; Cucciati et al. 2012; Finkelstein et al. 2015; Alavi et al. 2016; Bouwens et al. 2016, 2017; Bhatawdekar et al. 2019; cf. open symbols in Fig. \[fig|SFRfunc\]). On the other hand, in galaxies with high SFRs $\ga 30-50\, M_{\odot}$ yr$^{-1}$ dust absorption is heavier, and UV slope-based corrections are widely dispersed and statistically unreliable (see Silva et al. 1998; Efstathiou et al. 2000; Coppin et al. 2015; Reddy et al. 2015; Fudamoto et al. 2017). In this regime far-IR/(sub)mm observations become crucial to obtain sound estimates of the SFR; radio data can also be helpful, by eliciting the free-free emission associated with the ongoing SFR (e.g., Murphy et al. 2012). In fact, over recent years far-IR/(sub)mm wide-area surveys (see Lapi et al. 2011; Gruppioni et al. 2013, 2015, 2019; Magnelli et al. 2013; cf. filled symbols in Fig. \[fig|SFRfunc\]) have been exploited to reconstruct, in combination with the deep UV data mentioned above, the SFR functions of galaxies for redshifts $z\la 3$ over the whole range of relevant SFR $\psi\sim 10^{-2}$ to a few $10^3\, M_\odot$ yr$^{-1}$. For redshifts $z\ga 3$ and large SFRs $\psi \ga 30-50\, M_{\odot}$ yr$^{-1}$ the shape of the SFR function is more uncertain, given the sensitivity limits of current wide-area far-IR surveys. However, relevant constraints have been obtained recently from deep radio surveys (Novak et al. 2017), from far-IR/(sub)mm stacking (see Rowan-Robinson et al. 2016; Dunlop et al. 2017) and super-deblending (see Liu et al.
2018) techniques, and from targeted far-IR/(sub)mm observations of significant yet not complete samples of starforming galaxies (e.g., Riechers et al. 2017; Marrone et al. 2018; Zavala et al. 2018) and quasar hosts (e.g., Venemans et al. 2017; Stacey et al. 2018). The resulting SFR functions at representative redshifts are illustrated in Fig. \[fig|SFRfunc\]. These can be smoothly rendered, over the redshift range $z\sim 0-8$ for SFR $\psi\sim 10^{-2}$ to a few $10^3\, M_\odot$ yr$^{-1}$, with a simple Schechter shape $${{\rm d}N\over {\rm d}\log\psi\,{\rm d}V}(\psi,t) = \mathcal{N}(z)\, \left[{\psi\over \psi_c(z)}\right]^{1-\alpha(z)}\,e^{-\psi/\psi_c(z)}~, \label{eq|SFRfunc}$$ in terms of three redshift-dependent parameters $\mathcal{N}(z)$, $\alpha(z)$, $\psi_c(z)$, as specified in Mancuso et al. (2016b; see their Table 1). In Mancuso et al. (2016a,b; 2017) and Lapi et al. (2017a,b) the SFR functions have been validated against independent datasets, including integrated galaxy number counts at relevant far-IR/(sub)mm/radio wavelengths, counts/redshift distributions of strongly gravitationally-lensed galaxies, and the main sequence of star-forming galaxies. An additional, direct test for the SFR functions, performed by Lapi et al. (2017b; see their Fig. 4), is the computation of the stellar mass function via the continuity equation, directly connecting the star formation to the building up of the stellar mass in galaxies, and the comparison with statistical observations at different redshifts for both quiescent and starforming objects (e.g., Davidzon et al. 2017).
At $z\ga 1$ the bright end of the SFR function turns out to be populated by heavily dust obscured, strongly starforming galaxies, which constitute the progenitors of local massive spheroids with masses $M_\star\ga$ a few $10^{10}\, M_\odot$; the faint end is instead mainly populated by mildly obscured starforming galaxies, that will end up in spheroid-like objects with stellar masses $M_\star\la 10^{10}\, M_\odot$. On the other hand, disk-dominated galaxies with stellar masses $M_\star\la$ several $10^{10}\, M_\odot$ are found to be well traced by the UV-inferred SFR function at $z\la 2$. From the SFR functions, the cosmic SFR density (per unit comoving volume) is straightforwardly computed as $$\rho_{\psi}(z) = \int{\rm d}\log \psi\, {{\rm d}N\over {\rm d}\log \psi\,{\rm d}V}\, \psi~. \label{eq|cosmicSFR}$$ The outcome is illustrated in Fig. \[fig|SFRcosm\] (black solid line) and compared with available multi-wavelength datasets; the literature estimates from dust-corrected UV observations by Madau & Dickinson (2014), from SNI$a$ searches at high redshift by Strolger et al. (2004), and from high-redshift long GRBs by Kistler et al. (2013) are also reported for reference. Notice that the cosmic SFR density constructed from the latest determination of the SFR functions (see Fig. \[fig|SFRfunc\]) is appreciably higher than previous estimates, and peaks toward slightly higher redshift; this can be traced back to a more complete sampling of the dusty starforming galaxy population for $z\ga 2$ thanks to the most recent wide-area far-IR/(sub)mm/radio surveys (see Gruppioni et al. 2013, 2019; Rowan-Robinson et al. 2016; Novak et al. 2017; Liu et al. 2018). Star-formation and metal-enrichment history of individual galaxies {#sec|SFR_hist} ------------------------------------------------------------------ The second ingredient of our analysis is constituted by the history of star formation and chemical enrichment in individual galaxies. 
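As a concrete illustration of Eqs. (\[eq|SFRfunc\]) and (\[eq|cosmicSFR\]), the sketch below integrates a Schechter-shaped SFR function numerically and checks the result against the closed form valid for $\alpha<2$ and wide integration limits; the parameter values are placeholders for illustration, not the actual redshift-dependent determinations of Mancuso et al. (2016b).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def sfr_function(psi, norm, alpha, psi_c):
    """Schechter-shaped SFR function of eq|SFRfunc: dN/dlog(psi)/dV."""
    x = psi / psi_c
    return norm * x**(1.0 - alpha) * np.exp(-x)

def rho_psi(norm, alpha, psi_c, lpsi_min=-4.0, lpsi_max=5.0):
    """Cosmic SFR density of eq|cosmicSFR: int dlog(psi) (dN/dlog(psi)/dV) psi."""
    f = lambda lpsi: sfr_function(10**lpsi, norm, alpha, psi_c) * 10**lpsi
    val, _ = quad(f, lpsi_min, lpsi_max, limit=200)
    return val

# Hypothetical parameter values; the actual N(z), alpha(z), psi_c(z) are
# tabulated in Mancuso et al. (2016b, Table 1).
norm, alpha, psi_c = 1e-3, 1.2, 100.0
rho_numeric = rho_psi(norm, alpha, psi_c)
# For alpha < 2 and wide limits the integral is analytic:
# rho = N * psi_c * Gamma(2 - alpha) / ln(10)
rho_analytic = norm * psi_c * gamma(2.0 - alpha) / np.log(10)
```

The agreement between the numerical and analytic values provides a quick sanity check on the integration limits adopted for the SFR grid.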
The quantities relevant for the present paper are the average behaviors of the SFR $\psi(\tau)$ and of the global metallicity $Z(\tau)$ as a function of the internal galactic age $\tau$ (i.e., the time since the beginning of significant star formation activity) for a galaxy observed at cosmological time $t$. As to the star-formation history, for high $z\ga 2$ starforming galaxies many SED-modeling studies (e.g., Papovich et al. 2011; Smit et al. 2012; Moustakas et al. 2013; Steinhardt et al. 2014; Citro et al. 2016; Cassará et al. 2016) suggest describing the star formation history with a truncated power-law shape $$\psi(\tau) \propto \tau^\kappa\, \Theta_{\rm H}(\tau_\psi-\tau)~, \label{eq|SFRhist}$$ where $\kappa\la 0.5$ controls the slow power-law rise, and $\Theta_{\rm H}(\cdot)$ is the Heaviside step function truncating the star formation after a duration $\tau_{\psi}$. The latter can be inferred from the galaxy main sequence (see Daddi et al. 2007; Rodighiero et al. 2011, 2015; Speagle et al. 2014, Dunlop et al. 2017), a well-known relationship linking the peak value of the SFR $\psi(\tau_\psi)$ to the relic stellar mass $M_\star(\tau_\psi)= \int_0^{\tau_\psi}{\rm d}\tau\, \psi(\tau)$; specifically, the redshift-dependent main sequence relation measured via multi-wavelength data by Speagle et al. (2014) is used to compute $\tau_\psi$. This yields a star formation duration of $\la 1$ Gyr for strongly starforming objects with $\psi\ga 10^2\, M_\odot$ yr$^{-1}$, which are the progenitors of massive spheroids with $M_\star\ga$ a few $10^{10}\, M_\odot$.
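For definiteness, the truncated power-law history of Eq. (\[eq|SFRhist\]) (star formation active up to $\tau_\psi$, then switched off) and the relic-mass integral $M_\star(\tau_\psi)$ can be sketched as follows; the peak SFR and duration are hypothetical example values, not fits to any specific galaxy.

```python
from scipy.integrate import quad

def sfh(tau, psi_peak, tau_psi, kappa=0.5):
    """psi(tau) = psi_peak*(tau/tau_psi)**kappa up to tau_psi, zero afterwards."""
    return psi_peak * (tau / tau_psi)**kappa if tau <= tau_psi else 0.0

def relic_mass(psi_peak, tau_psi, kappa=0.5):
    """M_star(tau_psi) = int_0^{tau_psi} dtau psi(tau) = psi_peak*tau_psi/(1+kappa)."""
    m, _ = quad(sfh, 0.0, tau_psi, args=(psi_peak, tau_psi, kappa))
    return m

# A strongly starforming progenitor: peak SFR of 100 Msun/yr over ~0.7 Gyr
# builds up the stellar mass of a massive-spheroid progenitor.
mstar = relic_mass(100.0, 0.7e9)
```

The analytic result $M_\star=\psi_{\rm peak}\,\tau_\psi/(1+\kappa)$ makes explicit why a $\la 1$ Gyr duration at $\psi\ga 10^2\, M_\odot$ yr$^{-1}$ lands in the few $\times 10^{10}\, M_\odot$ regime quoted in the text.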
Such a short timescale is also in line with local observations of the $\alpha-$enhancement in massive early-type galaxies; this represents an iron underabundance compared to $\alpha$ elements, which occurs because star formation is stopped, presumably by some form of energetic feedback (e.g., due to the central supermassive black hole), before type I$a$ SN explosions can pollute the interstellar medium with substantial iron amounts (e.g., Romano et al. 2002; Gallazzi et al. 2006; Thomas et al. 2005, 2010; Johansson et al. 2012). Conversely, in low-mass spheroidal galaxies with $M_\star\la 10^{10}\, M_\odot$ the star formation durations $\tau_\psi$ inferred from the main sequence are much longer, amounting to a few Gyr as also indicated by data on the ages of stellar populations and on chemical abundances (see review by Conroy 2013). Finally, in low redshift $z\la 2$ disk-dominated galaxies classic evidence indicates that on average the star formation rate declines exponentially $\psi(\tau)\propto e^{-\tau/\tau_\psi}$ over a long characteristic timescale $\tau_\psi\approx$ several Gyr (see Chiappini et al. 1997; Courteau et al. 2014; Pezzulli & Fraternali 2016; Grisoni et al. 2017). Lapi et al. (2017a, 2018) have shown that the above star formation histories can be exploited to connect, via the continuity equation, the SFR functions to the observed stellar mass functions at different redshifts (e.g., Davidzon et al. 2017), for both starforming and quiescent galaxies. We caveat that the aforementioned star formation histories for spheroids and disks are meant to represent the average statistical behavior of the respective galaxy population, and to describe, for each galaxy, the spatially and time-integrated behavior. This is clearly an approximation that may not be realistic in specific objects and/or around particular spatial locations.
As an example, galaxies featuring multiple recurrent bursts of star formation may be preferential hosts of double compact-object mergers, especially if short time delays between the birth and the coalescence of the compact binaries are favored (see Sect. \[sec|mergerrates\]). As another example, in the Milky Way local constraints from observations of the solar neighborhood (e.g., Cignoni et al. 2006) seem not to favor an exponentially declining SFR but rather to suggest a low-level constant value with an enhancement around $3$ Gyr ago, although these findings are still somewhat debated (e.g., Bovy et al. 2017). As to the chemical enrichment history of individual galaxies, we have exploited the standard code `che-evo` incorporated into `GRASIL` (Silva et al. 1998, 2011; Bressan et al. 2002; Panuzzo et al. 2003; Vega et al. 2005). For spheroidal galaxies, it reproduces the observed local relationship between stellar metallicity and stellar mass, and its weak evolution with redshift (see Arrigoni et al. 2010; Spolaor et al. 2010; Gallazzi et al. 2014). For disk galaxies at $z\la 2$, it reproduces the observed relationship between gas metallicity and stellar mass, including its appreciable redshift-dependence (see Andrews & Martini 2013; Zahid et al. 2014; de la Rosa et al. 2015; Onodera et al. 2016). In both cases, within an individual galaxy the metallicity behavior is closely approximated by $$Z(\tau)\simeq \left\{ \begin{aligned} &Z_{\rm sat}\,{\tau\over \Delta\tau_\psi}~~~~~~~~ &{\tau\over \tau_\psi}\leq \Delta\\ \\ &Z_{\rm sat} ~~~~~~~~ &{\tau\over \tau_\psi}\geq \Delta \end{aligned} \right. \label{eq|metalevolution}$$ i.e., it increases from $Z=0$ almost linearly with the galactic age, and then after a time $\tau=\Delta\, \tau_\psi$, saturates to the value $Z_{\rm sat}$. The dependence of $Z_{\rm sat}$ and $\Delta$ on SFR/stellar mass can be described with an expression inspired by analytic chemical evolution models (see Cai et al.
2013; Feldmann 2015); this yields $$\left\{ \begin{aligned} &Z_{\rm sat}\propto {s\, y_Z\, (1-\mathcal{R})\over s\,(1-\mathcal{R}+\epsilon_{\rm out})-1}\\ &\\ &\Delta\simeq {1\over 3\,(1-\mathcal{R}+\epsilon_{\rm out})}\\ \end{aligned} \right.~~~~~~~\epsilon_{\rm out}\approx 2\, \left(M_\star\over 10^{10}\, M_\odot\right)^{-0.25}~; \label{eq|chemidetail}$$ here $s\approx 3$ is the ratio between the dynamical timescale of the infalling gas and the star formation timescale, $\mathcal{R}\approx 0.44$ is the recycling stellar mass fraction, $y_Z\,(1-\mathcal{R})\approx 0.034$ is the metal yield (assuming the Romano et al. 2010 stellar yields), and $\epsilon_{\rm out}$ is the mass loading factor of galactic outflows from stellar winds and supernova explosions. In the above equation we have provided a fit for $\epsilon_{\rm out}$ as a function of the final stellar mass $M_\star$ from the results of the `che-evo` code; a similar expression concurrently describes the time-averaged outcome from hydrodynamical simulations of stellar feedback (e.g., Hopkins et al. 2012). As a result, typical values $Z_{\rm sat}\sim 0.3-1.5\, Z_{\odot}$ are obtained for galaxies with final stellar masses in the range $M_{\star}\sim 10^{9-11}\, M_\odot$, respectively (see, e.g., Chruslinska et al. 2019); the related quantity $\Delta\sim 0.1-0.3$ specifies how quickly the metallicity saturates to such values as a consequence of the interplay between cooling, dilution, and feedback processes. Note that several chemical evolution codes in the literature, which reproduce comparably well the observed chemical abundances in galaxies of different stellar masses, share a similar age-dependent metallicity behavior. In this paper we will exploit the above gas metallicity evolution of individual galaxies as a function of time and SFR/stellar mass to compute merging rates of compact remnants and related GW event detection rates.
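A minimal numerical rendition of Eqs. (\[eq|metalevolution\]) and (\[eq|chemidetail\]) follows. Note that Eq. (\[eq|chemidetail\]) only fixes $Z_{\rm sat}$ up to a proportionality constant, which is set to unity here purely for illustration; with that choice the sketch recovers roughly solar saturation values around $M_\star\sim 10^{10}\, M_\odot$, in line with the range quoted above.

```python
S, R, YZ = 3.0, 0.44, 0.034   # s, recycling fraction R, and metal yield y_Z*(1-R)
ZSUN = 0.015                   # solar metallicity adopted in the text

def eps_out(mstar):
    """Mass loading factor fit of eq|chemidetail: ~2 (M*/1e10 Msun)^-0.25."""
    return 2.0 * (mstar / 1e10)**(-0.25)

def z_sat(mstar):
    """Saturation metallicity; eq|chemidetail gives a proportionality,
    taken here as an equality purely for illustration."""
    return S * YZ / (S * (1.0 - R + eps_out(mstar)) - 1.0)

def delta(mstar):
    """Fractional duration of the metallicity rise (eq|chemidetail)."""
    return 1.0 / (3.0 * (1.0 - R + eps_out(mstar)))

def z_of_tau(tau, tau_psi, mstar):
    """Piecewise metallicity history of eq|metalevolution: linear rise, then saturation."""
    return min(z_sat(mstar) * tau / (delta(mstar) * tau_psi), z_sat(mstar))
```

Because $\epsilon_{\rm out}$ decreases with stellar mass, both $Z_{\rm sat}$ and the enrichment speed increase in more massive galaxies, which is the trend exploited below when computing remnant masses.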
The metallicity evolution enters into play since the mass distribution of the compact remnants depends on the chemical composition of the star-forming gas (see Sect. \[sec|stellarevo\]). In previous works an alternative, simpler approach has often been adopted, which involves the use of the mean cosmic metallicity (cf. Madau & Dickinson 2014) $$\langle Z(z)\rangle \approx {y_Z\,(1-\mathcal{R})\over \rho_b}\int_z{\rm dz'}\,\rho_{\psi}(z')\, \left|{{\rm d}t\over {\rm d}z'}\right|~, \label{eq|cosmicmetallicity}$$ where $\rho_b\approx 2.8\times 10^{11}\, \Omega_b\, h^2\, M_\odot$ Mpc$^{-3}$ is the background baryon density, and $\rho_\psi(z)$ is the cosmic SFR density. We report the result of this procedure as a thin solid line in the inset of Fig. \[fig|SFRcosm\]. The outcome turns out to be consistent with measurements of the IGM metallicity as inferred from Ly$\alpha$ forest absorption lines (e.g., Aguirre et al. 2008), while it falls short of the metallicity of damped Ly$\alpha$ absorption systems (e.g., Rafelski et al. 2012) and of the metal abundances in the central regions of galaxy clusters (e.g., Balestra et al. 2007). This is why in previous works on merging rates (e.g., Belczynski et al. 2016; Cao et al. 2018), a floor value of $0.5$ dex in $\log\langle Z(z)\rangle$ has been added to better fit such observational data (thick line in the inset of Fig. \[fig|SFRcosm\]; see also Vangioni et al. 2015); moreover, a log-normal distribution of metallicity around this mean cosmic value with a $1\sigma$ dispersion $\sigma_{\log Z} = 0.5$ dex has also usually been adopted. Note that even after such renormalization and scatter, the cosmic metallicity stays appreciably lower than the saturation value of the gas metallicity in individual star-forming galaxies (see also Chruslinska et al. 2019).
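To make Eq. (\[eq|cosmicmetallicity\]) concrete, the sketch below evaluates $\langle Z(z)\rangle$ for the adopted flat $\Lambda$CDM parameters, using the Madau & Dickinson (2014) fitting formula as a stand-in for the SFR-function-based $\rho_\psi(z)$ of Fig. \[fig|SFRcosm\]; the resulting numbers are therefore indicative only.

```python
import numpy as np
from scipy.integrate import quad

OM, OB, H = 0.32, 0.05, 0.67
RHO_B = 2.8e11 * OB * H**2     # background baryon density [Msun/Mpc^3]
YZ = 0.034                      # metal yield y_Z*(1-R)
H0 = H / 9.78e9                 # Hubble constant [1/yr] (1/H0 = 9.78/h Gyr)

def rho_sfr(z):
    """Stand-in cosmic SFR density [Msun/yr/Mpc^3]: the Madau & Dickinson (2014)
    fitting formula, used here instead of the SFR-function-based curve."""
    return 0.015 * (1 + z)**2.7 / (1 + ((1 + z) / 2.9)**5.6)

def mean_Z(z, zmax=20.0):
    """Mean cosmic metallicity of eq|cosmicmetallicity (no 0.5 dex floor)."""
    dtdz = lambda zp: 1.0 / (H0 * (1 + zp) * np.sqrt(OM * (1 + zp)**3 + 1 - OM))
    integral, _ = quad(lambda zp: rho_sfr(zp) * dtdz(zp), z, zmax, limit=200)
    return YZ * integral / RHO_B
```

Even at $z=0$ the mean cosmic metallicity obtained this way stays well below solar, which illustrates why the $0.5$ dex floor has been introduced in the literature.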
In the sequel, we will present results from the cosmic metallicity approach for comparison with our findings based on the metallicity evolution in individual galaxies. Remnant mass distribution from stellar evolution simulations {#sec|stellarevo} ------------------------------------------------------------ We adopt the metallicity-dependent relationships $m_\bullet(m_\star,Z)$ between compact remnant mass $m_\bullet$ and zero-age main sequence star mass $m_\star$ provided by Spera & Mapelli (2017). These have been obtained via specific simulations of single stellar population synthesis with the code `SEVN`, which couples the `PARSEC` stellar evolution tracks up to very massive stars (see Bressan et al. 2012; Tang et al. 2014; Chen et al. 2015) with up-to-date recipes for SN explosions (see Fryer et al. 2012). In particular, as a default we adopt their model based on the delayed SN engine, and including (P)PSNe (see also Woosley 2017). The mass $m_\bullet(m_\star,Z)$ of compact remnants is illustrated in Fig. \[fig|remnant\] for different metallicities. This has been obtained by interpolating on fine grids in $m_\star$ and $Z$ the tabulated data provided by Spera & Mapelli (2017). Fig. \[fig|remnant\] can also help the reader recognize how the results presented in the next Sections depend on detailed features of the remnant mass distribution as a function of metallicity. To take into account modeling uncertainties and physical spread related mainly to stellar evolution processes (mass loss, SN mechanism, rotation/mixing, pulsations, etc.), we describe the mass distribution ${\rm d}p/{\rm d}\log m_\bullet$ of compact remnants with a log-normal function centered on the Spera et al.
(2017) relation $m_\bullet(m_\star,Z)$, adopting a $1\sigma$ dispersion of $\sigma_{\log m_\bullet}=0.1$ dex: $${{\rm d}p\over {\rm d}\log m_\bullet}(m_\bullet|m_\star,Z) = {1\over \sqrt{2\pi}\,\sigma_{\log m_\bullet}}\, e^{-[\log m_\bullet -\log m_\bullet(m_\star,Z)]^2/(2\,\sigma_{\log m_\bullet}^2)}~. \label{eq|remnant}$$ We caveat the reader that the average relation $m_\bullet(m_\star,Z)$ by Spera et al. (2017) does not include binary evolutionary effects (e.g., mass transfers, common envelope and stellar mergers, tidal evolution, etc.), although incorporating these in the `SEVN` code yields a remnant mass distribution not significantly different from the one from single stellar evolution (see Spera et al. 2019). Birthrates of compact remnants in galaxies {#sec|birthrates} ========================================== We start by computing the birthrate for a remnant mass $m_\bullet$ at cosmic time $t$ per unit comoving volume. This can be written as $$R_{\rm birth}(m_\bullet,t) \simeq \int{\rm d}\log\psi\,\psi\,{{\rm d}N\over {\rm d}\log\psi\, {\rm d}V}(\psi,t)\,\int{\rm d}\log Z\, {{\rm d}p\over {\rm d}\log Z}(Z|\psi,t)\, \int {\rm d} m_\star \phi(m_{\star})\,{{\rm d} p\over {\rm d} m_\bullet}(m_\bullet|m_\star, Z)~ \label{eq|easybirthrate}$$ The rationale behind this expression is the following. In the inner integral the mass distribution of compact remnants ${\rm d}p/{\rm d}\log m_\bullet$ from Eq. (\[eq|remnant\]), dependent on star mass and metallicity, is weighted with the IMF[^2] $\phi(m_\star)$; the minimum star mass originating a NS remnant is set to $7\, M_\odot$. Then the outcome is averaged over the metallicity distribution, dependent on SFR and cosmic time, and then weighted by the SFR of the galaxy and the related galaxy number densities (namely, the SFR functions). The metallicity distribution is in turn derived from the metallicity evolution within individual galaxies as expressed by Eq.
(\[eq|metalevolution\]), taking into account the fractional time spent by the galaxy in a given metallicity bin $${{\rm d} p\over {\rm d} \log{Z}}\approx \Delta\times {Z\over Z_{\rm sat}}\, \ln(10)\, \Theta_{\rm H}(Z_{\rm sat}-Z)+(1-\Delta)\times \delta_D[\log Z-\log Z_{\rm sat}]~, \label{eq|metaldist}$$ where $\delta_D[\cdot]$ is the Dirac-delta function and $\Delta$ is provided by the chemical evolution code as a function of SFR and redshift. In our approach galaxies with the same final stellar mass picked up at the same redshift and galactic age would feature the same gas metallicity; however, the above metallicity distribution originates since a galaxy of given final mass, observed at redshift $z$, can have different ages depending on its formation redshift. In practice, we convolve the above distribution with a Gaussian kernel featuring a dispersion of $0.15$ dex, which corresponds to the scatter estimated for the ISM metallicity in galaxies at given SFR but varying stellar mass (e.g., Mannucci et al. 2010; Salim et al. 2015; Sanders et al. 2018). To sum up, in each galaxy the metallicity $Z$ increases linearly with galactic age (cf. Eq. \[eq|metalevolution\]), so that stars born at different times have different $Z$; this creates an ever-changing distribution of metallicities in each individual galaxy, which depends on the galaxy birthtime and mass. We note that very often in the literature the detailed star formation and chemical enrichment history of individual galaxies are neglected, and an approach based on the cosmic SFR density and cosmic metallicity from Eqs. (\[eq|cosmicSFR\]) and (\[eq|cosmicmetallicity\]) is adopted instead; in these studies the metallicity distribution ${\rm d} p/{\rm d} \log{Z}$ is taken to be a broad log-normal function with a $1\sigma$ dispersion of $\sigma_{\log Z}\approx 0.5$ dex around the mean cosmic value $\langle Z(z)\rangle$ of Eq. \[eq|cosmicmetallicity\] (including the $0.5$ dex increase, see Sect. \[sec|SFR\_hist\]).
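The distribution in Eq. (\[eq|metaldist\]) is normalized by construction: the rising branch (active while $Z<Z_{\rm sat}$) carries a probability weight $\Delta$, and the Dirac delta the remaining $1-\Delta$. This can be checked with a short sketch, using arbitrary illustrative values of $Z_{\rm sat}$ and $\Delta$:

```python
import numpy as np
from scipy.integrate import quad

def dp_dlogZ_rise(logZ, zsat, delta):
    """Continuous branch of eq|metaldist: fractional time spent at Z < Z_sat."""
    Z = 10**logZ
    return delta * (Z / zsat) * np.log(10) if Z <= zsat else 0.0

zsat, delta = 0.015, 0.2     # illustrative values
p_rise, _ = quad(dp_dlogZ_rise, -8.0, np.log10(zsat), args=(zsat, delta), limit=200)
# p_rise ~ delta; the Dirac delta at Z = Z_sat carries the remaining 1 - delta
```

The check makes the bookkeeping of Eq. (\[eq|easybirthrate\]) explicit: averaging over metallicity conserves probability before the SFR weighting is applied.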
In such a case, in Eq. (\[eq|easybirthrate\]) the dependence on the galaxy SFR is eliminated, so that the outermost integral yields simply the cosmic SFR density and the birthrate is given by[^3] $$\begin{aligned} R_{\rm birth}(m_{\bullet}, t)\simeq \rho_{\psi}(t)\int {\rm d}\log Z\,{{\rm d} p\over {\rm d} \log Z}(Z|\langle Z\rangle[t])\int {\rm d} m_\star \phi(m_{\star})\,{{\rm d} p\over {\rm d} m_\bullet}(m_\bullet|m_\star, Z). \label{eq|cosmicbirthrate}\end{aligned}$$ In Fig. \[fig|Rbirth\] we illustrate the birthrate $R_{\rm birth}(m_\bullet,t)$ for different redshifts; in particular, black lines refer to our computation based on Eq. (\[eq|easybirthrate\]) taking into account the star formation and chemical enrichment histories of individual galaxies, blue lines highlight the contribution in low $z\la 2$ disks, while green lines show the result based on Eq. (\[eq|cosmicbirthrate\]) relying on the cosmic SFR density and cosmic metallicity. One can recognize three characteristic features in these curves. First, there is a prominent NS peak at around $m_\bullet\sim 1.4\, M_\odot$; this reflects the higher birthrate of NS with respect to BH by a factor of $2-3$, as expected given the bottom-heavy shape of the adopted Chabrier IMF. Second, there is a BH plateau for $m_\bullet\sim 2.5-25\, M_\odot$, produced by the steeply increasing shape of the Spera et al. (2017) relation $m_\bullet(m_\star,Z)$ in that range, which offsets the IMF decline. Third, $R_{\rm birth}$ falls off for high remnant masses $m_\bullet\ga 30\, M_\odot$, due to the behavior of the Spera et al. (2017) relation (see Sect. \[sec|stellarevo\]). As to the redshift dependence, it mainly reflects the behavior of the SFR function (or of the cosmic SFR density), which increases out to $z\approx 2.5$ (solid line in Fig. \[fig|SFRcosm\]), and then declines at higher redshift.
The comparison between our computation taking into account the star formation and metal enrichment histories of individual galaxies, and the approach based on the cosmic SFR density and cosmic metallicity is easily understood. At low redshift, the birthrate shape between the two approaches is similar, since the cosmic metallicity values are close to those applying in galaxies. In moving toward higher redshifts the cosmic metallicity is on average lower than that in individual galaxies, causing an enhancement in the relative occurrence of more massive BHs. In the birthrate computed with the cosmic approach, this causes a lower BH plateau in the range $m_\bullet\approx 2.5-25\, M_\odot$, followed by a peak at around $m_\bullet\approx 25-50\, M_\odot$, and then a more extended tail toward higher masses. Merging rates of compact remnants in galaxies {#sec|mergerrates} ============================================= The merging rate per unit volume and mass of the primary (more massive) remnant $m_\bullet$ is given by $$R_{\rm merge}(m_{\bullet}, t)=f_{\rm eff}\int_{t_{d,{\rm min}}}^t {\rm d} t_d\, {{\rm d} p\over {\rm d} t_d}(t_d)\, R_{\rm birth}(m_{\bullet}, t-t_d)~, \label{eq|mergratetemp}$$ where $t_d$ is the delay time between the formation of the compact binary system and the merging event; a number of independent studies based on observations (see review by Maoz et al. 2014) and simulations (e.g., Dominik et al. 2012; Giacobbo & Mapelli 2018) suggest a delay time probability distribution with shape ${\rm d}p/{\rm d}t_d\propto t_{d}^{-1}$, normalized to unity between a minimum value $t_{d,{\rm min}}\approx 50$ Myr and the age of the Universe, independently of the compact binary type involved. The factor $f_{\rm eff}$ in Eq. (\[eq|mergratetemp\]) is defined as the fraction of primary compact remnants that are hosted in binary systems with characteristics apt to allow merging of the companions within a Hubble time; it will be discussed below. 
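The delay-time distribution entering Eq. (\[eq|mergratetemp\]) can be written down explicitly. The sketch below checks its normalization and notes that, for ${\rm d}p/{\rm d}t_d\propto t_d^{-1}$, the median delay is the geometric mean of the two bounds, i.e. $\sim 0.8$ Gyr for the values adopted in the text:

```python
import numpy as np
from scipy.integrate import quad

T_MIN, T_H = 5e7, 1.38e10    # minimum delay time and age of the Universe [yr]

def dp_dtd(td):
    """dp/dtd ~ 1/td, normalized to unity between T_MIN and T_H (eq|mergratetemp)."""
    if td < T_MIN or td > T_H:
        return 0.0
    return 1.0 / (np.log(T_H / T_MIN) * td)

norm_check, _ = quad(dp_dtd, T_MIN, T_H, limit=200)
t_median = np.sqrt(T_MIN * T_H)   # half the mergers occur within ~1 Gyr of birth
```

The short median delay implies that the merging rate closely tracks the birthrate, shifted back in time by well under a Gyr for half of the events.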
We are now in a position to compute the merging rate of compact binaries as a function of redshift by integrating out $R_{\rm merge}(m_{\bullet}, t)$ with respect to the compact remnant mass. The chirp mass $\mathcal{M}_{\bullet\bullet}$ and the primary remnant mass $m_\bullet$ can be related by $\mathcal{M}_{\bullet\bullet}=m_{\bullet}\, q^{3/5}/(1+q)^{1/5}$ where $q$ is the mass ratio between the companion and the primary remnant. Introducing a mass ratio distribution ${\rm d}p/{\rm d}q$ and changing variables from $m_{\bullet}$ to $\mathcal{M}_{\bullet\bullet}$ via a simple Jacobian, one obtains $$R_{\rm merge}(t)=\int{\rm d}\mathcal{M}_{\bullet\bullet}\,R_{\rm merge}(\mathcal{M}_{\bullet\bullet},t)=\int{\rm d}\mathcal{M}_{\bullet\bullet}\, \int{\rm d}q\, {{\rm d} p\over {\rm d} q}(q)\,{(1+q)^{1/5}\over q^{3/5}}\, R_{\rm merge}\left[\mathcal{M}_{\bullet\bullet}\,{(1+q)^{1/5}\over q^{3/5}}, t\right]~. \label{eq|mergerate}$$ This expression is general, and in the following we will exploit it to estimate the merging rates for BH-BH, NS-NS and BH-NS by inserting the appropriate $q-$distribution and integration limits. Specifically, we take the mass ratio distribution for BH-BH mergers to be ${\rm d} p/{\rm d} q\propto q$ with a minimum value $q_{\rm min}=0.5$; this yields an average mass ratio $\langle q\rangle \approx 0.8$, as suggested by stellar evolution simulations (see de Mink et al. 2013; Belczynski et al. 2016). On the other hand, for systems like NS-NS or BH-NS the shape of the mass-ratio distribution can be different (see Dominik et al. 2012, 2015; de Mink & Belczynski 2015; Chruslinska et al. 2018; Mapelli & Giacobbo 2018); specifically, it is found that low values $q<0.5$ apply for most BH-NS events and values $q\la 1$ for most of the NS-NS events ($q\approx 0.7-1$ for the GW170817 event, see Abbott et al. 2017c).
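The change of variables in Eq. (\[eq|mergerate\]) relies on the chirp mass relation quoted above, which can be sketched together with its inverse:

```python
def chirp_mass(m1, q):
    """Chirp mass from primary mass m1 and mass ratio q = m2/m1 <= 1."""
    return m1 * q**0.6 / (1.0 + q)**0.2

def primary_mass(mchirp, q):
    """Inverse relation, i.e. the change of variables used in eq|mergerate."""
    return mchirp * (1.0 + q)**0.2 / q**0.6
```

For equal masses ($q=1$) the chirp mass reduces to $m_1/2^{1/5}\approx 0.87\, m_1$, consistent with the standard definition $\mathcal{M}_{\bullet\bullet}=(m_1 m_2)^{3/5}/(m_1+m_2)^{1/5}$.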
On this basis, we assume a flat distribution for BH-NS in the range $q\sim 0-0.5$ and for NS-NS mergers in the range $q\sim 0.4-1$; we have also checked that the overall results for the merging rate depend very little on the adopted mass-ratio distributions. Note that the integrand $R_{\rm merge}(\mathcal{M}_{\bullet\bullet},t)$ in Eq. (\[eq|mergerate\]) represents the merging rate per unit chirp mass and will be extensively used below when dealing with GW event detection. We separate the merging rate for BH-BH, NS-NS and BH-NS events based on the primary mass: if it is in the range $1-2.5\, M_\odot$ the event is considered a NS-NS merger, otherwise we consider the appropriate limits in the mass-ratio distribution to obtain BH-NS and BH-BH rates. Coming back to the parameter $f_{\rm eff}$, we caveat that it is the result of many different and complex physical processes related to stellar and dynamical evolution (e.g., binary fraction, common envelope development/survival, SN kicks, mass transfers, etc.). In ab-initio stellar evolution simulations (possibly including binary effects; e.g., O’Shaughnessy et al. 2010; Dominik et al. 2015; Belczynski et al. 2016; Spera et al. 2017; Giacobbo & Mapelli 2018) this quantity is naturally obtained, though the outcomes are somewhat dependent on model assumptions. Alternatively, it can be set empirically by normalizing the local BH-BH merging rates (e.g., Dvorkin et al. 2016; Cao et al. 2018; Li et al. 2018) to the measurements by LIGO/Virgo (see Abbott et al. 2016c, 2017e, 2019); specifically, here we normalize the local BH-BH rate to the logarithmic mean value $30$ Gpc$^{-3}$ yr$^{-1}$ of the latest interval estimate $9.7-101$ Gpc$^{-3}$ yr$^{-1}$ by LIGO/Virgo (see Fig. 12 in Abbott et al. 2019; cf. cyan shaded area in Fig. \[fig|Rmerg\]), implying $f_{\rm eff}\approx 2\times 10^{-4}$.
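The logarithmic averaging used for the reference local BH-BH rate amounts to taking the geometric mean of the interval bounds:

```python
import math

lo, hi = 9.7, 101.0            # LIGO/Virgo local BH-BH rate interval [Gpc^-3 yr^-1]
rate_ref = math.sqrt(lo * hi)  # geometric mean, ~31, rounded to 30 in the text
```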
We caveat that such a normalization is meaningful as long as the local BH-BH merger rate detected by LIGO can be traced back to the binary compact remnants considered in the present paper, and is not substantially contributed by additional channels (e.g., primordial BHs, globular/open cluster BHs, pop-III star BHs, etc.). Note, however, that in principle $f_{\rm eff}$ could depend on the remnant masses and/or binary type. In the latter case, one could set $f_{\rm eff}$ for NS-NS and for BH-NS events still relying on estimates of the local merging rate from various observations; however, these are very uncertain and somewhat in tension with each other. For example, local NS-NS rates $R_{\rm merge,NS-NS}\approx 110-3840$ Gpc$^{-3}$ yr$^{-1}$ are estimated by LIGO from GW170817, the only NS-NS event detected in GWs so far (see Abbott et al. 2017b,c, 2019; cf. orange shaded area in Fig. \[fig|Rmerg\]). Chruslinska et al. (2018), using as input the Galactic merging rate $\approx 21^{+28}_{-14}$ Myr$^{-1}$ from observations of double pulsars (see Kim et al. 2015), obtain a low local NS-NS merging rate $R_{\rm merge, NS-NS}\approx 50$ Gpc$^{-3}$ yr$^{-1}$; these authors point out that, with a specific parameter choice in their models, this rate can be enhanced up to $R_{\rm merge, NS-NS}\approx 600^{+600}_{-300}$ Gpc$^{-3}$ yr$^{-1}$ but at the cost of overestimating the local BH-BH rate. Della Valle et al. (2018; see also Jin et al. 2018 and Pol et al. 2019) estimate a local NS-NS rate $R_{\rm merge, NS-NS}\approx 352^{+810}_{-281}$ Gpc$^{-3}$ yr$^{-1}$ for short GRB/kilonova events similar to GRB170817A, which can be made more consistent with the LIGO result by assuming a rather large viewing angle distribution for the beamed emission.
NS-NS merging rates can also be inferred from the Galactic abundance of elements produced via rapid neutron-capture processes (in particular, Europium), but precise estimates are hindered by large uncertainties in the chemical yields (e.g., Cote et al. 2018). As for BH-NS mergers, only an upper limit is available, $R_{\rm merge,BH-NS}\la 610$ Gpc$^{-3}$ yr$^{-1}$ (see Abbott et al. 2016d, 2019). Given the current large theoretical and observational uncertainties, in the following we adopt the aforementioned value of $f_{\rm eff}$, based on the local BH-BH merging rate, as a reference for both NS-NS and BH-NS events. Correspondingly, this yields local rates $R_{\rm merge, NS-NS}\approx 70$ Gpc$^{-3}$ yr$^{-1}$ and $R_{\rm merge, BH-NS}\approx 20$ Gpc$^{-3}$ yr$^{-1}$, respectively; the NS-NS rate so obtained is smaller than the LIGO estimate and more consistent with those from double pulsars in the Milky Way, while the estimates from short GRB/kilonova occurrences lie in between. More statistics from GW detectors are needed before drawing definite conclusions about the difference between $f_{\rm eff}$ for BH-BH and for NS-NS or BH-NS mergers; however, if the high LIGO/Virgo local NS-NS rate were confirmed, a consequence would be that binary effects may operate differently for binary NS than for binary BH progenitors (see Mapelli & Giacobbo 2018), leading in turn to an appreciable difference in the respective factors $f_{\rm eff}$. In that case, our results for NS-NS and BH-NS rates as a function of redshift can simply be rescaled by an overall normalization. Our results for the merging rates as a function of redshift are illustrated in Fig. \[fig|Rmerg\]. The cyan (orange) shaded area is the LIGO measurement of the BH-BH (NS-NS) merging rate at $z\approx 0$. Solid black lines illustrate the merging rates for BH-BH events, dashed for NS-NS and dotted for BH-NS.
It can be seen that the merging rate for NS-NS events is appreciably higher than for BH-BH; this reflects the corresponding behavior of the compact remnant birthrate $R_{\rm birth}$, cf. Fig. \[fig|Rbirth\]. For the adopted Chabrier IMF, most of the stars evolving into a compact remnant become NSs rather than BHs; thus an intrinsically larger merging rate for NS-NS than for BH-BH naturally arises. However, we anticipate that this ordering no longer holds when considering GW-detectable events, due to the dependence of the GW signal on the chirp/total mass of the binary (cf. Sect. \[sec|GWdetection\]). Note that the contribution from disk-dominated galaxies to the overall merging rate is subdominant; in particular, at $z\approx 0$ events hosted in disk galaxies contribute around $25\%$ of the overall rate. In the same Fig. \[fig|Rmerg\], black lines refer to our approach taking into account the star formation and chemical evolution histories of individual galaxies, while green lines refer to the computation based on the cosmic SFR density and cosmic metallicity. The overall intrinsic merging rates $R_{\rm merge}(t)$ in the two approaches (cf. top panel in Fig. \[fig|Rmerg\]) are in agreement. In fact, when integrating over the chirp mass or equivalently over the remnant masses, the compact remnant mass distribution ${\rm d} p/{\rm d} m_\bullet$ appearing in Eq. (\[eq|easybirthrate\]) yields only a normalization factor independent of metallicity and SFR. Then the integral involving the SFR becomes simply $\rho_{\psi}$, as in the cosmic case of Eq. (\[eq|cosmicbirthrate\]). Some differences between the two approaches show up in the merging rate $R_{\rm merge}(\mathcal{M}_{\bullet\bullet}, t)$ as a function of the chirp mass (see bottom panels in Fig. \[fig|Rmerg\]), since this is determined by the shape of the birthrate (see Fig. \[fig|Rbirth\]).
Specifically, $R_{\rm merge}(\mathcal{M}_{\bullet\bullet}, t)$ from our approach turns out to be shifted toward smaller chirp masses with respect to the cosmic case (due to the higher metallicity occurring in galaxies). We stress that such differences, as well as those in the birthrates, are currently not significant, given that they can be hidden behind the overall astrophysical uncertainties. In Fig. \[fig|Rmerg\_3D\] we illustrate the probability of merging for compact binaries at $z\sim 0$, $3$ and $6$, as a function of the chirp mass $\mathcal{M}_{\bullet\bullet}$ and of the SFR $\psi$ in the host galaxy progenitor (plainly, for mergers occurring in spheroids the SFR at the time of the event can be much lower); the colored surfaces refer to BH-BH, NS-NS and BH-NS events. The dependence of the shape of the surfaces on redshift reflects both the evolution in the SFR functions and the behavior of the Spera et al. (2017) relation $m_\bullet(m_\star|Z)$ at different metallicities. It is worth noticing that for larger chirp masses the local BH-BH merging probability distribution extends more toward disk galaxy hosts with smaller SFR (see also bottom left panel in Fig. \[fig|Rmerg\]), which also have lower metallicity and hence an increased occurrence of massive BH remnants.

GW detection rates from merging binaries in galaxies {#sec|GWdetection}
====================================================

We now turn to compute and discuss the GW detection rates from merging binaries.
Following the formalism of Taylor & Gair (2012; see references therein), we compute the GW event detection rate per unit redshift, chirp mass $\mathcal{M}_{\bullet\bullet}$, and signal-to-noise ratio (SNR) $\rho$ as: $${{\rm d}\dot{N}\over {\rm d}\mathcal{M}_{\bullet\bullet}\,{\rm d}\rho\,{\rm d}z}(\mathcal{M}_{\bullet\bullet}|\rho,z)={{\rm d} V\over {\rm d}z}\,{R_{\rm merge}(\mathcal{M}_{\bullet\bullet}, z)\over (1+z)}\, P_\rho(\rho|\mathcal{M}_{\bullet\bullet},z)~; \label{eq|detectiorate}$$ here $R_{\rm merge}(\mathcal{M}_{\bullet\bullet}, z)$ is the merging rate per unit chirp mass from Eq. (\[eq|mergerate\]), ${\rm d}V/{\rm d}z$ is the comoving volume per unit redshift interval, the factor $1/(1+z)$ accounts for cosmological time dilation, and $P_\rho(\rho| \mathcal{M}_{\bullet\bullet},z)$ is the distribution of SNR at given chirp mass and redshift. The latter quantity is in turn computed as $$P_\rho(\rho|\mathcal{M}_{\bullet\bullet},z)=P_\Theta(\Theta_\rho)\,{\Theta_\rho\over \rho}~, \label{eq|Prho}$$ in terms of the orientation function $$\Theta_\rho={\rho\over 8}\,{D_L(z)\over R_0}\,\left[{1.2\,M_\odot\over (1+z)\,\mathcal{M}_{\bullet\bullet}}\right]^{5/6}\,{1\over \sqrt{\zeta_{\rm isco}+\zeta_{\rm insp}+\zeta_{\rm merg}+\zeta_{\rm ring}}}~ \label{eq|thetarho}$$ and of its distribution function $$P_\Theta(\Theta)=\left\{ \begin{aligned} &5\,\Theta\,(4-\Theta)^3/256 &0<\Theta<4 \\ &0 &{\rm otherwise}~. \end{aligned} \right. \label{eq|ptheta}$$ In the above expressions $D_L(z)$ is the luminosity distance to the GW source at redshift $z$. In addition, $R_0$ is the detector characteristic distance parameter given by[^4] $$R_0^2 = {5\, M_\odot^2\over 192\, \pi\, c^3}\,\left({3\, G\over 20}\right)^{5/3}\, x_{7/3}$$ in terms of the auxiliary quantity $$x_{7/3} = \int_0^\infty{{\rm d}f\,\over (\pi\, M_\odot)^{1/3}\, f^{7/3}\, S(f)}$$ with $S(f)$ the noise power spectral density.
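As a quick numerical check (an illustrative sketch, not part of our pipeline), the distribution $P_\Theta$ above integrates to unity over $0<\Theta<4$, and its integral above a threshold gives the fraction of binaries, at fixed chirp mass and redshift, that exceed a given SNR:

```python
def p_theta(theta):
    """Orientation-function distribution P_Theta = 5*Theta*(4-Theta)**3/256 on (0, 4)."""
    return 5.0 * theta * (4.0 - theta) ** 3 / 256.0 if 0.0 < theta < 4.0 else 0.0

def detection_fraction(theta_min, n=100000):
    """Fraction of binaries with Theta > theta_min (trapezoidal integration)."""
    if theta_min >= 4.0:
        return 0.0
    a = max(theta_min, 0.0)
    h = (4.0 - a) / n
    s = 0.5 * (p_theta(a) + p_theta(4.0))
    for i in range(1, n):
        s += p_theta(a + i * h)
    return s * h
```

For example, `detection_fraction(0.0)` returns $\simeq 1$, confirming the normalization, while the closed-form complementary CDF gives $P(\Theta>1)=1-94/256\simeq 0.633$.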
Finally, $$\begin{aligned} \zeta_{\rm isco} &= {1\over (\pi\, M_\odot)^{1/3}\, x_{7/3}}\, \int_{0}^{2\, f_{\rm isco}}{{\rm d}f\over S(f)}\, {1\over f^{7/3}}\\ \\ \zeta_{\rm insp} &= {1\over (\pi\, M_\odot)^{1/3}\, x_{7/3}}\,\int_{2\, f_{\rm isco}}^{f_{\rm merg}}{{\rm d}f\over S(f)}\, {1\over f^{7/3}}\\ \\ \zeta_{\rm merg} &= {1\over (\pi\, M_\odot)^{1/3}\, x_{7/3}}\,\int_{f_{\rm merg}}^{f_{\rm ring}}{{\rm d}f\over S(f)}\, {1\over f^{4/3}\, f_{\rm merg}}\\ \\ \zeta_{\rm ring} &= {1\over (\pi\, M_\odot)^{1/3}\, x_{7/3}}\,\int_{f_{\rm ring}}^{f_{\rm cut}}{{\rm d}f\,\over S(f)}\, {1\over f_{\rm merg}\,f_{\rm ring}^{4/3}}\, \left[1+\left({f-f_{\rm ring}\over \sigma/2}\right)^2\right]^{-2}\\ \end{aligned}\label{eq|zmax}$$ are the functions specifying the overlap of the waveform with the observational bandwidth during the inspiral ($\zeta_{\rm isco}+\zeta_{\rm insp}$), merger ($\zeta_{\rm merg}$), and ringdown ($\zeta_{\rm ring}$) phases of the event; the above expressions are based on the phenomenological waveforms of Ajith et al. (2008). In particular, $\zeta_{\rm isco}$ depends on the redshifted frequency at the innermost stable circular orbit, $f_{\rm isco}$, which is also the maximum frequency at which the quadrupolar formula holds; this is given by $$f_{\rm isco}\simeq {2198\over 1+z}\, \left({M_{\rm bin}\over M_\odot}\right)^{-1}\,\,{\rm Hz}$$ where $M_{\rm bin}=\mathcal{M}_{\bullet\bullet}\,(1+q)^{6/5}/q^{3/5}$ is the total mass of the binary (see Finn 1996; Taylor & Gair 2012). The other parameters $f_{\rm merg}$, $f_{\rm ring}$, $f_{\rm cut}$ and $\sigma$ appearing in Eqs. (\[eq|zmax\]) also scale like $M_{\rm bin}^{-1}$, with coefficients weakly depending on the symmetric mass ratio $\eta=q/(1+q)^2$ and possibly on spin, as approximated by Ajith et al. (2008, 2011, 2014). The GW event detection rates per unit redshift are then obtained by integrating Eq.
(\[eq|detectiorate\]) over the chirp mass $\mathcal{M}_{\bullet\bullet}$ and over the SNR $\rho$ above a minimum detection threshold $\rho_0$: $${{\rm d}\dot{N}\over {\rm d}z}(>\rho_0,z)=\int_{\rho_0}^\infty {\rm d}\rho\,{{\rm d}\dot{N}\over {\rm d}\rho\,{\rm d}z}(\rho,z)=\int_{\rho_0}^\infty {\rm d}\rho\,\int {\rm d}\mathcal{M}_{\bullet\bullet}\,{{\rm d}\dot{N}\over {\rm d}\mathcal{M}_{\bullet\bullet}\,{\rm d}\rho\,{\rm d}z}(\mathcal{M}_{\bullet\bullet}|\rho,z)~. \label{eq|zdist}$$ Finally, the GW number count rate $\dot N(>\rho_0)$ can be obtained by integrating the above expression over redshift. In Fig. \[fig|GWzdist\] we report our results for ${\rm d}\dot{N}(>\rho_0)/{\rm d}z$ for AdvLIGO/Virgo (top panel) and for the ET (bottom panel), with minimum SNR $\rho_0=8$. Black solid lines refer to BH-BH events, dashed to NS-NS and dotted to BH-NS ones. Although the intrinsic merging rate is larger for NS-NS than for BH-BH (cf. Sect. \[sec|mergerrates\]), the detector response makes the rate of GW events from BH-BH binaries overtake that from NS-NS binaries toward increasing redshift; the crossover occurs at $z\sim 0.05$ for AdvLIGO/Virgo and around $z\sim 0.5$ for the ET. The increasing dependence of detectability on the chirp mass implies that: the GW event rate from BH-BH mergers peaks at $z\approx 0.3-0.4$ and then falls off rapidly at $z\ga 1$ for AdvLIGO/Virgo, while it has a broad shape peaking around $z\approx 1.5$ with an extended tail out to very high redshift for the ET; the GW event rate from NS-NS mergers is practically detectable only within a few hundred Mpc for AdvLIGO/Virgo, while out to $z\la 2.5$ with the ET; the GW event rates from BH-NS mergers peak at $z\approx 0.3$ and then fall off steeply for AdvLIGO/Virgo, while they have a more extended redshift distribution for the ET, mirroring the shape of the BH-BH rate with a lower normalization. In the same Fig.
\[fig|GWzdist\] we compare the GW event rate computed from the star formation and chemical enrichment history of individual galaxies (black lines) with the approach based on the cosmic SFR density and cosmic metallicity (green lines). Differences are appreciable in the BH-BH and BH-NS rates toward increasing redshift for AdvLIGO/Virgo; e.g., the BH-BH event rate in the cosmic approach relative to ours is larger by a factor of $\sim 2$ when integrated over all redshifts, by a factor of $\sim 4.5$ when integrated over $z\ga 1$, and by a factor of $\sim 15$ when integrated over $z\ga 2$. These outcomes can be traced back to the dependence of the quantity $R_{\rm merge}(\mathcal{M}_{\bullet\bullet},t)$ entering Eq. (\[eq|detectiorate\]) on the chirp mass. Galaxy metallicities are typically larger than the cosmic value, thus lowering the formation efficiency of large BH masses and reducing the detectability (given the SNR threshold $\rho_0=8$) of GWs from BH-BH and BH-NS mergers toward high redshift; plainly, the rates of light binaries like NS-NS are not affected. For the more sensitive ET the differences in the detected events between the two approaches are negligible at an SNR threshold of $8$, since most of the merger events are detected out to high redshifts, though the distribution in chirp masses stays somewhat different (see Fig. \[fig|Rmerg\], bottom panels). We stress that the performance of current GW instruments allows NS-NS mergers to be probed only at very low redshift $z\la 0.1$. The unique event GW170817 detected so far is located at $z\approx 0.01$ (see Abbott et al. 2017b,c); interestingly, its host galaxy NGC4993 is known to be an early-type with no ongoing star formation and old stellar populations with loosely constrained age $\ga 3-6-10$ Gyr (see Im et al. 2017; Troja et al. 2017; Blanchard et al. 2017). It has been pointed out (e.g., Palmese et al. 2017; Belczynski et al.
2018) that finding the very first NS-NS merger within a galaxy with old stellar populations and low SFR may be in tension with theoretical estimates. To check what happens in our framework, we first note that for NS-NS mergers (but not for BH-BH or BH-NS) the dependence on galaxy metallicity of the remnant mass distribution in Eq. (\[eq|easybirthrate\]) can be safely neglected (cf. Fig. \[fig|remnant\]), so that to a very good approximation $R_{\rm birth, NS-NS}(m_\bullet,t)\simeq \rho_{\psi}(t)\, F(m_\bullet)$, where $F(m_\bullet)$ is solely a function of the primary mass; when inserting this expression in Eq. (\[eq|mergratetemp\]) and integrating over the relevant range of NS primary masses to find the number of mergers, this function enters only through an overall multiplicative factor $\int{\rm d}m_\bullet\, F(m_\bullet)$. Thus the fraction of NS-NS mergers occurring at the present cosmic time $t_0$ in galaxies older than an age $T$ is given by $$f_{\rm NS-NS}(t_0|{\rm age}>T)\simeq \frac{\int_{T}^{t_0}\, {\rm d} t_d\, \rho_\psi(t_0-t_d)\, {{\rm d} p/ {\rm d} t_d}}{\int_{t_{d,{\rm min}}}^{t_0} {\rm d} t_d\, \rho_\psi(t_0-t_d)\, {{\rm d} p/ {\rm d} t_d}}~;$$ in the same vein, the fraction of mergers occurring in disks and spheroids can be computed via the same expression, by replacing $\rho_\psi$ in the numerator with the corresponding contribution from these galaxy types (cf. Sect. \[sec|SFR\_func\] and Fig. \[fig|SFRcosm\]). Using the detailed redshift-dependent shape of $\rho_\psi(t)$, we have computed that the overall contribution (in disks plus spheroids) to the local NS-NS rate from compact binaries in galaxies older than $3-6-10$ Gyr amounts to $60-45-20\%$, and in particular the contribution from spheroids older than $3-6-10$ Gyr amounts to $52-41-20\%$.
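The fraction defined by the equation above can be sketched numerically; in the snippet below the SFR density history $\rho_\psi(t)$ is a toy stand-in (our actual computation uses the observationally determined history), while the delay-time distribution is the fiducial ${\rm d}p/{\rm d}t_d\propto t_d^{-1}$:

```python
import math

T0 = 13.8      # present cosmic age in Gyr (assumed round value)
TD_MIN = 0.05  # minimum delay time in Gyr (illustrative)

def rho_psi(t):
    """Toy cosmic SFR density history (arbitrary units, t in Gyr):
    a rise-and-fall shape standing in for the observed rho_psi(t)."""
    return (t / 3.0) ** 2 * math.exp(-t / 3.0)

def frac_older_than(T, n=20000):
    """f(age > T): fraction of mergers at t0 from stars formed more than T ago,
    assuming dp/dt_d ~ 1/t_d (midpoint integration over the delay time)."""
    def integral(lo):
        h = (T0 - lo) / n
        return sum(rho_psi(T0 - (lo + (i + 0.5) * h)) / (lo + (i + 0.5) * h)
                   for i in range(n)) * h
    return integral(T) / integral(TD_MIN)
```

By construction the fraction decreases monotonically from unity as $T$ grows; the specific percentages quoted in the text follow only from the observationally determined $\rho_\psi(t)$.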
For an instructive back-of-the-envelope calculation, one can approximately use in the integrands above the average values $\langle\rho_\psi\rangle$ over the relevant cosmic time intervals, to obtain $$f_{\rm NS-NS}(t_0|{\rm age}>T)\simeq {\langle \rho_\psi(t)\rangle_{|t<t_0-T}\, \ln{(t_0/T)}\over \langle\rho_\psi(t)\rangle_{|t<t_0-t_{\rm d,min}}\,\ln{(t_0/t_{\rm d,min})}}~;$$ e.g., using $\langle\rho_\psi(t)\rangle_{|t<t_0-T}\approx 0.089-0.12\,M_\odot$ yr$^{-1}$ Mpc$^{-3}$ for $T=3-6-10$ Gyr and $\langle\rho_\psi(t)\rangle_{|t<t_0-t_{\rm d,min}}\approx 0.038\,M_\odot$ yr$^{-1}$ Mpc$^{-3}$, this approximation is seen to reproduce the same fractions $\approx 60-45-20\%$ as the full computation above. Interestingly, these estimated fractions are independent of the parameter $f_{\rm eff}$ entering Eq. (\[eq|easybirthrate\]), which, as discussed in Sect. \[sec|mergerrates\], is challenging to compute ab initio or to constrain on an observational basis. All in all, we expect that a substantial number of NS-NS binaries merging at $z\approx 0$ were created in the star-forming progenitors of local spheroids at appreciably earlier cosmic times. Catching in real time mergers with short delay times at redshift $z\approx 2.5$ (where the cosmic SFR density peaks) will likely become achievable with the ET. In Fig. \[fig|GWzdist\_complot\] we show how the GW event rate as a function of redshift for AdvLIGO/Virgo depends on some relevant parameters and assumptions used in our computations. In the top left panel, the minimum SNR for detection is varied from our fiducial value $\rho_0=8$ to $5$, $13$, and $24$. Plainly, adopting a larger threshold $\rho_0$ decreases the detectability of GWs toward higher redshift. In the top middle panel we show the contribution to the event rates from different phases of the compact binary mergers. Our basic computation includes all phases, i.e., the inspiral, the merger, and the ringdown.
Removing the ringdown ($\zeta_{\rm ring}=0$) and the merger ($\zeta_{\rm merg}=\zeta_{\rm ring}=0$) retains only events whose inspiral phase crosses the detector bandwidth, thus reducing the number of observable high-mass merging binaries and hence their event rates; the outcome is still interesting because for such events the parameter reconstruction from the waveform is expected to be most effective. However, it is worth noticing that two approximations in estimating event rates are often adopted in the literature. The first consists in taking frequencies up to $f_{\rm isco}$, at which the quadrupolar formula holds, corresponding to keeping only $\zeta_{\rm isco}$ (i.e., setting $\zeta_{\rm insp}=\zeta_{\rm merg}=\zeta_{\rm ring}=0$) in Eq. (\[eq|thetarho\]); the outcome is found to constitute a conservative lower limit to the event rates. The second consists in setting $\zeta_{\rm insp}=\zeta_{\rm merg}=\zeta_{\rm ring}=0$ and $\zeta_{\rm isco}\simeq 1$, corresponding to assuming that the inspiral phase of any event completely overlaps with the detector bandwidth; we warn that this approximation actually holds only for NS-NS mergers (see Taylor & Gair 2012), while for BH-BH and BH-NS it considerably overpredicts the rates, especially toward high redshift. In the bottom left panel we vary the IMF from the fiducial shape by Chabrier (2003) to those by Salpeter (1955) and Kroupa (2002), to the top-heavy IMF by Lacey et al. (2010), and to the metallicity-dependent IMF by Martin-Navarro et al. (2015). In the bottom middle panel, we show the effect of including or excluding (P)PSNe from the remnant mass spectrum by Spera et al. (2017). In the bottom right panel, we vary the time delay distribution from the fiducial shape ${\rm d}p/{\rm d}t_d\propto t_d^{-1}$ to a somewhat flatter one $\propto t_d^{-0.75}$ and a somewhat steeper one $\propto t_d^{-1.5}$.
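As a side note, the power-law delay-time distributions compared in the bottom right panel admit closed-form normalizations; the sketch below (with an assumed, illustrative delay range in Gyr, not the exact values of our computation) shows how the median delay shifts with the slope:

```python
import math

def delay_norm(gamma, td_min=0.05, t_max=13.8):
    """Normalization A such that A * t_d**(-gamma) integrates to 1 on [td_min, t_max]."""
    if abs(gamma - 1.0) < 1e-12:
        return 1.0 / math.log(t_max / td_min)
    p = 1.0 - gamma
    return p / (t_max ** p - td_min ** p)

def median_delay(gamma, td_min=0.05, t_max=13.8):
    """Median delay time of the normalized power law (inverse CDF at 1/2)."""
    a = delay_norm(gamma, td_min, t_max)
    if abs(gamma - 1.0) < 1e-12:
        return td_min * math.exp(0.5 / a)
    p = 1.0 - gamma
    return (td_min ** p + 0.5 * p / a) ** (1.0 / p)
```

For ${\rm d}p/{\rm d}t_d\propto t_d^{-1}$ the median is the geometric mean $\sqrt{t_{d,{\rm min}}\,t_{\rm max}}$; steeper slopes concentrate mergers at short delays, flatter ones push them to longer delays.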
In all these cases, the shape of the GW event rate $z$-distribution is moderately affected, within factors of a few at most; notice that the different curves plotted here have been computed self-consistently by normalizing the corresponding local BH-BH merging rate to the value observed by LIGO/Virgo; thus the reader should keep in mind that they correspond to different values of $f_{\rm eff}$. The (redshift-integrated) Euclidean-normalized GW counts are shown in Fig. \[fig|GWcounts\], both for AdvLIGO/Virgo and the ET. Here we just notice that for electromagnetic signals the counts of a uniform distribution of sources with a smooth distribution of luminosities (Euclidean counts) obey the scaling $N(>S)\propto S^{-3/2}$ in terms of the flux $S$ (e.g., Weinberg 2008); this is basically because $N(>S)\propto V\propto D_L^3$ and $S\propto D_L^{-2}$ hold. In the case of GWs, the relation between SNR and distance is inverse linear, $\rho\propto D_L^{-1}$, implying the Euclidean behavior $N(>\rho)\propto \rho^{-3}$ or, in differential terms, ${\rm d} N/{\rm d}\rho\propto \rho^{-4}$. When this dependence is normalized out, the counts are flat at high SNRs, which are mainly contributed by local sources, while the decrease toward lower SNRs mainly reflects the rapid evolution in the number density of increasingly distant galaxies.

Galaxy-scale gravitational lensing of GWs {#sec|lensing}
----------------------------------------

High-redshift $z\ga 2$ star-forming galaxies have a non-negligible probability of being gravitationally lensed by other galaxies (mostly low-redshift $z\la 1$ early-types) and by galaxy groups/clusters intervening between the source and the observer (e.g., Blain 1996; Perrotta et al. 2002; Negrello et al. 2007, 2010; Lapi et al. 2012). The GW emission from merging binaries in these sources can be gravitationally lensed too, thus enhancing the detectability of high-redshift GW sources (see Ng et al. 2018; Li et al. 2018; Oguri 2018).
The effect of a gravitational lensing event with magnification $\mu$ on the GWs emitted by a compact source is to enhance the SNR as $\rho\propto\sqrt{\mu}$ without changing the observed frequency structure of the waveform (due to the achromaticity of lensing in the geometrical-optics limit; see Takahashi & Nakamura 2003). In the following we focus on galaxy-scale gravitational lensing, which is the most efficient for intermediate- to high-redshift sources, close to the peak of the cosmic star formation history (see Lapi et al. 2012). The rate of gravitationally lensed events can be computed as: $${{\rm d}\dot{N}_{\rm lensed}\over {\rm d}\rho\, {\rm d} z}=\int^\infty_{\mu_{\rm min}}{\rm d}\mu\,{{\rm d}\dot{N}\over {\rm d}\rho\, {\rm d} z}(\rho/\sqrt{\mu},z)\,{{\rm d} p\over {\rm d}\mu}(\mu,z)~, \label{eq|lensing}$$ where ${\rm d}\dot{N}/{\rm d}\rho\, {\rm d} z$ is the unlensed statistics of Eq. (\[eq|zdist\]), and ${\rm d} p/{\rm d}\mu (\mu,z)$ is the probability distribution of amplification factors, which depends on the redshift of the GW source and on the properties of the intervening galaxies acting as lenses. The minimum amplification $\mu_{\rm min}$ defines the strength of the lensing events under consideration. We use the amplification distribution derived by Lapi et al. (2012), which takes into account the redshift-dependent statistics of galactic halos, their inner radial distribution of dark matter and baryons, and possible non-axisymmetric structure. The redshift distribution of lensed GW events above a detection threshold $\rho_0$ is $${{\rm d}\dot{N}_{\rm lensed}\over {\rm d} z}(>\rho_0)=\int_{\rho_0}^\infty {\rm d}\rho\,{{\rm d}\dot{N}_{\rm lensed}\over {\rm d}\rho\, {\rm d} z}~, \label{eq|lensingrate}$$ and the lensed counts are instead obtained by integrating Eq. (\[eq|lensing\]) over redshift. Our results concerning the lensed GW redshift distribution and counts are shown as orange lines in Figs.
\[fig|GWzdist\] and \[fig|GWcounts\]; for clarity we illustrate the case $\mu_{\rm min}=10$, to better highlight the overall impact of strong lensing events. Plainly, strongly lensed events have a redshift distribution shifted toward high redshift. GWs from NS-NS mergers, which in the unlensed case are detectable only locally with AdvLIGO/Virgo and out to intermediate redshifts with the ET, can in principle be revealed out to $z\la 1$ for AdvLIGO/Virgo and out to high $z$ with the ET; however, the lensed rates are very small, $\la 10^{-3}$ events per yr, with AdvLIGO/Virgo, while they can reach $\sim 1$ event per yr with the ET. For AdvLIGO/Virgo the lensed GW rates from BH-BH mergers peak around $z\approx 2$ and attain $\sim 0.1$ events per yr at $z\sim 1-4$, overwhelming the unlensed events for $z\ga 3$; for the ET instead the lensed BH-BH rates are of the same order as the lensed NS-NS ones, still factors $\ga 10^3$ below the unlensed ones. The lensed BH-NS rates feature a behavior similar to the lensed BH-BH ones, with a lower normalization. In the top right panel of Fig. \[fig|GWzdist\_complot\], we show for AdvLIGO/Virgo how the redshift distribution of gravitationally lensed events is affected by varying the minimum amplification from our fiducial value $\mu_{\rm min}\approx 10$ to $2$ (defining the strong lensing limit) and to $30$ (a maximal value applying to moderately extended sources, see Lapi et al. 2012); plainly, lowering the minimum amplification yields generally higher lensing rates, though decreasing them toward very high redshift for small chirp-mass systems like NS-NS. We stress that the detection of high-redshift, strongly lensed events can be particularly important for cosmological studies, related to the detection of multiple images and to the characterization of GW time delay distributions (e.g., Lapi et al. 2012; Eales 2015).
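The lensed-rate convolution of Eq. (\[eq|lensing\]) can be illustrated with a toy example; here both the unlensed rate (taken Euclidean, ${\rm d}\dot{N}/{\rm d}\rho\propto\rho^{-4}$) and the magnification distribution (${\rm d}p/{\rm d}\mu\propto\mu^{-4}$, steeper than the distribution actually used, so that the toy integral converges) are assumptions chosen purely for tractability:

```python
import math

def lensed_rate(rho, mu_min=10.0, mu_max=1e4, n=200000):
    """Toy version of the lensed-rate convolution
    dN_lensed/drho = int dmu (dN/drho)(rho/sqrt(mu)) * dp/dmu,
    with assumed dN/drho = rho**-4 and dp/dmu = 3*mu_min**3/mu**4
    (normalized on mu > mu_min); log-spaced midpoint integration."""
    lo, hi = math.log(mu_min), math.log(mu_max)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        mu = math.exp(lo + (i + 0.5) * h)
        unlensed = (rho / math.sqrt(mu)) ** -4.0
        dp_dmu = 3.0 * mu_min ** 3 / mu ** 4
        total += unlensed * dp_dmu * mu * h  # mu*h approximates dmu on the log grid
    return total
```

For these pure power laws the lensed rate keeps the $\rho^{-4}$ shape, boosted by the constant factor $\int{\rm d}\mu\,\mu^2\,{\rm d}p/{\rm d}\mu = 3\mu_{\rm min}^2$; in realistic cases the boost depends on redshift and chirp mass through the unlensed rate.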
This is especially true if there is an accompanying electromagnetic emission (e.g., from BH-NS or NS-NS mergers) that can provide an independent measurement of the source redshift, and thus help in removing the well-known degeneracy $\rho\propto \sqrt{\mu}\, \mathcal{M}_{\bullet\bullet}^{5/6}/D_L(z)$ among chirp mass, redshift and lensing magnification.

GW background from merging binaries in galaxies {#sec|GWback}
===============================================

The incoherent superposition of weak, undetected GW sources gives rise to a stochastic background (see Abbott et al. 2017f, 2018). In this section we estimate the contribution to such a background from mergers of compact binaries in galaxies. We compute the background energy density at a given observed frequency $f_{\rm obs}$ as: $$\Omega_{GW}(f_{\rm obs})={8\pi\, G\,f_{\rm obs}\over 3\,H_0^3\,c^2}\,\int{\rm d}z\, \int {\rm d}\mathcal{M}_{\bullet\bullet}\,{R_{\rm merge}(\mathcal{M}_{\bullet\bullet}, z)\over (1+z)\, h(z)}\,{{\rm d} E\over {\rm d}f}(f|\mathcal{M}_{\bullet\bullet})\,\int_{\rho<\rho_0}{\rm d}\rho\,P_\rho(\rho|\mathcal{M}_{\bullet\bullet}, z)~, \label{eq|GWback}$$ with $h(z)\equiv [\Omega_M\, (1+z)^3+1-\Omega_M]^{1/2}$. The GW energy spectrum ${\rm d} E/{\rm d}f$ emitted by the binary is taken as (e.g., Zhu et al. 2011) $${{\rm d} E\over{\rm d}f}\simeq {(\pi G)^{2/3}\,\mathcal{M}_{\bullet\bullet}^{5/3}\over 3} \times \left\{ \begin{aligned} &f^{-1/3} &f<f_{\rm merg}\\ &f_{\rm merg}^{-1}\,f^{2/3} & f_{\rm merg}\leq f<f_{\rm ring}\\ &{f_{\rm merg}^{-1}\,f_{\rm ring}^{-4/3}\,f^2\over \left[1+\left({f-f_{\rm ring}\over \sigma/2}\right)^2\right]^2} &f_{\rm ring}\leq f<f_{\rm cut}~, \end{aligned} \right. \label{eq|GWenergy}$$ in terms of the same parameters $f_{\rm merg}$, $f_{\rm ring}$, $f_{\rm cut}$, and $\sigma$ appearing in Eqs. (\[eq|zmax\]). The results for the stochastic background originated by BH-BH, NS-NS and BH-NS mergers in galaxies are shown in Fig.
\[fig|GWback\] as black lines, both for AdvLIGO/Virgo and the ET. The thick cyan lines report the $1\sigma$ sensitivity curves for $1$ yr of observations and co-located detectors (Abbott et al. 2017f; Thrane & Romano 2013; Crocker et al. 2017). The stochastic background due to BH-BH mergers in galaxies may be only marginally revealed by AdvLIGO/Virgo, while that from all kinds of compact binary mergers should be detected with, and possibly characterized by, the ET.

Summary {#sec|summary}
=======

We have investigated the merging rates of compact binaries in galaxies, and the related rates of GW detection events with AdvLIGO/Virgo and with the Einstein Telescope. We have based our analysis on three main ingredients (see Sect. \[sec|basics\]): (i) redshift-dependent galaxy statistics provided by the latest determination of the SFR functions from UV+far-IR/(sub)mm/radio data (see Sect. \[sec|SFR\_func\] and Fig. \[fig|SFRfunc\]); (ii) star formation and chemical enrichment histories for individual galaxies, modeled on the basis of observations (see Sect. \[sec|SFR\_hist\]); (iii) the compact remnant mass distribution and prescriptions for the merging of compact binaries from stellar evolution simulations (see Sect. \[sec|stellarevo\] and Fig. \[fig|remnant\]). We have presented results for the intrinsic birthrate of compact remnants (see Sect. \[sec|birthrates\] and Fig. \[fig|Rbirth\]), the merging rates of compact binaries (see Sect. \[sec|mergerrates\] and Fig. \[fig|Rmerg\]), and the related GW detection rates and counts (see Sect. \[sec|GWdetection\] and Figs. \[fig|GWzdist\], \[fig|GWcounts\]), attempting to differentiate the outcomes for BH-BH, NS-NS, and BH-NS mergers. We have compared our approach with the one based on the cosmic SFR density and cosmic metallicity, exploited by many literature studies; the merging rates from the two approaches are in agreement within the overall astrophysical uncertainties.
We have computed the joint probability distribution of the chirp masses of merging compact binaries and the SFR (or stellar mass, metallicity, etc.) of the host galaxy progenitor as a function of redshift (see Sect. \[sec|mergerrates\] and Fig. \[fig|Rmerg\_3D\]). We have then investigated the impact of galaxy-scale strong gravitational lensing in enhancing the rate of detectable GW events toward high redshift (see Sect. \[sec|lensing\] and Figs. \[fig|GWzdist\], \[fig|GWcounts\]). Finally, we have discussed the contribution of undetected GW emission from compact binary mergers to the stochastic background (see Sect. \[sec|GWback\] and Fig. \[fig|GWback\]). In a nutshell, our work has mainly focused on developing an approach to post-process the outcomes of stellar evolution simulations toward computing GW event rates of compact binary mergers (both intrinsic and strongly gravitationally lensed). Specifically, we have coupled the metallicity-dependent compact remnant mass spectrum from stellar evolution simulations to the most recent observational determinations of the galaxy SFR functions and to the star formation and chemical enrichment histories of individual galaxies; such an approach in principle adds extra layers of information with respect to methods based on the integrated cosmic SFR density and cosmic metallicity, such as, potentially, the association of a GW event with the properties of its host galaxy. Admittedly, this is a first step, and with current data it carries some degree of uncertainty.
Nevertheless, an accurate treatment of the galaxy-related post-processing along the lines designed here, which hopefully will become feasible in the near future with more precise determinations of the SFR functions and of the enrichment history of galaxies at increasingly higher redshifts $z\ga 3$, will help in fully exploiting future GW observations and stellar evolution simulations to constrain the fundamental processes of stellar astrophysics that ultimately rule the formation and coalescence of binary compact remnants. As a concluding remark, we point out that our approach can also be adapted with minimal changes of formalism to multimessenger studies of various galaxy populations at different redshifts. Most noticeably, it could be exploited to predict the rate of electromagnetic, neutrino, and cosmic-ray emission events associated with NS-NS and/or BH-NS mergers as a function of host galaxy properties and of cosmic time, irrespective of detectability of the GW counterparts. We thank the referee for helpful comments. We warmly thank A. Bressan, F. Ricci, M. Spera, and J. Miller for stimulating discussions and critical reading. This work has been partially supported by the PRIN MIUR 2015 grant “Cosmology and Fundamental Physics: illuminating the Dark Universe with Euclid”, and by the RADIOFOREGROUNDS grant (COMPET-05-2015, agreement number 687312) of the European Union Horizon 2020 research and innovation program. AL acknowledges the MIUR grant ’Finanziamento annuale individuale attività base di ricerca’. Abbott, B.P., Abbott, R., Abbott, T.D., et al. 2019, in press \[arXiv:1811.12907\] Abbott, B.P., Abbott, R., Abbott, T.D., et al. 2018, PRL, 120, 091101 Abbott, B.P., Abbott, R., Abbott, T.D., et al. 2017a, ApJL, 851, L35 Abbott, B.P., Abbott, R., Abbott, T.D., et al. 2017b, ApJL, 848, L12 Abbott, B.P., Abbott, R., Abbott, T.D., et al. 2017c, PRL, 119, 161101 Abbott, B.P., Abbott, R., Abbott, T.D., et al.
2017d, PRL, 119, 141101 Abbott, B.P., Abbott, R., Abbott, T.D., et al. 2017e, PRL, 118, 221101 Abbott, B.P., Abbott, R., Abbott, T.D., et al. 2017f, PRL, 118, 121101 Abbott, B.P., Abbott, R., Abbott, T.D., et al. 2016a, PRL, 116, 241103 Abbott, B.P., Abbott, R., Abbott, T.D., et al. 2016b, PRL, 116, 061102 Abbott, B.P., Abbott, R., Abbott, T.D., et al. 2016c, PRX, 6, 041015 Abbott, B.P., Abbott, R., Abbott, T.D., et al. 2016d, ApJ, 832, L21 Aguirre, A., Dow-Hygelund, C., Schaye, J., & Theuns, T. 2008, ApJ, 689, 851 Ajith, P., Fotopoulos, N., Privitera, S., et al. 2014, PRD, 89, 084041 Ajith, P., Hannam, M., Husa, S., et al. 2011, PRL, 106, 241101 Ajith, P., Babak, S., Chen, Y., et al. 2008, PRD, 77, 104017 Alavi, A., Siana, B., Richard, J., et al. 2016, ApJ, 832, 56 Andrews, B.H., & Martini, P. 2013, ApJ, 765, 140 Arrigoni, M, Trager, S.C., Somerville, R.S., & Gibson, B.K. 2010, MNRAS, 402, 173 Balestra, I., Tozzi, P., Ettori, S., et al. 2007, A&A, 462, 429 Barack, L., Cardoso, V., Nissanke, S., et al. 2018, white paper for COST action ’Gravitational Waves, Black Holes, and Fundamental Physics’ \[arXiv:1806.05195\] Bhatawdekar, R., Conselice, C., Margalef-Bentabol, B., & Duncan, K. 2019, MNRAS, 486, 3805 Belczynski, K., Bulik, T., Olejak, A., et al. 2018 \[arXiv:1812.10065\] Belczynski, K., Holz, D.E., Bulik, T., & O’Shaughnessy, R. 2016, Natur, 534, 512 Belczynski, K., Kalogera, V., & Bulik, T. 2012, ApJ, 572, 407 Belczynski, K., Holz, D.E., Fryer, C.L., et al. 2010a, ApJ, 708, 117 Belczynski, K., Dominik, M., Bulik, T., et al. 2010b, ApJ, 715, L138 Blain, A. W. 1996, MNRAS, 283, 1340 Blanchard, P. K., Berger, E., Fong, W., et al. 2017, ApJ, 848, L22 Bouwens, R. J., Oesch, P. A., Illingworth, G. D., Ellis, R. S., & Stefanon, M. 2017, ApJ, 843, 129 Bouwens, R. J., Aravena, M., De Carli, R., et al. 2016, ApJ, 833, 72 Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2015, ApJ, 803, 34 Bovy, J. 2017, MNRAS, 470, 1360 Bressan A., Marigo P., Girardi L., et al. 
2012, MNRAS, 427, 127 Bressan A., Silva L., & Granato G.L., 2002, A&A, 392, 377 Caffau E., Ludwig H.-G., Steffen M., Freytag B., Bonifacio P., 2011, Sol. Phys., 268, 255 Cai, Z.-Y., Lapi, A., Xia, J.-Q., et al. 2013, ApJ, 768, 21 Calzetti, D., Armus, L., Bohlin, R. C., et al. 2000, ApJ, 533, 682 Cao, L., Lu, Y., & Zhao, Y. 2018, MNRAS, 474, 4997 Cassará, L. P., Maccagni, D., Garilli, B., et al. 2016, A&A, 593, A9 Chabrier, G. 2005, in The Initial Mass Function 50 years later, Astrop. Sp. Sci., 327, ed. by E. Corbelli and F. Palle (Springer: Dordrecht), p.41 Chabrier, G. 2003, ApJL, 586, L133 Chen Y., Bressan A., Girardi L., Marigo P., Kong X., Lanza A., 2015, MNRAS, 452, 1068 Chiappini, C., Matteucci, F., & Gratton, R. 1997, ApJ, 477, 765 Chruslinska, M., Nelemans, G., & Belczynski, K. 2019, MNRAS, 482, 5012 Chruslinska, M., Belczynski, K., Klencki, J., & Benacquista, M. 2018, MNRAS, 474, 2937 Cignoni, M., Sabbi, E., van der Marel, R. P., et al. 2016, ApJ, 833, 154 Citro, A., Pozzetti, L., Moresco, M., & Cimatti, A. 2016, A&A, 592, A19 Conroy, C. 2013, ARA&A, 51, 393 Coppin, K. E. K., Geach, J. E., Almaini, O., et al. 2015, MNRAS, 446, 1293 Cooray, A., Calanog, J., Wardlow, J. L., et al. 2014, ApJ, 790, 40 Cote, B., Fryer, C.L., Belczynski, K., et al. 2018, ApJ, 855, 99 Courteau, S., Cappellari, M., de Jong, R. S., et al. 2014, RvMP, 86, 47 Crocker, K., Prestegard, T., Mandic, V., et al. 2017, PhRvD, 95f3015 Cucciati, O., Tresse, L., Ilbert, O., et al. 2012, A&A, 539, A31 Daddi, E., Alexander, D. M., Dickinson, M., et al. 2007, ApJ, 670, 173 Davidzon, I., Ilbert, O., Laigle, C., et al. 2017, A&A, 605, A70 de la Rosa, I.G., La Barbera, F., Ferreras, I., et al. 2016, MNRAS, 457, 1916 Della Valle, M., Guetta, D., Cappellaro, E., et al. 2018, MNRAS, 481, 4355 de Mink, S. E., & Belczynski, K. 2015, ApJ, 814, 58 de Mink, S.E., Langer, N., Izzard, R.G., Sana, H., de Koter, A. 2013, ApJ, 764, 166 Dominik, M., Berti, E., O’Shaughnessy, R., et al. 
2015, ApJ, 806, 263 Dominik, M., Belczynski, K., Fryer, C., et al. 2013, ApJ, 779, 72 Dominik, M., Belczynski, K., Fryer, C., et al. 2012, ApJ, 759, 52 Dunlop, J. S., McLure, R. J., Biggs, A. D., et al. 2017, MNRAS, 466, 861 Dvorkin, I., Uzan, J.-P., Vangioni, E., & Silk, J. 2018, MNRAS, 479, 121 Dvorkin, I., Vangioni, E., Silk, J., Uzan, J.-P., & Olive, K.A. 2016, MNRAS, 461, 3877 Eales, S. A. 2015, MNRAS, 446, 3224 Efstathiou, A., Rowan-Robinson, M., & Siebenmorgen, R. 2000, MNRAS, 313, 734 Elbert,O.D., Bullock, J., S., & Kaplinghat, M., 2018, MNRAS, 473, 1186 Feldmann, R. 2015, MNRAS, 449, 3274 Finkelstein, S. L., Ryan, R. E., Jr., Papovich, C., et al. 2015, ApJ, 810, 71 Finn, L.S. 1996, Phys. Rev. D, 53, 6 Fishbach, M., Gray, R., Magana Hernandez, I., et al. 2019, ApJ, 871, L13 Fryer, C. L., Belczynski, K.,Wiktorowicz, G., et al. 2012, ApJ, 749, 91 Fudamoto, Y., Oesch, P. A., Schinnerer, E., et al. 2017, MNRAS, 472, 483 Gallazzi, A., Bell, E.F., Zibetti, S., Brinchmann, J., & Kelson, D.D. 2014, ApJ, 788, 72 Gallazzi, A., Charlot, S., Brinchmann, J., & White, S. D. M. 2006, MNRAS, 370, 1106 Giacobbo, N., Mapelli, M., & Spera, M. 2018, MNRAS, 474, 2959 Giacobbo, N., & Mapelli, M. 2018, MNRAS, 480, 2011 Grisoni, V., Spitoni, E., Matteucci, F., et al. 2017, MNRAS, 472, 3637 Gruppioni, C., & Pozzi, F. 2019, MNRAS, 483, 1993 Gruppioni, C., Calura, F., Pozzi, F., et al. 2015, MNRAS, 451, 3419 Gruppioni, C., Pozzi, F., Rodighiero, G., et al. 2013, MNRAS, 432, 23 Hopkins, P.F., Quataert, E., & Murray, N. 2012, MNRAS, 421, 3522 Hopkins, A. M., & Beacom, J. F. 2006, ApJ, 651, 142 Im, M., Yoon, Y., Lee, S.-K. J.m et al. 2017, ApJ, 849, L16 Johansson, J., Thomas, D., Maraston, C. 2012, MNRAS, 421, 1908 Jin, Z.-P., Li, X., Wang, H., et al. 2018, ApJ, 857, 128 Kidder, L.E., Will, C.M., & Wiseman, A.G. 1993, Phys. Rev. D, 47, 8 Kim, C., Perera, B.B.P. & McLaughlin, M.A. 2015, MNRAS, 448, 928 Kistler, M. D., Yuksel, H., Beacom, J. F., et al. 2009, ApJL, 705, L104 Kistler, M. 
D., Yuksel, H., & Hopkins, A. M. 2013 \[arXiv:1305.1630\] Kroupa, P. 2002, Sci, 295, 82 Lacey, C. G., Baugh, C. M., Frenk, C. S., et al. 2010, MNRAS, 405, 2 Lamberts, A., Garrison-Kimmel, S., Hopkins, P.F., et al. 2018, MNRAS, 480, 2704 Lamberts, A., Garrison-Kimmel, S., Clausen, D.R., & Hopkins, P.F. 2016, MNRAS, 463, L31 Lapi, A., Pantoni, L., Zanisi, L., et al. 2018, 857, 22 Lapi, A., Mancuso, C., Bressan, A., & Danese, L. 2017a, ApJ, 847, 13 Lapi, A., Mancuso, C., Celotti, A., & Danese, L. 2017b, ApJ, 835, 37 Lapi, A., Negrello, M., Gonzalez-Nuevo, J., et al. 2012, ApJ, 755, 46 Lapi, A., Gonzalez-Nuevo, J., Fan, L., et al. 2011, ApJ, 742, 24 Li, S.-S., Mao, S., Zhao, Y., & Lu, Y. 2018, MNRAS, 476, 2220 Liao, K., Fan, X.-L., Ding, X., Biesiada, M., & Zhu, Z.-H. 2017, Natur Comm., 8, 1148 Licquia, T.C., & Newman, J.A. 2015, ApJ, 806, 6 Liu, D., Daddi, E., Dickinson, M., et al. 2018, ApJ, 853, 172 Madau, P., & Dickinson, M. 2014, ARA&A, 52, 415 Magnelli, B., Popesso, P., Berta, S., et al. 2013, A&A, 553, A132 Mancuso, C., Lapi, A., Prandoni, I., et al. 2017, ApJ, 842, 95 Mancuso, C., Lapi, A., Shi, J., et al. 2016a, ApJ, 833, 152 Mancuso, C., Lapi, A., Shi, J., et al. 2016b, ApJ, 823, 128 Mannucci, F., Cresci, G., Maiolino, R., Marconi, A., & Gnerucci, A. 2010, MNRAS, 408, 2115 Maoz, D., Mannucci, F., & Nelemans, G. 2014, ARA&A, 52, 107 Mapelli, M., Giacobbo, N., Ripamonti, E., & Spera, M. 2017, MNRAS, 472, 2422 Marrone, D. P., Spilker, J. S., Hayward, C. C., et al. 2018, Natur, 553, 51 Martin-Navarro, I., Vazdekis, A., La Barbera, F., et al. 2015, ApJ, 806, L31 Meurer, G. R., Heckman, T. M., & Calzetti, D. 1999, ApJ, 521, 64 Mo, H., van den Bosch, F., & White, S.D.M. 2010, Galaxy Formation and Evolution (Cambridge: Cambridge Univ. Press) Moustakas, J., Coil, A. L., Aird, J., et al. 2013, ApJ, 767, 50 Murphy, E. J., Bremseth, J., Mason, B. S., et al. 2012, ApJ, 761, 97 Negrello, M., Hopwood, R., De Zotti, G., et al. 
2010, Science, 330, 800 Negrello, M., Perrotta, F., Gonz´alez-Nuevo, J., et al. 2007, MNRAS, 377, 1557 Ng, K.K.Y., Wong, K.W.K., Broadhurst, T., & Li, T.G.F. 2018, PRD, 97, 023012 Nissanke, S., Holz, D.E., Dalal, N., et al. 2013 \[arXiv:1307.2638\] Novak, M., Smolcic, V., Delhaize, J., et al. 2017, A&A, 602, 5 Oesch, P. A., Bouwens, R. J., Carollo, C. M., et al. 2010, ApJL, 725, L150 Oguri, M. 2018, MNRAS, 480, 3842 Onodera, M., Carollo, C.M, Lilly, S., et al. 2016, ApJ, 822, 42 O’Shaughnessy, R., Bellovary, J. M., Brooks, A., et al. 2017, MNRAS, 464, 2831 O’Shaughnessy, R., Kalogera, V., Belczynski, K. 2010, ApJ, 716, 615 Palmese, A., Hartley, W., Tarsitano, F., et al. 2017, ApJL, 849, L34 Panuzzo, P., Bressan, A., Granato, G. L., Silva, L., & Danese, L. 2003, A&A, 409, 99 Papovich, C., Finkelstein, S. L., Ferguson, H. C., Lotz, J. M., & Giavalisco, M. 2011, MNRAS, 412, 1123 Perrotta, F., Baccigalupi, C., Bartelmann, M., De Zotti, G., & Granato, G. L. 2002, MNRAS, 329, 445 Petrillo, C.E., Dietz, A., & Cavagliá, M. 2013, ApJ, 767, 140 Pezzulli, G., & Fraternali, F. 2016, MNRAS, 455, 2308 Planck Collaboration 2019, A&A, in press \[arXiv:1807.06209\] Pol, N., McLaughlin, M., & Lorimer, D.R. 2019, ApJ, 870, 71 Rafelski, M., Wolfe, A.M., Prochaska, J.X., Neeleman, M., & Mendez, A.J. 2012, ApJ, 755, 89 Reddy, N. A., Kriek, M., Shapley, A. E., et al. 2015, ApJ, 806, 259 Regimbau, T., Dent, T., Del Pozzo, W., et al. 2012 Phys. Rev. D, 86, 122001 Riechers, D.A., Leung, T. K. D., Ivison, R.J., et al. 2017, ApJ, 850, 1 Rodighiero, G., Brusa, M, Daddi, E., et al. 2015, ApJL, 800, L10 Rodighiero, G., Daddi, E., Baronchelli, I., et al. 2011, ApJL, 739, L40 Romano, D., Karakas, A. I., Tosi, M., & Matteucci, F. 2010, A&A, 522, A32 Romano, D., Silva, L., Matteucci, F., & Danese, L. 2002, MNRAS, 334, 444 Rowan-Robinson, M., Oliver, S., Wang, L., et al. 2016, MNRAS, 461, 1100 Salim, S., Lee, J.C., Davé, R., & Dickinson, M. 
2015, ApJ, 808, 25 Sanders, R.L., Shapley, A.E., Kriek, M., et al. 2018, ApJ, 858, 99 Salpeter, E. E. 1955, ApJ, 121, 161 Sathyaprakash, B., Abernathy, M., Acernese, F., et al. 2012, CQGra, 29, l4013 Schiminovich, D., Ilbert, O., Arnouts, S., et al. 2005, ApJL, 619, L47 Silva, L., Schurer, A., Granato, G.L., et al. 2011, MNRAS, 410, 2043 Silva, L., Granato, G. L., Bressan, A., & Danese, L. 1998, ApJ, 509, 103 Smith, G.P., Jauzac, M., Veitch, J., et al. 2018, MNRAS, 475, 3823 Smit, R., Bouwens, R. J., Franx, M., et al. 2012, ApJ, 756, 14 Speagle, J. S., Steinhardt,C. L., Capak, P. L.,& Silverman, J. 2014, ApJS, 214, 15 Spera, M., Mapelli, M., Giacobbo, N., et al. 2019, MNRAS, 485, 889 Spera, M., & Mapelli, M. 2017, MNRAS, 470, 4739 Spera, M., Mapelli, M., & Bressan, A. 2015, MNRAS, 451, 4086 Spolaor, M, Kobayashi, C., Forbes, D.A., Couch, W.J., & Hau, G.K.T. 2010, MNRAS, 408, 272 Stacey, H. R., McKean, J. P., Robertson, N. C., et al. 2018, MNRAS, 476, 5075 Steinhardt, C. L., Speagle, J. S., & Capak, P. 2014, ApJL, 791, L25 Strolger, L.-G., Riess, A. G., Dahlen, T., et al. 2004, ApJ, 613, 200 Takahashi, R., & Nakamura, T. 2003, ApJ, 595, 1039 Tang, J., Bressan, A., Rosenfield, P., et al. 2014, MNRAS, 445, 4287 Taylor, S.R., & Gair, J.R. 2012, PhRD, 86, 023502 Taylor, S.R., Gair, J.R., & Mandel, I. 2012, PhRD, 85, 023535 Thrane, E., & Romano, D. 2013, PhRD, 88, 124032 Thomas D., Maraston C., Schawinski K., Sarzi M., & Silk J. 2010, MNRAS, 404, 1775 Thomas, D., Maraston, C., Bender, R., & Mendes de Oliveira, C. 2005, ApJ, 621, 673 Troja, E., Piro, L., van Eerten, H., et al. 2017, Natur, 551, 71 van der Burg, R. F. J., Hildebrandt, H., & Erben, T. 2010, A&A, 523, A74 Vangioni, E., Olive, K.A., Prestegard, Tanner, et al. 2015, MNRAS, 447, 2575 Vega, O., Silva, L., Panuzzo, P., et al. 2005, MNRAS, 364, 1286 Venemans, B.P., Decarli, R., Walter, F., et al. 2018, ApJ, 866, 159 Vitale, S., & Farr, W.M. 2018 \[arXiv:1808.00901\] Wei, J.-J., & Wu, X.-F. 
2017, MNRAS, 472, 2906 Weinberg, S. 2008, Cosmology (Oxford: Oxford Univ. Press) Weinstein, A.J. 2012, CQGra, 29, 124012 Woosley, S.E., Heger, A., & Weaver, T. A. 2002, Rev. Modern Phys., 74, 1015 Zahid, H. J., Kashino, D., Silverman, J. D., et al. 2014, ApJ, 792, 75 Zavala, J.A., Montana, A., Hughes, D.H., et al. 2018, Natur Astron, 2, 56 Zhu, X.-J., Howell, E., Regimbau, T., Blair, D., & Zhu, Z.-H. 2011, ApJ, 739, 86 ![image](SFRfunc.pdf){width="16cm"} ![image](SFRcosm.pdf){width="16cm"} ![image](Mrem.pdf){width="16cm"} ![image](Rbirth.pdf){width="16cm"} ![image](Rmerg.pdf){width="16cm"} ![image](Rmerg_chirp.pdf){width="16cm"} ![image](Rmerg_3D_z0.pdf){width="9cm"} ![image](Rmerg_3D_z3.pdf){width="9cm"} ![image](Rmerg_3D_z6.pdf){width="9cm"} ![image](GWzdist_AdvLIGO.pdf){width="11cm"} ![image](GWzdist_ET.pdf){width="11cm"} ![image](GWzdist_AdvLIGO_complot.pdf){width="\textwidth"} ![image](GWcounts_AdvLIGO.pdf){width="11cm"} ![image](GWcounts_ET.pdf){width="11cm"} ![image](GWbackground_AdvLIGO.pdf){width="11cm"} ![image](GWbackground_ET.pdf){width="11cm"} [^1]: Throughout the paper we refer to AdvLIGO/Virgo in the design configuration and to the ET in the ET-D xylophone configuration. [^2]: Note that in Eq. (\[eq|easybirthrate\]) and following ones, the inner integral over star masses should contain the quantity $\phi(m_\star)/\int{\rm d}m_\star\, \phi(m_\star)\, m_\star$; however, in the literature the denominator is usually left implicit because of the IMF normalization condition $\int{\rm d}m_\star\, \phi(m_\star)\, m_\star\equiv 1\, M_\odot$, though the reader should keep track of the measure units. [^3]: Actually $\rho_{\psi}$ and $\langle Z\rangle$ should be computed at $t-\tau_{m_\star}$ where $\tau_{m_\star}$ is the star lifetime; however, since $\tau_{m_\star}\ll t$ this delay is safely neglected. [^4]: Hereafter $1\, M_\odot\approx 2\times 10^{33}$ g.
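The IMF normalization convention recalled in footnote 2 can be made concrete with a minimal numerical sketch. The Salpeter-like slope and the mass bounds of 0.1 to 100 $M_\odot$ below are illustrative assumptions, not the values adopted in the paper:

```python
import numpy as np

def trap(f, x):
    """Trapezoidal integration (kept explicit to avoid version-dependent helpers)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def imf(m, slope=-2.35):
    """Unnormalized Salpeter-like power-law IMF phi(m) ~ m^slope (illustrative)."""
    return m**slope

# Illustrative mass grid in solar masses; the bounds are assumptions.
m = np.logspace(np.log10(0.1), np.log10(100.0), 20000)
phi = imf(m)

# Impose the normalization condition of footnote 2:
#   \int dm phi(m) m = 1 Msun.
phi /= trap(phi * m, m)

# After normalization the first mass moment equals unity by construction.
mass_moment = trap(phi * m, m)
print(round(mass_moment, 6))  # -> 1.0
```

With this convention the inner integrals over star masses can indeed leave the denominator implicit, as the footnote notes, provided one tracks the measure units.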
---
abstract: 'Let ${X}$ be a proper Hadamard space and $\ \Gamma<{\mbox{Is}}({X})$ a non-elementary discrete group of isometries with a rank one isometry. We discuss and prove the Hopf-Tsuji-Sullivan dichotomy for the geodesic flow on the set of parametrized geodesics of the quotient $\quotient{\Gamma}{{X}}$ with respect to Ricks’ measure introduced in [@Ricks]. This generalizes previous work of the author and J. C. Picaud on the Hopf-Tsuji-Sullivan dichotomy in the analogous manifold setting with respect to Knieper’s measure.'
title: 'Hopf-Tsuji-Sullivan dichotomy for quotients of Hadamard spaces with a rank one isometry'
---

<span style="font-variant:small-caps;">Gabriele Link$^*$</span>

Introduction
============

Let $({X},d)$ be a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ a discrete group. Let ${{\mathcal G}}$ denote the set of parametrized geodesic lines in ${X}$, endowed with the compact-open topology (${{\mathcal G}}$ can be identified with the unit tangent bundle $S{X}$ if ${X}$ is a Riemannian manifold), and consider the action of ${\mathbb{R}}$ on ${{\mathcal G}}$ by reparametrization. This action induces a flow $g_\Gamma$ on the quotient space $\quotient{\Gamma}{{{\mathcal G}}}$. Let $m_\Gamma$ be an appropriate Radon measure on $\quotient{\Gamma}{{{\mathcal G}}}$ which is invariant under the flow $g_\Gamma$.
The Hopf-Tsuji-Sullivan dichotomy then states that – under certain conditions on the space ${X}$ and the group $\Gamma$ – there are precisely two mutually exclusive possibilities for the dynamical system $( \quotient{\Gamma}{{{\mathcal G}}}, g_\Gamma, m_\Gamma)$: Either it is conservative (that is, almost every orbit is recurrent) and ergodic (which means that the only invariant sets have zero or full measure), or it is dissipative (that is, almost every orbit is divergent) and non-ergodic. For a precise definition of the previous notions the reader is referred to Section \[dyndef\]. The story of the Hopf-Tsuji-Sullivan dichotomy probably began with Poincaré’s recurrence theorem applied to Riemann surfaces and with Hopf’s seminal work in the 1930s (see [@Hopf] and [@MR0284564]). For quotients of the hyperbolic plane by Fuchsian groups it was observed that with respect to Liouville measure the geodesic flow is either conservative and ergodic or dissipative and non-ergodic. Later, with the invention of the remarkable Patterson-Sullivan measures on the boundary of ${X}$ (see [@MR0450547] and [@MR556586] for the original constructions, then [@MR1348871], [@MR1293874], [@MR1207579] for extensions, and [@MR2057305] for a clear account and deep applications of this theory) and then the construction of the Bowen-Margulis measure on $\quotient{\Gamma}{ S {X}}$ using these, generalizations to a wider class of spaces and groups have been obtained by several authors. Among them I only want to mention here the work of M. Coornaert and A. Papadopoulos ([@MR1207579]), which deals with locally compact metric trees, and the work of V. Kaimanovich ([@MR1293874]) in the setting of Gromov hyperbolic spaces with some additional properties; these were probably the first to consider non-Riemannian spaces. T. Roblin ([@MR2057305 Théorème 1.7]) then gave a unified version for all proper CAT$(-1)$-spaces.
Recently, in [@LinkPicaud], the Hopf-Tsuji-Sullivan dichotomy was proved for quotients of Hadamard **manifolds** by discrete isometry groups containing an element which translates a geodesic without parallel perpendicular Jacobi field, with respect to Knieper’s measure ([@MR1652924]) on the unit tangent bundle. The main goal of the present paper is to prove the Hopf-Tsuji-Sullivan dichotomy in the setting of proper Hadamard **spaces** with a **rank one** isometry (that is, an isometry translating a geodesic which does not bound a flat half-plane) and hence to generalize the Main Theorem of [@LinkPicaud] to non-Riemannian spaces; compared to [@LinkPicaud] we also impose an a priori weaker condition on the discrete group $\Gamma$ of the Hadamard **manifold** ${X}$: in fact, we only need a discrete group with infinite limit set which contains the fixed point of a rank one isometry of ${X}$. So in particular ${X}$ need not a priori possess a geodesic without parallel perpendicular Jacobi field, but only one which does not bound a flat half-plane. However, this can only happen when ${X}$ does not admit a quotient of finite volume, according to the rank rigidity theorem of Ballmann [@MR819559] and Burns-Spatzier [@MR908215], which asserts that otherwise ${X}$ has a geodesic without parallel perpendicular Jacobi field. Even though some of the results from the above-mentioned paper [@LinkPicaud] remain true in this more general setting, there are several obstructions occurring when singular spaces are involved. Probably the most important one is the fact that Knieper’s measure cannot be constructed without a volume form on the closed and convex subsets corresponding to the parallel sets of geodesic lines. We will therefore follow here the construction proposed by R. Ricks in [@Ricks] and first define a weak Bowen-Margulis measure on the quotient $\quotient{\Gamma}{[{{\mathcal G}}]}$ of parallel classes of **parametrized** geodesic lines by $\Gamma$.
With respect to this measure we have the following theorem.

**Theorem A.** Let $X$ be a proper Hadamard space and $\ \Gamma<{\mbox{Is}}(X)$ a discrete group with the fixed point of a rank one isometry of $X$ in its infinite limit set. Then with respect to Ricks’ weak Bowen-Margulis measure either the geodesic flow on $\quotient{\Gamma}{[{{\mathcal G}}]}$ is conservative, or it is dissipative and non-ergodic unless the measure is supported on a single orbit of the geodesic flow on $\quotient{\Gamma}{[{{\mathcal G}}]}$.

Notice that since Ricks’ construction of the weak Bowen-Margulis measure depends on the choice of a conformal density, a priori there may exist many distinct weak Bowen-Margulis measures. In the conservative case, however, it is well-known that up to scaling there exists only one conformal density; hence there is precisely one Ricks’ weak Bowen-Margulis measure in this setting. We remark that we do not manage to deduce ergodicity from conservativity in this weakest setting (only requiring the fixed point of an **arbitrary** rank one isometry in the limit set of $\Gamma$), as neither the Hopf argument nor Kaimanovich’s method for the proof of Theorem 2.5 in [@MR1293874] can be applied in this case. However, if ${X}$ is **geodesically complete**, then thanks to Proposition \[largewidthgiveszerowidth\] this weak assumption implies the existence of a **zero width** rank one geodesic (that is, one which does not even bound a flat **strip**) with extremities in the limit set of $\Gamma$. Under this additional assumption any weak Bowen-Margulis measure induces a so-called Ricks’ Bowen-Margulis measure $m_\Gamma$ on the quotient $\quotient{\Gamma}{{{\mathcal G}}}$. Notice that by the remark following Theorem A there is only one Ricks’ Bowen-Margulis measure in the conservative case.
We finally get Theorem \[HTS\], the full Hopf-Tsuji-Sullivan dichotomy including ergodicity in the conservative case; a short version reads as follows.

**Theorem B.** Let $X$ be a proper Hadamard space and $\ \Gamma<{\mbox{Is}}(X)$ a discrete group with the fixed point of a rank one isometry of ${X}$ **and** the extremities of a **zero width** rank one geodesic in its infinite limit set. Then with respect to any Ricks’ Bowen-Margulis measure either the geodesic flow on $\quotient{\Gamma}{{{\mathcal G}}}$ is conservative and ergodic, or it is dissipative and non-ergodic unless the measure is supported on a single orbit of the geodesic flow in $\quotient{\Gamma}{{{\mathcal G}}}$.

We also want to mention here that if ${X}$ is a Hadamard **manifold**, then in the conservative case Ricks’ Bowen-Margulis measure $m_\Gamma$ is equal to Knieper’s measure which was used in [@LinkPicaud]. If moreover $\Gamma$ is cocompact, then Knieper’s work [@MR1652924] implies that Ricks’ Bowen-Margulis measure is the unique measure of maximal entropy on the unit tangent bundle $\quotient{\Gamma}{{{\mathcal G}}}$. We now summarize what is known (from the Main Theorem of [@LinkPicaud] and Theorem B above) in the special case of Hadamard **manifolds**.

**Theorem C.** Let $X$ be a Hadamard manifold and $\ \Gamma<{\mbox{Is}}(X)$ a discrete group with the fixed point of an arbitrary rank one isometry of ${X}$ in its infinite limit set. Then either Knieper’s measure and Ricks’ Bowen-Margulis measure on $\quotient{\Gamma}{{{\mathcal G}}}$ coincide, and the geodesic flow is conservative and ergodic with respect to this measure, or the geodesic flow is dissipative with respect to any Knieper’s measure and with respect to any Ricks’ Bowen-Margulis measure on $\quotient{\Gamma}{{{\mathcal G}}}$. Moreover, in the second case it is non-ergodic unless the considered measure is supported on a single orbit of the geodesic flow.
Again, in the dissipative case there may be several choices for Knieper’s measure and for Ricks’ Bowen-Margulis measure on $\quotient{\Gamma}{{{\mathcal G}}}$, as both measures are constructed from a conformal density. And even if the same conformal density is used in the construction, Knieper’s measure and Ricks’ Bowen-Margulis measure might be different. Actually, in this article we will consider slightly more general classes of measures on $\quotient{\Gamma}{[{{\mathcal G}}]}$ and $\quotient{\Gamma}{{{\mathcal G}}}$, respectively: instead of using the geodesic current associated to a conformal density for the construction, we allow for an arbitrary quasi-product geodesic current (see Section \[geodcurrentmeasures\] for a precise definition).

The paper is organized as follows: In Section \[prelim\] we fix some notation and recall basic facts concerning Hadamard spaces; in Section \[rank1prelim\] the notion of rank one isometry is recalled and basic properties are listed. Section \[rankonegroups\] discusses conditions under which a subgroup $\Gamma$ of the isometry group of a proper Hadamard space ${X}$ is **rank one** (that is, contains a pair of independent rank one elements), and under which hypotheses the presence of a rank one geodesic of zero width in ${X}$ with extremities in the limit set of $\Gamma$ can be guaranteed. This section is of independent interest. In Section \[dyndef\] basic notions and useful facts from ergodic theory and dynamical systems are recalled, and the important notion of quasi-product geodesic current is introduced. We also recall from [@Ricks] Ricks’ construction of a geodesic flow invariant measure associated to such a geodesic current, first on the quotient $\quotient{\Gamma}{[{{\mathcal G}}]}$ of parallel classes of parametrized geodesic lines and finally on the quotient $\quotient{\Gamma}{{{\mathcal G}}}$ of parametrized geodesic lines.
Section \[propradlimset\] deals with the relation between the radial limit set of the group $\Gamma$ and recurrence in $\quotient{\Gamma}{[{{\mathcal G}}]}$ and $\quotient{\Gamma}{{{\mathcal G}}}$, respectively. We deduce the crucial Theorem \[zerofull\], which in particular implies that for a rank one group $\Gamma$ with the extremity of a zero width rank one geodesic in its limit set, any conservative quasi-product geodesic current $\overline\mu$ is supported on the set of end point pairs of zero width rank one geodesics. In Section \[HopfArgument\] we use the Hopf argument to show that in the presence of a **zero width** rank one geodesic with extremities in the limit set, conservativity of a quasi-product geodesic current $\overline\mu$ satisfying a mild growth condition implies ergodicity of the geodesic flow with respect to the associated geodesic flow invariant Ricks’ measure. Compared to the classical case, a few technical issues need to be addressed there. In Section \[currentsfromconfdens\] we then specialize to geodesic currents coming from a conformal density. We recall a few properties of conformal densities and prove Proposition \[confgivesdissipative\], which states that for convergent groups every Ricks’ measure on $\quotient{\Gamma}{[{{\mathcal G}}]}$ is dissipative. Section \[divergentmeansconservative\] is devoted to the proof of Proposition \[divseries\], namely that divergent groups always induce a conservative Ricks’ measure. The minimal requirement that $\Gamma$ contains only a rank one element of **arbitrary width** makes the proof a bit more technical than it would be in the presence of a zero width geodesic with extremities in the limit set; however, it is needed in this form to obtain Theorem \[HTSweak\], which is Theorem A above. In the final Section \[conclusion\] we summarize our results to deduce Theorems A, B and C. Following an idea of F. Dal’bo, M. Peigné and J.P.
Otal ([@MR1776078], [@MR3220550]) we also show how to construct plenty of convergent discrete rank one isometry groups of any Hadamard space admitting a rank one isometry.

Preliminaries on Hadamard spaces {#prelim}
================================

The purpose of this section is to introduce terminology and notation and to summarize basic results about Hadamard spaces. Most of the material can be found in [@MR1744486] and [@MR1377265] (see also [@MR823981] in the special case of Hadamard manifolds and [@Ricks] for more recent results). Let $({X},d)$ be a metric space. For $y\in {X}$ and $r>0$ we will denote by $B_y(r)\subseteq{X}$ the open ball of radius $r$ centered at $y\in{X}$. A **geodesic** is a map $\sigma$ from a closed interval $I\subseteq{\mathbb{R}}$ or $I={\mathbb{R}}$ to ${X}$ such that $d(\sigma(t), \sigma(t'))=|t-t'|$ for all $t,t'\in I$. For more precision we use the term **geodesic ray** if $I=[0,\infty)$ and **geodesic line** if $I={\mathbb{R}}$. We will deal here with **Hadamard spaces** $({X},d)$, that is, complete metric spaces in which for any two points $x,y\in{X}$ there exists a geodesic $\sigma:[0,d(x,y)]\to {X}$ with $\sigma(0)=x$ and $\sigma(d(x,y))=y$, and in which all geodesic triangles satisfy the CAT$(0)$-inequality. This implies in particular that ${X}$ is simply connected and that the geodesic joining an arbitrary pair of points in ${X}$ is unique. Notice, however, that in the non-Riemannian setting completeness of ${X}$ does not imply that every geodesic can be extended to a geodesic line, so ${X}$ need not be geodesically complete. The geometric boundary ${\partial{X}}$ of ${X}$ is the set of equivalence classes of asymptotic geodesic rays endowed with the cone topology (see for example Chapter II in [@MR1377265]). We remark that for all $x\in{X}$ and all $\xi\in{\partial{X}}$ there exists a unique geodesic ray $\sigma_{x,\xi}$ with origin $x=\sigma_{x,\xi}(0)$ representing $\xi$.
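One standard equivalent formulation of the CAT$(0)$-inequality is the midpoint (CN) inequality $d(x,m)^2 \le \tfrac12 d(x,y)^2 + \tfrac12 d(x,z)^2 - \tfrac14 d(y,z)^2$, where $m$ is the midpoint of the geodesic $[y,z]$. A minimal numerical sketch (purely illustrative) checks it in the Euclidean comparison space $\mathbb{R}^2$, where it holds with equality by the parallelogram law:

```python
import numpy as np

def cat0_midpoint_gap(x, y, z):
    """RHS minus LHS of the CAT(0) (CN) midpoint inequality
         d(x,m)^2 <= d(x,y)^2/2 + d(x,z)^2/2 - d(y,z)^2/4,
    where m is the midpoint of the geodesic segment [y,z]."""
    m = 0.5 * (y + z)                     # geodesics in R^n are straight segments
    lhs = np.sum((x - m) ** 2)
    rhs = (0.5 * np.sum((x - y) ** 2) + 0.5 * np.sum((x - z) ** 2)
           - 0.25 * np.sum((y - z) ** 2))
    return rhs - lhs

# In the Euclidean comparison space the inequality is an equality
# (parallelogram law), so the gap vanishes for random triangles.
rng = np.random.default_rng(0)
gaps = [cat0_midpoint_gap(*rng.normal(size=(3, 2))) for _ in range(100)]
print(max(abs(g) for g in gaps) < 1e-9)  # True
```

In a general Hadamard space the gap is nonnegative; a strictly positive gap quantifies how far the triangle is from its flat comparison triangle.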
From here on we will require that ${X}$ is proper; in this case the geometric boundary ${\partial{X}}$ is compact and the space ${X}$ is a dense and open subset of the compact space ${\overline{{X}}}:={X}\cup{\partial{X}}$. Moreover, the action of the isometry group ${\mbox{Is}}({X})$ on ${X}$ naturally extends to an action by homeomorphisms on the geometric boundary. If $x, y\in {X}$, $\xi\in{\partial{X}}$ and $\sigma$ is a geodesic ray in the class of $\xi$, we set $$\label{buseman} {{\mathcal B}}_{\xi}(x, y)\,:= \lim_{s\to\infty}\big( d(x,\sigma(s))-d(y,\sigma(s))\big).$$ This number exists, is independent of the chosen ray $\sigma$, and the function $${{\mathcal B}}_{\xi}(\cdot , y): {X}\to {\mathbb{R}},\quad x \mapsto {{\mathcal B}}_{\xi}(x, y)$$ is called the **Busemann function** centered at $\xi$ based at $y$ (see also Chapter II in [@MR1377265]). Obviously we have $${{\mathcal B}}_{g\cdot\xi}(g{\!\cdot\!}x,g{\!\cdot\!}y) = {{\mathcal B}}_{\xi}(x, y)\quad\text{for all }\ x,y\in{X}\quad\text{and }\ g\in{\mbox{Is}}({X}),$$ and the cocycle identity $$\label{cocycle} {{\mathcal B}}_{\xi}(x, z)={{\mathcal B}}_{\xi}(x, y)+{{\mathcal B}}_{\xi}(y,z)$$ holds for all $x,y,z\in{X}$. Since ${X}$ is non-Riemannian in general, we consider (as a substitute for the unit tangent bundle $S{X}$) the set of parametrized geodesic lines in ${X}$, which we will denote by ${{\mathcal G}}$. We endow this set with the metric $d_1$ given by $$\label{metriconSX} d_1(u,v):=\sup \{ {\mathrm{e}}^{-|t|} d\bigl(v(t), u(t)\bigr) \colon t\in{\mathbb{R}}\}\ \mbox{ for} \ u,v\in {{\mathcal G}};$$ this metric induces the compact-open topology, and every isometry of ${X}$ naturally extends to an isometry of the metric space $({{\mathcal G}},d_1)$. Moreover, there is a natural map $p:{{\mathcal G}}\to{X}$ defined as follows: to a geodesic line $v:{\mathbb{R}}\to {X}$ in ${{\mathcal G}}$ we assign its origin $pv:=v(0)\in{X}$.
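In the model case $X=\mathbb{R}^n$ the limit in (\[buseman\]) can be evaluated explicitly: for a boundary point $\xi$ with unit direction $u$ one gets ${{\mathcal B}}_\xi(x,y)=\langle y-x,u\rangle$. A short numerical sketch (illustrative only) checks this closed form against a finite-$s$ truncation of the defining limit and verifies the cocycle identity (\[cocycle\]):

```python
import numpy as np

def busemann(x, y, u, s=1e7):
    """Finite-s approximation of B_xi(x,y) = lim_s [d(x,sig(s)) - d(y,sig(s))]
    for the Euclidean ray sig(s) = s*u towards xi (u a unit vector)."""
    p = s * u
    return np.linalg.norm(x - p) - np.linalg.norm(y - p)

u = np.array([0.6, 0.8])                               # unit direction of xi
x, y, z = np.array([1.0, 2.0]), np.array([-3.0, 0.5]), np.array([0.0, -4.0])

# Closed form of the limit in R^n:  B_xi(x,y) = <y - x, u>.
print(abs(busemann(x, y, u) - (y - x) @ u) < 1e-5)     # True

# Cocycle identity:  B_xi(x,z) = B_xi(x,y) + B_xi(y,z).
print(abs(busemann(x, z, u)
          - busemann(x, y, u) - busemann(y, z, u)) < 1e-5)  # True
```

The sign convention matters: ${{\mathcal B}}_\xi(x,y)$ is positive when $y$ is "closer to" $\xi$ than $x$, consistent with the definition above.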
Notice that $p$ is proper, $1$-Lipschitz and ${\mbox{Is}}({X})$-equivariant; if ${X}$ is geodesically complete, then $p$ is surjective. For a geodesic line $v\in {{\mathcal G}}$ we call its extremities $v^-:=v(-\infty)\in{\partial{X}}$ and $v^+:=v(+\infty)\in{\partial{X}}$ the negative and the positive end point of $v$; in particular, we can define the end point map $${\partial}:{{\mathcal G}}\to {\partial{X}}\times{\partial{X}},\quad v\mapsto (v^-,v^+).$$ We say that a point $\xi\in{\partial{X}}$ can be joined to $\eta\in{\partial{X}}$ by a geodesic $v\in {{\mathcal G}}$ if $v^-=\xi$ and $v^+=\eta$. Obviously the set of pairs $(\xi,\eta)\in{\partial{X}}\times{\partial{X}}$ such that $\xi$ and $\eta$ can be joined by a geodesic coincides with ${\partial}{{\mathcal G}}$, the image of ${{\mathcal G}}$ under the end point map ${\partial}$. It is well-known that if ${X}$ is CAT$(-1)$, then any pair of distinct boundary points $(\xi,\eta)$ belongs to ${\partial}{{\mathcal G}}$, and the geodesic joining $\xi$ to $\eta$ is unique up to reparametrization. In general, however, the set ${\partial}{{\mathcal G}}$ can be much smaller than ${\partial{X}}\times{\partial{X}}$ minus the diagonal, due to the possible existence of flat subspaces in ${X}$. For $(\xi,\eta)\in{\partial}{{\mathcal G}}$ we denote by $$\label{joiningflat} (\xi\eta):=p\bigl(\{ v\in {{\mathcal G}}\colon v^-=\xi,\ v^+=\eta\}\bigr)=p\circ {\partial}^{-1}(\xi,\eta)$$ the subset of points in ${X}$ which lie on a geodesic line joining $\xi$ to $\eta$. It is well-known that $(\xi\eta)=(\eta\xi)\subseteq {X}$ is a closed and convex subset of ${X}$ which is isometric to a product $C_{(\xi\eta)}\times{\mathbb{R}}$, where $C_{(\xi\eta)}=C_{(\eta\xi)}$ is again a closed and convex set.
In order to describe the sets $(\xi\eta)$ and $C_{(\xi\eta)}$ more precisely, and for later use, we introduce as in [@Ricks Definition 5.4] for $x\in{X}$ the so-called [[**]{}Hopf parametrization]{} map $$\label{HopfPar} {H}_x: {{\mathcal G}}\to {\partial}{{\mathcal G}}\times {\mathbb{R}},\quad v\mapsto \bigl(v^-,v^+,{{\mathcal B}}_{v^-}(x, v(0))\bigr)$$ of ${{\mathcal G}}$ with respect to $x$. It is immediate that for a CAT$(-1)$-space ${X}$ this map is a homeomorphism; in general it is only continuous and surjective. Moreover, it depends on the point $x\in{X}$ as follows: If $y\in {X}$, $v\in {{\mathcal G}}$ and ${H}_x(v)=(\xi,\eta,s)$, then $${H}_y(v)=\bigl(\xi,\eta,s+{{\mathcal B}}_{\xi}(y,x)\bigr)$$ by the cocycle identity (\[cocycle\]) for the Busemann function (compare also [@MR1207579 Section 3]). The Hopf parametrization map allows us to define an equivalence relation $\sim$ on ${{\mathcal G}}$ as follows: If $u,v\in {{\mathcal G}}$, then $u\sim v$ if and only if ${H}_{{o}}(u)={H}_{{o}}(v)$. Notice that this definition does not depend on the choice of ${{o}}\in{X}$ and that every point $(\xi,\eta,s)\in{\partial}{{\mathcal G}}\times {\mathbb{R}}$ uniquely determines an equivalence class $[v]$ with $v\in{{\mathcal G}}$. Moreover, the closed and convex set $C_{(\xi\eta)}$ from above can be identified with the set $$\label{transversal} C_v:=p\bigl(\{u\in{{\mathcal G}}\colon u\sim v\}\bigr)\subseteq{X},$$ which we will call the [[**]{}transversal]{} of $v$. We remark that for all $w\in {\partial}^{-1}(\xi,\eta)$ the transversal $C_w$ is isometric to $C_v$. Moreover, if ${X}$ is CAT$(-1)$ then for all $v\in{{\mathcal G}}$ the transversal $C_v$ is simply a point; in general, the transversals can be unbounded.
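For the Euclidean plane $\mathbb{R}^2$, viewed as a Hadamard space, the Hopf parametrization can be written down explicitly. The sketch below is only an illustration (the function name and the explicit Busemann formula for $\mathbb{R}^2$ are our own, not taken from the text); it also shows why $H_x$ is not injective in general: translating the origin of a line orthogonally to its direction does not change $H_x$, so the transversal of a Euclidean line is a full line.

```python
# A parametrized geodesic line in R^2 is v(t) = p + t*w with |w| = 1;
# its end points v^-, v^+ are the directions -w and w, and the Busemann
# function centered at the direction with unit vector u is
# B(x, y) = <u, y - x>, so that B_{v^-}(x, v(0)) = <w, x - p>.

def hopf(p, w, x=(0.0, 0.0)):
    """Hopf parametrization H_x(v) = (v^-, v^+, B_{v^-}(x, v(0)))."""
    b = w[0] * (x[0] - p[0]) + w[1] * (x[1] - p[1])
    return ((-w[0], -w[1]), w, b)

p, w = (2.0, 1.0), (1.0, 0.0)
n = (0.0, 1.0)                       # unit vector orthogonal to w
p_shift = (p[0] + 5.0 * n[0], p[1] + 5.0 * n[1])
print(hopf(p, w) == hopf(p_shift, w))   # True: same class [v]

# base-point formula H_y(v) = (xi, eta, s + B_xi(y, x)) with xi = v^-:
yy = (3.0, -4.0)
b_shift = -w[0] * (0.0 - yy[0]) - w[1] * (0.0 - yy[1])  # B_{v^-}(yy, x)
print(hopf(p, w, yy)[2] == hopf(p, w)[2] + b_shift)     # True
```

The first check exhibits a one-parameter family of lines with the same Hopf coordinates, so $C_v$ contains a full Euclidean line and is unbounded.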
As stated in [@Ricks Proposition 5.10] the ${\mbox{Is}}({X})$-action on ${{\mathcal G}}$ descends to an action on ${\partial}{{\mathcal G}}\times {\mathbb{R}}={H}_{{o}}({{\mathcal G}})$ by homeomorphisms via $$\gamma (\xi,\eta, s):=\bigl(\gamma \xi,\gamma \eta, s+{{\mathcal B}}_{\gamma\xi}({{o}},\gamma {{o}})\bigr).$$ Moreover, the action of ${\mbox{Is}}({X})$ is well-defined on the set of equivalence classes $[{{\mathcal G}}]$ of elements in ${{\mathcal G}}$, and the (well-defined) map $$\label{equivHopf} [{{\mathcal G}}]\to {\partial}{{\mathcal G}}\times {\mathbb{R}},\quad [v]\mapsto {H}_{{o}}(v)$$ is an ${\mbox{Is}}({X})$-equivariant homeomorphism. For convenience we will frequently identify $ {\partial}{{\mathcal G}}\times {\mathbb{R}}$ with $[{{\mathcal G}}]$. We also remark that the end point map ${\partial}:{{\mathcal G}}\to {\partial{X}}\times {\partial{X}}$ induces a well-defined map $[{{\mathcal G}}]\to{\partial{X}}\times{\partial{X}}$ which we will also denote ${\partial}$. As in Definition 5.4 of [@Ricks] we will say that a sequence $(v_n)\subseteq{{\mathcal G}}$ [[**]{}converges weakly]{} to $v\in {{\mathcal G}}$ if and only if $$\label{defweakconvergence} v_n^-\to v^-,\quad v_n^+\to v^+\quad\text{and }\ {{\mathcal B}}_{v_n^-}\bigl({{o}},v_n(0)\bigr)\to {{\mathcal B}}_{v^-}\bigl({{o}},v(0)\bigr).$$ Obviously, weak convergence $v_n\to v$ is equivalent to the convergence $[v_n]\to [v]$ in $[{{\mathcal G}}]$, and $v_n\to v$ in ${{\mathcal G}}$ always implies $[v_n]\to [v]$ in $[{{\mathcal G}}]$. The topological space ${{\mathcal G}}$ can be endowed with the [[**]{}geodesic flow]{} $(g^t)_{t\in{\mathbb{R}}}$ which is naturally defined by reparametrization of $v\in {{\mathcal G}}$. 
In particular we have $$(g^t v)(0)=v(t) \quad\text{for all } \ v\in {{\mathcal G}}\quad\text{and all }\ t\in{\mathbb{R}}.$$ The geodesic flow induces a flow on the set of equivalence classes $[{{\mathcal G}}]$ which we will also denote $(g^t)_{t\in{\mathbb{R}}}$; via the ${\mbox{Is}}({X})$-equivariant homeomorphism $[{{\mathcal G}}]\to{\partial}{{\mathcal G}}\times {\mathbb{R}}$ the action of the geodesic flow $(g^t)_{t\in{\mathbb{R}}}$ on $[{{\mathcal G}}]$ is equivalent to the translation action on the last factor of ${\partial}{{\mathcal G}}\times {\mathbb{R}}$ given by $$g^t (\xi,\eta,s):=(\xi,\eta, s+t).$$ Facts about rank one isometries {#rank1prelim} =============================== The purpose of this section is to introduce the notion of rank one geodesic and rank one isometry. Many useful well-known facts about Hadamard spaces with a rank one isometry are recalled. Most of the material can be found in [@MR1377265] and [@MR1383216] (see also [@MR656659] for the special case of Hadamard manifolds and [@Ricks] for more recent results). As in the previous section we assume that $(X,d)$ is a proper Hadamard space. A geodesic line $v\in {{\mathcal G}}$ is called [[**]{}rank one]{} if its transversal $C_v$ is bounded. In this case the number $${\mathrm{width}}(v):= \sup\{ d(x,y)\colon x,y\in C_v\}$$ is called the [[**]{}width]{} of $v$; if $C_v$ reduces to a point, then $v$ is said to have zero width. In the sequel we will use as in [@Ricks] the notation $$\begin{aligned} \mathcal{R}&:=\{v\in {{\mathcal G}}\colon v\ \text{is rank one}\}\quad\text{respectively}\\ \mathcal{Z}&:=\{v\in {{\mathcal G}}\colon v\ \text{is rank one of zero width}\}.\end{aligned}$$ We remark that the existence of a rank one geodesic imposes severe restrictions on the Hadamard space ${X}$. For example, ${X}$ can be neither a symmetric space nor a Euclidean building of higher rank, nor a product of Hadamard spaces.
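Two model cases, included here only as an illustration (they use standard facts about $\mathbb{H}^2$ and $\mathbb{R}^2$ and are not needed later), may help fix these notions:

```latex
In a CAT$(-1)$-space such as $X=\mathbb{H}^2$ every transversal $C_v$
is a single point, so every geodesic is rank one of zero width:
$\mathcal{R}=\mathcal{Z}=\mathcal{G}$. In $X=\mathbb{R}^2$, by contrast,
a geodesic $v(t)=p+tw$ with $|w|=1$ has transversal
\[
  C_v = p + \mathbb{R}\,w^{\perp},
\]
a full Euclidean line; this set is unbounded, so $\mathcal{R}=\emptyset$.
```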
Notice that if ${X}$ is a Hadamard [[**]{}manifold]{}, then there is a more restrictive notion of rank one: For $v\in{{\mathcal G}}$ the number $J$-rank$(v)$ is defined as the dimension of the vector space of parallel Jacobi fields along $v$ (compare Section IV.4 in [@MR1377265]); clearly, for all $w$ in a sufficiently small neighborhood of $v$ we have $J$-rank$(w)\le J$-rank$(v)$. As in [@LinkPicaud] we will call $v\in{{\mathcal G}}$ [[**]{}strong rank one]{} if $J$-rank$(v)=1$, that is if $v$ does not admit a parallel perpendicular Jacobi field; we further define $${{\mathcal J}}:=\{v\in{{\mathcal G}}\colon v \ \text{is strong rank one}\}$$ which is obviously a subset of ${{\mathcal Z}}$. Notice that in general ${{\mathcal J}}\ne {{\mathcal Z}}$: Take for example a surface with negative Gaussian curvature except along a simple closed geodesic where the curvature vanishes; then the lift of the closed geodesic has zero width, but possesses a parallel perpendicular Jacobi field. The following important lemma states that even though we cannot join any two distinct points in the geometric boundary ${\partial{X}}$ of the Hadamard space ${X}$, given a rank one geodesic we can at least join all points in a neighborhood of its end points. More precisely, we have the following result which is a reformulation of Lemma III.3.1 in [@MR1377265]: \[joinrankone\] Let $v\in\mathcal{R}$ be a rank one geodesic and $c>{\mathrm{width}}(v)$. Then there exist open disjoint neighborhoods $U^-$ of $\,v^-$ and $U^+$ of $\,v^+$ in ${\overline{{X}}}$ with the following properties: If $\xi\in U^-$ and $\eta \in U^+$ then there exists a rank one geodesic joining $\xi$ and $\eta$. For any such geodesic $w\in\mathcal{R}$ we have $d(w(t), v(0))< c$ for some $t\in{\mathbb{R}}$ and ${\mathrm{width}}(w)\le 2c$.
This lemma implies that the set ${{\mathcal R}}$ is open in ${{\mathcal G}}$; we emphasize that ${{\mathcal Z}}$ in general need not be an open subset of ${{\mathcal G}}$: In every open neighborhood of a [[**]{}zero width]{} rank one geodesic there may exist a rank one geodesic of arbitrarily small but strictly positive width. However, if ${X}$ is a Hadamard [[**]{}manifold]{}, then ${{\mathcal J}}\subseteq{{\mathcal Z}}$ is open in ${{\mathcal G}}$ (as the $J$-rank cannot be bigger in a sufficiently small open neighborhood). So Lemma \[joinrankone\] has the following consequence: \[manifoldJopen\] Let $v\in{{\mathcal J}}$. Then there exist disjoint neighborhoods $U^-$ of $\,v^-$ and $U^+$ of $\,v^+$ in ${\overline{{X}}}$ such that any pair of points $(\xi,\eta)\in U^-\times U^+$ can be joined by a geodesic $u\in{{\mathcal J}}$. We will also need the following result due to R. Ricks; recall that $(v_n)\to v$ weakly as defined in (\[defweakconvergence\]) means that $[v_n]\to [v]$ in $[{{\mathcal G}}]$. \[weakimpliesstrong\] If a sequence $(v_n)\subseteq{{\mathcal G}}$ converges weakly to $v\in{{\mathcal G}}$, then some subsequence of $(v_n)$ converges to some $u\sim v$. Notice that this lemma implies that the restriction of the Hopf parametrization map (\[HopfPar\]) to the subset $\mathcal{R}$ is closed, hence a topological quotient map. In combination with Lemma 8.4 in [@Ricks] we get the following statement concerning transversals of a weakly convergent sequence in ${{\mathcal G}}$: \[Hausdorffconv\] If a sequence $(v_n)\subseteq{{\mathcal G}}$ converges weakly to $v\in{{\mathcal R}}$, then some subsequence of $(C_{v_n})$ converges, in the Hausdorff metric, to a closed subset $A\subseteq C_v$. From this we immediately get the following complement to Lemma \[joinrankone\]: \[Hausdorffonboundary\] Let $v\in{{\mathcal Z}}$ and $\bigl((\xi_n,\eta_n)\bigr)\subseteq{\partial{X}}\times{\partial{X}}$ be a sequence converging to $(v^-,v^+)$.
Then for $n$ sufficiently large $(\xi_n,\eta_n)\in{\partial}{{\mathcal R}}$ and some subsequence of $\bigl(C_{(\xi_n\eta_n)}\bigr)$ converges, in the Hausdorff metric, to a point. \[hypaxiso\] An isometry $\gamma\neq{\mbox{\rm id}}$ of ${X}$ is called [[**]{}axial]{} if there exists a constant $\ell=\ell(\gamma)>0$ and a geodesic $v\in {{\mathcal G}}$ such that $\gamma v=g^{\ell} v$. We call $\ell(\gamma)$ the [[**]{}translation length]{} of $\gamma$, and $v$ an [[**]{}invariant geodesic]{} of $\gamma$. The boundary point $\gamma^+:=v^+$ (which is independent of the chosen invariant geodesic $v$) is called the [[**]{}attractive fixed point]{}, and $\gamma^-:=v^-$ the [[**]{}repulsive fixed point]{} of $\gamma$. An axial isometry $h$ is called [[**]{}rank one]{} if one (and hence any) invariant geodesic of $h$ belongs to ${{\mathcal R}}$; the [[**]{}width]{} of $h$ is then defined as the width of an arbitrary invariant geodesic of $h$. $h$ is said to have [[**]{}zero width]{} if up to reparametrization $h$ has only one invariant geodesic. Notice that if $\gamma\in{\mbox{Is}}({X})$ is axial, then $\partial^{-1}(\gamma^-,\gamma^+)\subseteq{{\mathcal G}}$ is the set of parametrized invariant geodesics of $\gamma$, and every axial isometry $\widetilde\gamma$ commuting with $\gamma$ satisfies $p \partial^{-1}(\widetilde\gamma^-,\widetilde\gamma^+)=p \partial^{-1}(\gamma^-,\gamma^+)$. If $h$ is rank one, then the fixed point set of $h$ equals $\{h^-, h^+\}$, and every axial isometry commuting with $h$ belongs to the subgroup $\langle h\rangle<{\mbox{Is}}({X})$ generated by $h$. The following important lemma describes the north-south dynamics of rank one isometries: \[dynrankone\]([@MR1377265], Lemma III.3.3) Let $h$ be a rank one isometry. Then 1. every point $\xi\in{\partial{X}}\setminus\{h^+\}$ can be joined to $h^+$ by a geodesic, and all these geodesics are rank one, 2.
given neighborhoods $U^-$ of $h^-$ and $U^+$ of $h^+$ in ${\overline{{X}}}$ there exists $N\in{\mathbb{N}}$ such that $h^{-n}({\overline{{X}}}\setminus U^+)\subseteq U^-$ and $h^{n}({\overline{{X}}}\setminus U^-)\subseteq U^+$ for all $n\ge N$. The following lemma shows that under the presence of a rank one geodesic in ${X}$ with [[**]{}${\mbox{Is}}({X})$-dual]{} end points (the interested reader is referred to Section III.1 in [@MR1377265] for a definition) the rank one isometries are numerous: \[elementsarerankone\] ([@MR1377265], Lemma III.3.2) Let $v\in {{\mathcal R}}$ be a rank one geodesic, and $(g_n)\subseteq{\mbox{Is}}({X})$ a sequence of isometries such that $g_n x\to v^+$ and $g_n^{-1}x\to v^-$ for one (and hence any) $x\in{X}$. Then, for $n$ sufficiently large, $g_n$ is rank one with an invariant geodesic $v_n$ such that $v_n^+\to v^+$ and $v_n^-\to v^-$. We next prepare for an extension of Lemma \[dynrankone\] (a) which replaces the fixed point $h^+$ of the rank one isometry $h$ by the end point of a certain geodesic: \[weakstrongrecurrencedef\] Let $G<{\mbox{Is}}({X})$ be any subgroup. An element $v\in{{\mathcal G}}$ is said to [[**]{}(weakly) $G$-accumulate]{} on $u\in{{\mathcal G}}$ if there exist sequences $(g_n)\subseteq G$ and $(t_n)\nearrow \infty$ such that $g_n g^{t_n} v$ converges (weakly) to $u$ as $n\to\infty$; $v$ is said to be [[**]{}(weakly) $G$-recurrent]{} if $v$ (weakly) $G$-accumulates on $v$. Notice that if $v$ is an invariant geodesic of an axial isometry $\gamma\in{\mbox{Is}}({X})$, then $v$ is $\langle \gamma\rangle$-recurrent and hence in particular ${\mbox{Is}}({X})$-recurrent. Moreover, if $v\in{{\mathcal G}}$ weakly $G$-accumulates on $u\in{{\mathcal G}}$, then by Lemma \[weakimpliesstrong\] $v$ $G$-accumulates on some element $w\sim u$. However, in general $v\in{{\mathcal G}}$ weakly $G$-recurrent does not imply that some representative of the equivalence class $[v]$ is $G$-recurrent.
Even in the case $v\in{{\mathcal R}}$ it is possible that every representative $u$ of the class $[v]$ $G$-accumulates on $w\sim u$ with $w\ne u$. The following statements show the relevance of the previous notions. \[Gammaconv\] If $w\in{{\mathcal G}}$ ${\mbox{Is}}({X})$-accumulates on $v\in{{\mathcal G}}$, then there exists an isometric embedding $C_{w}\hookrightarrow C_v$ which maps $w(0)$ to $v(0)$. Notice that if $v\in{{\mathcal G}}$ is weakly $G$-recurrent for some subgroup $G<{\mbox{Is}}({X})$, then every $w\in{{\mathcal G}}$ with $w^+=v^+$ $G$-accumulates on an element $u\sim v$ according to Lemma 6.9 in [@Ricks]. Hence we have \[weakgivesisometricembeddings\] If $v\in{{\mathcal R}}$ is weakly ${\mbox{Is}}({X})$-recurrent, then for every $w\in{{\mathcal G}}$ with $w^+=v^+$ there exists an isometric embedding $C_w\hookrightarrow C_v$. Moreover, the proof of Lemma 6.12 in [@Ricks] shows that every point $\xi\in{\partial{X}}\setminus\{v^+\}$ can be joined to $v^+$ by a geodesic $w\in{{\mathcal G}}$. So we finally get \[jointoweakrecurrent\] If $v\in{{\mathcal R}}$ is weakly ${\mbox{Is}}({X})$-recurrent then for every $\xi\in{\partial{X}}\setminus\{v^+\}$ there exists $w\in{{\mathcal R}}$ with ${\mathrm{width}}(w)\le {\mathrm{width}}(v)$ such that $w^-=\xi$ and $w^+=v^+$. Rank one groups {#rankonegroups} =============== Let ${X}$ be a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ an arbitrary subgroup. The [[**]{}geometric limit set]{} ${L_\Gamma}$ of $\Gamma$ is defined by ${L_\Gamma}:=\overline{\Gamma\cdot x}\cap{\partial{X}},$ where $x\in{X}$ is an arbitrary point. If ${X}$ is a CAT$(-1)$-space, then a group $\Gamma<{\mbox{Is}}({X})$ is called [[**]{}non-elementary]{} if its limit set is infinite and if $\Gamma$ does not globally fix a point in ${L_\Gamma}$. It is well-known that this implies that $\Gamma$ contains two axial isometries with disjoint fixed point sets (which are actually rank one of zero width as ${{\mathcal G}}={{\mathcal Z}}$ for any CAT$(-1)$-space).
In the general setting this motivates the following definition: We say that two rank one isometries $g,h\in{\mbox{Is}}({X})$ are [[**]{}independent]{} if and only if $\{g^+,g^-\}\cap \{h^+,h^-\}=\emptyset$ (see for example Section 2 of [@MR2629900]). Moreover, a group $\Gamma< {\mbox{Is}}({X})$ is called [[**]{}rank one]{} if $\Gamma$ contains a pair of independent rank one elements. Obviously, if ${X}$ is CAT$(-1)$ then every non-elementary isometry group is rank one. In general however, the notion of rank one group seems very restrictive at first sight. The goal of this section – which may be of independent interest – is to discuss conditions which ensure that $\Gamma$ is a rank one group. Let $\Gamma < {\mbox{Is}}({X})$ be an arbitrary subgroup. If ${L_\Gamma}$ contains the positive end point $v^+$ of a weakly ${\mbox{Is}}({X})$-recurrent element $v\in{{\mathcal R}}$, and if $v^+$ is not globally fixed by $\Gamma$, then $\Gamma$ contains a rank one isometry. Let $v\in{{\mathcal R}}$ be weakly ${\mbox{Is}}({X})$-recurrent and $x\in{X}$. As $v^+\in {L_\Gamma}$ there exists a sequence $(\gamma_n)\subseteq\Gamma$ such that $\gamma_n x\to v^+$ as $n\to\infty$. Passing to a subsequence if necessary we may assume that $\gamma_n^{-1}x$ converges, say to a point $\xi\in{\overline{{X}}}$ which obviously belongs to ${L_\Gamma}\subseteq{\partial{X}}$. If $\xi=v^+$, there exists $\gamma\in\Gamma$ such that $\gamma\xi\ne v^+$ since $\Gamma$ does not globally fix $v^+$. Replacing the sequence $(\gamma_n)$ by $(\gamma_n\gamma^{-1})$ in this case we may assume that $\xi\ne v^+$. According to Lemma \[jointoweakrecurrent\] there exists $w\in{{\mathcal R}}$ such that $w^-=\xi$ and $w^+=v^+$. Lemma \[elementsarerankone\] then states that for $n$ sufficiently large $\gamma_n$ is rank one with an invariant geodesic $v_n$ such that $v_n^+\to w^+=v^+$ and $v_n^-\to w^-=\xi$ as $n\to \infty$.
Since the geodesic $w$ is rank one, the geodesics $v_n$ are rank one for $n$ sufficiently large by Lemma \[joinrankone\]. This implies that for some fixed $n$ large enough the element $\gamma_n\in \Gamma$ is rank one. Notice that the conclusion is obviously true when $v^+$ is a fixed point of a rank one isometry of ${X}$. The following statements show that a group is rank one under very weak conditions. \[get2independent\] If $\Gamma< {\mbox{Is}}({X})$ neither globally fixes a point in ${\partial{X}}$ nor stabilizes a geodesic line in ${X}$, and if ${L_\Gamma}$ contains the positive end point $v^+$ of a weakly ${\mbox{Is}}({X})$-recurrent element $v\in{{\mathcal R}}$, then $\Gamma$ contains a pair of independent rank one elements. Since ${X}$ is proper and $\Gamma <{\mbox{Is}}({X})$ contains a rank one element by the previous lemma, Proposition 3.4 of [@MR2585575] applies: Its first possibility is excluded by the assumption that $\Gamma$ neither globally fixes a point in ${\partial{X}}$ nor stabilizes a geodesic line in ${X}$, hence $\Gamma$ contains a pair of independent rank one elements. \[inflimset\] A [*discrete*]{} subgroup $\Gamma<{\mbox{Is}}({X})$ is rank one if and only if its limit set ${L_\Gamma}$ is infinite and contains the positive end point $v^+$ of a weakly ${\mbox{Is}}({X})$-recurrent element $v\in{{\mathcal R}}$. We first assume that ${L_\Gamma}$ is infinite and contains the positive end point $v^+$ of a weakly ${\mbox{Is}}({X})$-recurrent element $v\in{{\mathcal R}}$. As $\Gamma$ is discrete and ${L_\Gamma}$ is infinite, $\Gamma$ cannot globally fix a point in ${\partial{X}}$ nor stabilize a geodesic line in ${X}$, so Lemma \[get2independent\] above implies that $\Gamma$ is rank one. The other direction is obvious. The proof of the following criterion relies heavily on the work of R. 
Ricks: \[largewidthgiveszerowidth\] If ${X}$ is geodesically complete and $\Gamma<{\mbox{Is}}({X})$ is a discrete rank one group, then $${\mathcal Z}_\Gamma:=\{v\in{{\mathcal Z}}\colon v^-,v^+\in{L_\Gamma}\}\ne \emptyset.$$ We first notice that the proof of Theorem III.2.3 in [@MR1377265] shows that the geodesic flow restricted to $${{\mathcal G}}_\Gamma:=\{v\in {{\mathcal G}}\colon v^-,v^+\in{L_\Gamma}\}$$ is topologically transitive mod $\Gamma$; this means that there exists $v\in {{\mathcal G}}_\Gamma$ such that $v$ $\Gamma$-accumulates on every $u\in {{\mathcal G}}_\Gamma$. We first claim that the element $v\in {{\mathcal G}}_\Gamma$ as above belongs to ${{\mathcal R}}$: We choose a rank one element $h\in\Gamma$ and an invariant geodesic $u\in {{\mathcal G}}_\Gamma$ of $h$ and neighborhoods $U^-, U^+\subseteq {\overline{{X}}}$ of $h^-,h^+$ as in Lemma \[joinrankone\]. In particular, every $w\in{{\mathcal G}}$ with $(w^-,w^+)\in U^-\times U^+$ satisfies $w\in{{\mathcal R}}$. As $v$ $\Gamma$-accumulates on $u$ there exist sequences $(\gamma_n)\subseteq \Gamma$, $(t_n)\nearrow\infty$ such that $\gamma_n g^{t_n} v\to u$ and hence in particular $\gamma_n ( v^-,v^+)\to (u^-,u^+)=(h^-,h^+)$ as $n\to \infty$. This implies $\gamma_n (v^-,v^+)\in U^-\times U^+\subseteq{\partial}{{\mathcal R}}$ for some $n$ sufficiently large and therefore $v\in {{\mathcal R}}$. Assume for a contradiction that $v\notin{{\mathcal Z}}$; then there exists $\overline{v}\sim v$ with $\overline{v}\neq v$. We will further denote by $v_C\in p^{-1} C_v=\{w\in{{\mathcal R}}\colon w\sim v\}$ the [[**]{}central geodesic]{} defined by the condition that its origin $v_C(0)$ is the unique circumcenter of the bounded closed and convex set $C_v\subseteq {X}$ (compare also Section 5 in [@Ricks]).
As $v_C$, $\overline{v}\in{{\mathcal G}}_\Gamma$, $v$ $\Gamma$-accumulates both on $v_C$ and on $\overline{v}$; so according to Lemma \[Gammaconv\] there exist isometric embeddings $$\iota: C_v \hookrightarrow C_{v_C},\qquad \overline{\iota}: C_v \hookrightarrow C_{\overline{v}}$$ with $\iota\bigl(v(0)\bigr)= v_C(0)$ and $\overline{\iota}\bigl(v(0)\bigr)=\overline{v}(0)$. Since $C_{v_C} =C_{\overline{v}}=C_v$, the maps $\iota$ and $\overline{\iota}$ are surjective by Theorem 1.6.15 in [@MR1835418] and hence isometries. As the circumcenter of $C_v$ is invariant by isometries of $C_{v}$ we first get $$v(0)=\iota^{-1}\bigl(v_C(0)\bigr)=v_C(0),$$ which implies $$\overline{v}(0)= \overline{\iota}\bigl(v(0)\bigr)= \overline{\iota}\bigl(v_C(0)\bigr)=v_C(0)=v(0).$$ This is a contradiction to the choice of $\overline{v}\ne v$, so we conclude that $v\in{{\mathcal Z}}$. Notice that a discrete rank one group $\Gamma$ with ${{\mathcal Z}}_\Gamma\ne\emptyset$ need not possess a [[**]{}zero width]{} rank one [[**]{}isometry]{} since ${{\mathcal Z}}$ is not open in ${{\mathcal G}}$. However, as for a Hadamard [[**]{}manifold]{} the set ${{\mathcal J}}$ of geodesics not admitting a parallel perpendicular Jacobi field is open in ${{\mathcal G}}$, we have the following: If ${X}$ is a [[**]{}manifold]{} and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group such that $${{\mathcal J}}_\Gamma:=\{v\in{{\mathcal J}}\colon v^-,v^+\in{L_\Gamma}\}\ne \emptyset,$$ then $\Gamma$ contains a pair of independent rank one elements with [[**]{}strong]{} rank one invariant geodesics (which necessarily have zero width). Since ${X}$ is geodesically complete, the geodesic flow restricted to $${{\mathcal G}}_\Gamma:=\{v\in {{\mathcal G}}\colon v^-,v^+\in{L_\Gamma}\}$$ is topologically transitive mod $\Gamma$; this means that there exists $v\in {{\mathcal G}}_\Gamma$ such that $v$ $\Gamma$-accumulates on every $u\in {{\mathcal G}}_\Gamma$.
Assume for a contradiction that $v\notin{{\mathcal J}}$; then $\gamma g^t v\notin {{\mathcal J}}$ for all $\gamma\in\Gamma$ and for all $t\in{\mathbb{R}}$. But since $v$ $\Gamma$-accumulates on $u\in {{\mathcal J}}_\Gamma$ this implies $J$-rank$(u)\ge 2$, which is a contradiction. So we conclude that $v\in{{\mathcal J}}_\Gamma$. Since $v^-,v^+\in{L_\Gamma}$, there exists a sequence $(\gamma_n)\subseteq\Gamma$ such that $\gamma_n x\to v^+$ and $\gamma_n^{-1} x\to v^-$ for some $x\in{X}$ (see for example the proof of Proposition 3.5 in [@MR2585575]). By Lemma \[elementsarerankone\], for $n$ sufficiently large $\gamma_n$ is rank one with invariant geodesic $v_n$ such that $(v_n^-, v_n^+)\to (v^-,v^+)$ as $n\to \infty$. So according to Corollary \[manifoldJopen\] we have $v_n\in {{\mathcal J}}$ for $n$ sufficiently large, hence there exists a rank one element $\gamma_n$ with a strong rank one invariant geodesic. As $\Gamma$ is rank one there exists one element (actually an infinite number) in $\Gamma$ not commuting with $\gamma_n$, and conjugating $\gamma_n$ by such an element provides another rank one isometry in $\Gamma$ independent of $\gamma_n$ which also has a strong rank one invariant geodesic. This implies that the hypothesis of the Main Theorem in [@LinkPicaud] is satisfied for Hadamard manifolds ${X}$ with a rank one group $\Gamma<{\mbox{Is}}({X})$ such that ${{\mathcal J}}_\Gamma\ne\emptyset$; we will see later that the conclusion of the Main Theorem in [@LinkPicaud] remains true under the weaker condition that $\Gamma<{\mbox{Is}}({X})$ is an arbitrary rank one group. Basic notions in ergodic theory and geodesic currents {#dyndef} ====================================================== \[geodcurrentmeasures\] In this section we want to recall a few general notions from topological dynamics and ergodic theory which will be needed later; our main references here are [@Hopf] and [@MR1293874].
Let $\Omega$ be a locally compact and $\sigma$-compact Hausdorff topological space and $\varphi$ a [[**]{}flow]{} on $\Omega$, that is a continuous map $\varphi :{\mathbb{R}}\times \Omega\to \Omega$ such that $\varphi(0,\omega)=\omega$ and $\varphi\bigl(s,\varphi(t,\omega)\bigr)=\varphi(s+t,\omega)$ for all $s,t\in{\mathbb{R}}$ and all $\omega\in \Omega$. A point $\omega\in\Omega$ is said to be [[**]{}positively recurrent]{} respectively [[**]{}negatively recurrent]{} if there exists a sequence $(t_n)\nearrow\infty$ of real numbers such that $$\varphi^{t_n}\omega=\varphi(t_n,\omega)\to \omega\quad\text{respectively }\ \varphi^{-t_n}\omega=\varphi(-t_n,\omega)\to \omega ;$$ $\omega\in\Omega$ is said to be [[**]{}positively divergent]{} respectively [[**]{}negatively divergent]{} if for every compact set $K\subseteq \Omega$ there exists a constant $T>0$ such that for all $ t\ge T$ $$\varphi^t\omega=\varphi(t,\omega)\notin K\quad\text{respectively }\ \varphi^{-t}\omega=\varphi(-t,\omega)\notin K.$$ Assume that $M$ is a Borel measure on $\Omega$ invariant by the flow $\varphi$. Then the Hopf decomposition theorem (see for instance [@MR797411 Theorem 3.2], [@Hopf Satz 13.1]) asserts that the space $\Omega$ decomposes into a disjoint union of $\varphi$-invariant Borel sets $\Omega_C$ and $\Omega_D$ which satisfy the following properties: - (C) There does not exist a Borel subset $E\subseteq \Omega_C$ with $M(E)>0$ and such that the sets $\bigl( \varphi^k(E)\bigr)_{k\in{\mathbb{Z}}}$ are pairwise disjoint. - (D) There exists a Borel set $W\subseteq \Omega_D$ such that $\Omega_D$ is the disjoint union of sets $(W_k)_{k\in{\mathbb{Z}}}$, where each $W_k$ is a translate of $W$ under the flow $\varphi$. According to Poincar[é]{}’s recurrence theorem (see for example [@Hopf Satz 13.2]) $M$-almost every point of $\Omega_C$ is positively recurrent. On the other hand, by Hopf’s divergence theorem (see again [@Hopf Satz 13.2]), $M$-almost every point of $\Omega_D$ is positively divergent.
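Two toy examples, unrelated to the geodesic flow and included only to fix the terminology, are the translation flow on $\mathbb{R}$ (with Lebesgue measure: every point is positively divergent, and the system is dissipative) and the rotation flow on the circle ${\mathbb{R}}/{\mathbb{Z}}$ with irrational rotation number (every point is positively recurrent, and Lebesgue measure is conservative). A small numerical sketch of our own, with the obvious time-$t$ maps:

```python
import math

alpha = math.sqrt(2) - 1  # irrational rotation number

def rotate(omega, t):
    # time-t map of the rotation flow on the circle R/Z; Lebesgue
    # measure is invariant and the system is conservative
    return (omega + t * alpha) % 1.0

def translate(omega, t):
    # time-t map of the translation flow on R; Lebesgue measure is
    # invariant and the system is dissipative
    return omega + t

omega0 = 0.2
# recurrence: along some integer times t_n -> infinity the rotation
# orbit returns arbitrarily close to its starting point
best = min(abs(rotate(omega0, n) - omega0) for n in range(1, 1000))
print(best)                       # very small: omega0 is recurrent
print(translate(omega0, 1000.0))  # the orbit leaves every compact set
```

The small value of `best` comes from the continued fraction expansion of $\sqrt{2}-1$; any irrational rotation number would do.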
This implies in particular that the sets $\Omega_C$ and $\Omega_D$ are unique up to sets of measure zero. The dynamical system $(\Omega,\varphi,M)$ is said to be [[**]{}conservative]{} if $M(\Omega_D)=0$, and [[**]{}dissipative]{} if $M(\Omega_C)=0$. Notice that if the measure $M$ is finite, then due to (D) above $(\Omega,\varphi,M)$ is conservative. Moreover, since the decomposition is the same for $\varphi^1$ and for $\varphi^{-1}$, Poincar[é]{}’s recurrence theorem and Hopf’s divergence theorem imply that $M$-almost every point of $\Omega_C$ is positively and negatively recurrent, and $M$-almost every point of $\Omega_D$ is positively and negatively divergent. Moreover, if $\rho\in{\mbox{\rm L}}^1(M)$ is $M$-almost everywhere strictly positive, then – up to a set of measure zero – the conservative part $\Omega_C$ can be written as $$\Omega_C=\{ \omega\in\Omega\colon \int_{0}^\infty \rho(\varphi^t \omega){\mathrm{d}}t=\infty\}.$$ Finally, the dynamical system $(\Omega,\varphi,M)$ is called [[**]{}ergodic]{} if every $\varphi$-invariant Borel set $E\subseteq\Omega$ either satisfies $M(E)=0$ or $M(\Omega\setminus E)=0$. Hence if a dynamical system $(\Omega,\varphi,M)$ is ergodic, then it is either conservative or dissipative; the second possibility can only occur for an infinite measure $M$ which is supported on a single orbit $$\{ \varphi^t \omega \colon t\in{\mathbb{R}}\}\quad\text{with }\ \omega\in \Omega.$$ In Section \[HopfArgument\] we will need the following generalization of the Birkhoff ergodic theorem which is stated and proved on p. 53 in [@Hopf]: \[Hopfindividual\] Assume that $(\Omega,\varphi,M)$ is conservative, and let $\rho\in {\mbox{\rm L}}^1(M)$ be a function which is strictly positive $M$-almost everywhere. 
Then for any function $f\in {\mbox{\rm L}}^1(M)$ the limits $$f^\pm(\omega)=\lim_{T\to +\infty} \frac{\int_0^T f(\varphi^{\pm t}(\omega)){\mathrm{d}}t}{\int_0^T \rho(\varphi^{\pm t}(\omega)){\mathrm{d}}t}$$ exist and are equal for $M$-almost every $\omega\in\Omega$. Moreover, the functions $f^+, f^-$ are measurable and flow invariant, $\rho\cdot f^+, \rho\cdot f^-\in{\mbox{\rm L}}^1(M)$, and for every bounded measurable flow-invariant function $h$ we have $$\int_\Omega \rho(\omega) f^\pm (\omega) h(\omega){\mathrm{d}}M(\omega)= \int_\Omega f (\omega) h(\omega){\mathrm{d}}M(\omega).$$ Finally, $(\Omega,\varphi,M)$ is ergodic if and only if for every function $f\in {\mbox{\rm L}}^1(M)$ the associated limit function $f^+$ is constant $M$-almost everywhere. We now want to recall the concept of geodesic current introduced for example in [@MR1293874]. From here on we let ${X}$ be a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ a discrete group. We will also use the notation introduced in Section \[prelim\] and Section \[rank1prelim\]. The geodesic flow on the quotient $\quotient{\Gamma}{{{\mathcal G}}}$ will be denoted $g_\Gamma=(g_\Gamma^t)_{t\in{\mathbb{R}}}$. Recall that a Borel measure on a locally compact Hausdorff space is called [[**]{}Radon]{} if it is finite for all compact subsets. \[geodcurrent\] \ A [[**]{}geodesic current]{} on $\quotient{\Gamma}{{X}}$ is a $\Gamma$-invariant Radon measure on ${\partial}{{\mathcal G}}\subseteq {\partial{X}}\times {\partial{X}}$. A geodesic current $\overline\mu\,$ is said to be a [[**]{}quasi-product geodesic current]{} if there exist probability measures $\mu_-$, $\mu_+$ on ${\partial{X}}$ such that $\overline\mu\,$ is absolutely continuous with respect to the product measure $\mu_-\otimes \mu_+$.
A geodesic current $\overline \mu\,$ hence yields a dynamical system $({\partial}{{\mathcal G}}, \Gamma, \overline\mu)$ which is closely related to the dynamical system $({\partial{X}}\times{\partial{X}}, \Gamma, \mu_-\otimes\mu_+)$ with the diagonal action of $\Gamma$ on ${\partial{X}}\times{\partial{X}}$. As in [@MR2057305 p.17] a Borel set $W\subseteq{\partial}{{\mathcal G}}$ is called [[**]{}wandering]{} if for $\overline\mu$-almost every $(\xi,\eta)\in W$ the number $$\#\{\gamma\in\Gamma\colon \gamma (\xi,\eta)\in W\}\quad\text{is finite}.$$ The $\Gamma$-action on ${\partial}{{\mathcal G}}$ is called [[**]{}dissipative]{} if up to sets of measure zero the set ${\partial}{{\mathcal G}}$ is a countable union of wandering sets; it is called [[**]{}conservative]{} if every wandering subset $W\subseteq{\partial}{{\mathcal G}}$ satisfies $\overline\mu(W)=0$. Let $\overline \mu\,$ be a geodesic current such that for $\overline\mu$-almost every $(\xi,\eta)\in{\partial}{{\mathcal G}}$ a geodesic flow invariant Radon measure $\lambda_{(\xi\eta)}$ on the closed and convex subset $(\xi\eta)\subseteq{X}$ exists. Then we get a $\Gamma$-invariant and geodesic flow invariant Borel measure $m$ on ${{\mathcal G}}$ by integrating $\overline\mu\,$ with respect to the measure $\lambda_{(\xi\eta)}$ along the sets $(\xi\eta)\subseteq{X}$, that is via the assignment $$m(E):= \int_{{\partial}{{\mathcal G}}} \lambda_{(\xi\eta)}\bigl(p(E)\cap(\xi\eta)\bigr)\mathrm{d}\overline\mu(\xi,\eta)\quad\text{for any Borel set }\ E\subseteq {{\mathcal G}}.$$ Notice that by continuity of the maps $p:{{\mathcal G}}\to{X}$ and ${\partial}:{{\mathcal G}}\to{\partial}{{\mathcal G}}$ the Borel measure $m$ is Radon as well. If $(\xi,\eta)\in{\partial}{{\mathcal Z}}$, then we use the convention that the Radon measure $\lambda_{(\xi\eta)}$ on $(\xi\eta)\cong{\mathbb{R}}$ is Lebesgue measure on ${\mathbb{R}}$ (which in addition is inner and outer regular). 
The Radon measure $m$ then induces a geodesic flow invariant measure $m_\Gamma$ on the quotient $\quotient{\Gamma}{{{\mathcal G}}}$ which we will call a [[**]{}Knieper’s measure]{} on $\quotient{\Gamma}{{{\mathcal G}}}$ for the following reason: In [@MR1652924], G. Knieper constructed for a Hadamard [[**]{}manifold]{} ${X}$ a measure on $\quotient{\Gamma}{{{\mathcal G}}}$ precisely in this way with $\lambda_{(\xi\eta)}$ the induced Riemannian volume element on the submanifolds $(\xi\eta)\subseteq{X}$ and $\overline\mu\,$ the quasi-product geodesic current induced by a conformal density for $\Gamma$ (see Section \[currentsfromconfdens\] for the precise definition). Unfortunately, if ${X}$ is not a manifold then in general there is no natural geodesic flow invariant measure on the closed and convex subsets $(\xi\eta)$ for $(\xi,\eta)\in{\partial}({{\mathcal G}}\setminus{{\mathcal Z}})$. Hence we will follow Ricks’ approach to obtain from a geodesic current a geodesic flow and $\Gamma$-invariant measure on the set of parallel classes of parametrized geodesic lines $[{{\mathcal G}}]$: Given a geodesic current $\overline \mu\,$ on ${\partial}{{\mathcal G}}={\partial}[{{\mathcal G}}]$ we want to define a Radon measure $\overline m$ on $[{{\mathcal G}}]\cong {\partial}{{\mathcal G}}\times{\mathbb{R}}$ by $\overline\mu\otimes\lambda$, where $\lambda$ denotes Lebesgue measure on ${\mathbb{R}}$. However, the $\Gamma$-action on $[{{\mathcal G}}]$ need not be proper: If $\Gamma$ contains an axial isometry $\gamma$ with invariant geodesic $w\in {{\mathcal G}}\setminus{{\mathcal R}}$ whose image $w({\mathbb{R}})$ belongs to an isometric copy $E\subset(\gamma^-\gamma^+)$ of a Euclidean plane, then for any geodesic $u\in{{\mathcal G}}$ orthogonal to $w$ and with image $u({\mathbb{R}})\subseteq E$ we have $\gamma^k u\sim u$ and hence $\gamma^k [u]=[u]$ for all $k\in{\mathbb{Z}}$. 
So in particular we do not necessarily obtain from $\overline m$ a geodesic flow invariant measure on the quotient $\quotient{\Gamma}{[{{\mathcal G}}]}$. For that reason we will consider only geodesic currents $\overline\mu\,$ which are defined on ${\partial}{{\mathcal R}}$ instead of ${\partial}{{\mathcal G}}$. According to Lemma \[joinrankone\], $\Gamma$ acts properly on $[{{\mathcal R}}]\cong{\partial}{{\mathcal R}}\times {\mathbb{R}}$ which admits a proper metric. Since the action is by homeomorphisms and preserves the Borel measure $\overline m=\overline\mu\otimes\lambda$, there is (see, for instance, [@RicksThesis Appendix A]) a unique Borel quotient measure $\overline m_\Gamma$ on $\quotient{\Gamma}{[{{\mathcal R}}]}$ satisfying the characterizing property $$\int_{\bar A} \widetilde h{\mathrm{d}}\overline m=\int_{\quotient{\Gamma}{[{{\mathcal R}}]}} \bigl( h\cdot f_{\bar A}\bigr) {\mathrm{d}}\overline m_\Gamma$$ for all Borel sets $ \bar A\subseteq [{{\mathcal R}}]$ and $\Gamma$-invariant Borel maps $ \widetilde h:[{{\mathcal R}}]\to [0,\infty]$, where $\widetilde f_{\bar A}:[{{\mathcal R}}]\to [0,\infty]$ is defined by $\widetilde f_{\bar A}([v]):= \#\{\gamma\in\Gamma\colon \gamma [v]\in \bar A\}$ for $[v]\in[{{\mathcal R}}]$, and with $h$ and $f_{\bar A}$ the maps on $\quotient{\Gamma}{[{{\mathcal R}}]}$ induced from $\widetilde h$ and $\widetilde f_{\bar A}$. According to the characterizing property above, a Borel set $\bar A\subset[{{\mathcal G}}]$ satisfies $\overline m(\bar A)=0$ if and only if its projection $\bar A_\Gamma$ to $\quotient{\Gamma}{[{{\mathcal G}}]}$ satisfies $\overline m_\Gamma(\bar A_\Gamma)=0$. So in fact we can consider $\overline m_\Gamma$ as a Borel measure on $\quotient{\Gamma}{[{{\mathcal G}}]}$; we will call $\overline m_\Gamma$ the [[**]{}weak Ricks’ measure]{} associated to the geodesic current $\overline\mu\,$ on ${\partial}{{\mathcal R}}$.
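The characterizing property of the quotient measure can be sanity-checked numerically in the simplest toy model, $\Gamma={\mathbb{Z}}$ acting on ${\mathbb{R}}$ with $\overline m$ Lebesgue measure and the quotient identified with the fundamental domain $[0,1)$ (this only illustrates the measure-theoretic identity, not the actual $\Gamma$-action on $[{{\mathcal R}}]$):

```python
import math

def h_tilde(x):                          # a Z-invariant Borel map on R
    return 2.0 + math.sin(2.0 * math.pi * x)

A_lo, A_hi = 0.3, 2.8                    # the Borel set A = [0.3, 2.8)

def f_A(x):                              # f_A(x) = #{n in Z : x + n in A}
    return sum(1 for n in range(-5, 5) if A_lo <= x + n < A_hi)

N = 100_000                              # midpoint-rule quadrature
# Left-hand side: integral of h_tilde over A with respect to m-bar (Lebesgue).
lhs = sum(h_tilde(A_lo + (A_hi - A_lo) * (i + 0.5) / N)
          for i in range(N)) * (A_hi - A_lo) / N
# Right-hand side: integral of h * f_A over the quotient [0, 1).
rhs = sum(h_tilde((i + 0.5) / N) * f_A((i + 0.5) / N) for i in range(N)) / N

assert abs(lhs - rhs) < 1e-3
```

Here $f_A$ takes the values $2$ and $3$ on the fundamental domain, so the identity is a weighted unfolding of the integral over $A$ into the quotient.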
Our final goal is to construct from a weak Ricks’ measure $\overline m_\Gamma$ a geodesic flow invariant measure on $\quotient{\Gamma}{{{\mathcal G}}}$. So let us first remark that ${{\mathcal Z}}\subseteq{{\mathcal R}}$ is a Borel subset by semicontinuity (see Lemma \[Hausdorffconv\]) of the width function; as ${H}_{{o}}{\,\rule[-5pt]{0.4pt}{12pt}\,{}}_{{\mathcal R}}:{{\mathcal R}}\to{\partial}{{\mathcal R}}\times{\mathbb{R}}\cong[{{\mathcal R}}]$ is a topological quotient map by Lemma \[weakimpliesstrong\], $[{{\mathcal Z}}] \subseteq [{{\mathcal R}}]$ is also a Borel subset. Notice also that ${H}_{{o}}{\,\rule[-5pt]{0.4pt}{12pt}\,{}}_{{{\mathcal Z}}}:{{\mathcal Z}}\to{\partial}{{\mathcal Z}}\times{\mathbb{R}}\cong [{{\mathcal Z}}]$ is a homeomorphism. So if $\quotient{\Gamma}{[{{\mathcal Z}}]}$ has positive mass with respect to the weak Ricks’ measure $\overline m_\Gamma$ we may define (as in [@Ricks Definition 8.12]) a geodesic flow and $\Gamma$-invariant measure $m^0$ on ${{\mathcal G}}$ by setting $$\label{defstrongRicks} m^0(E):= \overline m \bigl({H}_{{o}}(E\cap {{\mathcal Z}})\bigr)\quad\text{for any Borel set }\ E\subseteq{{\mathcal G}};$$ this measure $m^0$ then induces the [[**]{}Ricks’ measure]{} $m^0_\Gamma$ on $\quotient{\Gamma}{ {{\mathcal G}}}$. Notice that in general $\overline m_\Gamma (\quotient{\Gamma}{[{{\mathcal Z}}]})= 0\ $ is possible; obviously this is always the case when ${{\mathcal Z}}=\emptyset$. However, we will see later that under certain conditions the Ricks’ measure is actually equal to the weak Ricks’ measure used for its construction. The radial limit set and recurrence {#propradlimset} =================================== As before ${X}$ will always be a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group. We further fix a base point ${{o}}\in{X}$. We will begin this section with a few definitions. 
A point $\xi\in{\partial{X}}$ is called a [[**]{}radial limit point]{} if there exist $c>0$ and sequences $(\gamma_n)\subseteq\Gamma$ and $(t_n)\nearrow\infty$ such that $$\label{radlimpoint} d\bigl(\gamma_n {{o}}, \sigma_{{{o}},\xi}(t_n)\bigr)\le c\quad\text{for all }\ n\in{\mathbb{N}}.$$ Notice that by the triangle inequality this condition is independent of the choice of ${{o}}\in{X}$. The [[**]{}radial limit set]{} ${L_\Gamma^{\small{\mathrm{rad}}}}\subseteq{L_\Gamma}$ of $\Gamma$ is defined as the set of radial limit points. Recall the notion of (weakly) $\Gamma$-recurrent elements from Definition \[weakstrongrecurrencedef\]. Moreover, an element $v\in {{\mathcal G}}$ is called [[**]{}$\Gamma$-divergent]{} if for every compact set $K\subseteq{{\mathcal G}}$ there exists $T>0$ such that for all $t\ge T$ $$g^tv\notin \bigcup_{\gamma\in\Gamma}\gamma K;$$ it is called [[**]{}weakly $\Gamma$-divergent]{} if for every compact set $\overline K\subset[{{\mathcal G}}]$ there exists $T>0$ such that for all $t\ge T$ $$g^t [v]\notin \bigcup_{\gamma\in\Gamma}\gamma \overline K.$$ For the convenience of the reader we state the following easy fact. \[critradlim\] Let $ u\in {{\mathcal G}}$. Then $$u\ \ \Gamma\text{-recurrent}\quad\Longrightarrow\quad u^+\in{L_\Gamma^{\small{\mathrm{rad}}}}\quad\Longrightarrow\quad u \ \text{ \underline{not} }\ \Gamma\text{-divergent}.$$ We want to emphasize here that in general $u$ weakly $\Gamma$-recurrent does not imply $u^+\in{L_\Gamma^{\small{\mathrm{rad}}}}$, while $u$ not $\Gamma$-divergent always implies $u$ not weakly $\Gamma$-divergent. However, if $u\in{{\mathcal R}}$ is weakly $\Gamma$-recurrent, then according to Lemma \[weakimpliesstrong\] $u$ $\Gamma$-accumulates to some $w\sim u$.
This again implies that $w^+=u^+\in{L_\Gamma^{\small{\mathrm{rad}}}}$ and we get the following corollary. \[critregradlim\] If $ u\in {{\mathcal R}}$ then $$u\ \text{ weakly }\ \Gamma\text{-recurrent}\quad\Longrightarrow\quad u^+\in{L_\Gamma^{\small{\mathrm{rad}}}}\quad\Longrightarrow\quad u \ \text{ \underline{not} weakly }\ \Gamma\text{-divergent}.$$ In the sequel the following subsets of ${{\mathcal G}}$ will be convenient. Notice that for $v\in{{\mathcal G}}$ the reverse geodesic $-v\in{{\mathcal G}}$ is defined by $-v(s):=v(-s)$ for all $s\in{\mathbb{R}}$. $$\begin{aligned} {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}}&:=\{v\in{{\mathcal G}}\colon v^-\in{L_\Gamma^{\small{\mathrm{rad}}}}, \ v^+\in{L_\Gamma^{\small{\mathrm{rad}}}}\},\\ {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rec}}}&:=\{v\in{{\mathcal G}}\colon v\ \text{and } -v\ \text{are } \Gamma\text{-recurrent}\},\\ {{\mathcal G}}_\Gamma^{\small{\mathrm{div}}}&:=\{v\in{{\mathcal G}}\colon v\ \text{and } -v\ \text{are } \Gamma\text{-divergent}\},\\ {{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}}&:=\{v\in{{\mathcal G}}\colon v\ \text{and } -v\ \text{are weakly } \Gamma\text{-recurrent}\},\\ {{\mathcal G}}_\Gamma^{\small{\mathrm{wdiv}}}&:=\{v\in{{\mathcal G}}\colon v\ \text{and } -v\ \text{are weakly } \Gamma\text{-divergent}\}.\end{aligned}$$ Notice that in general $[{{\mathcal G}}_\Gamma^{\small{\mathrm{rec}}}]\subsetneq [{{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}}]$ and even $$[{{\mathcal G}}_\Gamma^{\small{\mathrm{rec}}}\cap{{\mathcal R}}]\subsetneq [{{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}}\cap{{\mathcal R}}]$$ by the remark following Definition \[weakstrongrecurrencedef\]. From now on we will also deal with the quotient $\quotient{\Gamma}{{{\mathcal G}}}$; for the remainder of this section we will therefore denote elements in the quotient by $u,v, w$ and elements in ${{\mathcal G}}$ by $\widetilde u, \widetilde v,\widetilde w$.
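The metric condition (\[radlimpoint\]) defining radial limit points can be seen at work in a toy example (with $X={\mathbb{R}}^2$ and $\Gamma$ generated by the translation by $(1,0)$ -- not a rank one group in the sense of the text, an assumption made purely for illustration):

```python
import math

# The boundary direction xi = (1, 0) is a radial limit point: the orbit
# points (n, 0) stay at bounded distance from the ray t -> t * xi.  A
# transverse direction is not radial: the ray drifts away from the orbit
# Z x {0} linearly in t.

def dist_ray_to_orbit(theta, t):
    """Distance from the ray point t*(cos theta, sin theta) to the orbit Z x {0}."""
    x, y = t * math.cos(theta), t * math.sin(theta)
    n = round(x)                       # closest orbit point (n, 0)
    return math.hypot(x - n, y)

# Along xi = (1, 0) the ray stays within distance 1/2 of the orbit ...
assert all(dist_ray_to_orbit(0.0, t) <= 0.5 for t in range(1, 200))
# ... while along theta = pi/4 the distance grows without bound.
assert dist_ray_to_orbit(math.pi / 4, 200.0) > 100.0
```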
According to the definitions given in Section \[dyndef\], $v\in \quotient{\Gamma}{{{\mathcal G}}}$ is positively and negatively recurrent if and only if every lift $\widetilde v$ of $v$ belongs to ${{\mathcal G}}_{\Gamma}^{\small{\mathrm{rec}}}$; $ v\in \quotient{\Gamma}{{{\mathcal G}}}$ is positively and negatively divergent if and only if every lift $\widetilde v$ of $v$ belongs to ${{\mathcal G}}_{\Gamma}^{\small{\mathrm{div}}}$. Similarly, $[ v]\in \quotient{\Gamma}{ [{{\mathcal G}}]}$ is positively and negatively recurrent if and only if for every lift $[\widetilde v]\in[{{\mathcal G}}]$ and every representative $\widetilde u\in{{\mathcal G}}$ of $[\widetilde v]$ we have $\widetilde u\in {{\mathcal G}}_{\Gamma}^{\small{\mathrm{wrec}}}$; $[ v]\in \quotient{\Gamma}{[{{\mathcal G}}]}$ is positively and negatively divergent if and only if for every lift $[\widetilde v]\in[{{\mathcal G}}]$ and every representative $\widetilde u\in{{\mathcal G}}$ of $[\widetilde v]$ we have $\widetilde u\in {{\mathcal G}}_{\Gamma}^{\small{\mathrm{wdiv}}}$. We now assume that $m_\Gamma$ is a Knieper’s measure on $\quotient{\Gamma}{{{\mathcal G}}}$ constructed from an arbitrary geodesic current $\overline \mu$ and that $\overline m_\Gamma$ is a weak Ricks’ measure on $\quotient{\Gamma}{[{{\mathcal G}}]}$ coming from a geodesic current $\overline \mu\,$ defined on ${\partial}{{\mathcal R}}$. For the convenience of the reader we state and prove the following easy lemma. \[consdiss\] The dynamical systems $\bigl(\quotient{\Gamma}{ {{\mathcal G}}}, g_\Gamma, m_\Gamma\bigr)$ respectively $\bigl(\quotient{\Gamma}{ [{{\mathcal G}}]}, g_\Gamma, \overline m_\Gamma\bigr)$ are (a) conservative if and only if $\ \overline\mu\bigl({\partial}({{\mathcal G}}\setminus {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}})\bigr)=0$, (b) dissipative if and only if $\ \overline\mu({\partial}{{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}})=0$.
Moreover, in the dissipative case the measures $m_\Gamma$ and $\overline m_\Gamma$ are infinite, and the corresponding dynamical systems are non-ergodic unless $\overline\mu\,$ is supported on a single orbit $\,\Gamma\cdot (\xi,\eta)\subseteq{\partial}{{\mathcal G}}$. We first treat the dynamical system $\bigl(\quotient{\Gamma}{ {{\mathcal G}}}, g_\Gamma, m_\Gamma\bigr)$ with Knieper’s measure $m_\Gamma$; let $\Omega_D$ denote its dissipative part and $\Omega_C$ its conservative part. Then by Poincar[é]{}’s recurrence theorem and Hopf’s divergence theorem we have $$m_\Gamma (\Omega_D)= m_\Gamma\bigl(\quotient{\Gamma}{{{\mathcal G}}_\Gamma^{\small{\mathrm{div}}}}\bigr)\quad\text{and }\ m_\Gamma (\Omega_C)= m_\Gamma\bigl(\quotient{\Gamma}{{{\mathcal G}}_{\Gamma}^{\small{\mathrm{rec}}}}\bigr).$$ Moreover, Lemma \[critradlim\] implies $${{\mathcal G}}_\Gamma^{\small{\mathrm{div}}} \subseteq {{\mathcal G}}\setminus {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}}\quad\text{and }\ {{\mathcal G}}_\Gamma^{\small{\mathrm{rec}}}\subseteq {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}},$$ and as ${{\mathcal G}}= {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}}\sqcup {{\mathcal G}}\setminus {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}}$ we get $$m_\Gamma (\Omega_D)= m_\Gamma\bigl(\quotient{\Gamma}{ ({{\mathcal G}}\setminus {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}})}\bigr)\quad\text{and }\ m_\Gamma (\Omega_C)= m_\Gamma\bigl(\quotient{\Gamma}{{{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}}}\bigr).$$ Hence by construction of Knieper’s measure from the geodesic current $\overline\mu$, the dynamical system $\bigl(\quotient{\Gamma}{ {{\mathcal G}}}, g_\Gamma, m_\Gamma\bigr)$ is conservative if and only if $\ \overline\mu\bigl({\partial}({{\mathcal G}}\setminus {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}})\bigr)=0$, and it is dissipative if and only if $\ \overline\mu({\partial}{{\mathcal G}}_\Gamma^{\small{\mathrm{rad}}})=0$.
We next treat the dynamical system $\bigl(\quotient{\Gamma}{[ {{\mathcal G}}]}, g_\Gamma, \overline m_\Gamma\bigr)$; let $\overline\Omega_D$ denote its dissipative part and $\overline\Omega_C$ its conservative part. Then again by Poincar[é]{}’s recurrence theorem and Hopf’s divergence theorem we have $$\overline m_\Gamma (\overline \Omega_D)= \overline m_\Gamma\bigl(\quotient{\Gamma}{[{{\mathcal G}}_\Gamma^{\small{\mathrm{wdiv}}}]}\bigr)\quad\text{and }\ \overline m_\Gamma (\overline \Omega_C)=\overline m_\Gamma\bigl(\quotient{\Gamma}{[{{\mathcal G}}_{\Gamma}^{\small{\mathrm{wrec}}}]}\bigr).$$ From Lemma \[critregradlim\] we further get $$[{{\mathcal R}}\cap {{\mathcal G}}_\Gamma^{\small{\mathrm{wdiv}}}] \subseteq [{{\mathcal R}}\cap {{\mathcal G}}\setminus {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}}]\quad\text{and }\ [{{\mathcal R}}\cap {{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}}]\subseteq [{{\mathcal R}}\cap{{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}}].$$ Since $[{{\mathcal R}}]= [ {{\mathcal R}}\cap {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}}]\sqcup[{{\mathcal R}}\cap {{\mathcal G}}\setminus {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}}]$ and as the weak Ricks’ measure is supported on $\quotient{\Gamma}{[{{\mathcal R}}]}$, we conclude $$\overline m_\Gamma (\overline \Omega_D)= \overline m_\Gamma\bigl(\quotient{\Gamma}{ [{{\mathcal G}}\setminus {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}}]}\bigr)\quad\text{and }\ \overline m_\Gamma (\overline \Omega_C)= \overline m_\Gamma\bigl(\quotient{\Gamma}{[{{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}}]}\bigr).$$ So by construction of the weak Ricks’ measure from the geodesic current $\overline\mu\,$ defined on ${\partial}{{\mathcal R}}$, the dynamical system $\bigl(\quotient{\Gamma}{ [{{\mathcal G}}]}, g_\Gamma, \overline m_\Gamma\bigr)$ is conservative if and only if $\ \overline\mu\bigl({\partial}[{{\mathcal G}}\setminus {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}}]\bigr)=\overline\mu\bigl({\partial}({{\mathcal G}}\setminus {{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}})\bigr)=0$, and it is dissipative if and only if $\ \overline\mu\bigl({\partial}[{{\mathcal G}}_\Gamma^{\small{\mathrm{rad}}}]\bigr)=\overline\mu({\partial}{{\mathcal G}}_\Gamma^{\small{\mathrm{rad}}})=0$. The last statement is obvious (see the paragraph before Theorem \[Hopfindividual\]). As a consequence we get the following statement which generalizes Lemma 7.5 in [@Ricks] (where the stronger assumption of a [[**]{}finite]{} weak Ricks’ measure $\overline m_\Gamma$ is needed): \[weakregisfull\] Let $\overline\mu\,$ be a geodesic current defined on ${\partial}{{\mathcal R}}$. Then $$\overline\mu\bigl({\partial}({{\mathcal G}}\setminus {{\mathcal G}}_\Gamma^{\small{\mathrm{rad}}})\bigr)=0\quad\Longrightarrow\quad \overline \mu\bigl({\partial}({{\mathcal G}}\setminus {{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}})\bigr)=0.$$ For the weak Ricks’ measure $\overline m_\Gamma$ associated to the geodesic current $\overline\mu\,$ the conservative part $\overline\Omega_C$ satisfies $$\overline m_\Gamma(\overline\Omega_C) = \overline m_\Gamma(\quotient{\Gamma}{[{{\mathcal G}}]})$$ according to Lemma \[consdiss\] (a); from the proof above we further have $$\overline m_\Gamma(\overline\Omega_C)=\overline m_\Gamma\bigl(\quotient{\Gamma}{[{{\mathcal G}}_{\Gamma}^{\small{\mathrm{wrec}}}]}\bigr).$$ Hence by construction of the weak Ricks’ measure we conclude $$\overline \mu\bigl({\partial}({{\mathcal G}}\setminus {{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}})\bigr)=\overline \mu\bigl({\partial}[{{\mathcal G}}\setminus {{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}}]\bigr)=0.$$ In the sequel we will use this result to prove the necessary generalizations of Corollary 8.3, Lemma 8.5 and Lemma 8.6 in [@Ricks], which were only proved for geodesic currents coming from a conformal density as defined in (\[overlinemudef\]), and which induce a
[[**]{}finite]{} Ricks’ measure. For the remainder of this section we fix non-atomic probability measures $\mu_-$, $\mu_+$ on ${\partial{X}}$ with ${\mbox{supp}}(\mu_{\pm})={L_\Gamma}$, and let $$\overline\mu\sim (\mu_-\otimes\mu_+){\,\rule[-5pt]{0.4pt}{12pt}\,{}}_{{\partial}{{\mathcal R}}}$$ be a quasi-product geodesic current defined on ${\partial}{{\mathcal R}}$. Notice that since the support of $\mu_-$ and $\mu_+$ equals ${L_\Gamma}$, minimality of the limit set ${L_\Gamma}$ (see for example [@MR656659 Proposition 2.8]) implies that every open subset $U\subseteq{\partial{X}}$ with $U\cap{L_\Gamma}\ne\emptyset$ satisfies $\mu_{\pm}(U)>0$. Hence if $h\in\Gamma$ is a rank one element, then for the open neighborhoods $U^-$, $U^+\subseteq{\overline{{X}}}$ of $h^-$, $h^+$ provided by Lemma \[joinrankone\] we know that $$\label{overlinemunotzero} (\mu_-\otimes \mu_+)({\partial}{{\mathcal R}})\ge \mu_-(U^-)\cdot \mu_+(U^+)>0;$$ so $\overline\mu$ is non-trivial. Moreover, according to the Main Theorem in [@MR2581914] (see also Proposition 6.6 (3) in [@Ricks]), the set ${\partial}{{\mathcal R}}\cap({L_\Gamma}\times{L_\Gamma})$ is dense in ${L_\Gamma}\times{L_\Gamma}$, hence $${\mbox{supp}}(\overline\mu) = {L_\Gamma}\times {L_\Gamma}.$$ The first Lemma shows that in the setting of Lemma \[consdiss\] (a) – that is when the weak Ricks’ measure associated to $\overline\mu\,$ is conservative, but not necessarily finite – we have $\overline\mu\sim \mu_-\otimes \mu_+$; in other words we may omit the restriction to ${\partial}{{\mathcal R}}$. 
\[regnotnecessary\] If $\ \overline\mu\bigl({\partial}({{\mathcal G}}\setminus {{\mathcal G}}_\Gamma^{\small{\mathrm{rad}}})\bigr)=0,$ then $$(\mu_-\otimes\mu_+)({\partial}{{\mathcal R}})= (\mu_-\otimes\mu_+)({\partial{X}}\times{\partial{X}})=1.$$ From the hypothesis and Corollary \[weakregisfull\] we get $\ \overline\mu\bigl({\partial}({{\mathcal G}}\setminus {{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}})\bigr)=0$ and hence $$\label{weakreczero} (\mu_-\otimes\mu_+) \bigl({\partial}({{\mathcal R}}\setminus {{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}})\bigr)=0.$$ In a first step we prove that the set $$A:=\{\xi\in{\partial{X}}\colon (\xi,\eta)\in{\partial}{{\mathcal R}}\quad\text{for all }\ \eta\in{\partial{X}},\ \eta\ne \xi \}$$ satisfies $\mu_-(A)=\mu_+(A)=1$. So let $\xi\in{\partial{X}}$ be arbitrary. Our goal is to show that $\xi$ possesses an open neighborhood $U\subseteq{\partial{X}}$ with $\mu_-(U\setminus A)=0$; the claim then follows by compactness of ${\partial{X}}$ (and analogously for $\mu_+$ instead of $\mu_-$). Let $h\in\Gamma\,$ be a rank one element. According to Lemma \[dynrankone\] (a) there exists $w\in{{\mathcal R}}$ with $w^-=\xi$ and $w^+=h^+$. Lemma \[joinrankone\] then provides open neighborhoods $U$, $V\subseteq{\partial{X}}$ of $\xi$, $h^+$ such that $U\times V\subseteq{\partial}{{\mathcal R}}$. From (\[weakreczero\]) we get $(\mu_-\otimes\mu_+) \bigl((U\times V)\setminus {\partial}{{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}}\bigr)=0$. For the subset $$W=\{\zeta\in U\colon \exists\, u\in{{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}}\ \ {\mbox{such}\ \mbox{that}\ }\ \ u^-=\zeta,\ u^+\in V\}\subseteq U$$ of $U$ we have the inclusion $(U\setminus W)\times V\subseteq (U\times V)\setminus {\partial}{{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}}$. Hence $$0= (\mu_-\otimes\mu_+) \bigl((U\times V)\setminus {\partial}{{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}}\bigr)\ge \mu_-(U\setminus W)\cdot\mu_+(V),$$ and from $\mu_+(V)>0$ we get $\mu_-(U\setminus W)=0$.
As Lemma \[jointoweakrecurrent\] implies $W\subseteq A$, we conclude $ \mu_-(U\setminus A)\le \mu_-(U\setminus W)=0$. Finally let $\xi\in A$ be arbitrary. So for all $\eta\in{\partial{X}}\setminus\{\xi\}$ we have $(\xi,\eta)\in{\partial}{{\mathcal R}}$. Since $\mu_+(\{\xi\})=0$ by non-atomicity of $\mu_+$, we have $(\xi,\eta)\in{\partial}{{\mathcal R}}$ for $\mu_+$-almost every $\eta\in{\partial{X}}$. The claim then follows from $\mu_-(A)=1$ and Fubini’s Theorem. From the previous lemma and the proof of Lemma \[consdiss\] we immediately get $\ \overline\mu\bigl({\partial}({{\mathcal G}}\setminus {{\mathcal G}}_\Gamma^{\small{\mathrm{rad}}})\bigr)=0\, $ if and only if $\mu_-({L_\Gamma^{\small{\mathrm{rad}}}})=\mu_+({L_\Gamma^{\small{\mathrm{rad}}}})=1$. For the remainder of this section we use the previous assumptions on $\mu_-$, $\mu_+$ and $\overline\mu$; moreover we will require that $$\mu_-({L_\Gamma^{\small{\mathrm{rad}}}})=\mu_+({L_\Gamma^{\small{\mathrm{rad}}}})=1.$$ \[mapisconstantae\] Let $S$ be any set and $\Psi:{\partial}{{\mathcal R}}\to S$ an arbitrary map. If $\Omega\subseteq{\partial}{{\mathcal R}}$ is a set of full $\overline\mu$-measure in ${\partial}{{\mathcal R}}$ such that for all $(\xi,\eta)$, $(\xi,\eta')$, $(\xi',\eta')\in\Omega$ we have $$\Psi\bigl((\xi,\eta)\bigr)=\Psi\bigl((\xi,\eta')\bigr)=\Psi\bigl((\xi',\eta')\bigr),$$ then $\Psi$ is constant $\overline\mu$-almost everywhere on ${\partial}{{\mathcal R}}$. From Lemma \[regnotnecessary\] and $\overline\mu({\partial}{{\mathcal R}}\setminus\Omega)=0$ we get $$(\mu_-\otimes\mu_+)(\Omega)=(\mu_-\otimes\mu_+)({\partial}{{\mathcal R}})=(\mu_-\otimes\mu_+)({\partial{X}}\times{\partial{X}}).$$ Hence for $\mu_-$-almost every $\xi\in{\partial{X}}$ the set $$B_\xi:=\{\eta\in{\partial{X}}\colon (\xi,\eta)\in\Omega\}$$ has full $\mu_+$-measure in ${\partial{X}}$; in particular, the set $$A:=\{\xi\in{\partial{X}}\colon \mu_+(B_\xi)=\mu_+({\partial{X}})=1\}$$ satisfies $\mu_-(A)=\mu_-({\partial{X}})=1$.
We now fix $(\xi,\eta)\in (A\times{\partial{X}})\cap\Omega$. Then for any $(\xi',\eta')\in (A\times B_\xi)\cap\Omega$ we have $(\xi,\eta')\in (A\times B_\xi)\cap \Omega$, hence by hypothesis on $\Omega$ $$\Psi\bigl((\xi',\eta')\bigr)=\Psi\bigl((\xi,\eta')\bigr)=\Psi\bigl((\xi,\eta)\bigr).$$ Since the set $(A\times B_\xi)\cap\Omega\subseteq{\partial}{{\mathcal R}}$ has full $(\mu_-\otimes\mu_+)$-measure in ${\partial{X}}\times{\partial{X}}$, it also has full $\overline\mu$-measure in ${\partial}{{\mathcal R}}$. So we get $\Psi\bigl((\xi',\eta')\bigr)=\Psi\bigl((\xi,\eta)\bigr)$ for $\overline\mu$-almost every $(\xi',\eta')\in{\partial}{{\mathcal R}}$, and hence $\Psi$ is constant $\overline\mu$-almost everywhere on ${\partial}{{\mathcal R}}$. The following lemma together with Lemma \[Hausdorffconv\] is the key to the proof of Theorem \[zerofull\]. \[isometrytypeconstant\] For $\overline\mu$-almost every $(\xi,\eta)\in{\partial}{{\mathcal R}}$ the isometry type of $C_{(\xi\eta)}$ is the same. According to Corollary \[weakregisfull\] the set ${\partial}({{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}}\cap{{\mathcal R}})$ has full $\overline\mu$-measure in ${\partial}{{\mathcal R}}$. Moreover, if $u,v\in {{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}}\cap{{\mathcal R}}$ satisfy $u^-=v^-$ or $u^+=v^+$, then by Lemma \[weakgivesisometricembeddings\] there exist isometric embeddings between the compact metric spaces $C_u$ and $C_v$; hence $C_u$ and $C_v$ are isometric according to Theorem 1.6.14 in [@MR1835418]. The claim now follows by applying Lemma \[mapisconstantae\] to the map which sends $(\xi,\eta)\in {\partial}({{\mathcal G}}_\Gamma^{\small{\mathrm{wrec}}}\cap{{\mathcal R}})$ to the isometry type of $C_{(\xi\eta)}$.
We will now prove the appropriate generalization of Theorem 8.8 in [@Ricks], which states that under the additional hypothesis ${{\mathcal Z}}_\Gamma\ne \emptyset$ – which is satisfied in particular if ${X}$ is geodesically complete – the set ${\partial}{{\mathcal Z}}$ of end-point pairs of zero width geodesics has full $(\mu_-\otimes\mu_+)$-measure in ${\partial{X}}\times{\partial{X}}$. This will provide the key in the proof of ergodicity in Section \[HopfArgument\]. Moreover, it implies that any weak Ricks’ measure $\overline m_\Gamma$ on $\quotient{\Gamma}{ [{{\mathcal G}}]}$ associated to a quasi-product geodesic current $\overline\mu\sim(\mu_-\otimes\mu_+){\,\rule[-5pt]{0.4pt}{12pt}\,{}}_{{\partial}{{\mathcal R}}} $ is equivalent to the induced Ricks’ measure $m^0_\Gamma$ on $\quotient{\Gamma}{{{\mathcal G}}}$. \[zerofull\] Let ${X}$ be a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group with ${{\mathcal Z}}_\Gamma\ne \emptyset$. If $\mu_-$, $\mu_+$ are non-atomic probability measures on ${\partial{X}}$ with ${\mbox{supp}}(\mu_{\pm})={L_\Gamma}$ and $\mu_-({L_\Gamma^{\small{\mathrm{rad}}}})=\mu_+({L_\Gamma^{\small{\mathrm{rad}}}})= 1$, then $$(\mu_-\otimes\mu_+)({\partial}{{\mathcal Z}})=1.$$ Moreover, if $\,\overline\mu$ is a quasi-product geodesic current absolutely continuous with respect to $(\mu_-\otimes\mu_+){\,\rule[-5pt]{0.4pt}{12pt}\,{}}_{{\partial}{{\mathcal R}}} $, then $$\overline\mu\bigl({\partial}({{\mathcal G}}\setminus {{\mathcal Z}})\bigr)=0.$$ By Lemma \[isometrytypeconstant\] there exists a set $\Omega \subseteq {\partial}{{\mathcal R}}$ of full $\overline\mu$-measure in ${\partial}{{\mathcal R}}$ such that the isometry type of $C_{(\xi\eta)}$ is the same for all $(\xi,\eta)\in\Omega$.
Lemma \[regnotnecessary\] then implies $$\label{Omegafull} (\mu_-\otimes\mu_+)(\Omega)=1.$$ Fix $v\in{{\mathcal Z}}_\Gamma$ and let $U^-$, $U^+\subseteq{\overline{{X}}}$ be open neighborhoods of $v^-$, $v^+$ according to Lemma \[joinrankone\]. Consider decreasing sequences of open subsets $(U_n^-)\subseteq U^-\cap{\partial{X}}$, $(U_n^+)\subseteq U^+\cap{\partial{X}}$ such that $$\bigcap_{n\in{\mathbb{N}}} U_n^-=\{v^-\}\quad\text{and }\ \bigcap_{n\in{\mathbb{N}}} U_n^+=\{v^+\}.$$ Let $n\in{\mathbb{N}}$. As ${\mbox{supp}}(\mu_{\pm})={L_\Gamma}$, we get $(\mu_-\otimes\mu_+)(U_n^- \times U_n^+)=\mu_-(U_n^-)\cdot\mu_+(U_n^+) >0$, hence by (\[Omegafull\]) $$(\mu_-\otimes\mu_+)\bigl(\Omega\cap (U_n^- \times U_n^+)\bigr)>0.$$ So in particular there exists $(\xi_n,\eta_n)\in (U_n^-\times U_n^+)\cap\Omega$. By choice of the sets $U_n^-$, $U_n^+$ we get a sequence $\bigl((\xi_n,\eta_n)\bigr)\subseteq \Omega\subseteq{\partial}{{\mathcal R}}$ which converges to $(v^-,v^+)\in{\partial}{{\mathcal Z}}_\Gamma$. Now Lemma \[Hausdorffonboundary\] implies that some subsequence of $\bigl(C_{(\xi_n\eta_n)}\bigr)$ converges, in the Hausdorff metric, to a point. As the isometry type of $C_{(\xi\eta)}$ is the same for all $(\xi,\eta)\in\Omega$, this implies that $C_{(\xi\eta)}$ is a point for all $(\xi,\eta)\in\Omega$, hence $ \Omega\subseteq{\partial}{{\mathcal Z}}$. We conclude $$(\mu_-\otimes\mu_+)({\partial}{{\mathcal Z}})\ge (\mu_-\otimes\mu_+)(\Omega)=1,$$ hence $ \overline\mu\bigl({\partial}({{\mathcal G}}\setminus{{\mathcal Z}})\bigr) =0$. \[weakisstrongisKnieper\] Let ${X}$ be a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group with ${{\mathcal Z}}_\Gamma\ne \emptyset$.
Let $\mu_-$, $\mu_+$ be non-atomic probability measures on ${\partial{X}}$ with ${\mbox{supp}}(\mu_{\pm})={L_\Gamma}$ and $\mu_-({L_\Gamma^{\small{\mathrm{rad}}}})=\mu_+({L_\Gamma^{\small{\mathrm{rad}}}})= 1$, and $\,\overline\mu\sim (\mu_-\otimes\mu_+){\,\rule[-5pt]{0.4pt}{12pt}\,{}}_{{\partial}{{\mathcal R}}} $ a quasi-product geodesic current defined on ${\partial}{{\mathcal R}}$. Then the weak Ricks’ measure associated to $\overline\mu$ is equal to the Ricks’ measure defined by (\[defstrongRicks\]) and also to any Knieper’s measure associated to the quasi-product geodesic current $\overline\mu$ (if it exists). Conservativity versus ergodicity {#HopfArgument} ================================ As before let ${X}$ be a proper Hadamard space with fixed base point ${{o}}\in{X}$. For $R>0$ we denote by ${\mathcal B}(R)\subseteq {{\mathcal G}}$ the set of all parametrized geodesic lines with origin in $B_{{o}}(R)$. In this section we assume that $\Gamma<{\mbox{Is}}({X})$ is a discrete rank one group with $${{\mathcal Z}}_\Gamma:=\{v\in{{\mathcal Z}}\colon v^+, v^-\in{L_\Gamma}\}\ne \emptyset.$$ Notice that if ${X}$ is geodesically complete, then according to Proposition \[largewidthgiveszerowidth\] the latter condition is automatically satisfied. Throughout the whole section we fix non-atomic probability measures $\mu_-$, $\mu_+$ on ${\partial{X}}$ with ${\mbox{supp}}(\mu_{\pm})={L_\Gamma}$ and $\mu_-({L_\Gamma^{\small{\mathrm{rad}}}})=\mu_+({L_\Gamma^{\small{\mathrm{rad}}}})= 1$. Let $\overline\mu\sim (\mu_-\otimes\mu_+){\,\rule[-5pt]{0.4pt}{12pt}\,{}}_{{\partial}{{\mathcal R}}}$ be a quasi-product geodesic current defined on ${\partial}{{\mathcal R}}$ for which $$\label{boundongrowth} \Delta:=\sup \Big\{ \frac{\ln \overline\mu\bigl({\partial}{\mathcal B}(R)\bigr)}{R}\colon R>0\Big\}$$ is finite. We next consider Ricks’ measure $m_\Gamma^0$ associated to the geodesic current $\overline\mu\,$ as defined in (\[defstrongRicks\]).
Since in the given setting Corollary \[weakisstrongisKnieper\] implies that Ricks’ measure is equal to weak Ricks’ measure and also to Knieper’s measure associated to the same geodesic current $\overline\mu$, we will denote Ricks’ measure by $m_\Gamma$ instead of $m_\Gamma^0$. Notice that by assumption on $\mu_-$ and $\mu_+$ the set ${{\mathcal G}}_\Gamma^{\small{\mathrm{rad}}} $ has full $\overline\mu$-measure; so we already know from Lemma \[consdiss\] that $(\quotient{\Gamma}{{{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is conservative. The goal of this section is to prove that it is also ergodic. The proof of ergodicity will make use of the famous Hopf argument (see [@Hopf], [@MR0284564]) as in [@MR2057305] and [@LinkPicaud], for which Theorem \[zerofull\] is indispensable. In our more general setting including singular spaces we first need an analogue of Knieper’s Proposition 4.1, which is valid only for manifolds. We remark that in view of Lemma \[jointoweakrecurrent\] our generalization of Knieper’s Proposition 4.1 is not very surprising. \[KniepersProp\] Let $u\in {\mathcal Z}$ be a $\Gamma$-recurrent rank one geodesic of zero width. Then for all $v\in {{\mathcal G}}$ with $v^+=u^+$ and ${{\mathcal B}}_{v^+}(v(0),u(0))=0$ we have $$\lim_{t\to\infty} d_1(g^t v, g^tu)=0.$$ Since $u$ is $\Gamma$-recurrent, there exist sequences $(\gamma_n)\subseteq\Gamma$ and $(t_n)\nearrow\infty$ such that $\gamma_n g^{t_n}u$ converges to $u$. Let $v\in {{\mathcal G}}$ be a geodesic such that $v^+=u^+$ and ${{\mathcal B}}_{v^+}(v(0),u(0))=0$. Then the function $$[0,\infty)\to [0,\infty),\quad t\mapsto d_1(g^t v,g^t u)=\sup\{ {\mathrm{e}}^{-|s|}d(v(t+s),u(t+s))\colon s\in{\mathbb{R}}\}$$ is monotone decreasing as the geodesic rays determined by $u$ and $v$ are asymptotic.
If the function does not converge to zero as $t$ tends to infinity, there exists a constant $\epsilon>0$ such that $$d_1(g^t v,g^t u)\ge \epsilon$$ for all $t\ge 0$ and hence $$\epsilon \le d_1 (g^{t_n+s} v,g^{t_n+s} u)\le d_1(v,u)$$ for all $s\ge -t_n$. By $\Gamma$-invariance of $d_1$ we get for all $n\in{\mathbb{N}}$ and for all $s\ge -t_n$ $$\epsilon \le d_1 (g^s \gamma_n g^{t_n} v,g^s\gamma_n g^{t_n} u)\le d_1(v,u).$$ Passing to a subsequence if necessary we may assume that $\gamma_n g^{t_n} v$ converges to some $\overline{v}\in {{\mathcal G}}$. Hence in the limit as $n\to\infty$ we get $$\epsilon \le d_1 (g^s \overline{v},g^s u)\le d_1(v,u)\le \max\{2, d( v(0),u(0))\}$$ for all $s\in{\mathbb{R}}$. Now the first inequality shows that $\overline{v}\ne u$ and the second inequality gives $(\overline{v}^-,\overline{v}^+)=(u^-,u^+)$, which means that the geodesic lines $\overline{v}$ and $u$ are parallel. Notice that in this case $H_{{o}}(\overline{v})=H_{{o}}(u)$ if and only if ${{\mathcal B}}_{u^-}\bigl(\overline{v}(0),u(0)\bigr)=0$ if and only if ${{\mathcal B}}_{u^+}\bigl(\overline{v}(0),u(0)\bigr)=0$. By choice of $v$ we have for all $n\in{\mathbb{N}}$ $$\begin{aligned} 0&={{\mathcal B}}_{u^+}(v(t_n),u(t_n)) = \lim_{s\to\infty}\bigl( d(v(t_n), u(t_n+s))-d(u(t_n),u(t_n+s))\bigr)\\ & = \lim_{s\to\infty}\bigl( d(\gamma_n v(t_n), \gamma_n u(t_n+s))-s\bigr)= \lim_{s\to\infty}\bigl( d\bigl((\gamma_n g^{t_n} v)(0), (\gamma_n g^{t_n} u)(s)\bigr)-s\bigr); \end{aligned}$$ by definition of $\overline{v}$ and $\Gamma$-recurrence of $u$ this gives $$\begin{aligned} 0&= \lim_{s\to\infty}\bigl( d( \overline{v}(0), u(s))-s\bigr) ={{\mathcal B}}_{u^+}(\overline{v}(0),u(0)). \end{aligned}$$ Hence $\overline{v}\sim u$ which is a contradiction to $\overline{v}\ne u$ and $u\in{{\mathcal Z}}$.
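The contraction asserted in the proposition can be observed numerically in the model case of the hyperbolic plane (upper half-plane model), where every geodesic is rank one of zero width. We take the two asymptotic vertical geodesics $u(t)=(0,{\mathrm{e}}^t)$ and $v(t)=(1,{\mathrm{e}}^t)$, which satisfy $v^+=u^+=\infty$ and have vanishing Busemann difference at $t=0$; the grid-based sup below is only an approximation of $d_1$:

```python
import math

def dH2(p, q):
    """Hyperbolic distance in the upper half-plane model."""
    (x1, y1), (x2, y2) = p, q
    return math.acosh(1.0 + ((x1 - x2)**2 + (y1 - y2)**2) / (2.0 * y1 * y2))

def d1(t, a, s_grid):
    """d_1(g^t v, g^t u) approximated by a sup over a finite grid of s."""
    return max(math.exp(-abs(s)) * dH2((0.0, math.exp(t + s)),
                                       (a, math.exp(t + s)))
               for s in s_grid)

s_grid = [k / 100.0 for k in range(-1500, 1501)]   # s in [-15, 15]
vals = [d1(t, 1.0, s_grid) for t in (0.0, 1.0, 2.0, 4.0, 8.0)]

# t -> d_1(g^t v, g^t u) is monotone decreasing and tends to 0, as the
# proposition predicts for asymptotic geodesics with synchronized parameter.
assert all(vals[i] > vals[i + 1] for i in range(len(vals) - 1))
assert vals[-1] < 1e-2
```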
Since we want to apply Hopf’s criterion for ergodicity, Theorem \[Hopfindividual\], we need to find an appropriate function $\rho:\quotient{\Gamma}{{{\mathcal G}}}\to {\mathbb{R}}$ in ${\mbox{\rm L}}^1(m_\Gamma)$ which is strictly positive $m_\Gamma$-almost everywhere. Let $\Delta\ge 0$ be the constant defined by (\[boundongrowth\]). \[definerho\] The function $$\widetilde \rho:{{\mathcal G}}\to{\mathbb{R}},\quad u\mapsto\Biggl\{\begin{array}{cl} \displaystyle \max\{ {\mathrm{e}}^{-2\Delta d(u(0),\gamma{{o}})}\colon \gamma\in\Gamma\} & \text{if } \ u\in {{\mathcal Z}}\\[1mm] 0 & \text{if } \ u\in {{\mathcal G}}\setminus{{\mathcal Z}}\end{array}$$ descends to a function $\rho: \quotient{\Gamma}{{{\mathcal G}}}\to{\mathbb{R}}$ which is strictly positive $m_\Gamma$-almost everywhere and belongs to ${\mbox{\rm L}}^1(m_\Gamma)$. Moreover, if $u,v\in{{\mathcal Z}}$ satisfy $d\bigl(u(0),v(0)\bigr)\le 1$, then $$|\widetilde\rho(u)-\widetilde\rho(v)|\le \widetilde\rho(u)\cdot 2\Delta {\mathrm{e}}^{2\Delta} d\bigl(u(0),v(0)\bigr).$$ We first notice that by definition $\widetilde \rho$ is $\Gamma$-invariant and strictly positive on ${{\mathcal Z}}$, hence $\rho$ is well-defined and strictly positive $m_\Gamma$-almost everywhere (as $m_\Gamma(\quotient{\Gamma}{ {{\mathcal Z}}})= m_\Gamma(\quotient{\Gamma}{ {{\mathcal G}}})$ by construction of Ricks’ measure).
By definition (\[boundongrowth\]) of $\Delta$ we get $$m\bigl( {\mathcal B}(R)\bigr) \le 2R\cdot \overline\mu\bigl( {\partial}{\mathcal B}(R)\bigr) \le 2R {\mathrm{e}}^{\Delta R}.$$ Let ${\mathcal D}_\Gamma\subseteq{{\mathcal G}}$ denote the Dirichlet domain for $\Gamma$ with center ${{o}}$, that is, the set of all parametrized geodesic lines with origin in $$\{x\in{X}\colon d(x,{{o}})\le d(x,\gamma{{o}})\quad\text{for all }\ \gamma\in\Gamma\};$$ then for all $u\in {\mathcal D}_\Gamma\cap{{\mathcal Z}}$ we have $$\widetilde\rho(u)= {\mathrm{e}}^{-2\Delta d(u(0),{{o}})}.$$ Notice that if $u \in {\mathcal S}(R):=\bigl({\mathcal B}(R)\setminus {\mathcal B}(R-1)\bigr)\cap {\mathcal D}_\Gamma\cap{{\mathcal Z}}$, then $d(u(0),{{o}})\ge R-1$ and we estimate $$\begin{aligned} \int_{{\mathcal S}(R)} \widetilde\rho(u){\mathrm{d}}m(u) & \le {\mathrm{e}}^{-2\Delta (R-1)} \int_{{\mathcal B}(R)} {\mathrm{d}}m(u) \le 2R{\mathrm{e}}^{2\Delta} {\mathrm{e}}^{-\Delta R} ;\end{aligned}$$ summing this estimate over $R\in{\mathbb{N}}$ shows that $\rho\in {\mbox{\rm L}}^1(m_\Gamma)$. We finally let $u,v\in {{\mathcal Z}}$ be arbitrary with $d\bigl(u(0),v(0)\bigr)\le 1$. Let $\gamma,\gamma'\in\Gamma$ be such that $\widetilde\rho(u) ={\mathrm{e}}^{-2\Delta d(u(0),\gamma{{o}})}$, $\widetilde\rho(v) ={\mathrm{e}}^{-2\Delta d(v(0),\gamma'{{o}})}$.
Then $$\begin{aligned} \widetilde\rho(u)-\widetilde\rho(v) &\le & {\mathrm{e}}^{-2\Delta d(u(0),\gamma{{o}})}\bigl(1- {\mathrm{e}}^{-2\Delta d(u(0),v(0))}\bigr) ,\\ \widetilde\rho(v)-\widetilde\rho(u) &\le & {\mathrm{e}}^{-2\Delta d(u(0),\gamma{{o}})}\bigl( {\mathrm{e}}^{2\Delta d(u(0),v(0))}-1\bigr),\end{aligned}$$ hence $$\begin{aligned} |\widetilde\rho(u)-\widetilde\rho(v)| &\le \widetilde\rho(u) \cdot \max\{1- {\mathrm{e}}^{-2\Delta d(u(0),v(0))}, {\mathrm{e}}^{2\Delta d(u(0),v(0))}-1\}\\ & \le \widetilde\rho(u) 2\Delta {\mathrm{e}}^{2\Delta } d(u(0),v(0)),\end{aligned}$$ where the last step uses the elementary estimate ${\mathrm{e}}^{x}-1\le x\,{\mathrm{e}}^{x}$ with $x=2\Delta d(u(0),v(0))\le 2\Delta$. For the remainder of this section we will again denote elements in the quotient $\quotient{\Gamma}{{{\mathcal G}}}$ by $u,v,w$ and elements in ${{\mathcal G}}$ by $\widetilde u, \widetilde v, \widetilde w$. As we want to apply Theorem \[Hopfindividual\], we state the following auxiliary result. Its proof is a straightforward computation as performed in [@MR1041575 page 144] using the property of $\widetilde \rho\,$ stated in the last line of Lemma \[definerho\]. \[pluslimitequal\] Let $f\in{\mbox{\rm C}}_c(\quotient{\Gamma}{{{\mathcal G}}})$ be arbitrary. If $u,v\in\quotient{\Gamma}{{{\mathcal Z}}}$ are positively recurrent with lifts $\widetilde u$, $\widetilde v$ satisfying $\widetilde u^+=\widetilde v^+$, ${{\mathcal B}}_{\widetilde v^+}(\widetilde u(0),\widetilde v(0))=0$ and such that $$f^+(u):= \lim_{T\to\infty} \frac{\int_0^T f(g_\Gamma^{t} u){\mathrm{d}}t}{\int_0^T \rho(g_\Gamma^{ t} u){\mathrm{d}}t}\quad\text{and }\ f^+( v)=\lim_{T\to \infty} \frac{\int_0^T f(g_\Gamma^{t} v){\mathrm{d}}t}{\int_0^T \rho(g_\Gamma^{t} v){\mathrm{d}}t}$$ exist, then $ f^+( u)= f^+(v)$. \[conservativeimpliesergodic\]The dynamical system $(\quotient{\Gamma}{ {{\mathcal G}}}, (g^t_\Gamma)_{t\in{\mathbb{R}}}, m_\Gamma)$ is ergodic.
Using the last statement of Theorem \[Hopfindividual\] we have to show that for every function $f\in {\mbox{\rm L}}^1(m_\Gamma)$ the associated limit function $f^+$ defined by $$f^+( u):= \lim_{T\to\infty} \frac{\int_0^T f(g_\Gamma^{t}u){\mathrm{d}}t}{\int_0^T \rho(g_\Gamma^{ t} u){\mathrm{d}}t}\quad\text{for }\ m_\Gamma\text{-almost every } \ u\in \quotient{\Gamma}{{{\mathcal G}}}$$ is constant $m_\Gamma$-almost everywhere; here $\rho\in {\mbox{\rm L}}^1(m_\Gamma)$ is the function defined in Lemma \[definerho\]. As ${\mbox{\rm C}}_c(\quotient{\Gamma}{{{\mathcal G}}})$ is dense in ${\mbox{\rm L}}^1(m_\Gamma)$ it will suffice to prove the claim for $f\in{\mbox{\rm C}}_c(\quotient{\Gamma}{{{\mathcal G}}})$. So we choose $f\in{\mbox{\rm C}}_c(\quotient{\Gamma}{{{\mathcal G}}})$ arbitrary. Since $(\quotient{\Gamma}{{{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is conservative, Theorem \[Hopfindividual\] states that for $m_\Gamma$-almost every $ u\in \quotient{\Gamma}{{{\mathcal G}}}$ the limits $$f^\pm(u)=\lim_{T\to +\infty} \frac{\int_0^T f(g^{\pm t}_\Gamma u){\mathrm{d}}t}{\int_0^T \rho(g_\Gamma^{\pm t}u){\mathrm{d}}t}$$ exist and are equal. As $m_\Gamma$ is conservative and supported on $\quotient{\Gamma}{{{\mathcal Z}}}$, the set of recurrent elements in $\quotient{\Gamma}{{{\mathcal Z}}}$ has full measure in $\quotient{\Gamma}{{{\mathcal G}}}$ with respect to $m_\Gamma$. So altogether the set $$\begin{aligned} \Omega &:=\{ u\in\quotient{\Gamma}{{{\mathcal Z}}} \colon u\ \text{is positively and negatively recurrent}, \\ &\hspace*{2.5cm} f^{+}(u),\, f^-(u)\ \text{ exist and } \ f^+( u)= f^-( u)\} \end{aligned}$$ has full measure in $\quotient{\Gamma}{{{\mathcal G}}}$. 
Moreover, from the local product structure of $m$ and Lemma \[regnotnecessary\] we know that there exists a lift $\widetilde w\in{{\mathcal G}}$ of some $w\in \Omega$ such that $$G_{\widetilde w^-}:=\{\eta\in {\partial{X}}\colon \exists\, u\in\Omega \ \mbox{ with a lift }\ \widetilde u\in{{\mathcal Z}}\ \mbox{ satisfying }\ \widetilde u^-=\widetilde w^-,\ \widetilde u^+=\eta\}$$ has full measure in ${\partial{X}}$ with respect to $\mu_+$. This implies in particular that $$\label{fullinxi} m_\Gamma\bigl(\{ v\in \Omega\colon \exists \ \text{lift } \ \widetilde v\in{{\mathcal Z}}\ \text{ satisfying } \ \widetilde v^+\in G_{\widetilde w^-}\}\bigr)=m_\Gamma(\quotient{\Gamma}{{{\mathcal G}}}).$$ We will next show that $ f^+$ is constant $m_\Gamma$-almost everywhere on $\quotient{\Gamma}{{{\mathcal G}}}$; according to (\[fullinxi\]) above it suffices to show that for every $v\in\Omega$ with a lift $\widetilde v\in {{\mathcal Z}}$ satisfying $\widetilde v^+\in G_{\widetilde w^-}$ we have $ f^+(v)= f^+( w)$. So let $v\in\Omega$ be arbitrary with a lift $\widetilde v\in{{\mathcal Z}}$ satisfying $\widetilde v^+\in G_{\widetilde w^-}$. By definition of $G_{\widetilde w^-}$ there exists $u\in\Omega$ with a lift $\widetilde u\in {{\mathcal Z}}$ satisfying $\widetilde u^-=\widetilde w^-$ and $\widetilde u^+=\widetilde v^+$; replacing $\widetilde u$ by $g^s \widetilde u$ for an appropriate $s\in{\mathbb{R}}$ if necessary we may further assume that ${{\mathcal B}}_{\widetilde v^+}(\widetilde u(0),\widetilde v(0))=0$.
Then the choice of $\widetilde w$, the definition of $\Omega$ and Lemma \[pluslimitequal\] directly imply $$f^+(v)=f^+(u)=f^-( u).$$ We next choose $s\in{\mathbb{R}}$ such that ${{\mathcal B}}_{\widetilde w^-}(\widetilde w(0),\widetilde u(s))=0$; from the fact that $u$ is negatively recurrent, $\widetilde u^-=\widetilde w^-$ and Lemma \[pluslimitequal\] we then get $$f^-(w)=f^-(g^s_\Gamma u).$$ As $f^\pm$ are $(g_\Gamma^t)$-invariant and $ w\in\Omega$, we conclude $$f^+( v)=f^-( u) =f^-(g^s_\Gamma u)=f^-( w)=f^+( w).$$ So we have shown that $m_\Gamma$-almost every $ v\in\quotient{\Gamma}{{{\mathcal G}}}$ satisfies $ f^+( v)=f^+( w)$. We now summarize the previous results to obtain \[conservativestatement\]Let $\Gamma<{\mbox{Is}}({X})$ be a discrete rank one group with ${{\mathcal Z}}_\Gamma\ne\emptyset$. Let $\mu_-$, $\mu_+$ be non-atomic probability measures on ${\partial{X}}$ with ${\mbox{supp}}(\mu_{\pm})={L_\Gamma}$, and $\overline\mu\sim (\mu_-\otimes \mu_+){\,\rule[-5pt]{0.4pt}{12pt}\,{}}_{{\partial}{{\mathcal R}}}$ a quasi-product geodesic current on ${\partial}{{\mathcal R}}$ for which the constant $\Delta\ge 0$ defined by (\[boundongrowth\]) is finite. Let $m_\Gamma$ be the associated Ricks’ measure on $\quotient{\Gamma}{ {{\mathcal G}}}$. Then the following statements are equivalent:\ 1. $\mu_-({L_\Gamma^{\small{\mathrm{rad}}}})=\mu_+({L_\Gamma^{\small{\mathrm{rad}}}})=1$. 2. $(\quotient{\Gamma}{ {{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is conservative. 3. $(\quotient{\Gamma}{ {{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is ergodic and $m_\Gamma$ is not supported on a single divergent orbit. Moreover, each of the three statements implies that $m_\Gamma$ is equal to the weak Ricks’ measure $\overline m_\Gamma$ on $\quotient{\Gamma}{[{{\mathcal G}}]}$ and to any Knieper’s measure on $\quotient{\Gamma}{{{\mathcal G}}}$ associated to $\overline\mu$ (if it exists). 
We finally mention a result concerning the dynamical systems $({\partial}{{\mathcal G}}, \Gamma, \overline\mu)$ and $({\partial{X}}\times{\partial{X}},\Gamma, \mu_-\otimes \mu_+)$ first introduced in Section \[geodcurrentmeasures\]. From the construction of the Ricks’ measure $m_\Gamma$ associated to the quasi-product geodesic current $\overline\mu$ defined on ${\partial}{{\mathcal R}}$, which is absolutely continuous with respect to the product $ ( \mu_-\otimes \mu_+){\,\rule[-5pt]{0.4pt}{12pt}\,{}}_{{\partial}{{\mathcal R}}}$ of non-atomic probability measures $\mu_{\pm}$ on ${\partial{X}}$ with ${\mbox{supp}}(\mu_{\pm})={L_\Gamma}$, we immediately get that $(\quotient{\Gamma}{ {{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is ergodic if and only if $({\partial}{{\mathcal G}}, \Gamma, \overline\mu)$ is ergodic if and only if $({\partial{X}}\times{\partial{X}},\Gamma, \mu_-\otimes \mu_+)$ is ergodic. Geodesic currents coming from a conformal density {#currentsfromconfdens} ================================================= For the remainder of this article we will specialize to a particular kind of geodesic currents, namely the ones arising from a conformal density. As before ${X}$ will denote a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group. We further fix a base point ${{o}}\in{X}$ on an invariant geodesic of a rank one element in $\Gamma$. We start with an important definition: Since $\Gamma<{\mbox{Is}}({X})$ is discrete and ${X}$ is proper, the [[**]{}orbit counting function]{} $$N_\Gamma(R):=\#\{\gamma\in\Gamma\colon d({{o}},\gamma {{o}})\leq R\}$$ is finite for all $R>0$.
The number $$\delta_\Gamma =\limsup_{R\to +\infty}\frac{\ln\bigl(N_\Gamma(R)\bigr)}{R}$$ is called the [[**]{}critical exponent]{} of $\Gamma$; it is independent of the choice of base point ${{o}}\in{X}$ and satisfies the equality $$\label{critexp} \delta_\Gamma= \inf\{s>0\colon \sum_{\gamma\in\Gamma} {\mathrm{e}}^{-s d({{o}},\gamma {{o}})} \ \text{ converges}\}.$$ A discrete group $\Gamma$ is said to be [[**]{}divergent]{} if $$\sum_{\gamma\in\Gamma} {\mathrm{e}}^{-\delta_\Gamma d({{o}},\gamma{{o}})}\quad\text{diverges},$$ and [[**]{}convergent]{} otherwise (that is, when the infimum in (\[critexp\]) is attained). Given $\delta\ge 0$, a $\delta$-dimensional $\Gamma$-invariant conformal density is a continuous map $\mu\,$ of ${X}$ into the cone of positive finite Borel measures on ${\partial{X}}$ such that $\mu_{{o}}:=\mu({{o}})$ is supported on the limit set ${L_\Gamma}$, $\mu\,$ is $\Gamma$-equivariant (that is, $\gamma_*\mu_x=\mu_{\gamma x}$ for all $\gamma\in\Gamma$, $x\in{X}$)[^1] and $$\label{conformality} \frac{{\mathrm{d}}\mu_x}{{\mathrm{d}}\mu_{{o}}}(\eta)={\mathrm{e}}^{\delta {{\mathcal B}}_{\eta}({{o}},x)} \quad\mbox{for any}\ \,x\in{X}\ \text{and }\ \eta\in{\mbox{supp}}(\mu_{{o}}).$$ The existence of a $\delta$-dimensional $\Gamma$-invariant conformal density for $\delta=\delta_\Gamma$ goes back to S. J. Patterson in the case of Fuchsian groups, and it turns out that his explicit construction extends to arbitrary discrete isometry groups of Hadamard spaces with positive critical exponent (see for example [@MR1465601 Lemma 2.2]). This condition is satisfied for any discrete rank one group $\Gamma< {\mbox{Is}}({X})$ as it contains by definition a non-abelian free subgroup generated by two independent rank one elements. We now fix $\delta>0$ and let $\mu=(\mu_x)_{x\in{X}}$ be a $\delta$-dimensional $\Gamma$-invariant conformal density.
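Notice that normalizing at the base point ${{o}}$ is no restriction: by the cocycle identity ${{\mathcal B}}_\eta(y,x)={{\mathcal B}}_\eta(y,{{o}})+{{\mathcal B}}_\eta({{o}},x)$ for Busemann functions, (\[conformality\]) yields a chain rule for all relative densities (a routine verification, recorded here for convenience):

```latex
\frac{{\mathrm{d}}\mu_x}{{\mathrm{d}}\mu_y}(\eta)
  =\frac{{\mathrm{d}}\mu_x}{{\mathrm{d}}\mu_{{o}}}(\eta)
   \Bigl(\frac{{\mathrm{d}}\mu_y}{{\mathrm{d}}\mu_{{o}}}(\eta)\Bigr)^{-1}
  ={\mathrm{e}}^{\delta\bigl({{\mathcal B}}_\eta({{o}},x)-{{\mathcal B}}_\eta({{o}},y)\bigr)}
  ={\mathrm{e}}^{\delta{{\mathcal B}}_\eta(y,x)}
  \qquad\text{for }\ \eta\in{\mbox{supp}}(\mu_{{o}}).
```

In particular, a $\delta$-dimensional $\Gamma$-invariant conformal density is completely determined by the single measure $\mu_{{o}}$.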
By definition of a conformal density we have $0<\mu_{{o}}({\partial{X}})<\infty$, and we will assume that $\mu_{{o}}$ is normalized such that $\mu_{{o}}({\partial{X}})=1$. Before we construct a geodesic current from a conformal density we want to list a few results concerning such densities. We first turn our attention to the radial limit set defined by (\[radlimpoint\]). Recall that for $y\in {X}$ and $r>0$, $B_y(r)\subseteq{X}$ denotes the open ball of radius $r$ centered at $y\in{X}$. If $x\in {X}$ we define the [[**]{}shadow]{} $${\mathcal O}_{r}(x,y):=\{\eta \in{\partial{X}}\colon \sigma_{x,\eta}({\mathbb{R}}_+)\cap B_y(r)\neq\emptyset\};$$ if $\xi\in{\partial{X}}$ we set $${\mathcal O}_{r}(\xi,y):=\{\eta \in{\partial{X}}\colon \exists \ v\in{\partial}^{-1}(\xi,\eta) \quad \text{with }\ v(0)\in B_y(r)\}.$$ Notice that with these definitions the radial limit set can be written as $${L_\Gamma^{\small{\mathrm{rad}}}}=\bigcup_{c>0}\bigcap_{R>1} \bigcup_{\begin{smallmatrix}{\scriptscriptstyle\gamma\in\Gamma}\\{\scriptscriptstyle d({{o}},\gamma{{o}})>R}\end{smallmatrix}}{\mathcal O}_c({{o}},\gamma{{o}});$$ again, the definition is independent of the choice of base point ${{o}}\in{X}$. One cornerstone result concerning $\delta$-dimensional $\Gamma$-invariant conformal densities is Sullivan’s shadow lemma, which gives an asymptotic estimate for the measure of the shadows ${\mathcal O}_r({{o}},\gamma{{o}})$ as $d({{o}},\gamma{{o}})$ tends to infinity; obviously this will lead to estimates for the measure of the radial limit set. We will need here an extension of the shadow lemma [@MR2290453 Lemma 3.5] to the following refined versions of the shadows above which were first introduced by T.
Roblin ([@MR2057305]): For $r>0$, $c>0$ and $x,y\in{X}$ we set $$\begin{aligned} {\mathcal O}^+_{r,c}(x,y) &:= \{\xi\in{\partial{X}}\colon \exists\, z\in B_x(r)\ {\mbox{such}\ \mbox{that}\ }\sigma_{z,\xi}({\mathbb{R}}_+)\cap B_y(c)\neq\emptyset\},\nonumber \\ {\mathcal O}^-_{r,c}(x,y) &:= \{\xi\in{\partial{X}}\colon \forall\, z\in B_x(r)\ \mbox{we have}\ \sigma_{z,\xi}({\mathbb{R}}_+)\cap B_y(c)\neq\emptyset\}.\nonumber \end{aligned}$$ It is clear from the definitions that $$\label{shadrelation} {\mathcal O}^-_{r,c}(x,y)=\bigcap_{z\in B_x(r)} {\mathcal O}_{c}(z,y) \subset{\mathcal O}_{c}(x,y)\subseteq \bigcup_{z\in B_x(r)} {\mathcal O}_{c}(z,y)={\mathcal O}^+_{r,c}(x,y);$$ moreover, ${\mathcal O}^-_{r,c}(x,y)$ is non-increasing in $r$ and non-decreasing in $c$. We further have the following generalization of Sullivan’s shadow lemma: [@LinkPicaud Proposition 3 and Remark 3]\[shadowlemma\]Let ${X}$ be a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group. Let $\delta>0 $ and $\mu\,$ a $\delta$-dimensional $\Gamma$-invariant conformal density. Then for any $r>0$ there exists a constant $c_0\ge r$ with the following property: If $c\geq c_0$ there exists a constant $D=D(c)>1$ such that for all $\gamma\in\Gamma$ with $d({{o}},{{\gamma{{o}}}})>2c$ we have $$\frac1{D}\; {\mathrm{e}}^{-\delta d({{o}},\gamma {{o}})}\le \mu_{{o}}\big({\mathcal O}_{r,c}^-({{o}},\gamma{{o}})\big)\le \mu_{{o}}\big({\mathcal O}_c({{o}},{{\gamma{{o}}}})\big)\le \mu_{{o}}\big({\mathcal O}_{c,c}^+({{o}},{{\gamma{{o}}}})\big)\le D {\mathrm{e}}^{-\delta d({{o}},\gamma{{o}})}.$$ Moreover, the upper bound holds for [[**]{}all]{} $\gamma\in\Gamma$. The proof of this proposition in the special case of a Hadamard [[**]{}manifold]{} ${X}$ was given in [@LinkPicaud]; however, the proof there does not use the fact that ${X}$ is a manifold.
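To illustrate how the upper bound is typically used, here is a sketch of a standard Borel–Cantelli type estimate (the precise statement is Lemma \[convseries\] below): using the shadow description of the radial limit set, for fixed $c\ge c_0$ and every $R>1$,

```latex
\mu_{{o}}\Bigl(\,\bigcap_{R'>1}\ \bigcup_{\gamma\in\Gamma,\ d({{o}},\gamma{{o}})>R'}
    {\mathcal O}_c({{o}},\gamma{{o}})\Bigr)
  \ \le\ \sum_{\gamma\in\Gamma,\ d({{o}},\gamma{{o}})>R}
      \mu_{{o}}\bigl({\mathcal O}_c({{o}},\gamma{{o}})\bigr)
  \ \le\ D \sum_{\gamma\in\Gamma,\ d({{o}},\gamma{{o}})>R}
      {\mathrm{e}}^{-\delta d({{o}},\gamma{{o}})}.
```

If the Poincaré series converges at $\delta$, the right hand side is the tail of a convergent series and tends to zero as $R\to\infty$; since the sets in the union over $c>0$ are monotone in $c$, a countable union over $c\in{\mathbb{N}}$ then gives $\mu_{{o}}({L_\Gamma^{\small{\mathrm{rad}}}})=0$.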
Next we state some results from Section 3 in [@MR2290453] and from Section 5 in [@LinkPicaud] which all rely on the shadow lemma above and which remain valid in the setting of non-Riemannian Hadamard spaces. \[confdensexistence\][@MR2290453 Proposition 3.7] If $\mu$ is a $\delta$-dimensional $\Gamma$-invariant conformal density, then $\delta\ge\delta_\Gamma$. [@LinkPicaud Lemma 5.1]\[convseries\]If $\ \displaystyle \sum_{\gamma\in\Gamma} {\mathrm{e}}^{-\delta d({{o}},{{\gamma{{o}}}})}$ converges, then $\mu_{{o}}({L_\Gamma^{\small{\mathrm{rad}}}})=0$. In particular, if $\delta>\delta_\Gamma$, then from (\[critexp\]) we immediately get $\mu_{{o}}({L_\Gamma^{\small{\mathrm{rad}}}})=0$. Notice that the converse statement to Lemma \[convseries\] is much more intricate; we will have to postpone its proof to Section \[divergentmeansconservative\] as we will need to work with a weak Ricks’ measure on $\quotient{\Gamma}{[{{\mathcal G}}]}$. The following lemma states that $\Gamma$ acts ergodically on the radial limit set with respect to the measure class defined by $\mu$: [@LinkPicaud Proposition 4]\[ergodicity\]If $A\subseteq {L_\Gamma^{\small{\mathrm{rad}}}}$ is a $\Gamma$-invariant Borel subset of ${L_\Gamma^{\small{\mathrm{rad}}}}$, then $\mu_{{o}}(A)=0$ or $\mu_{{o}}(A)=\mu_{{o}}({\partial{X}})=1$. By a standard argument (see for example the proof of Theorem 4.2.1 in [@MR1041575]) we get the following \[uniqueness\] If $\mu_{{o}}({L_\Gamma^{\small{\mathrm{rad}}}})>0$ then $\delta=\delta_\Gamma$ and $\mu\, $ is the unique $\delta_\Gamma$-dimensional $\Gamma$-invariant conformal density normalized such that $\mu_{{o}}({\partial{X}})=1$. Finally, the following statement clarifies the possible existence of atoms: [@LinkPicaud Proposition 5]\[atomicpart\]A radial limit point cannot be a point mass for a $\delta$-dimensional $\Gamma$-invariant conformal density $\mu$. 
We are now going to construct a geodesic current from a $\delta$-dimensional $\Gamma$-invariant conformal density. Notice that according to Lemma \[confdensexistence\] such a density only exists if $\delta\ge\delta_\Gamma$. First we define for $y\in {X}$ a map $${{Gr}}_y:{\partial{X}}\times{\partial{X}}\to{\mathbb{R}},\quad (\xi,\eta)\mapsto \frac12 \sup_{x\in{X}} \bigl({{\mathcal B}}_\xi(y,x)+{{\mathcal B}}_\eta(y,x)\bigr).$$ Obviously, the map ${{Gr}}_y$ has values in $[0,\infty]$, and comparing it to the definition by R. Ricks following [@Ricks Lemma 5.1] we have the relation $ {{Gr}}_y(\xi,\eta)=-2 \beta_y(\xi,\eta)$ for all $(\xi,\eta)\in{\partial{X}}\times{\partial{X}}$. Hence according to Lemma 5.2 in [@Ricks] ${{Gr}}_y(\xi,\eta)$ is finite if and only if $(\xi,\eta)\in{\partial}{{\mathcal G}}$; moreover, $$\label{GromovProd} {{Gr}}_y(\xi,\eta)=\frac12\bigl({{\mathcal B}}_\xi(y,z)+{{\mathcal B}}_\eta(y,z)\bigr)$$ if and only if $z\in (\xi\eta)$ lies on the image of a geodesic joining $\xi$ and $\eta$. So the map ${{Gr}}_y$ extends the [[**]{}Gromov product]{} defined in [@MR1341941] via the formula (\[GromovProd\]) from ${\partial}{{\mathcal G}}$ to ${\partial{X}}\times{\partial{X}}$. By Lemma 5.3 in [@Ricks] ${{Gr}}_y$ is continuous on ${\partial}{{\mathcal R}}$ and lower semicontinuous on ${\partial{X}}\times{\partial{X}}$. We now define as in Section 7 of [@Ricks] a measure $\overline{\mu}\,$ on ${\partial}{{\mathcal G}}\subseteq{\partial{X}}\times{\partial{X}}$ via $$\label{overlinemudef} d\overline{\mu}(\xi,\eta)={\mathrm{e}}^{2\delta {{Gr}}_{{o}}(\xi,\eta)} {\mathbbm 1}_{{\partial}\mathcal{R}}(\xi,\eta){\mathrm{d}}\mu_{{o}}(\xi){\mathrm{d}}\mu_{{o}}(\eta).$$ As ${\partial}{{\mathcal G}}$ is locally compact and as $\overline\mu\,$ is finite for all compact subsets of ${\partial}{{\mathcal G}}$, the measure $\overline\mu\,$ is Radon; it is non-trivial by (\[overlinemunotzero\]).
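The invariance of $\overline\mu\,$ under the diagonal $\Gamma$-action can be checked directly; a sketch, for $(\xi,\eta)\in{\partial}{{\mathcal R}}$ where (\[GromovProd\]) applies: conformality (\[conformality\]) and $\Gamma$-equivariance give ${\mathrm{d}}\mu_{{o}}(\gamma\xi)={\mathrm{e}}^{\delta{{\mathcal B}}_\xi({{o}},\gamma^{-1}{{o}})}{\mathrm{d}}\mu_{{o}}(\xi)$, while the cocycle property of the Busemann functions yields ${{Gr}}_{{o}}(\gamma\xi,\gamma\eta)={{Gr}}_{\gamma^{-1}{{o}}}(\xi,\eta)={{Gr}}_{{o}}(\xi,\eta)-\frac12\bigl({{\mathcal B}}_\xi({{o}},\gamma^{-1}{{o}})+{{\mathcal B}}_\eta({{o}},\gamma^{-1}{{o}})\bigr)$, so the two factors cancel:

```latex
{\mathrm{d}}\overline\mu(\gamma\xi,\gamma\eta)
  = {\mathrm{e}}^{2\delta {{Gr}}_{{o}}(\gamma\xi,\gamma\eta)}\,
    {\mathrm{d}}\mu_{{o}}(\gamma\xi)\,{\mathrm{d}}\mu_{{o}}(\gamma\eta)
  = {\mathrm{e}}^{2\delta {{Gr}}_{{o}}(\xi,\eta)}\,
    {\mathrm{d}}\mu_{{o}}(\xi)\,{\mathrm{d}}\mu_{{o}}(\eta)
  = {\mathrm{d}}\overline\mu(\xi,\eta).
```

The indicator ${\mathbbm 1}_{{\partial}\mathcal{R}}$ causes no trouble here, as $\Gamma$ acts by isometries and hence preserves ${\partial}{{\mathcal R}}$.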
Moreover, $\Gamma$-equivariance and conformality (\[conformality\]) of the $\delta$-dimensional $\Gamma$-invariant conformal density $\mu=(\mu_x)_{x\in{X}}$ occurring in the formula imply that $\overline\mu\,$ is invariant by the diagonal action of $\Gamma$ (and also independent of the choice of ${{o}}\in{X}$). Hence as described at the end of Section \[geodcurrentmeasures\] we can construct from the geodesic current $\overline\mu\,$ Knieper’s measure $m_\Gamma$ (provided $\overline\mu\,$ is supported on ${\partial}{{\mathcal Z}}$ or, more generally, if there exists a geodesic flow invariant Borel measure $\lambda_{(\xi\eta)}$ on the set $(\xi\eta)\subseteq{X}$ for $\overline\mu$-almost every $(\xi,\eta)\in{\partial}{{\mathcal G}}$) and both Ricks’ weak measure $\overline m_\Gamma$ on $\quotient{\Gamma}{[{{\mathcal G}}]}$ and Ricks’ measure $m_\Gamma^0$ on $\quotient{\Gamma}{{{\mathcal G}}}$ (which will be trivial if $\overline\mu({\partial}{{\mathcal Z}})=0$). Combining Lemma \[convseries\] with Lemma \[consdiss\] (b) we get the following \[confgivesdissipative\] If $\delta>\delta_\Gamma$ or if $\,\Gamma$ is convergent, then $\overline\mu( {\partial}{{\mathcal G}}_{\Gamma}^{\small{\mathrm{rad}}})=0$, and hence the dynamical systems $\bigl(\quotient{\Gamma}{{{\mathcal G}}}, (g^t_\Gamma)_{t\in{\mathbb{R}}}, m_\Gamma\bigr)$ with Knieper’s measure $m_\Gamma$ and $\bigl(\quotient{\Gamma}{ [{{\mathcal G}}]}, (g^t_\Gamma)_{t\in{\mathbb{R}}}, \overline m_\Gamma\bigr)$ with the weak Ricks’ measure $\overline m_\Gamma$ associated to $\overline \mu\,$ are dissipative and non-ergodic unless $\overline \mu$ is supported on a single orbit $\, \Gamma\cdot (\xi,\eta)\subseteq{\partial}{{\mathcal G}}$.
Notice that if ${X}$ is a proper CAT$(-1)$-space and $\Gamma<{\mbox{Is}}({X})$ a non-elementary discrete group, then the so-called [[**]{}Bowen-Margulis measure]{} (see for example [@MR2057305 p.12] or [@MR1207579 Section 3]) on $\quotient{\Gamma}{{{\mathcal G}}}$ – which in this case equals $\quotient{\Gamma}{{{\mathcal Z}}}$ – is precisely Knieper’s measure $m_\Gamma$ or equivalently Ricks’ measure $m_\Gamma^0$ associated to the geodesic current $\overline\mu$. We finally mention a few further properties of the quasi-product geodesic current $\overline\mu\,$ defined by (\[overlinemudef\]). First, as $v(0)\in B_{{o}}(R)$ implies ${{Gr}}_{{o}}(v^-,v^+)\le R$ (by (\[GromovProd\]) with $z=v(0)$, since $|{{\mathcal B}}_\xi({{o}},z)|\le d({{o}},z)$), we have $$\overline\mu\bigl({\partial}\{v\in{{\mathcal G}}\colon v(0)\in B_{{o}}(R)\}\bigr)\le {\mathrm{e}}^{2\delta R}$$ for all $R>0$; hence $$\Delta\stackrel{(\ref{boundongrowth})}{ =}\sup \Big\{ \frac{\ln \overline\mu\bigl({\partial}{\mathcal B}(R)\bigr)}{R}\colon R>0\Big\}=2\delta.$$ Second, if $\mu_{{o}}({L_\Gamma^{\small{\mathrm{rad}}}})=\mu_{{o}}({\partial{X}})=1$, then $\mu_{{o}}$ is non-atomic by Proposition \[atomicpart\]. So according to Lemma \[regnotnecessary\] the geodesic current $\overline\mu\,$ is given by $$\label{overlinemudefconfcons} d\overline{\mu}(\xi,\eta)={\mathrm{e}}^{2\delta {{Gr}}_{{o}}(\xi,\eta)} {\mathrm{d}}\mu_{{o}}(\xi){\mathrm{d}}\mu_{{o}}(\eta),$$ that is, the factor ${\mathbbm 1}_{{\partial}\mathcal{R}}$ in (\[overlinemudef\]) can be removed. Moreover, all the equivalent statements of Theorem \[conservativestatement\] hold. Conservativity in the case of divergent groups {#divergentmeansconservative} ============================================== As before, ${X}$ will be a proper Hadamard space, $\Gamma$ a discrete rank one group and ${{o}}\in{X}$ a fixed base point on an invariant geodesic of a rank one element in $\Gamma$.
The goal of this section is to prove the converse statement to Lemma \[convseries\], that is: if $$\ \displaystyle \sum_{\gamma\in\Gamma} {\mathrm{e}}^{-\delta d({{o}},{{\gamma{{o}}}})} \quad\text{diverges}, \quad \text{then }\quad \mu_{{o}}({L_\Gamma^{\small{\mathrm{rad}}}})>0.$$ However, by Lemma \[confdensexistence\] a $\delta$-dimensional $\Gamma$-invariant conformal density $\mu$ only exists if $\delta\ge \delta_\Gamma$; for $\delta>\delta_\Gamma$ the Poincaré series $$\sum_{\gamma\in\Gamma} {\mathrm{e}}^{-\delta d({{o}},\gamma{{o}})}$$ converges according to the alternative definition (\[critexp\]) of the critical exponent of $\Gamma$. So from here on we will assume that $\Gamma$ is divergent and that $\mu=(\mu_x)_{x\in{X}}$ is a $\delta_\Gamma$-dimensional $\Gamma$-invariant conformal density. In order to prove that the radial limit set of $\Gamma$ has full measure with respect to $\mu_{{o}}$ we follow Roblin’s exposition as in [@LinkPicaud Section 6]. As we want to apply the generalization of the second Borel-Cantelli lemma, Lemma 2 in [@MR0766098], we need to work with a weak Ricks’ measure $\overline m_\Gamma$ on $\quotient{\Gamma}{[{{\mathcal G}}]}$ and find an appropriate Borel set $\overline K\subseteq [{{\mathcal G}}]$ whose projection to $\quotient{\Gamma}{[{{\mathcal G}}]}$ has finite $\overline m_\Gamma$-measure and which satisfies the two Rényi inequalities (\[upbound\]) and (\[lowbound\]) below. Notice that in order to get a better control – and a proof even without the presence of a [[**]{}zero width]{} rank one element – apart from using the weak Ricks’ measure we need to choose the set $\overline K$ more carefully than in [@LinkPicaud Section 6]. Before we proceed we need a result concerning the following slightly refined version of the corridors first introduced by T.
Roblin ([@MR2057305]): For $r>0$, $c>0$ and $x,y\in{X}$ we set $$\begin{aligned} {\mathcal L}_{r,c}(x,y) &:= \{(\xi,\eta)\in{\partial}{{\mathcal G}}\colon \exists\, v\in{\partial}^{-1}(\xi,\eta)\ \exists\, t>0\ {\mbox{such}\ \mbox{that}\ }\nonumber \\ &\hspace*{4.8cm}v(0)\in B_x(r),\ v( t)\in B_y(c) \}.\label{Lrc}\end{aligned}$$ Notice that in the case of a Hadamard manifold the definition is equivalent to the one given in Section 2 of [@LinkPicaud]; however, due to the fact that the extension of a geodesic segment to a geodesic line is in general not unique in a singular Hadamard space the definition (8) given there is not convenient here. It is clear from the definitions that ${\mathcal L}_{r,c}(x,y) $ is non-decreasing in both $r$ and $c$. Moreover, for all $r',c'>0$, $x'\in B_x(r')$ and $y'\in B_y(c')$ with $d(x',y')>r+r'+c+c'$ we have $$\label{lrcinclus} {\mathcal L}_{r,c}(x,y)\subseteq {\mathcal L}_{r+r',c+c'}(x',y'),$$ and the following result from [@LinkPicaud] (whose proof extends to non-Riemannian Hadamard spaces) asserts that for suitable $r$ and $c$ the sets $ {\mathcal L}_{r,c}({{o}},\gamma {{o}})$ are big enough for all but a finite number of elements in $\Gamma$. Recall that $\Gamma<{\mbox{Is}}({X})$ was assumed to be a discrete rank one group and that the base point ${{o}}$ belongs to an invariant geodesic of a rank one element $h\in\Gamma$. [@LinkPicaud Proposition 1]\[lrcinproduct\]Let $r_0>{\mathrm{width}}(h)$ and $U^-,U^+\subseteq{\overline{{X}}}$ the open disjoint neighborhoods of $h^-$, $h^+$ provided by Lemma \[joinrankone\] for $r_0$. 
Then there exists a finite set $\Lambda\subseteq\Gamma$ such that the following holds: For any $c>0$ there exists $R\gg 1$ such that if $\gamma \in \Gamma$ satisfies $d({{o}},\gamma {{o}})>R$, then for some $\beta\in\Lambda$ we have $${\mathcal L}_{r,c}({{o}},\beta\gamma {{o}})\cap \big(U^-\times U^+\big)\supseteq (U^- \cap {\partial{X}}) \times {\mathcal O}_{r,c}^-({{o}},\beta\gamma{{o}})\qquad\mbox{for all }\ r\ge r_0.$$ We fix $r=r_0>{\mathrm{width}}(h)$ and open disjoint neighborhoods $U^-,U^+\subseteq{\overline{{X}}}$ of $h^-,h^+$ provided by Lemma \[joinrankone\] for $r_0$. Let $\Lambda\subseteq\Gamma$ be the finite subset provided by Proposition \[lrcinproduct\]. We then set $$\rho:=\max\{d({{o}},\beta{{o}})\colon \beta\in\Lambda\}$$ and – with the constant $c_0>r$ from the shadow lemma, Proposition \[shadowlemma\] – fix $$c>c_0+\rho.$$ Notice that by choice of $c_0>r=r_0>{\mathrm{width}}(h)$ we always have $c>{\mathrm{width}}(h)$. For this fixed constant $c$ and with the sets $U^-,U^+\subseteq{\overline{{X}}}$ as above we define $$\label{Kdef} \overline K:=\{ g^s [v] \colon v \in {{\mathcal G}},\ v(0)\in B_{{o}}(c),\ (v^-,v^+)\in\Gamma (U^-\times U^+),\ s\in \bigl(-\frac{c}2,\frac{c}2\bigr) \},$$ which is an open subset of $[{{\mathcal R}}]$. Moreover, every representative $u\in{{\mathcal G}}$ of $[u]\in\overline K$ satisfies ${\mathrm{width}}(u)\le 2c$: Indeed, $[u]\in\overline K$ implies that $\alpha u^-\in U^-$ and $ \alpha u^+\in U^+$ for some $\alpha\in\Gamma$; hence by Lemma \[joinrankone\] the geodesic $\alpha\cdot u\in{{\mathcal G}}$ is rank one and ${\mathrm{width}}(\alpha\cdot u)\le 2c$. The claim then follows from ${\mbox{Is}}({X})$-invariance of the width function. We further remark that by construction every orbit of the geodesic flow which enters $\overline K$ spends at least time $c$ and at most time $3c$ in it.
In order to make the exposition of the proof of Proposition \[divseries\] below more transparent, we first state a few easy geometric estimates concerning intersections of the form $$\overline K\cap g^{-t}\gamma \overline K\qquad\text{and}\quad \overline K\cap g^{-t}\gamma \overline K\cap g^{-s-t}\varphi \overline K$$ in $[{{\mathcal G}}]$ with $t,s>0$ and $\gamma,\varphi\in \Gamma$. The first one gives a relation to the sets ${\mathcal L}_{c,c}({{o}},\gamma{{o}})$ introduced in (\[Lrc\]): \[K0calL\] $$\begin{aligned} {\mathcal L}_{c,c}({{o}},\gamma{{o}})\cap \Gamma(U^-\times U^+)&\subseteq {\partial}\big(\{ \overline K\cap g^{-t}\gamma \overline K\colon t>0\}\big)\\ &\subseteq {\mathcal L}_{2c,2c}({{o}},\gamma{{o}})\cap \Gamma(U^-\times U^+)\end{aligned}$$ For the first inclusion we let $(\xi,\eta)\in {\mathcal L}_{c,c}({{o}},\gamma{{o}}) \cap \Gamma(U^-\times U^+)$ be arbitrary. Then there exists $\alpha\in\Gamma$ such that $(\xi,\eta)\in \alpha(U^-\times U^+)$, and by definition (\[Lrc\]) there exists $v\in {{\mathcal G}}$ with $(v^-,v^+)=(\xi,\eta)$, $d({{o}}, v(0))<c$ and $d(\gamma{{o}}, v(t))<c$ for some $t>0$. We conclude that $[v]\in\overline K$ and, since $\gamma^{-1}(v^-,v^+)\in \gamma^{-1}\alpha(U^-\times U^+)\subseteq \Gamma (U^-\times U^+)$, also $\gamma^{-1}g^t [v]\in \overline K$. For the second inclusion we let $(\xi,\eta)\in {\partial}\bigl(\{ \overline K\cap g^{-t}\gamma \overline K\colon t>0\}\bigr)$. Then $(\xi,\eta)\in\Gamma(U^-\times U^+)$ and there exist $v,u \in {\partial}^{-1}(\xi,\eta)$, $v\sim u$, such that $v(0)\in B_{{o}}(c)$ and $(g^t u)(0)\in B_{\gamma{{o}}}(c)$ for some $t>0$. Since $\xi\in \alpha U^-$ and $\eta\in \alpha U^+$ for some $\alpha \in\Gamma$ we know from Lemma \[joinrankone\] (since $c>{\mathrm{width}}(h)$ and ${{o}}$ was chosen on an invariant geodesic of the rank one element $h$) that every rank one geodesic $w\in\mathcal{R}$ joining $\alpha^{-1}\xi$ and $\alpha^{-1}\eta$ has ${\mathrm{width}}(w)\le 2c$.
Now both $\alpha^{-1}v$ and $\alpha^{-1}u$ are such rank one geodesics and therefore we get from $u\sim v$ $$d\bigl(u(s), v(s)\bigr)= d\bigl(\alpha^{-1}u(s), \alpha^{-1} v(s)\bigr)\le 2c\quad\text{for all } \ s\in{\mathbb{R}}.$$ Choosing $w\in {{\mathcal G}}$ with $w\sim v$ such that $$d\bigl(u(s), w(s)\bigr)=d\bigl(w(s), v(s)\bigr)=\frac12 d\bigl(u(s), v(s)\bigr) \le c$$ for all $s\in{\mathbb{R}}$, we conclude that $\ (\xi,\eta)=(w^-,w^+)\in {\mathcal L}_{2c,2c}({{o}},\gamma{{o}}).$ As a direct consequence we obtain that for all $t,s>0$ and all $\gamma,\varphi\in \Gamma$ $$\begin{aligned} \label{INC2} {\partial}\big(\overline K\cap g^{-t}\gamma \overline K\cap g^{-t-s}\varphi \overline K\big)\subseteq {\mathcal L}_{2c,2c}({{o}},\varphi{{o}})\cap \Gamma(U^-\times U^+).\end{aligned}$$ The following geometric estimate gives a relation between the constants $t,s>0$ and the elements $\gamma,\varphi\in\Gamma$: \[IT\] $\overline K\cap g^{-t}\gamma \overline K\ne\emptyset\ $ implies $$|d({{o}},\gamma{{o}})-t|\le 5c,$$ and $\overline K\cap g^{-t}\gamma \overline K\cap g^{-s-t}\varphi \overline K\ne \emptyset\ $ further gives $$0\leq d({{o}},\gamma{{o}})+d(\gamma{{o}},\varphi{{o}})-d({{o}},\varphi{{o}})\le 15 c.$$ Assume that $\overline K\cap g^{-t}\gamma \overline K\ne \emptyset$.
Then there exist $v,u\in {{\mathcal G}}$ with $v\sim u$, $(v^-,v^+)=(u^-,u^+)\in\Gamma(U^-\times U^+)$ and $s,r\in (-c/2,c/2)$ such that $$(g^s v)(0)=v(s)\in B_{{o}}(c) \quad\text{and }\ (g^{r}g^t u)(0)=u(r+t)\in B_{\gamma{{o}}}(c).$$ So in particular – as in the proof of the second inclusion above – we get $$d\bigl(u(s), v(s)\bigr)\le 2c\quad\text{for all }\ s\in{\mathbb{R}}.$$ Hence $$\begin{aligned} d({{o}},\gamma{{o}})&\le d({{o}}, v(s))+d(v(s),v(0))+ d(v(0),v(t))+d(v(t), u(t))\\ &\qquad +d(u(t),u(r+t))+d(u(r+t),\gamma{{o}})\le c+|s|+t+2c+|r|+ c\le t+ 5 c \end{aligned}$$ and similarly the reverse inequality $$d({{o}},\gamma{{o}})\ge t- 5c.$$ If $ \overline K\cap g^{-t}\gamma \overline K\cap g^{-s-t}\varphi \overline K\ne \emptyset$, then from the first claim we get $$|d({{o}},\gamma{{o}})-t|\le 5c,\quad |d({{o}},\varphi{{o}})-s-t|\le 5c \quad\text{and }\quad |d(\gamma{{o}},\varphi{{o}})-s|\le 5c.$$ So we conclude again by the triangle inequality: $$0\le d({{o}},\gamma{{o}})+d(\gamma{{o}},\varphi{{o}})-d({{o}},\varphi{{o}})\le (t+5c)+(s+5c)-(s+t-5c)=15c.$$ Finally we remark that if $(\xi,\eta)\in{\mathcal L}_{2c,2c}({{o}},\varphi{{o}})$, then there exists $z\in (\xi\eta)\cap B_{{{o}}}(2c)$ such that $${{Gr}}_{{{o}}}(\xi,\eta)=\frac12\big({{\mathcal B}}_\xi({{o}},z)+{{\mathcal B}}_\eta({{o}},z)\big)$$ which immediately gives the estimate $$\label{GROMOV} {{Gr}}_{{{o}}}(\xi,\eta) \le 2c.$$ Recall that $\mu$ is a $\delta_\Gamma$-dimensional $\Gamma$-invariant conformal density. Let $\overline\mu$ be the geodesic current on ${\partial}{{\mathcal G}}$ given by the formula (\[overlinemudef\]) and $\overline m_\Gamma$ the induced weak Ricks’ measure on $\quotient{\Gamma}{[{{\mathcal G}}]}$ (which is supported on $\quotient{\Gamma}{[{{\mathcal R}}]}$).
Notice that for the projection $\overline K_\Gamma\subseteq \quotient{\Gamma}{[{{\mathcal R}}]}$ of the set $\overline K\subseteq [{{\mathcal R}}]$ defined in (\[Kdef\]) to $\quotient{\Gamma}{[{{\mathcal R}}]}$ we have $$0< \overline m_\Gamma(\overline K_\Gamma)\le \overline m(\overline K)\le 3c\cdot {\mathrm{e}}^{2c\delta } \underbrace{ (\mu_{{o}}\otimes\mu_{{o}})\bigl(\Gamma (U^-\times U^+)\bigr)}_{\le 1}<\infty.$$ We are now going to prove the converse to Lemma \[convseries\] in our setting of a proper Hadamard space ${X}$ and a discrete rank one group $\Gamma<{\mbox{Is}}({X})$. Our result here generalizes Proposition 1 in [@LinkPicaud] as we neither require ${X}$ to be a manifold nor $\Gamma$ to contain a strong rank one isometry or a zero width rank one isometry. \[divseries\]If $\ \sum_{\gamma\in\Gamma} {\mathrm{e}}^{-\delta_\Gamma d({{o}},{{\gamma{{o}}}})}\,$ diverges, then $\mu_{{o}}({L_\Gamma^{\small{\mathrm{rad}}}})>0$. We argue by contradiction, assuming that the sum $\ \sum_{\gamma\in\Gamma} {\mathrm{e}}^{-\delta_\Gamma d({{o}},{{\gamma{{o}}}})}$ diverges and that $\,\mu_{{o}}({L_\Gamma^{\small{\mathrm{rad}}}})=0$.
We will show that for the Borel set $\overline K\subseteq [{{\mathcal R}}]$ defined by (\[Kdef\]) the following inequalities hold for $T$ sufficiently large with universal constants $C,C'>0$: $$\label{upbound} \int_{0}^T {\mathrm{d}}t \int_0^T {\mathrm{d}}s \sum_{\gamma,\varphi\in\Gamma} \overline m(\overline K\cap g^{-t}\gamma \overline K\cap g^{-t-s}\varphi \overline K) \le C \biggl(\sum_{\begin{smallmatrix}{\scriptscriptstyle \gamma\in\Gamma}\\ {\scriptscriptstyle d(o,{{\gamma{{o}}}})\le T}\end{smallmatrix}} {\mathrm{e}}^{-\delta_\Gamma d({{o}},\gamma{{o}})}\biggr)^2$$ $$\label{lowbound} \int_{0}^T {\mathrm{d}}t \sum_{\gamma\in\Gamma} \overline m(\overline K\cap g^{-t}\gamma \overline K) \ge C' \sum_{\begin{smallmatrix}{\scriptscriptstyle\gamma\in\Gamma}\\ {\scriptscriptstyle d(o,{{\gamma{{o}}}})\le T}\end{smallmatrix}} {\mathrm{e}}^{-\delta_\Gamma d({{o}},\gamma{{o}})}$$ Once these inequalities are proved and under the assumption that the sum $\ \sum_{\gamma\in\Gamma}{\mathrm{e}}^{-\delta_\Gamma d({{o}},\gamma{{o}})}$ diverges one can apply the above-mentioned generalization of the second Borel-Cantelli lemma, and the conclusion follows as in [@MR2057305 p. 20] (applying [@MR0766098 Lemma 2] to the finite measure $M=\overline m_\Gamma$ restricted to $\overline K_\Gamma\subseteq \quotient{\Gamma}{[{{\mathcal R}}]} $), namely $$\overline m_\Gamma\bigl(\{ [v]\in \quotient{\Gamma}{[{{\mathcal G}}]} \colon \int_0^\infty {{\mathbbm 1}}_{\overline K_\Gamma\cap g^{-t}_\Gamma \overline K_\Gamma}([v])\,{\mathrm{d}}t=\infty\}\bigr)>0.$$ This means that the dynamical system $\bigl(\quotient{\Gamma}{[{{\mathcal G}}]}, g_\Gamma, \overline m_\Gamma\bigr)$ is not dissipative. But by Lemma \[consdiss\] (b) this is a contradiction to $\mu_{{o}}({L_\Gamma^{\small{\mathrm{rad}}}})=0$.
We begin with the proof of (\[upbound\]): From the definition of the weak Ricks’ measure and the estimates (\[INC2\]) and (\[GROMOV\]) it follows that for all $\gamma,\varphi\in \Gamma$ $$\begin{aligned} \overline m(\overline K\cap g^{-t}\gamma \overline K\cap g^{-t-s}\varphi \overline K) &\le \int_{{\mathcal L}_{2c,2c}({{o}},\varphi{{o}})\cap \Gamma(U^-\times U^+)} {\mathrm{d}}\mu_{{o}}(\xi) {\mathrm{d}}\mu_{{o}}(\eta) {\mathrm{e}}^{2 \delta_\Gamma {{Gr}}_{{o}}(\xi,\eta)} \cdot c \\ & \le {\mathrm{e}}^{4 c \delta_\Gamma} c \int_{{\mathcal L}_{2c,2c}({{o}},\varphi{{o}})\cap \Gamma(U^-\times U^+)} {\mathrm{d}}\mu_{{o}}(\xi) {\mathrm{d}}\mu_{{o}}(\eta) .\end{aligned}$$ Since obviously $ {\mathcal L}_{2c,2c}({{o}},\varphi{{o}})\cap \Gamma(U^-\times U^+)\subseteq {\mathcal L}_{2c,2c}({{o}},\varphi{{o}})\subseteq {\partial{X}}\times {\mathcal O}^+_{2c,2c}({{o}},\varphi{{o}})$ we obtain $$\begin{aligned} \overline m(\overline K\cap g^{-t}\gamma \overline K\cap g^{-t-s}\varphi \overline K) & \le {\mathrm{e}}^{4 c \delta_\Gamma} c \mu_{{o}}\bigl({\mathcal O}^+_{2c,2c}({{o}},\varphi{{o}})\bigr) \le {\mathrm{e}}^{4 c \delta_\Gamma} c D(c) {\mathrm{e}}^{-\delta_\Gamma d({{o}},\varphi{{o}}) }, \end{aligned}$$ where we used the shadow lemma Proposition \[shadowlemma\] in the last step. 
Using Lemma \[IT\] we finally get $$\begin{aligned} \int_0^T {\mathrm{d}}t\int_0^T {\mathrm{d}}s & \hspace{1mm} \sum_{\gamma,\varphi\in\Gamma} \overline m(\overline K\cap g^{-t}\gamma \overline K\cap g^{-t-s}\varphi \overline K)\le (10c)^2 \sum_{\begin{smallmatrix}{\scriptscriptstyle\gamma,\varphi\in\Gamma}\\{\scriptscriptstyle d(o,{{\gamma{{o}}}})\le T+5c}\\{\scriptscriptstyle d({{\gamma{{o}}}},\varphi{{o}})\le T+5c}\end{smallmatrix}} {\mathrm{e}}^{4 c \delta_\Gamma} c D(c) {\mathrm{e}}^{-\delta_\Gamma d({{o}},\varphi{{o}}) }\\ & \le 100c^3 {\mathrm{e}}^{4 c \delta_\Gamma} D(c) \sum_{\begin{smallmatrix}{\scriptscriptstyle\gamma,\varphi\in\Gamma}\\{\scriptscriptstyle d(o,{{\gamma{{o}}}})\le T+5c}\\{\scriptscriptstyle d({{\gamma{{o}}}},\varphi{{o}})\le T+5c}\end{smallmatrix}} {\mathrm{e}}^{-\delta_\Gamma (d({{o}},\gamma{{o}}) + d(\gamma{{o}},\varphi{{o}})-15c) } \\ & = 100c^3 {\mathrm{e}}^{19 c \delta_\Gamma} D(c) \sum_{\begin{smallmatrix}{\scriptscriptstyle\gamma,\alpha\in\Gamma}\\{\scriptscriptstyle d(o,{{\gamma{{o}}}})\le T+5c}\\{\scriptscriptstyle d({{o}},\alpha{{o}})\le T+5c}\end{smallmatrix}} {\mathrm{e}}^{-\delta_\Gamma (d({{o}},\gamma{{o}}) + d({{o}},\alpha{{o}})) } \\ & = 100 c^3 {\mathrm{e}}^{19 c \delta_\Gamma} D(c) \Bigl( \sum_{\begin{smallmatrix}{\scriptscriptstyle\gamma\in\Gamma}\\{\scriptscriptstyle d(o,{{\gamma{{o}}}})\le T+5c}\end{smallmatrix}} {\mathrm{e}}^{-\delta_\Gamma d({{o}},\gamma{{o}})}\Bigr)^2 . \end{aligned}$$ Since $$\displaystyle \sum_{T<d({{o}},{{\gamma{{o}}}})\le T+5c} {\mathrm{e}}^{-\delta_\Gamma d({{o}},{{\gamma{{o}}}})}\quad$$ is uniformly bounded in $T$ as a direct consequence of Corollary 3.8 in [@MR2290453], we have established (\[upbound\]) with a constant $C>0$ depending only on $c$. It remains to prove inequality (\[lowbound\]). Notice first that by Lemma \[joinrankone\] every pair of points $(\xi,\eta)\in \Gamma(U^-\times U^+)$ can be joined by a rank one geodesic of width smaller than or equal to twice the width of $h$.
We recall that by construction every orbit of the geodesic flow which enters $\overline K$ (or one of its translates by $\Gamma$) spends at least time $c$ in it. Using the definition of $\overline m$, Lemma \[K0calL\] and the non-negativity of the Gromov product, we first obtain for $\gamma\in\Gamma$ with $5c\le d({{o}},\gamma{{o}})\le T- 5c$ $$\begin{aligned} & \hspace{-0.3cm}\int_0^T {\mathrm{d}}t\, \overline m(\overline K\cap g^{-t}\gamma \overline K)\\ &\ge \int_{{\mathcal L}_{c,c}({{o}},{{\gamma{{o}}}})\cap\Gamma(U^-\times U^+)} {\mathrm{d}}\mu_{{o}}(\xi){\mathrm{d}}\mu_{{o}}(\eta)\overbrace{\biggl(\int_{-c/2}^{c/2} {\mathrm{d}}s \int_0^T {\mathrm{d}}t\, {\mathbbm 1}_{\gamma \overline K} \bigl(g^{t}(\xi,\eta,s)\bigr)\biggr)}^{\ge c^2}.\end{aligned}$$ Recall that $r=r_0>{\mathrm{width}}(h)$ and $c>c_0+\rho\ge r+\rho$. According to Proposition \[lrcinproduct\] we know that for all $\gamma\in\Gamma$ with $d({{o}},\gamma{{o}})>R\,$ (with $ R>5c$ sufficiently large) there exists an element $\beta$ in the finite set $\Lambda\subseteq\Gamma$ with the property $${\mathcal L}_{r,c}({{o}},\beta^{-1}\gamma {{o}})\cap \big(U^-\times U^+\big)\supseteq (U^- \cap {\partial{X}}) \times {\mathcal O}_{r,c}^-({{o}},\beta^{-1}\gamma{{o}});$$ using (\[lrcinclus\]) and $c>c_0+\rho\ge r +\rho$ we also have the inclusion $${\mathcal L}_{r,c}({{o}},\beta^{-1}\gamma {{o}})=\beta^{-1}{\mathcal L}_{r,c}(\beta{{o}},\gamma{{o}})\subseteq \beta^{-1} {\mathcal L}_{r+\rho,c}({{o}},\gamma{{o}})\subseteq \beta^{-1} {\mathcal L}_{c,c}({{o}},\gamma{{o}}).$$ So for all $\gamma\in\Gamma$ with $R<d({{o}},\gamma{{o}})\le T-5c$ and $\beta=\beta(\gamma)\in\Lambda$ as above we have $$\begin{aligned} {\mathcal L}_{c,c}({{o}},\gamma{{o}})\cap \Gamma (U^-\times U^+) & \supseteq {\mathcal L}_{c,c}({{o}},\gamma{{o}})\cap \beta(U^-\times U^+)\\ & \supseteq \beta \bigl((U^- \cap {\partial{X}}) \times {\mathcal O}_{r,c}^-({{o}},\beta^{-1}\gamma{{o}})\bigr)\\ &=(\beta U^- \cap {\partial{X}}) \times \beta 
{\mathcal O}_{r,c}^-( {{o}},\beta^{-1}\gamma{{o}})\end{aligned}$$ and therefore $$\begin{aligned} \int_0^T {\mathrm{d}}t\, \overline m(\overline K\cap g^{-t}\gamma \overline K)&\ge c^2\cdot \int_{ (\beta U^-\cap {\partial{X}})\times \beta {\mathcal O}_{r,c}^-( {{o}},\beta^{-1}{{\gamma{{o}}}})} {\mathrm{d}}\mu_{{o}}(\xi){\mathrm{d}}\mu_{{o}}(\eta)\\ &= c^2 \cdot \mu_{{{o}}}(\beta U^-) \mu_{{o}}\bigl(\beta {\mathcal O}_{r,c}^-({{o}},\beta^{-1}\gamma{{o}}) \bigr)\\ &\ge c^2 \cdot \mu_{{{o}}}(\beta U^-) {\mathrm{e}}^{-\delta_\Gamma d({{o}},\beta^{-1}{{o}})}\mu_{{o}}( {\mathcal O}_{r,c}^-({{o}},\beta^{-1}\gamma{{o}}) )\\ &\ge c^2 \cdot \mu_{{{o}}}(\beta U^-) {\mathrm{e}}^{-\delta_\Gamma d({{o}},\beta^{-1}{{o}})} \cdot \frac1{D(c)}{\mathrm{e}}^{-\delta_\Gamma d( {{o}},\beta^{-1}\gamma{{o}})}\\ & \ge c^2\cdot \min_{\beta\in\Lambda} \mu_{{o}}(\beta U^-) \cdot {\mathrm{e}}^{-2 \delta_\Gamma\rho} \frac1{D(c)} {\mathrm{e}}^{-\delta_\Gamma d( {{o}},\gamma{{o}})}\\ &= C'' {\mathrm{e}}^{-\delta_\Gamma d({{o}},\gamma{{o}})}\end{aligned}$$ with a constant $C''$ depending only on $c$ and the fixed finite set $\Lambda\subseteq\Gamma$; in the last three inequalities we used the $\Gamma$-equivariance and the conformality (\[conformality\]) of $\mu$, the shadow lemma Proposition \[shadowlemma\] and the triangle inequality for the exponent.
Finally, taking the sum over all elements $\gamma\in\Gamma$ we get $$\begin{aligned} \int_0^T \sum_{\gamma\in\Gamma} \overline m(\overline K\cap g^{-t}\gamma \overline K)\;{\mathrm{d}}t &\ge \int_0^T \sum_{\begin{smallmatrix}{\scriptscriptstyle\gamma\in\Gamma}\\{\scriptscriptstyle R< d(o,{{\gamma{{o}}}})\le T-5c}\end{smallmatrix}} \overline m(\overline K\cap g^{-t}\gamma \overline K)\;{\mathrm{d}}t \\ &\ge C'' \sum_{\begin{smallmatrix}{\scriptscriptstyle\gamma\in\Gamma}\\{\scriptscriptstyle R< d(o,{{\gamma{{o}}}})\le T-5c}\end{smallmatrix}} {\mathrm{e}}^{-\delta_\Gamma d({{o}},{{\gamma{{o}}}})},\end{aligned}$$ and inequality (\[lowbound\]) follows with the same argument as above, namely that the sums $$\sum_{\begin{smallmatrix}{\scriptscriptstyle\gamma\in\Gamma}\\{\scriptscriptstyle d(o,{{\gamma{{o}}}})\le R}\end{smallmatrix}} {\mathrm{e}}^{-\delta_\Gamma d({{o}},{{\gamma{{o}}}})} \qquad\text{and}\quad \sum_{\begin{smallmatrix}{\scriptscriptstyle\gamma\in\Gamma}\\{\scriptscriptstyle T-5c< d(o,{{\gamma{{o}}}})\le T}\end{smallmatrix}} {\mathrm{e}}^{-\delta_\Gamma d({{o}},{{\gamma{{o}}}})}$$ are uniformly bounded in $T$.

Conclusion and a construction of convergent groups {#conclusion}
==================================================

We now summarize all the previously collected results in the weakest possible setting: \[HTSweak\]Let ${X}$ be a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group. For $\delta>0$ let $\mu$ be a $\delta$-dimensional $\Gamma$-invariant conformal density normalized such that $\mu_{{o}}({\partial{X}})=1$, and $\overline m_\Gamma$ the weak Ricks’ measure on $\quotient{\Gamma}{ [{{\mathcal G}}]}$ associated to the quasi-product geodesic current $\overline\mu$ defined by (\[overlinemudef\]). Then exactly one of the following two complementary cases holds, and the statements (i) to (iii) are equivalent in each case:

Case 1: (i) $\sum_{\gamma\in\Gamma} {\mathrm{e}}^{-\delta d({{o}},\gamma{{o}})}$ diverges. (ii) $\mu_{{o}}({L_\Gamma^{\small{\mathrm{rad}}}})=1$. (iii) $(\quotient{\Gamma}{ [{{\mathcal G}}]}, g_\Gamma, \overline m_\Gamma)$ is conservative.

Case 2: (i) $\sum_{\gamma\in\Gamma} {\mathrm{e}}^{-\delta d({{o}},\gamma{{o}})}$ converges. (ii) $\mu_{{o}}({L_\Gamma^{\small{\mathrm{rad}}}})=0$. (iii) $(\quotient{\Gamma}{ [{{\mathcal G}}]}, g_\Gamma, \overline m_\Gamma)$ is dissipative.

We remark that the first case can only happen if $\Gamma$ is divergent and if $\delta=\delta_\Gamma$. In this case there are several well-known additional statements: The $\delta_\Gamma$-dimensional $\Gamma$-invariant conformal density $\mu$ is unique up to multiplication by a scalar. Moreover it follows from Lemma \[ergodicity\] that $\mu$ is quasi-ergodic in the sense that every $\Gamma$-invariant Borel subset $A\subseteq {\partial{X}}$ either has zero or full measure with respect to any measure $\mu_x$ in $\mu$. According to Proposition \[atomicpart\], $\mu$ is also non-atomic. Obviously, if $\delta>\delta_\Gamma$, then we are always in the second case. Moreover, in the second case the measure $\overline m_\Gamma$ is infinite and we also have non-ergodicity of the dynamical system $(\quotient{\Gamma}{ [{{\mathcal G}}]}, g_\Gamma, \overline m_\Gamma)$ unless the measure $\overline m_\Gamma$ is supported on a single divergent orbit $\{g_\Gamma^t [v]\colon t\in{\mathbb{R}}\}$ for some $v\in\quotient{\Gamma }{{{\mathcal G}}}$; this follows directly from the paragraph before Theorem \[Hopfindividual\]. Since for $\delta>\delta_\Gamma$ we are always in the dissipative case we will formulate the subsequent results only for $\delta=\delta_\Gamma$. Under the presence of a zero width rank one geodesic with extremities in the limit set we get the following statement which implies Theorem B from the introduction: \[HTS\]Suppose $\Gamma<{\mbox{Is}}({X})$ is a discrete rank one group with the extremities of a zero width rank one geodesic in its limit set.
Let $\mu$ be a $\delta_\Gamma$-dimensional $\Gamma$-invariant conformal density normalized such that $\mu_{{o}}({\partial{X}})=1$, and $m_\Gamma$ the associated Ricks’ measure on $\quotient{\Gamma}{ {{\mathcal G}}}$. Then exactly one of the following two complementary cases holds, and the statements (i) to (iv) are equivalent in each case:

Case 1: (i) $\sum_{\gamma\in\Gamma} {\mathrm{e}}^{-\delta_\Gamma d({{o}},\gamma{{o}})}$ diverges. (ii) $\mu_{{o}}({L_\Gamma^{\small{\mathrm{rad}}}})=1$. (iii) $(\quotient{\Gamma}{ {{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is conservative. (iv) $(\quotient{\Gamma}{ {{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is ergodic and $m_\Gamma$ is not supported on a single divergent orbit.

Case 2: (i) $\sum_{\gamma\in\Gamma} {\mathrm{e}}^{-\delta_\Gamma d({{o}},\gamma{{o}})}$ converges. (ii) $\mu_{{o}}({L_\Gamma^{\small{\mathrm{rad}}}})=0$. (iii) $(\quotient{\Gamma}{ {{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is dissipative. (iv) $(\quotient{\Gamma}{ {{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is non-ergodic unless $m_\Gamma$ is supported on a single divergent orbit.

Let us discuss the relation between Theorem \[HTS\] above and Theorem \[HTSweak\] in the case that ${L_\Gamma}$ contains the extremities of a zero width rank one geodesic and $\delta=\delta_\Gamma$: If $\Gamma$ is divergent, then according to Theorem \[conservativestatement\] the weak Ricks’ measure is equal to the Ricks’ measure. So the statements in the first case of Theorem \[HTSweak\] are only supplemented by the fact that the dynamical systems are ergodic. For a convergent group $\Gamma$ it is well-known that there can exist many different $\delta_\Gamma$-dimensional $\Gamma$-invariant conformal densities. So first of all it is possible to obtain several distinct weak Ricks’ measures $\overline m_\Gamma$ associated to different conformal densities.
Moreover, even if the same $\delta_\Gamma$-dimensional $\Gamma$-invariant conformal density is used in the construction, the Ricks’ measure $m_\Gamma$ can be different from the weak Ricks’ measure $\overline m_\Gamma$ (as it is supported on an a priori smaller set). The statements in Theorem \[HTS\] above and Theorem \[HTSweak\] for the second case therefore apply to any (weak) Ricks’ measure constructed from a $\delta_\Gamma$-dimensional $\Gamma$-invariant conformal density. In order to obtain Theorem C from the introduction, we have to relate our new results to the Main Theorem in [@LinkPicaud]. Since the measure $\overline \mu$ on ${\partial}{{\mathcal G}}$ is used in Knieper’s construction, Knieper’s measure coincides with Ricks’ measure on the set $ \quotient{\Gamma}{{{\mathcal Z}}}$. Since in the divergent case the support of both Knieper’s and Ricks’ measure is $\quotient{\Gamma}{{{\mathcal Z}}}$, the divergent case of the Main Theorem in [@LinkPicaud] remains true under the weaker hypothesis that $\Gamma$ is a discrete rank one group. By Lemma \[consdiss\] we further get that the equivalent conditions in the convergent case hold under the same weaker condition. So the existence of a periodic geodesic without parallel perpendicular Jacobi field in $\quotient{\Gamma}{{X}}$ is not a necessary hypothesis in the Main Theorem of [@LinkPicaud], and we immediately get Theorem C from the introduction. Finally I want to mention that for finite $m_\Gamma$ – the case treated in the article [@Ricks] by R. Ricks – we are always in the first case; this follows easily from the fact that finite measure spaces are conservative. Ricks further showed ([@Ricks Theorem 4]) that if ${X}$ is geodesically complete, $m_\Gamma$ is finite and ${L_\Gamma}={\partial{X}}$, then $(\quotient{\Gamma}{ {{\mathcal G}}}, g_\Gamma, m_\Gamma)$ is mixing unless ${X}$ is isometric to a tree with all edge lengths in $c{\mathbb{Z}}$ for some $c>0$.
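As a concrete toy illustration of the dichotomy (a standard example, not taken from the results above): let $\Gamma$ be the free group on two generators acting on its Cayley tree with unit edge lengths, so that $d({{o}},\gamma{{o}})$ equals the word length of $\gamma$ and there are exactly $4\cdot 3^{n-1}$ elements of word length $n\ge 1$. Then $$\sum_{\gamma\in\Gamma\setminus\{e\}} {\mathrm{e}}^{-s\, d({{o}},\gamma{{o}})} = \sum_{n=1}^{\infty} 4\cdot 3^{n-1}\,{\mathrm{e}}^{-sn} = \frac{4\,{\mathrm{e}}^{-s}}{1-3\,{\mathrm{e}}^{-s}}\quad\text{for } s>\log 3,$$ while for $s\le \log 3$ every summand over word length $n$ is at least $4/3$ and the series diverges; hence $\delta_\Gamma=\log 3$, the group is divergent, and it falls into the first case of the dichotomy.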
To conclude this article I want to describe a construction of convergent rank one groups whose idea goes back to F. Dal’bo, J.-P. Otal and M. Peigné ([@MR1776078], see also [@MR3220550]). We first give a criterion for the critical exponent of a divergent subgroup of a rank one group which extends Theorem 3.2 in [@MR3220550]: Let ${X}$ be a proper Hadamard space and $\Gamma<{\mbox{Is}}({X})$ a discrete rank one group. If $H<\Gamma$ is a divergent subgroup with $L_H\subsetneq {L_\Gamma}$, then its critical exponent satisfies $\delta_H<\delta_\Gamma$. As $L_H\subsetneq {L_\Gamma}$ we may choose a point $\xi\in {L_\Gamma}\setminus L_H$. Since $L_H$ is a closed subset of ${\partial{X}}$ there exists an open neighborhood $U\subseteq{\partial{X}}$ of $\xi$ such that $U\cap L_H=\emptyset$. As $\Gamma$ is a discrete rank one group, Theorem 2.8 in [@MR656659] implies the existence of a rank one element $g\in\Gamma$ with $g^+\in U$. Let $V^-\subseteq{\partial{X}}$, $V^+\subseteq U$ be small neighborhoods of $g^-$, $g^+$ respectively. Taking a rank one element $ \gamma\in \Gamma$ independent of $g$ and making $V^-$ smaller if necessary we have $\{ \gamma^-, \gamma^+\}\cap V^-=\emptyset$. Using the north-south dynamics Lemma \[dynrankone\] (b) we know that for $N\in{\mathbb{N}}$ sufficiently large the rank one element $$\widetilde \gamma=g^N \gamma g^{-N}\in\Gamma$$ has both fixed points in $V^+\subseteq U$. Replacing $\widetilde\gamma$ by $ \widetilde\gamma^M$ for some $M\in{\mathbb{N}}$ large enough we may further assume that $$\widetilde\gamma ({\partial{X}}\setminus U)\subseteq U \quad\text{and }\ \widetilde \gamma^{-1} ({\partial{X}}\setminus U)\subseteq U.$$ We now consider the free product $G=H\ast \langle\widetilde\gamma\rangle<\Gamma$; the set $$\{h_1\widetilde\gamma h_2\widetilde\gamma \cdots h_{k-1}\widetilde\gamma h_k\widetilde \gamma\colon k\in{\mathbb{N}}, h_i\in H\setminus\{e\}\}$$ is obviously a subset of $G$ and hence of $\Gamma$.
For any $s>0$ the Poincaré series $P_\Gamma(s)$ of $\Gamma$ then satisfies $$\begin{aligned} P_\Gamma(s)&= \sum_{\gamma\in\Gamma}{\mathrm{e}}^{-sd({{o}},\gamma{{o}})}\ge \sum_{k=1}^\infty \ \sum_{h_1,\ldots, h_k\in H\setminus\{e\}} {\mathrm{e}}^{-sd({{o}}, h_1 \widetilde\gamma h_2\cdots \widetilde\gamma h_k\widetilde \gamma {{o}})}\\ &\ge \sum_{k=1}^\infty {\mathrm{e}}^{-s k d({{o}},\widetilde \gamma{{o}})} \sum_{h_1,\ldots, h_k\in H\setminus\{e\}} {\mathrm{e}}^{-sd({{o}}, h_1{{o}})}{\mathrm{e}}^{-sd({{o}},h_2{{o}})}\cdots {\mathrm{e}}^{- s d({{o}}, h_k{{o}})}\\ &= \sum_{k=1}^\infty \left({\mathrm{e}}^{-s d({{o}},\widetilde \gamma{{o}})}\right)^k\cdot \left( \sum_{h\in H\setminus\{e\}} {\mathrm{e}}^{-sd({{o}}, h{{o}})}\right)^k. \end{aligned}$$ Since $H$ is divergent, the sum $\displaystyle \sum_{h\in H\setminus\{e\}}{\mathrm{e}}^{-sd({{o}},h{{o}})}$ tends to infinity as $s\searrow \delta_H$. Hence there exists $s_0>\delta_H$ such that $${\mathrm{e}}^{-s_0 d({{o}},\widetilde \gamma{{o}})} \cdot \sum_{h\in H\setminus\{e\}} {\mathrm{e}}^{-s_0d({{o}}, h{{o}})}>1;$$ for this parameter $s_0$ the Poincaré series $P_\Gamma(s_0)$ diverges, hence $\delta_H<s_0\le \delta_\Gamma$. Notice that $H$ need not be a rank one group. However, as in [@MR3220550] the above proposition allows one to produce plenty of convergent discrete rank one isometry groups of any Hadamard space admitting a rank one isometry. The only novelty in the proof compared to the one given by M. Peigné in [@MR3220550] is the fact that the convergent subgroup is rank one (and hence is an example for a group in which the second case of the Hopf-Tsuji-Sullivan dichotomy holds). Let ${X}$ be a proper Hadamard space such that ${\mbox{Is}}({X})$ contains two independent rank one elements $h,g$. Then there exist $N,M\in{\mathbb{N}}$ such that the subgroup $G$ of ${\mbox{Is}}({X})$ generated by $$\{ g^{-nN} h^M g^{nN}\colon n\in {\mathbb{N}}_0\}$$ is a convergent discrete rank one group.
Let $U^-,U^+, V^-,V^+\subseteq{\overline{{X}}}$ be pairwise disjoint neighborhoods of $h^-,h^+, g^-,g^+$. Thanks to Lemma \[dynrankone\] (b) there exist $M,N\in{\mathbb{N}}$ such that $$\label{inclusionofLH} h^{\pm M}(V^-\cup V^+)\subseteq U^\pm\quad\text{and }\ g^{\pm N}(U^-\cup U^+)\subseteq V^\pm.$$ This implies that $G$ acts freely on ${X}$ and hence that $G$ is discrete; moreover, the limit set $L_{G }$ of $G $ contains the set $$\{ g^{-nN} h^-, g^{-nN} h^+\colon n\in {\mathbb{N}}_0\},$$ so $L_{G}$ is infinite. Hence according to Lemma \[inflimset\] $G $ is a rank one group. The limit set $L_H$ of the conjugate discrete subgroup $H=g^{-N} G g^N<{\mbox{Is}}({X})$ is contained in $L_G$ and also in $V^-$ by (\[inclusionofLH\]). Since $ h^+\in L_G$, $h^+\notin V^-$ we get $L_H\subsetneq L_{G}$. Obviously we also have $\delta_H=\delta_G$, hence the proposition above implies that $H$ must be convergent. As conjugate groups are simultaneously convergent or divergent we conclude that $G$ is convergent. Notice that the isometry group of a Hadamard space ${X}$ contains two independent rank one elements whenever it admits a discrete rank one subgroup. So the above construction in particular allows one to construct plenty of convergent rank one subgroups in a given rank one discrete isometry group of ${X}$.

Acknowledgements {#acknowledgements .unnumbered}
================

The author would like to thank Russell Ricks for answering her questions concerning his article [@Ricks] and for pointing out a mistake in a previous version of this article. She also thanks Marc Peigné for his comments on the preprint. Finally she would like to thank the referee for carefully reading the article and for pointing out a gap in the original version of the proof of Proposition \[largewidthgiveszerowidth\]. [10]{} (MR1676950) \[10.1017/S0143385799126592\] J. Aaronson and M. Denker, , *Ergodic Theory Dynam.
Systems*, **19** (1999), 1–20. (MR0766098) J. Aaronson and D. Sullivan, , *Ergodic Theory Dynam. Systems*, **4** (1984), 165–178. (MR656659) \[10.1007/BF01456836\] W. Ballmann, , *Math. Ann.*, **259** (1982), 131–144. (MR819559) \[10.2307/1971331\] W. Ballmann, , *Ann. of Math. (2)*, **122** (1985), 597–609. (MR1377265) \[10.1007/978-3-0348-9240-7\] W. Ballmann, *Lectures on Spaces of Nonpositive Curvature*, vol. 25 of DMV Seminar, Birkhäuser Verlag, Basel, 1995, with an appendix by Misha Brin. (MR1383216) W. Ballmann and M. Brin, , *Inst. Hautes Études Sci. Publ. Math.* (1995), no. 82, 169–209. (MR799256) \[10.2307/1971373\] W. Ballmann, M. Brin and P. Eberlein, , *Ann. of Math. (2)*, **122** (1985), 171–203. (MR823981) \[10.1007/978-1-4684-9159-3\] W. Ballmann, M. Gromov and V. Schroeder, *Manifolds of Nonpositive Curvature*, vol. 61 of Progress in Mathematics, Birkhäuser Boston Inc., Boston, MA, 1985. (MR1132759) V. Bangert and V. Schroeder, Existence of flat tori in analytic manifolds of nonpositive curvature, *Ann. Sci. École Norm. Sup. (4)*, **24** (1991), 605–634. (MR1341941) M. Bourdon, Structure conforme au bord et flot géodésique d’un ${\rm CAT}(-1)$-espace, *Enseign. Math. (2)*, **41** (1995), 63–102. (MR1744486) M. R. Bridson and A. Haefliger, , Grundlehren der Mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\], vol. 319, Springer-Verlag, Berlin, 1999. (MR1835418) D. Burago, Y. Burago and S. Ivanov, , vol. 33 of Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, 2001. (MR743026) \[10.1017/S0143385700001796\] K. Burns, , *Ergodic Theory Dynam. Systems*, **3** (1983), 1–12. (MR908215) K. Burns and R. Spatzier, Manifolds of nonpositive curvature and their buildings, *Inst. Hautes Études Sci. Publ. Math.* **65** (1987), 35–59. (MR2585575) \[10.1007/s00039-009-0042-2\] P.-E. Caprace and K. Fujiwara, , *Geom. Funct. Anal.*, **19** (2010), no. 5, 1296–1319.
(MR1207579) \[10.2307/2154747\] M. Coornaert and A. Papadopoulos, , *Trans. Amer. Math. Soc.*, **343** (1994), 883–898. (MR1776078) \[10.1007/BF02803518\] F. Dal’bo, J.-P. Otal and M. Peigné, , *Israel J. Math.*, **118** (2000), 109–124. (MR2581914) \[10.1090/conm/501/09839\] U. Hamenstädt, , *Discrete Groups and Geometric Structures (Contemporary Mathematics, 501)*, American Mathematical Society, Providence, RI (2009), 43–59. \[10.1007/978-3-642-86630-2\] E. Hopf, *Ergodentheorie*, Springer, 1937. (MR0284564) \[10.1090/S0002-9904-1971-12799-4\] E. Hopf, , *Bull. Amer. Math. Soc.*, **77** (1971), 863–877. (MR1293874) \[10.1515/crll.1994.455.57\] V. A. Kaimanovich, , *J. Reine Angew. Math.*, **455** (1994), 57–103. (MR1465601) \[10.1007/s000390050025\] G. Knieper, , *Geom. Funct. Anal.*, **7** (1997), 755–782. (MR1652924) \[10.2307/120995\] G. Knieper, , *Ann. of Math. (2)*, **148** (1998), 291–314. (MR0224773) U. Krengel, , *Mathematische Annalen*, **176** (1968), 181–190. (MR797411) \[10.1515/9783110844641\] U. Krengel, *Ergodic Theorems*, vol. 6 of de Gruyter Studies in Mathematics, Walter de Gruyter & Co., Berlin, 1985, with a supplement by Antoine Brunel. (MR2290453) \[10.1007/s10455-006-9016-x\] G. Link, , *Ann. Global Anal. Geom.*, **31** (2007), 37–57. (MR2629900) G. Link, , *Geometry and Topology*, no. 2, **14** (2010), 1063–1094. (MR2255528) G. Link, M. Peigné and J.-C. Picaud, Sur les surfaces non-compactes de rang un, *Enseign. Math. (2)*, **52** (2006), 3–36. (MR3543588) \[10.3934/dcds.2016072\] G. Link and J.-C. Picaud, Ergodic geometry for non-elementary rank one manifolds, *Discrete and Continuous Dyn. Syst. A*, no. 11, **36** (2016), 6257–6284. (MR1041575) \[10.1017/CBO9780511600678\] P. J. Nicholls, *The Ergodic Theory of Discrete Groups*, vol. 143 of London Mathematical Society Lecture Note Series, Cambridge University Press, Cambridge, 1989. (MR2097356) \[10.1215/S0012-7094-04-12512-6\] J.-P. Otal and M.
Peigné, , *Duke Math. J.*, **125** (2004), 15–44. (MR0450547) \[10.1007/BF02392046\] S. J. Patterson, , *Acta Math.*, **136** (1976), 241–273. (MR3220550) M. Peigné, , *Géométrie ergodique*, 25–59, *Monogr. Enseign. Math.*, **43**, Enseignement Math., Geneva, 2013. R. Ricks, , *PhD Thesis*, University of Michigan, 2015. (MR3628926) \[10.1017/etds.2015.78\] R. Ricks, , *Ergodic Theory and Dynamical Systems*, (2015), 1–32. (MR2057305) T. Roblin, Ergodicité et équidistribution en courbure négative, *Mém. Soc. Math. Fr. (N.S.)*, vi+96. (MR2166367) \[10.1007/BF02785371\] T. Roblin, , *Israel J. Math.*, **147** (2005), 333–357. (MR953675) \[10.1515/crll.1988.390.32\] V. Schroeder, , *J. Reine Angew. Math.*, **390** (1988), 32–46. (MR994382) \[10.1007/BF01182086\] V. Schroeder, , *Manuscripta Math.*, **64** (1989), 77–105. (MR1050413) \[10.1007/BF00181332\] V. Schroeder, , *Geom. Dedicata*, **33** (1990), 251–263. (MR556586) D. Sullivan, The density at infinity of a discrete group of hyperbolic motions, *Inst. Hautes Études Sci. Publ. Math.*, 171–202. (MR2245472) \[10.1090/gsm/076\] M. E. Taylor, *Measure Theory and Integration*, vol. 76 of Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, 2006. (MR0414898) M. Tsuji, *Potential Theory in Modern Function Theory*, Chelsea Publishing Co., New York, 1975, reprinting of the 1959 original. (MR1348871) \[10.1090/S0002-9947-96-01614-5\] C. Yue, , *Trans. Amer. Math. Soc.*, **348** (1996), 4965–5005. [^1]: Here $\gamma_*\mu_x$ denotes the measure defined by $\gamma_*\mu_x(E)=\mu_x(\gamma^{-1}E)$ for any Borel set $E\subseteq{\partial{X}}$.
--- abstract: 'Recently Fourier Ptychography (FP) has attracted great attention, due to its marked effectiveness in leveraging snapshot numbers for spatial resolution in large field-of-view imaging. To acquire high signal-to-noise-ratio (SNR) images under angularly varying illuminations for subsequent reconstruction, FP requires long exposure time, which largely limits its practical applications. In this paper, based on the recently reported Wirtinger flow algorithm, we propose an iterative optimization framework incorporating phase retrieval and noise relaxation together, to realize FP reconstruction using low SNR images captured under short exposure time. Experiments on both synthetic and real captured data validate the effectiveness of the proposed reconstruction method. Specifically, the proposed technique could save $\sim 80\%$ exposure time to achieve similar retrieval accuracy compared to the conventional FP. Besides, we have released our source code for non-commercial use.' address: | $^1$Department of Automation, Tsinghua University, China\ $^2$Biomedical Engineering, University of Connecticut, USA\ $^3$Electrical and Computer Engineering, University of Connecticut, USA\ author: - 'Liheng Bian,$^{1}$ Jinli Suo,$^{1}$ Guoan Zheng,$^{2,3}$ KaiKai Guo,$^2$ Feng Chen,$^{1}$ and Qionghai Dai$^{1,*}$' bibliography: - 'WirtingerFlowForFPM.bib' title: Fourier ptychographic reconstruction using Wirtinger flow optimization --- Introduction {#sec:Introduction} ============ Fourier Ptychography (FP) is a newly reported technique for large field-of-view (FOV) and high-resolution (HR) imaging [@FPM_Nature; @FPM_IEEE; @FPM_Optics]. This technique sequentially captures a set of low-resolution (LR) images describing different spatial spectrum bands of the sample, and then stitches these spectrum bands together in the Fourier domain to reconstruct the entire HR spatial spectrum, including both amplitudes and phases. 
This HR spectrum can be transformed to the spatial domain to recover the HR image of the sample. Mathematically, FP reconstruction can be treated as a typical phase retrieval problem, which needs to recover a complex-valued function given the magnitude measurements of its Fourier transform. Specifically, we only obtain the magnitudes of images corresponding to the sub-bands of the HR spatial spectrum, and intend to retrieve the complex HR spectrum. So far, all the FP applications [@FPM_Nature; @FPM_Quantitative; @FPM_Fluorescence; @FPM_Cellphone] utilize the alternating projection (AP) algorithm [@Phase_Comparison; @Phase_Fienup_2], a widely used method for phase retrieval, to implement the reconstruction process. Recently, the FP technique has been successfully applied to microscopic imaging, yielding Fourier ptychographic microscopy (FPM) [@FPM_Nature]. FPM assumes plane-wave illuminations from the LED array. Therefore, by sequentially lighting LEDs at different positions in the illumination plane, according to [@Optics], we can obtain different shifted versions of the sample’s spatial spectrum. Since the whole light field is filtered by the microscope’s objective lens, changing the incident angle results in a set of LR images carrying different sub-bands of the entire spectrum. Finally, by utilizing the FP reconstruction technique (the AP algorithm), FPM can achieve large FOV and HR microscopic imaging. As stated in [@FPM_Nature], the synthetic NA of the FPM setup is $\sim$0.5, and the FOV could reach $\sim$120 mm$^2$, which greatly improves the throughput performance of existing microscopes. However, the AP algorithm utilized in the reconstruction process is sensitive to the input noise [@Phase_Review], and thus long exposure time is required for capturing high signal-to-noise-ratio (SNR) inputs.
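The alternating-projection scheme described above can be sketched in one dimension as follows. This is a toy model with made-up sizes; the actual FPM forward model is two-dimensional and includes a pupil function, so this is an illustration rather than the implementation used in the cited works. Each captured LR magnitude constrains one overlapping sub-band of the HR spectrum, and AP cycles through the sub-bands, replacing simulated magnitudes by measured ones while keeping the phases:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, step = 64, 32, 8                     # HR spectrum size, sub-band size, shift (toy values)
starts = range(0, N - n + 1, step)         # overlapping sub-band positions ("illumination angles")

x = rng.normal(size=N) + 1j * rng.normal(size=N)   # complex HR sample (amplitude and phase)
S = np.fft.fft(x)                                  # ground-truth HR spectrum

# "Captured" LR magnitudes, one per sub-band; the phases are lost at the detector.
mags = [np.abs(np.fft.ifft(S[s:s + n])) for s in starts]

def data_misfit(S_hat):
    """Total mismatch between simulated and measured LR magnitudes."""
    return sum(np.linalg.norm(np.abs(np.fft.ifft(S_hat[s:s + n])) - m)
               for s, m in zip(starts, mags))

S_est = np.ones(N, dtype=complex)          # flat initial guess of the spectrum
misfit0 = data_misfit(S_est)
for _ in range(200):                       # AP sweeps over all sub-bands
    for s, m in zip(starts, mags):
        psi = np.fft.ifft(S_est[s:s + n])          # simulate the LR image for this sub-band
        psi = m * np.exp(1j * np.angle(psi))       # impose the measured magnitude, keep the phase
        S_est[s:s + n] = np.fft.fft(psi)           # write the updated sub-band back
misfit = data_misfit(S_est)
```

Because the sub-bands overlap by 75%, each sweep propagates phase information across the whole spectrum, and in this noiseless toy the magnitude misfit drops far below its initial value; with noisy measurements the same iteration stagnates, which is exactly the sensitivity discussed above.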
As reported in [@FPM_Nature], the FPM setup needs around 3 minutes to acquire high-SNR images under 137 angularly varying illuminations, for subsequent reconstruction of a gigapixel grayscale image. This largely limits the practical use of FPM and other FP applications. To shorten the acquisition time, this paper proposes a new phase retrieval method for FP reconstruction, which is able to deal with low-SNR input images. Existing phase retrieval methods can typically be classified into two categories, namely alternating projection algorithms and semi-definite programming (SDP) based algorithms [@Phase_Review]. The former alternately operate in the spatial and Fourier domains, imposing corresponding spatial-plane and Fourier-plane constraints on the retrieved complex function. These constraints include consistency with measured magnitudes [@Phase_GS], magnitudes’ non-negativity [@Phase_Fienup_1], the signal support constraint [@Phase_Comparison], and so on. Such methods are efficient but risk non-convergence or stagnation at a local optimum [@Phase_Review]. The latter approaches rely on the observation that the quadratic equations in the phase retrieval problem can be rewritten as linear equations in a higher-dimensional space [@SDP]. Typical SDP algorithms include PhaseLift [@Phase_Lift_1; @Phase_Lift_2] and PhaseCut [@Phase_Cut], and PhaseLift has been successfully applied to the common phase retrieval task from captured coded diffraction patterns [@Phase_Lift_App]. Such methods can converge to a global optimum through a series of convex relaxations. However, they require matrix lifting to work in a higher-dimensional space, which incurs high computational cost and makes them less competitive. Recently, based on Wirtinger derivatives [@Wirtinger_Book_1; @Wirtinger_Book_2], Candès et al.
[@Phase_Wirtinger] developed a non-convex formulation of the phase retrieval problem, and utilized the gradient descent scheme to derive a computationally cheap solution, termed the Wirtinger flow (WF) algorithm. As stated in [@Phase_Wirtinger], although the quadratic model is non-convex, the Wirtinger flow algorithm can rigorously retrieve exact phase information from a nearly minimal number of random measurements, by starting with a relatively accurate initialization. Besides, the algorithm converges at a geometric rate in the random Gaussian sampling mode, and at a linear rate in the coded diffraction pattern mode. In this paper, we apply the Wirtinger flow scheme to FP, and further introduce a noise relaxation constraint, forming a new FP reconstruction framework. The proposed framework is termed WFP (Wirtinger flow optimization for Fourier Ptychography). The advantages of the proposed WFP are threefold: - Compared to the existing AP algorithm for FP, WFP can better handle the detector noise, and thus largely reduce the requisite long exposure time; - Compared to the SDP based algorithms such as PhaseLift and PhaseCut, WFP does not need matrix lifting and thus largely decreases the computational cost; - WFP is a general optimization framework able to incorporate various priors and constraints, and hence can be extended to further save costs and increase the retrieval accuracy. The remainder of this paper is organized as follows: the modeling and derivation of the optimization algorithm are explained in Sec. \[sec:Method\]. Then, we conduct a series of experiments on both synthetic and real captured data to validate the proposed approach in Sec. \[sec:Experiments\]. Finally, we conclude this paper with a summary and discussion in Sec. \[sec:Conclusions\].
Optimization framework {#sec:Method} ====================== In this section, we first review the Wirtinger flow algorithm [@Phase_Wirtinger], then introduce the Wirtinger flow formulation into the FP reconstruction process and incorporate the noise constraint. Finally, we derive the WFP reconstruction algorithm, which is robust to capturing noise. Review of the Wirtinger flow algorithm -------------------------------------- The Wirtinger flow algorithm [@Phase_Wirtinger] is a recently reported technique to solve the standard phase retrieval problem. Specifically, it retrieves a complex signal ${\bf x}\in \mathbb{C}^{n}$ from a series of its real sampling measurements ${\bf b}\in \mathbb{R}^{m}$, with the measurement formation defined as ${\bf b} = |{\bf A}{\bf x}|^2=\bf (Ax)^*\odot Ax$. Here ${\bf A}\in \mathbb{C}^{m\times n}$ is a linear sampling matrix, and $\odot$ stands for the element-wise (Hadamard) product. Based on the quadratic loss function, the Wirtinger flow algorithm transforms the phase retrieval task into a minimization problem as $$\begin{aligned} \label{eqs:Model_Ori} \min && f({\bf x}) = \frac{1}{2}||{\bf (Ax)^*\odot Ax - b}||_F^2,~~~{\bf x}\in \mathbb{C}^n.\end{aligned}$$ Here $||\cdot||_F$ is the Frobenius norm, calculated as $||{\text{\bf X}}||_F = \sqrt{\sum_{i,j}|{\text{\bf X}}_{ij}|^2}$. Such an optimization model can be solved in an iterative manner, utilizing the gradient descent scheme. According to [@Wirtinger_Book_1; @Wirtinger_Book_2], the derivative of the complex quadratic cost function with respect to ${\bf x}^*$, i.e. $\frac{\partial f}{\partial {\bf x}^*}$, is what is needed to update $\bf x$ in each iteration.
In implementation, $\bf x$ is updated via gradient descent as [@Phase_Wirtinger] $$\begin{aligned} \label{eqs:Update_Wirtinger} {\bf x}^{(k+1)} = {\bf x}^{(k)} - \Delta\frac{\partial f}{\partial {\bf x}^*}|_{{\bf x} = {\bf x}^{(k)}}.\end{aligned}$$ Here $\Delta$ is the gradient descent step size set by the user, and $\frac{\partial f}{\partial {\bf x}^*}$ can be easily calculated according to the Wirtinger derivatives as $$\begin{aligned} \label{eqs:Wirtinger_Gradient} \frac{\partial f}{\partial {\bf x}^*} &=& \frac{\partial \frac{1}{2}||{\bf (Ax)^*\odot Ax - b}||_F^2}{\partial {\bf x}^*}\\\nonumber &=& {\bf A}^H\left[(|{\bf Ax}|^2 - {\bf b)\odot (Ax)}\right].\end{aligned}$$ With the above derivations, the Wirtinger flow algorithm is summarized as Alg. \[alg:WirtingerFlow\]. ${\text{\bf x}}={\text{\bf x}}^{(0)}$, $k = 0$ Wirtinger flow optimization for Fourier Ptychography—WFP --------------------------------------------------------- In terms of FP reconstruction, the target is to recover the HR spatial spectrum from a series of LR images captured in the spatial domain. The relation between the HR reconstruction and the LR observations corresponds to two sequential linear operations: (i) down-sampling caused by the objective’s aperture, and (ii) an inverse Fourier transform of the LR spectrum bands performed by the microscope imaging system. We treat these two operations as a whole and use ${\text{\bf A}}\in \mathbb{C}^{m\times n}$ to represent this combined sampling process. Specifically, denoting the HR spatial spectrum as a complex vector ${\text{\bf x}}\in \mathbb{C}^n$, the corresponding sampling matrix ${\text{\bf A}} = {\text{\bf F}}{\text{\bf S}}$ is composed of two components: an inverse Fourier transform ${\text{\bf F}}$ and down-sampling ${\text{\bf S}}$.
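As an illustrative sketch (our addition; the released code is in Matlab, and the function names here are our own), applying one sub-band of ${\text{\bf A}} = {\text{\bf F}}{\text{\bf S}}$ amounts to cropping a pupil-masked window of the centered HR spectrum and inverse-FFTing it to the spatial domain:

```python
import numpy as np

def pupil_mask(m):
    """Ideal circular pupil of diameter m (ones inside the circle, zeros outside)."""
    yy, xx = np.mgrid[:m, :m] - m // 2
    return (yy ** 2 + xx ** 2) < (m // 2) ** 2

def A_sub(x_spec, cy, cx, m):
    """One sub-band of A = F*S: crop an m-by-m window of the fftshifted HR
    spectrum around (cy, cx), apply the pupil, and inverse-FFT to space."""
    sub = x_spec[cy - m // 2:cy + m // 2, cx - m // 2:cx + m // 2]
    return np.fft.ifft2(np.fft.ifftshift(sub * pupil_mask(m)))
```

The adjoint ${\text{\bf A}}^H$ used in the gradient correspondingly forward-FFTs each LR field and pastes it back into its window of the HR spectrum.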
In addition, we also consider the capturing noise explicitly, and the measurement formation model becomes $$\begin{aligned} \label{eqs:Formation} {\text{\bf b}} = |{\text{\bf A}}{\text{\bf x}}|^2 + {\text{\bf n}},\end{aligned}$$ where ${\text{\bf n}}\in \mathbb{R}^{m}$ denotes the capturing noise, which we assume to be Gaussian. We use ${\sigma} \in \mathbb{R}$ to represent the standard deviation of the noise. The three-sigma rule states that nearly all (99.73$\%$) samples of a Gaussian random variable lie within three standard deviations of its mean. Therefore, we can approximately formulate our noise constraint as $$\begin{aligned} \label{eqs:Noise_Neq} |{\text{\bf n}}| \leqslant 3\sigma.\end{aligned}$$ Introducing a relaxation vector $\bm \epsilon\in \mathbb{R}^{m}$, we can transform the above inequality into an equality $$\begin{aligned} \label{eqs:Noise_Eq} {\text{\bf n}}\odot {\text{\bf n}} - 9\sigma^2 + {\bm \epsilon} \odot {\bm \epsilon} = \bf 0.\end{aligned}$$ Combining the above noise constraint (Eq. \[eqs:Noise\_Eq\]) with the measurement formation (Eq. \[eqs:Formation\]), we obtain the optimization model for FP reconstruction as $$\begin{aligned} \label{eqs:Model} \min && f({\bf x}) = \frac{1}{2}||{\bf (Ax)}^*\odot {\bf Ax} + {\bf n} - {\bf b}||_F^2 \\\nonumber s.t. && {\text{\bf n}}\odot {\text{\bf n}} - 9\sigma^2 + {\bm \epsilon} \odot {\bm \epsilon} = \bf 0.\end{aligned}$$ In the following, according to the Wirtinger flow algorithm, we derive the WFP optimization algorithm to solve the above model.
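The inequality-to-equality relaxation can be checked numerically (our addition; the sample values below are arbitrary): whenever $|{\text{\bf n}}| \leqslant 3\sigma$, choosing $\bm \epsilon$ elementwise as $\sqrt{9\sigma^2 - {\text{\bf n}}\odot{\text{\bf n}}}$ restores the equality exactly.

```python
import numpy as np

# Hypothetical noise samples satisfying |n| <= 3*sigma (values are arbitrary).
sigma = 0.1
n = np.array([0.05, -0.2, 0.29])

# Elementwise relaxation that turns the inequality into an equality.
eps = np.sqrt(np.maximum(9 * sigma ** 2 - n * n, 0.0))

# The constraint n*n - 9*sigma^2 + eps*eps vanishes (up to rounding error).
residual = n * n - 9 * sigma ** 2 + eps * eps
```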
First, we introduce a weighting parameter $\mu$ to incorporate the noise constraint into the objective function, and the model becomes $$\begin{aligned} \label{eqs:ALM} \min & f({\bf x}) = \frac{1}{2}||{\bf (Ax)}^*\odot {\bf Ax} + {\bf n} - {\bf b}||_F^2 + \frac{\mu}{2}||{\text{\bf n}}\odot {\text{\bf n}} - 9\sigma^2 + {\bm \epsilon} \odot {\bm \epsilon}||_F^2.\end{aligned}$$ This is similar to the augmented Lagrangian function [@ALM_1], which can be solved by sequentially updating each variable [@ALM_2] while keeping the others fixed. Each update is performed either by setting the function’s partial derivative with respect to that variable to zero, or by a gradient descent step. Here we utilize a similar scheme, and sequentially update the optimization variables, i.e., ${\text{\bf x}}$, ${\text{\bf n}}$ and $\bm \epsilon$, in Eq. \[eqs:ALM\]. For ${\text{\bf x}}$, calculating the partial derivative of $f$ with respect to ${\bf x}^*$ gives the gradient descent update rule $$\begin{aligned} \label{eqs:WFP_X} {\bf x}^{(k+1)} &=& {\bf x}^{(k)} - \Delta_{{\bf x}}\frac{\partial f}{\partial {\bf x}^*}|_{{\bf x} = {\bf x}^{(k)}} \\\nonumber &=& {\bf x}^{(k)} - \Delta_{{\bf x}}{\bf A}^H\left[(|{\bf Ax}|^2 + {\bf n} - {\bf b)\odot (Ax)}\right]|_{{\bf x} = {\bf x}^{(k)}},\end{aligned}$$ with $\Delta_{{\bf x}}$ being the gradient descent step size of $\bf x$.
Similarly, we can set the step size of ${\text{\bf n}}$ as $\Delta_{{\bf n}}$, and update ${\text{\bf n}}$ following $$\begin{aligned} \label{eqs:WFP_N}{\bf n}^{(k+1)} &=& {\bf n}^{(k)} - \Delta_{{\bf n}}\frac{\partial f}{\partial {\bf n}}|_{{\bf n} = {\bf n}^{(k)}} \\\nonumber &=& {\bf n}^{(k)} - \Delta_{{\bf n}}\left[(|{\bf Ax}|^2 + {\bf n -b}) + \mu({\bf n\odot n} - 9{ \sigma}^2 + {\bm \epsilon\odot \bm \epsilon})\odot 2{\bf n}\right]|_{{\bf n} = {\bf n}^{(k)}}.\end{aligned}$$ To update $\bm \epsilon$, we set the partial derivative of $f$ with respect to $\bm \epsilon$ equal to $\bf 0$, and derive the closed-form updating rule as $$\begin{aligned} \label{eqs:WFP_E} & &\frac{\partial f}{\partial {\bm \epsilon}}|_{{\bm \epsilon} = {\bm \epsilon}^{(k+1)}} = \left[\mu({\bf n\odot n} - 9\sigma^2 + {\bm \epsilon\odot \bm \epsilon})\odot 2{\bm \epsilon}\right]|_{{\bm \epsilon} = {\bm \epsilon}^{(k+1)}}= {{\text{\bf 0}}}\\\nonumber &\Rightarrow& {\bm \epsilon}^{(k+1)} = \sqrt{\max\left(9\sigma^2 - {\bf n\odot n}, {\text{\bf 0}}\right)}.\end{aligned}$$ Based on the above derivations, the proposed WFP algorithm is summarized in Alg. \[alg:WFP\]. For the initialization, we set ${\text{\bf x}}^{(0)}$ as the spatial spectrum of the up-sampled version of the LR image captured under normal-incidence illumination. ${\text{\bf n}}^{(0)}={\text{\bf 0}}$, $\bm \epsilon^{(0)}={\text{\bf 0}}$, $k = 0$ As for the parameter settings of $\Delta_{{\bf x}}$ and $\Delta_{{\bf n}}$, similar to WF, we assign $\Delta_{{\bf x}}^{(k)} = \frac{\theta^{(k)}}{||{\text{\bf x}}^{(0)}||^2}$ and $\Delta_{{\bf n}}^{(k)} = \frac{\theta^{(k)}}{||\bm \sigma||^2}$, where $\theta^{(k)} = \min\left(1-e^{-k/k_0},\theta_{max}\right)$. As stated in [@Phase_Wirtinger], $k_0 = 330$ and $\theta_{max} = 0.4$ work well, so we also use these settings in our algorithm. We have released our source code for non-commercial use, which can be downloaded [here](http://www.sites.google.com/site/lihengbian).
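A minimal NumPy sketch of the WFP iteration derived above (our addition; the released code is in Matlab). Here ${\bf A}$ is a generic dense sampling matrix, the constant step sizes and weight $\mu$ are illustrative choices rather than the $\theta^{(k)}$ schedule, and the per-measurement averaging of the ${\bf x}$ gradient is a stabilizing choice of ours:

```python
import numpy as np

def wfp(A, b, x0, sigma, mu=1.0, steps=600, dx=0.02, dn=0.3):
    """Sketch of WFP: a Wirtinger gradient step on x, a gradient step on n,
    and the closed-form update of the relaxation vector epsilon."""
    m = len(b)
    x = x0.astype(complex)
    n = np.zeros(m)
    eps = np.full(m, 3.0 * sigma)            # satisfies the constraint at n = 0
    for _ in range(steps):
        Ax = A @ x
        r = np.abs(Ax) ** 2 + n - b          # data residual
        # Wirtinger step on x (averaged over measurements, a stabilizing choice)
        x = x - dx * (A.conj().T @ (r * Ax)) / m
        # Gradient step on n: data term plus the penalty on the noise constraint
        r = np.abs(A @ x) ** 2 + n - b
        n = n - dn * (r + 2.0 * mu * (n * n - 9 * sigma ** 2 + eps * eps) * n)
        # Closed-form epsilon update
        eps = np.sqrt(np.maximum(9 * sigma ** 2 - n * n, 0.0))
    return x, n
```

On a small random instance this joint scheme drives the data residual $|{\bf Ax}|^2 + {\bf n} - {\bf b}$ well below its initial value.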
Experiments {#sec:Experiments} =========== In this section, we conduct a series of experiments on both synthetic and real captured data to validate the proposed WFP algorithm. Experiment on synthetic data {#sec:Simulations} ---------------------------- **Algorithms for comparison:** To demonstrate the performance and advantages of the proposed WFP algorithm, we run both WFP and the AP algorithm on simulated FP data. Besides, to investigate WFP’s ability to handle acquisition noise, we further compare its results with those produced by applying denoising before or after AP reconstruction. Here we use BM3D [@BM3D] for denoising, considering its promising results and high efficiency, as stated in [@BM3D_2]. For simplicity, we refer to these two denoising-augmented methods as “BM3D+AP” and “AP+BM3D”, respectively. **Criterion:** Besides the visual results, we also utilize two quantitative criteria to assess the recovery performance of the above methods. The first is the peak signal-to-noise ratio (PSNR), which has traditionally been widely used to assess the quality of processed images against a reference. PSNR intuitively describes the intensity difference between two images, and is smaller for lower-quality recovery. The other criterion is the structural similarity index (SSIM) [@SSIM], which measures the spatial structural closeness between two images, and thus agrees with human perception better than PSNR. The SSIM score ranges from 0 to 1, and is higher when two images share more similar structural information. Note that here both PSNR and SSIM are calculated on the intensity images.
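For reference, PSNR can be computed as follows (our addition; this is the standard definition, and the peak value of 1.0 assumes images normalized to $[0, 1]$):

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB; `peak` is the maximum possible
    intensity (1.0 for images normalized to [0, 1])."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(img, dtype=float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```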
![image](Simulation_NoiseLevel_GaussianNoise.pdf){width="100.00000%"}

| $\sigma$  | 0.002 | 0.004 | 0.006 | 0.008 |
|-----------|-------|-------|-------|-------|
| PSNR (dB) | 26.14 | 25.52 | 24.80 | 23.97 |
| SSIM      | 0.76  | 0.72  | 0.66  | 0.60  |

**Experiment parameter settings:** The convergence experiment in [@Phase_Wirtinger] shows that the Wirtinger flow algorithm works successfully when the number of measurements is more than 6 times the number of signal entries to be recovered. In terms of the FP problem, assuming that the overlap ratio is $\xi$, the LR images are of $m\times m$ pixels and we capture $k\times k$ LR images, the sampling ratio between measurements and signal entries can be calculated as $$\label{eqs:SamplingRatio} \eta = \frac{m^2k^2}{\left[(1-\xi)m(k-1)+m\right]^2} \approx \frac{1}{\left(1-\xi\right)^2}.$$ Similar to [@FPM_Nature], by setting $\xi = 65\%$, we get a sampling ratio of around 8. This ratio is higher than the minimum convergence requirement ($\sim$6) in [@Phase_Wirtinger]. Therefore, we adopt the above experiment settings in our simulation, namely $\xi = 65\%$, $k = 15$, $m = 100$. **Results:** Based on the above specifications, the captured image volume in our simulation experiment is synthesized by the following three steps: 1) we apply the FFT to the original HR image, and select subregions corresponding to different incident angles, by multiplying the HR spectrum with an ideal pupil function (all ones inside the pupil circle and zeros outside). 2) We shift these spatial sub-spectra to the origin, and apply an inverse Fourier transform to recover the complex LR images in the spatial domain. 3) We retain only the intensity of these complex LR images, and add Gaussian white noise to obtain the simulated captured noisy images. In our implementation, we use the ’Lena’ and the ’Map’ images ($512\times512$ pixels) from the USC-SIPI image database [@Data] as the HR intensity and phase image, respectively.
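The sampling-ratio arithmetic above can be checked directly (our addition): the large-$k$ approximation $1/(1-\xi)^2 \approx 8.16$ is what gives "around 8", while the exact finite-$k$ value for $\xi = 0.65$, $k = 15$, $m = 100$ is about 6.5, still above the $\sim$6 requirement.

```python
def sampling_ratio(xi, k, m):
    """Exact measurement-to-unknown ratio and its large-k approximation."""
    exact = (m ** 2 * k ** 2) / ((1 - xi) * m * (k - 1) + m) ** 2
    approx = 1.0 / (1.0 - xi) ** 2
    return exact, approx
```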
The LR images’ pixel numbers are set to one tenth of those of the HR image along both directions. First, we apply the proposed WFP to the simulated data with varying noise levels to study the algorithm’s performance. Specifically, the standard deviation $\sigma$ of the additive noise ranges from 0.002 to 0.008 with a 0.002 interval. Testing shows that 500 iterations are enough for WFP to converge, and hence we set the iteration number of WFP to 500. The visual and quantitative results are shown in Fig. \[fig:Simulation\_NoiseLevel\] and Tab. \[tab:Simulation\_NoiseLevel\], respectively. From the results we can see that WFP works well in reconstructing both intensity and phase information. Besides, as the noise level increases, the reconstruction quality does not degrade much. This illustrates the robustness of WFP to different noise levels, and thus its wide applicability. Then, we compare WFP with the three other methods mentioned above, i.e., AP, BM3D+AP, and AP+BM3D, to show their pros and cons. Here the noise level is fixed at $\sigma = 0.004$. The iteration number of conventional AP is set to 50 to ensure convergence. The simulated acquisition image under normal illumination is shown in Fig. \[fig:Simulation\_Methods\](a). The quantitative and visual reconstruction results are shown in Tab. \[tab:Simulation\_Methods\] and Fig. \[fig:Simulation\_Methods\](b)-(d), respectively. Due to the noise corruption, the SNR of the captured images is very low, especially for the images corresponding to high spatial frequencies. As a result, the intensity and phase images reconstructed by AP in Fig. \[fig:Simulation\_Methods\](b) are very noisy. When BM3D is applied before the AP reconstruction, much high-frequency information is filtered out, so serious artifacts appear in the final recovery, as shown in Fig. \[fig:Simulation\_Methods\](c).
If we apply BM3D after the AP reconstruction, though most of the noise is removed, many crucial image details are filtered out as well (see the areas of hat tassels in Fig. \[fig:Simulation\_Methods\](d)). In contrast, the proposed WFP incorporates noise suppression into the reconstruction framework, and conducts these two operations jointly. This largely avoids error accumulation in successive processing and achieves higher performance. Consequently, WFP obtains satisfactory reconstruction results with less noise and more details. In a nutshell, WFP largely outperforms the other three methods, on both visual and quantitative metrics. ![image](Simulation_Methods_GaussianNoise.pdf){width="95.00000%"}

|              | AP         | BM3D+AP | AP+BM3D | WFP            |
|--------------|------------|---------|---------|----------------|
| PSNR (dB)    | 13.33      | 20.03   | 18.71   | [**25.52**]{}  |
| SSIM         | 0.18       | 0.41    | 0.66    | [**0.72**]{}   |
| Running time | [**12s**]{}| 14s     | 15s     | 1min           |

This performance advantage comes at the expense of higher computational cost. We implement all four methods in Matlab on a computer with an Intel i7 3.6 GHz CPU, 16 GB RAM and a 64-bit Windows 7 system. The running times of the different methods are listed in the bottom row of Tab. \[tab:Simulation\_Methods\]; WFP takes longer than the other methods. Experiment on real captured data -------------------------------- To further validate the effectiveness of WFP, we build an FPM setup to capture LR images as inputs for FP reconstruction. Similar to [@AFP], an upright microscope with a 2$\times$ (NA = 0.1) objective is used in the platform. The LED array is placed around 8 cm under the specimen, and the lateral distance between two adjacent LEDs is 4 mm. The central wavelength of the incident light is 632 nm. The pixel size of the captured raw images is $\sim$1.85 $\mu$m.
First, we capture the LR images corresponding to the 15$\times$15 LED positions, with an exposure time of 1 ms per LED, and apply both AP and WFP to the image set for performance comparison. Fig. \[fig:Real\_Experiment\](a) shows the LR images of the USAF chart and the blood smear sample captured under normal illumination. The HR reconstruction results of AP and WFP are presented in Fig. \[fig:Real\_Experiment\](b) and Fig. \[fig:Real\_Experiment\](c), respectively, where the left columns show recovered amplitudes, and the right columns present recovered phases. ![image](RealExperiment.pdf){width="\textwidth"} From the results we can conclude three advantages of WFP over conventional AP. First, the reconstruction results of WFP have higher resolution than those of AP, and thus contain more details (see the close-ups for a clearer comparison). Second, WFP effectively suppresses noise (see the smooth regions in the USAF chart). Third, WFP reconstructs much more accurate phase than AP. Note that there exist some phase jumps in the WFP recovery in the feature areas of the USAF chart, where the magnitudes are close to zero. This is because in these areas the phase can take any value without affecting the successful magnitude recovery. Then, to quantitatively evaluate the advantage of WFP over conventional AP in terms of exposure time, we increase the exposure time, and apply AP to the data acquired under longer exposure times. See the amplitude reconstruction results in Fig. \[fig:TimeEvaluation\]. The proposed WFP resolves, under 1 ms exposure time, a resolution comparable to that of conventional AP under 5 ms exposure time. This indicates that WFP can save around $80\%$ of the exposure time compared to AP while achieving the same reconstruction accuracy.
In all, WFP offers a feasible way for the FP technique to reconstruct highly accurate results in the presence of non-negligible capturing noise, such as under short exposure time, or when the hardware is not so precise. This will be of great help in real applications. Conclusions and discussions {#sec:Conclusions} =========================== This paper proposes a reconstruction framework termed Wirtinger flow optimization for Fourier Ptychography (WFP). Based on the recently reported Wirtinger flow algorithm, WFP formulates the FP recovery as a quadratic optimization problem, and presents a solution utilizing the gradient descent scheme. By incorporating priors on the capturing noise, WFP can save around 80$\%$ of the exposure time for the present FP technique, without obvious performance degradation. Results on both synthetic and real captured data validate the effectiveness of WFP. One extension of WFP is to handle non-uniform noise. This can easily be realized by treating the standard deviation of the noise $\sigma$ in Eq. (\[eqs:Noise\_Neq\]) as spatially non-uniform, namely by changing ${\sigma} \in \mathbb{R}$ to ${\bm\sigma} \in \mathbb{R}^{m}$. Besides, as a flexible optimization framework, WFP can also be easily extended by introducing other priors and constraints. For example, we can incorporate the sparsity of the latent HR image [@Sparse_1] into our framework, which may further reduce the snapshot number and thus the acquisition time. We can also introduce the total variation prior [@TV_1; @TV_2] into WFP to further suppress noise in the reconstruction results. What’s more, in the optimization model in Eq. (\[eqs:Model\]), the sampling matrix can be composed of any kind of linear operations (down-sampling and inverse Fourier transform in conventional FP).
Therefore, WFP is applicable to different variants of conventional FP, such as multiplexed FP [@FPM_Multiplexing_1; @FPM_Multiplexing_2] and extended FP for fluorescence imaging [@FPM_Fluorescence]. Despite its advantages over the conventional FP algorithm in multiple respects, WFP is limited in running efficiency, i.e., WFP needs more running time than AP. Therefore, shortening the running time of WFP is one direction of our future work. Utilizing accelerated gradient descent methods and introducing parallel computation techniques are two promising options for speeding it up.
--- abstract: 'The kinematic evolution of axisymmetric magnetic fields in rotating magnetospheres of relativistic compact objects is analytically studied, based on relativistic Ohm’s law in stationary axisymmetric geometry. By neglecting the poloidal flows of plasma in simplified magnetospheric models, we discuss self-excited dynamos due to the frame-dragging effect (originally pointed out by Khanna & Camenzind), and we propose alternative processes to generate axisymmetric magnetic fields against ohmic dissipation. The first process (which may be called induced excitation) is driven by a background uniform magnetic field in addition to the dragging of inertial frames. It is shown that excited multipolar components of poloidal and azimuthal fields are sustained as stationary modes, and outgoing Poynting flux converges toward the rotation axis. The second is a self-excited dynamo through azimuthal convection current, which is found to be effective if plasma rotation becomes highly relativistic with a sharp gradient in the angular velocity. In this case no frame-dragging effect is needed, and the coupling between charge separation and plasma rotation becomes important. We briefly discuss the results in relation to active phenomena in the relativistic magnetospheres.' author: - AKIRA TOMIMATSU title: | RELATIVISTIC DYNAMOS IN MAGNETOSPHERES\ OF ROTATING COMPACT OBJECTS --- DPNU-99-26 INTRODUCTION ============ Interesting high-energy phenomena have been observed in various compact astrophysical systems, such as pulsars, X-ray binaries and active galactic nuclei. Though the energy-release processes via radiation emission and jet formation are not yet fully understood, strong magnetic fields near the central objects can be a crucial component for explaining the observational features, and many works have been devoted to the development of relativistic magnetohydrodynamical (MHD) models.
In particular, Khanna & Camenzind (1994, 1996a) have recently proposed a self-excitation mechanism of axisymmetric magnetic fields, based on relativistic Ohm’s law in Kerr geometry. This effect, called the self-excited gravitomagnetic dynamo, is due to the coupling between the relativistic frame-dragging of a rotating central object and the rotational motion of the surrounding plasma, and such dynamo action is expected to play an important role in astrophysical phenomena as a trigger of relativistic plasma flows. Unfortunately, numerical calculations by Brandenburg (1996) have shown that in a wide set of standard thin disk models around a rotating black hole no magnetic field can be maintained against ohmic dissipation. Cowling’s antidynamo theorem for axisymmetric magnetic fields still holds even near a rotating black hole in the situations previously considered. Therefore, a new viewpoint will be necessary if the kinematic theory of resistive MHD presented by Khanna & Camenzind (1994, 1996a) is applied to the problem of generating axisymmetric magnetic fields (see also Núñez 1997). In this paper we do not adhere to the accretion disk models, but analytically pursue the processes permissible in rotating magnetospheres of compact objects. Though the basic field equations become much simpler than in the full MHD theory in the kinematic treatment, which neglects the feedback of the magnetic fields on the plasma velocity through the Lorentz force, some additional approximations are required to make any analytical approach possible. We would like to restrict attention to the simplified cases in which only the essential interactions between poloidal and azimuthal magnetic fields allowing the existence of growing or stationary modes are preserved. The plasma injected into the rotating magnetosphere will partially accrete onto the central object and partially escape to infinity.
The poloidal velocity of plasma flows, however, can remain sub-Alfvénic in some intermediate (quasi-equilibrium) region between the outer light cylinder and the surface of the central object, where the plasma angular velocity may differ from the Keplerian one, because gravitational forces do not dominate over the other interactions (see, e.g., Camenzind 1987; Takahashi et al. 1990). The stationary axisymmetric structure of the magnetosphere is mainly determined in the framework of ideal MHD theory under the frozen-in condition. We consider the evolution of electromagnetic fields perturbed by the presence of small magnetic diffusivity. Though the motion of the plasma is assumed to be unperturbed, a complicated dynamical evolution of the electromagnetic fields due to the poloidal flows will occur. Therefore, by fixing the poloidal velocity to be zero in the quasi-equilibrium region, we study the generation of magnetic fields, which proceeds slowly on a long diffusion timescale. Our purpose is to point out some basic aspects of the perturbed fields governed by the plasma rotation and the frame-dragging effect. We obtain the main results that (i) if a background uniform magnetic field exists, it can sustain excited poloidal and azimuthal multipolar modes even in slow-rotation cases, and (ii) a sufficient charge separation generated by plasma rotation with a relativistic speed can cause a self-excited dynamo without any frame-dragging effect. We discuss these processes of magnetic field generation (which have been missed in the above-mentioned numerical models) in relation to active phenomena observed in the relativistic magnetospheres. In the following we use units such that $c=G=1$, and the axisymmetric stationary metric denoted by $g_{ab}$ has signs ($-$ + + +).
KINEMATIC EQUATIONS FOR AXISYMMETRIC DYNAMOS ============================================ A contribution of accreting plasma and electromagnetic fields around a rotating compact object to the stationary axisymmetric gravitational field remains negligibly small. Therefore, we can always study the MHD interaction under a fixed gravitational field with the line element of the form $$ds^{2} \ = \ g_{tt}dt^{2}+2g_{t\phi}dtd\phi+g_{\phi\phi}d\phi^{2}+g_{rr}dr^{2}+g_{\theta\theta}d\theta^{2} \ ,$$ where $r$, $\theta$ and $\phi$ are the spherical coordinates, and the metric $g_{ab}$ is assumed to be independent of the time coordinate $t$ and the azimuthal angle coordinate $\phi$. The angular velocity of the dragging of inertial frames is denoted by $\omega\equiv -g_{t\phi}/g_{\phi\phi}$. We define $g$ to be the determinant of $(g_{ab})$, and for the lapse function $\alpha\equiv\sqrt{-g_{tt}+(g_{t\phi}^{2}/g_{\phi\phi})}$ we have $\sqrt{-g}=\alpha\sqrt{g_{\phi\phi}g_{rr}g_{\theta\theta}}$. We do not limit the metric to the Kerr form, with a view to later discussions. Further, we do not use explicitly the 3+1 formalism developed by Thorne, Price, & Macdonald (1986); instead we treat relativistic Ohm’s law in the covariant form $$F_{ab}u^{b} \ = \ 4\pi\eta(j_{a}-Qu_{a}) \ ,$$ where $\eta$ is the magnetic diffusivity. According to the kinematic MHD theory the plasma 4-velocity $u^{a}(r,\theta)$ is also fixed, and $Q\equiv-j^{a}u_{a}$ is the electric charge density measured by an observer comoving with the plasma. The electric current density $j^{a}$ should be rewritten in terms of the electromagnetic field $F_{ab}$ via the Maxwell equations $4\pi j^{a}=\nabla_{b}F^{ab}$, where $\nabla_{b}$ denotes the covariant derivative with respect to the metric $g_{ab}$.
Because we consider time-dependent fields under the assumption of axisymmetry, the poloidal magnetic components $F_{r\phi}$ and $F_{\theta\phi}$ and the azimuthal electric component $F_{t\phi}$ are given by the single scalar potential $\Psi(t,r,\theta)$ via the equation $$F_{a\phi} \ = \ \partial_{a}\Psi \ .$$ Then the four field variables $\Psi$, $F^{rt}$, $F^{\theta t}$ and $F^{r\theta}$ remain to be solved, and the equation added to relativistic Ohm’s law is the azimuthal part of the Faraday law $$\partial_{t}F_{r\theta}+\partial_{r}F_{\theta t}+\partial_{\theta}F_{tr} \ = \ 0 \ .$$ These field equations for the kinematic evolution are still too complicated for an analytical discussion of the behavior of their solutions. Hence, the following investigation is limited to models with no poloidal flow, i.e., $$u^{r} \ = \ u^{\theta} \ = 0 \ ,$$ which will be justified if the plasma is located in a quasi-equilibrium region slightly distant from the surface of the central object, and a dynamical evolution of electromagnetic fields caused by poloidal plasma flows with sub-Alfvénic velocities is not essential to the problem of dynamo action. Further, we assume $\eta$ to be a very small constant, and the time variation of the fields is described by a long diffusion timescale such that $t\sim r^{2}/\eta$. Then it is convenient to introduce the variable $T\equiv\eta t$ instead of $t$. As a result of this assumption concerning the order of $\eta$ the field equations can have consistent solutions, if the ratio of the amplitudes is understood to be $$\eta F_{r\theta}/\Psi \ = O(1) \ .$$ Now let us give the field equations, simplified according to the above-mentioned approximations.
By virtue of equation (5) the poloidal part of equation (2) leads to $$F^{At}u_{t}+F^{A\phi}u_{\phi} \ = \ \eta j^{A} \ ,$$ where the poloidal current density is approximately given by $$\eta j^{A} \ = \ \frac{1}{\sqrt{-g}}\epsilon^{AB}\partial_{B}F \ , \ \ \epsilon^{r\theta} \ = \ -\epsilon^{\theta r} \ = \ 1 \ ,$$ in which the displacement current is neglected. (Hereafter the superscripts and subscripts $A$ and $B$ denote the coordinates $r$ and $\theta$.) Because we have $F_{A\phi}=\partial_{A}\Psi$, these relations are used to express $F^{At}$ and $F^{A\phi}$ in terms of $\Psi$ and $F\equiv\eta\sqrt{-g}F^{r\theta}$, for example, in the approximated form of the proper charge density $Q$ given by $$4\pi Q \ = \ -\frac{u_{a}}{\sqrt{-g}}\partial_{b}(\sqrt{-g}F^{ab}) \ \simeq \ F^{tA}\partial_{A}u_{t}+F^{\phi A}\partial_{A}u_{\phi} \ ,$$ which is derived from equation (2) by using the current conservation $\nabla_{a}j^{a}=0$ and the inequality $|Q|\gg|\eta\nabla_{a}(Qu^{a})|$. Note that for the stationary and axisymmetric velocity field $u_{a}$ we have $\partial_{t}u_{a}=\partial_{\phi}u_{a}=0$, while the rotational motion can generate the non-vanishing components $\partial_{A}u_{t}$ and $\partial_{A}u_{\phi}$ for $A=r, \theta$. Then, except in the case where both $u_{t}$ and $u_{\phi}$ are constant, $\partial_{b}u_{a}$ is non-symmetric under the permutation of $a$ and $b$, which assures the validity of equation (9) for the estimation of charge separation. From the Maxwell equations we also obtain approximately the azimuthal current density $j_{\phi}$, which is substituted into the azimuthal part of equation (2) of the form $$u^{t}\partial_{T}\Psi \ = \ -4\pi(j_{\phi}-Qu_{\phi}) \ ,$$ with equation (9) for the proper charge density $Q$. 
Then, we arrive at the final result of the evolution equation for $\Psi$ $$\partial_{T}\Psi \ = \ S_{1}+S_{2}+S_{3} \ ,$$ where $$S_{1} \ = \ \frac{g_{\phi\phi}}{u^{t}\sqrt{-g}}\partial_{A}(\frac{\sqrt{-g}}{g_{\phi\phi}}\partial^{A}\Psi) \ ,$$ $$S_{2} \ = \ \frac{u_{\phi}}{(\alpha u^{t})^{2}g_{\phi\phi}}\partial^{A}\Psi(g_{\phi\phi}\partial_{A}\omega+u_{t}\partial_{A}u_{\phi}-u_{\phi}\partial_{A}u_{t}) \ ,$$ $$S_{3} \ = \ \frac{1}{(\alpha u^{t})^{2}\sqrt{-g}}\epsilon^{AB}\partial_{A}F \{g_{\phi\phi}\partial_{B}\omega-u_{\phi}(\partial_{B}u_{t}+\omega\partial_{B}u_{\phi})\} .$$ The final term $S_{3}$ can contribute to the excitation of $\Psi$ through the coupling to $F$. The ohmic diffusion is mainly due to the first term $S_{1}$, and the role of $S_{2}$ (i.e., self-generation or self-destruction) will depend on the topology of $\Psi$. The Faraday law (4) is the evolution equation for $F$, which approximately reduces to the form $$\partial_{T}F \ = \ \frac{\alpha^{2}g_{\phi\phi}}{\sqrt{-g}}\{\partial_{A}(\frac{\sqrt{-g}}{\alpha^{2}u^{t}g_{\phi\phi}}\partial^{A}F) -\epsilon^{AB}\partial_{A}\Psi\partial_{B}\Omega\} \ ,$$ where $\Omega\equiv u^{\phi}/u^{t}$ is the plasma angular velocity. Note that the right-hand side of equation (15) is also decomposed into the terms for diffusion and amplification of $F$. In the following sections we will study the coupled equations (11) and (15) for $\Psi$ and $F$ to see the efficiency of excitation mechanisms. THE FRAME-DRAGGING EFFECT ========================= In the case of no poloidal velocity $u^{A}=0$ we can give the specific energy and angular momentum of plasma denoted by $-u_{t}$ and $u_{\phi}$ as follows, $$-u_{t} \ = \ \gamma\{\alpha^{2}+g_{\phi\phi}\omega(\Omega-\omega)\} \ , \ \ u_{\phi} \ = \ \gamma g_{\phi\phi}(\Omega-\omega)$$ where $\gamma\equiv u^{t}=1/\sqrt{\alpha^{2}-g_{\phi\phi}(\Omega-\omega)^{2}}$ is the Lorentz factor of rotating plasma. 
If the plasma is co-rotating with the angular velocity of the background magnetosphere, the Lorentz factor $\gamma$ becomes very large in a region close to the light cylinder surface, and the term originating from the convection current density $Qu_{\phi}$ will dominate in $S_{3}$. In this section we would like to restrict attention to the frame-dragging effect, by neglecting any contribution of such charge separation under the condition $\Omega=\omega$. The $\omega$-$\Omega$ Dynamo ---------------------------- Recall that in the numerical models with $u_{\phi}=0$ calculated by Brandenburg (1996) the dynamo action can work only if $\omega$ is taken to be artificially large. To see this result roughly from equations (11) and (15) with $u_{\phi}=0$, let us consider simplified forms of the metric components as functions of the coordinate $\theta$ such that $g_{\phi\phi}$ and $-g$ are proportional to $\sin^{2}\theta$, and $\omega$, $g_{rr}$ and $g_{\theta\theta}$ are independent of $\theta$. (Now the Lorentz factor $u^{t}=1/\alpha$ depends only on $r$. This simplified metric may be regarded as the Kerr metric in the slow-rotation approximation.) Then, by setting the time behavior of $\Psi$ and $F$ to be $\exp(\mu T)$, we obtain $$\frac{g_{\theta\theta}}{1-x^{2}}(\frac{\mu}{\alpha}-L_{1})\Psi \ = \ \partial_{x}^{2}\Psi-\frac{\sigma}{\alpha}\partial_{x}F \ ,$$ and $$\frac{g_{\theta\theta}}{1-x^{2}}(\frac{\mu}{\alpha}-L_{2})F \ = \ \partial_{x}^{2}F+\alpha\sigma\partial_{x}\Psi \ ,$$ where $x\equiv\cos\theta$. 
The efficiency of dynamo action will be determined by the behavior of $\sigma(r)$, which depends on the metric components as follows, $$\sigma \ = \ -\frac{g_{\phi\phi}g_{\theta\theta}}{\sin\theta\sqrt{-g}}\frac{d\omega}{dr} \ .$$ Further, $L_{1}\Psi$ and $L_{2}F$ on the left-hand sides of equations (17) and (18) represent the ohmic diffusion of $\Psi$ and $F$ in the radial direction, which are given by $$L_{1}\Psi \ = \ \frac{g_{\phi\phi}}{\sqrt{-g}}\partial_{r}(\frac{\sqrt{-g}}{g_{\phi\phi}g_{rr}}\partial_{r}\Psi) \ ,$$ and $$L_{2}F \ = \ \frac{\alpha g_{\phi\phi}}{\sqrt{-g}}\partial_{r}(\frac{\sqrt{-g}}{\alpha g_{\phi\phi}g_{rr}}\partial_{r}F) \ .$$ We can decompose $\Psi$ and $F$ into modes symmetric or antisymmetric with respect to the equatorial plane (Núñez 1996), and in this paper we would like to consider only the configuration of magnetic fields with the symmetry such that $\Psi(-x)=\Psi(x)$ and $F(-x)=-F(x)$. (This corresponds to the dipole-type topology of $\Psi$. Quadrupole-type fields may be more easily excited near the equatorial plane. Here we do not pursue this possibility.) For physical modes the functions $\Psi$ and $F$ should satisfy the boundary condition that both $\Psi/(1-x^{2})$ and $F/(1-x^{2})$ remain finite on the polar axis ($x^{2}=1$). Then, if the diffusion terms $L_{1}\Psi$ and $L_{2}F$ are neglected, it is easy to obtain $$\Psi \ = \ \Psi_{0}\{1-(-1)^{n}\cos(\sigma x)\} \ , \ \ F \ = \ (-1)^{n}\alpha\Psi_{0}\sin(\sigma x)$$ as a stationary eigenmode with $\mu=0$. By virtue of the boundary condition for $\Psi$ and $F$ the eigenvalue of $\sigma$ is given by $\sigma=n\pi$, where $n=1, 2, \cdots$. Of course, if one takes account of the diffusion effect due to the terms $L_{1}\Psi$ and $L_{2}F$, the minimum value of $\sigma$ should become larger than $\pi$. 
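As a quick numerical sanity check, one can verify by finite differences that the stationary eigenmode above satisfies equations (17) and (18) with $\mu=0$ and the diffusion terms dropped, and that $\Psi$ vanishes on the polar axis when $\sigma=n\pi$. This is a minimal sketch; the value of the lapse $\alpha$ is an arbitrary test choice.

```python
import math

def psi(x, n, sigma):
    # poloidal potential of the stationary (mu = 0) eigenmode quoted in the text
    return 1.0 - (-1.0) ** n * math.cos(sigma * x)

def f_tor(x, n, sigma, alpha):
    # toroidal field function of the same eigenmode
    return (-1.0) ** n * alpha * math.sin(sigma * x)

def d1(g, x, h=1e-5):
    return (g(x + h) - g(x - h)) / (2.0 * h)

def d2(g, x, h=1e-5):
    return (g(x + h) - 2.0 * g(x) + g(x - h)) / h ** 2

alpha = 0.8                      # lapse at the fixed radius (arbitrary test value)
max_resid = 0.0
for n in (1, 2, 3):
    sigma = n * math.pi          # eigenvalue sigma = n*pi
    P = lambda x: psi(x, n, sigma)
    F = lambda x: f_tor(x, n, sigma, alpha)
    for x in (-0.9, -0.5, -0.1, 0.3, 0.7):
        r1 = d2(P, x) - (sigma / alpha) * d1(F, x)   # eq. (17), mu = 0, no L1
        r2 = d2(F, x) + alpha * sigma * d1(P, x)     # eq. (18), mu = 0, no L2
        max_resid = max(max_resid, abs(r1), abs(r2))

# boundary condition: Psi must vanish on the polar axis x = +/-1
bc = max(abs(psi(s, n, n * math.pi)) for n in (1, 2, 3) for s in (-1.0, 1.0))
```

The residuals and the boundary values are zero to finite-difference accuracy for every $n$.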
However, for the Kerr metric on the equatorial plane, we can estimate the value of $\sigma$ to be $$\sigma \ = \ \frac{2Ma(3r^{2}+a^{2})}{r^{4}+a^{2}(r^{2}+2Mr)} \ \leq \ 2 \ ,$$ where $M$ and $a$ are the mass and rotation parameters, respectively. Therefore, one can expect no self-excitation of fields to occur near the Kerr black hole. In this sense the rotation of the black hole turns out to be too slow to excite the dynamo action. Induced Excitation ------------------ Now let us propose an alternative process which can work even in the slow-rotation case and may be called [*induced excitation*]{} instead of [*self-excitation*]{}. The key assumption is the existence of a background poloidal field denoted by $\Psi_{B}$ as a stationary solution of the vacuum Maxwell equations $\nabla_{b}F^{ab}=0$. (The typical example is given by Wald’s (1974) solution for the Kerr hole immersed in a uniform magnetic field with the form $\Psi_{B}=B_{0}\{ag_{t\phi}+(g_{\phi\phi}/2)\}$.) If plasma is injected into the magnetosphere, the structure should be deformed by the motion of the plasma. However, the original vacuum field in the background magnetosphere can remain dissipation-free and play the role of a stationary source field in equations (17) and (18). Then, we will be able to find a stationary solution written as $\Psi=\Psi_{B}+\Psi_{L}$ and $F=F_{L}$. These perturbed parts $\Psi_{L}$ and $F_{L}$ represent localized fields with amplitudes decreasing for large $r$, and the dynamical balance between the ohmic dissipation and the excitation via the frame-dragging effect for $\Psi_{L}$ and $F_{L}$ is induced by the background field $\Psi_{B}$. (This process has also been mentioned in Khanna & Camenzind (1996b) and Khanna (1997), and numerical examples have been presented by Khanna (1998c).) 
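The bound $\sigma\leq2$ on the equatorial plane can be checked numerically by scanning radii outside the horizon over the full range of spin parameters; a minimal sketch in units $M=1$, with the bound saturated at extreme rotation ($a=M$, $r=M$):

```python
import math

def sigma_kerr(r, a, M=1.0):
    # equatorial-plane sigma for the Kerr metric, as quoted in the text
    return 2.0 * M * a * (3.0 * r ** 2 + a ** 2) / (r ** 4 + a ** 2 * (r ** 2 + 2.0 * M * r))

sigma_max = 0.0
for i in range(1, 101):                       # spin a/M from 0.01 up to 1 (extreme Kerr)
    a = i / 100.0
    r_h = 1.0 + math.sqrt(1.0 - a ** 2)       # horizon radius for M = 1
    for j in range(2001):
        r = r_h + j * 0.01                    # radii from the horizon outward
        sigma_max = max(sigma_max, sigma_kerr(r, a))
```

The scan never exceeds 2, and the maximum is attained at $a=1$, $r=1$ (i.e., the horizon of the extreme Kerr hole).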
To verify this induced excitation as a viable process, we write the stationary fields $\Psi$ and $F$ satisfying equations (17) and (18) in the expansion forms $$\Psi \ = \ (1-x^{2})\sum_{n=0}^{\infty}q_{2n}(r)\frac{dP_{2n+1}(x)}{dx} \ ,$$ and $$F \ = \ (1-x^{2})\sum_{n=0}^{\infty}q_{2n+1}(r)\frac{dP_{2n+2}(x)}{dx} \ ,$$ according to the boundary condition on the polar axis and the symmetry with respect to the equatorial plane. With the help of the recurrence relations for the Legendre polynomials $P_{n}$ we have the equations for the coefficients $q_{n}$ as follows, $$g_{\theta\theta}(L_{1}q_{2n})-(2n+1)(2n+2)q_{2n} \ = \ \frac{\sigma}{\alpha}(c_{2n+1}q_{2n+1}-c_{2n-1}q_{2n-1}) \ ,$$ and $$g_{\theta\theta}(L_{2}q_{2n+1})-(2n+2)(2n+3)q_{2n+1} \ = \ -\alpha\sigma(c_{2n+2}q_{2n+2}-c_{2n}q_{2n}) \ ,$$ where $c_{n}=(n+1)(n+2)/(2n+3)$. It is clear that the higher multipolar modes are generated from the lower ones through the action of the dragging of inertial frames represented by $\sigma$. Our main purpose is to point out a remarkable difference in efficiency between self-excitation and induced excitation. Hence, for further analytical study, we consider a distant region where the metric components are approximately given by $$\alpha \ = \ 1 \ , \ \ g_{\phi\phi} \ = \ r^{2}\sin^{2}\theta \ , \ \ g_{rr} \ = \ g_{\theta\theta}/r^{2} \ = \ 1 \ ,$$ keeping the dragging of inertial frames written as $$\omega \ = \ 2J/r^{3}$$ for the angular momentum $J$ of the central object. Equation (29) leads to $$\sigma \ = \ 6J/r^{2} \ \ll \ 1 \ .$$ In order to ensure that $\sigma=-r^{2}d\omega/dr$ remains very small even for a small $r$ in the following calculation, one may assume the behavior $$\sigma \ = \ (r/r_{c})^{2}\sigma_{c} \ , \ \ \sigma_{c} \ = \ 6J/r_{c}^{2}$$ in the inner region $r<r_{c}$, where $r_{c}$ will be of the order of the radius of the central object. 
In the slow-rotation limit the recurrence relations (26) and (27) for $n\geq1$ reduce to the approximated form $$\frac{d^{2}q_{n+1}}{dr^{2}}-\frac{(n+2)(n+3)}{r^{2}}q_{n+1} \ = \ (-1)^{n}\frac{\sigma c_{n}}{r^{2}}q_{n} \ ,$$ because we can neglect $q_{n+2}$ in comparison with $q_{n}$ on the right-hand sides. (Consistently with equation (32), the ratio $q_{n+1}/q_{n}$ is assumed to be of the order of $\sigma$, and the convergence of the expansions (24) and (25) is assured.) Now, if $q_{n}$ is known, it is easy to obtain the higher multipolar mode $q_{n+1}$. Note that in the flat spacetime the azimuthal component of the magnetic field measured in an orthonormal frame is given by $B_{T}=F/\eta r\sin\theta$. Then, $q_{2n+1}/r$ should vanish in the limit $r\rightarrow\infty$ and be regular in the limit $r\rightarrow0$. The function $q_{2n}$ for any localized poloidal flux should satisfy the same boundary conditions. For $\sigma=0$ we have the two independent solutions for each $q_{n+1}$ as follows, $$q_{n+1} \ = \ r^{-(n+2)} \ , \ \ \ q_{n+1} \ = \ r^{n+3} \ ,$$ violating either the outer boundary condition or the inner one. If there exists a localized field corresponding to a lower multipolar mode $q_{n}$, however, the frame-dragging effect giving $\sigma\neq0$ can generate the higher one $$q_{n+1} \ = \ r^{-(n+2)}\int_{0}^{r}b_{n}(\rho)\rho^{2n+4}d\rho \ ,$$ where $$b_{n}(r) \ = \ \int_{r}^{\infty}(-1)^{n+1}\frac{\sigma c_{n}}{\rho^{n+4}}q_{n}(\rho)d\rho \ .$$ Therefore, the key problem is the generation of the lowest mode $q_{0}$, for which we obtain the equation $$\frac{d^{2}q_{0}}{dr^{2}}-\frac{2}{r^{2}}q_{0} \ = \ \frac{6\sigma}{5r^{2}}q_{1} \ ,$$ because $c_{n}=0$ for $n=-1$. 
Note that the remaining source field for $q_{0}$ is only the lowest mode $q_{1}$ of the azimuthal magnetic field, satisfying the equation $$\frac{d^{2}q_{1}}{dr^{2}}-\frac{6}{r^{2}}q_{1} \ = \ \frac{2\sigma}{3r^{2}}q_{0} \ .$$ Both modes should be self-consistently generated according to these coupled equations. As was previously mentioned, a localized solution as a result of self-excited dynamo will be prohibited for a small $\sigma$. For example, we can check the suppression of the action even in the extreme case of assuming a sharp gradient in the angular velocity $\omega$ at $r=r_{c}$: The value of $\omega$ is given by a nonzero constant $\Delta\omega$ in the inner region $r<r_{c}$, while it becomes zero in the outer region $r>r_{c}$. In this case, though the localized modes $q_{0}$ and $q_{1}$ can be continuous with values denoted by $q_{c0}$ and $q_{c1}$, the gradients $dq_{0}/dr$ and $dq_{1}/dr$ must have discontinuous gaps estimated to be $-3q_{c0}/r_{c}$ and $-5q_{c1}/r_{c}$, respectively. (We have $q_{n}\sim r^{n+2}$ for $r<r_{c}$, while $q_{n}\sim r^{-(n+1)}$ for $r>r_{c}$.) Then, integrating equations (36) and (37) over the narrow region with the sharp gradients, we obtain the eigenvalue of the discontinuous gap given by $r_{c}\Delta\omega=5\sqrt{5}/2$. Núñez (1997) has also treated a case with a sharp gradient in $\Omega$ by using equation (29) for $\omega$, and he has claimed the existence of growing modes for a smaller discontinuous gap $r_{c}\Delta\Omega\leq1$. This difference will be mainly due to the estimation of the diffusion term given by $\partial^{2}\Psi/\partial r^{2}$, which was rewritten into the form $\epsilon^{2}\partial^{2}\Psi/\partial x^{2}$ under the scale transformation $r=r_{c}-(\epsilon/2)+\epsilon x$. It seems to be unacceptable that the amplitude of the diffusion term is suppressed by the small factor $\epsilon^{2}$. 
In fact, the straightforward application of the scale transformation will lead to $\partial^{2}\Psi/\partial r^{2}=\epsilon^{-2}\partial^{2}\Psi/\partial x^{2}$. In our calculation the diffusion terms $d^{2}q_{0}/dr^{2}$ and $d^{2}q_{1}/dr^{2}$ are estimated to be very large by virtue of the steep change of the gradients $dq_{0}/dr$ and $dq_{1}/dr$. Then, the ohmic dissipation due to the diffusion terms dominates the evolution, unless the discontinuous gap $\Delta\omega$ itself becomes unphysically large. Therefore, let us consider the induced excitation for the physically plausible behavior of $\sigma$ given by equations (30) and (31). In the case $\sigma=0$, the two independent solutions for $q_{0}$ have the forms $r^{2}$ and $r^{-1}$, corresponding to a uniform magnetic field and a dipole one, which will be relevant to the magnetospheres of black holes and neutron stars. Of course, these vacuum fields should be regarded as background fields in the magnetospheres, whose origin cannot be attributed to the dynamo action discussed here. Our problem is rather to study the field generation against the ohmic dissipation in the background magnetospheres. We limit the analysis to the case of the uniform background field with the strength $B_{0}$. Then, for $\sigma\neq0$ due to the frame-dragging effect the solution $q_{0}$ is modified into the form $$q_{0} \ = \ \frac{B_{0}r^{2}}{2}+\sigma_{c}^{2}p_{0} \ .$$ To obtain $q_{1}$ to first order in $\sigma_{c}$, it is sufficient to substitute the uniform background field into equation (37), and the lowest mode of the localized azimuthal field is found to be $$q_{1} \ = \ B_{0}J(\frac{2r_{c}^{2}}{15r^{2}}-\frac{1}{3})$$ for $r>r_{c}$, and $$q_{1} \ = \ B_{0}J(-\frac{8r^{3}}{15r_{c}^{3}}+\frac{r^{4}}{3r_{c}^{4}})$$ for $r<r_{c}$. 
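The piecewise solution for $q_{1}$ above can be verified numerically: it satisfies equation (37) with the uniform background field $q_{0}=B_{0}r^{2}/2$ and the behavior of $\sigma$ given by equations (30) and (31), and it is continuous at $r=r_{c}$. A minimal finite-difference sketch; the values of $B_{0}$, $J$ and $r_{c}$ are arbitrary test choices.

```python
# arbitrary test parameters
B0, J, rc = 1.3, 0.7, 2.0
sc = 6.0 * J / rc ** 2                      # sigma_c, eq. (31)

def sigma(r):
    # frame-dragging shear: eq. (30) outside r_c, eq. (31) inside
    return 6.0 * J / r ** 2 if r > rc else (r / rc) ** 2 * sc

def q0(r):
    # uniform background field, leading order of the lowest poloidal mode
    return B0 * r ** 2 / 2.0

def q1(r):
    # outer/inner solutions for q1 quoted in the text
    if r > rc:
        return B0 * J * (2.0 * rc ** 2 / (15.0 * r ** 2) - 1.0 / 3.0)
    return B0 * J * (-8.0 * r ** 3 / (15.0 * rc ** 3) + r ** 4 / (3.0 * rc ** 4))

def d2(g, r, h=1e-4):
    return (g(r + h) - 2.0 * g(r) + g(r - h)) / h ** 2

# residual of eq. (37): q1'' - 6 q1/r^2 - (2 sigma/(3 r^2)) q0 = 0
resid = max(abs(d2(q1, r) - 6.0 * q1(r) / r ** 2
                - 2.0 * sigma(r) * q0(r) / (3.0 * r ** 2))
            for r in (0.5, 1.0, 1.5, 3.0, 5.0, 10.0))

jump = abs(q1(rc + 1e-9) - q1(rc - 1e-9))   # continuity at r = r_c
```

Both branches give a vanishing residual, and the two expressions match at $r=r_{c}$ (both equal $-B_{0}J/5$ there).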
This azimuthal part can work as a source of the localized poloidal field $p_{0}$ in equation (36), and we obtain $$p_{0} \ = \ \frac{B_{0}r_{c}^{3}}{675r}(\frac{r_{c}^{3}}{r^{3}}-\frac{45r_{c}}{4r}+\frac{104}{7})$$ for $r>r_{c}$, and $$p_{0} \ = \ \frac{B_{0}r^{2}}{675}(-\frac{4r^{3}}{r_{c}^{3}}+\frac{45r^{4}}{28r_{c}^{4}}+7)$$ for $r<r_{c}$. The generated poloidal part $p_{0}(r)$ has a maximum in the outer region $r>r_{c}$, and the poloidal field lines along which $(1-x^{2})p_{0}(r)$ is constant show a loop structure in the poloidal plane, which is maintained against the ohmic dissipation. In the slow-rotation limit such a modification of the background uniform field due to the generated poloidal field remains small. Nevertheless, we can expect that a remarkable structure of poloidal field lines as a result of the induced excitation appears in the magnetosphere, if the result presented here is extended to the fast-rotation case of the Kerr geometry. The role of the generated azimuthal field $B_{T}=3\sin\theta\cos\theta q_{1}(r)/\eta r$ with a strength of the order of $B_{0}J/\eta r$ may be astrophysically more important even in the slow-rotation limit. In the presence of azimuthal magnetic fields, outflows of plasma from the central region may be efficiently driven by the Lorentz force. 
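Similarly, one can check by finite differences that the expressions for $p_{0}$ above satisfy equation (36), which with $q_{0}=B_{0}r^{2}/2+\sigma_{c}^{2}p_{0}$ reduces to $p_{0}''-2p_{0}/r^{2}=6\sigma q_{1}/(5r^{2}\sigma_{c}^{2})$, and that $p_{0}$ is continuous at $r=r_{c}$ (parameter values are arbitrary test choices):

```python
# arbitrary test parameters
B0, J, rc = 1.3, 0.7, 2.0
sc = 6.0 * J / rc ** 2                      # sigma_c, eq. (31)

def sigma(r):
    # frame-dragging shear: eq. (30) outside r_c, eq. (31) inside
    return 6.0 * J / r ** 2 if r > rc else (r / rc) ** 2 * sc

def q1(r):
    # lowest azimuthal mode quoted in the text
    if r > rc:
        return B0 * J * (2.0 * rc ** 2 / (15.0 * r ** 2) - 1.0 / 3.0)
    return B0 * J * (-8.0 * r ** 3 / (15.0 * rc ** 3) + r ** 4 / (3.0 * rc ** 4))

def p0(r):
    # outer/inner expressions for p0 quoted in the text
    if r > rc:
        return B0 * rc ** 3 / (675.0 * r) * (rc ** 3 / r ** 3
                                             - 45.0 * rc / (4.0 * r) + 104.0 / 7.0)
    return B0 * r ** 2 / 675.0 * (-4.0 * r ** 3 / rc ** 3
                                  + 45.0 * r ** 4 / (28.0 * rc ** 4) + 7.0)

def d2(g, r, h=1e-4):
    return (g(r + h) - 2.0 * g(r) + g(r - h)) / h ** 2

# residual of eq. (36): p0'' - 2 p0/r^2 - 6 sigma q1/(5 r^2 sigma_c^2) = 0
resid = max(abs(d2(p0, r) - 2.0 * p0(r) / r ** 2
                - 6.0 * sigma(r) * q1(r) / (5.0 * r ** 2 * sc ** 2))
            for r in (0.5, 1.0, 1.5, 3.0, 5.0, 10.0))
jump = abs(p0(rc + 1e-9) - p0(rc - 1e-9))   # continuity at r = r_c
```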
Further, a Poynting flux is excited if azimuthal magnetic fields exist in the magnetosphere, and in our approximation the Poynting flux vector $P^{A}$ ($A=r,\theta$) is given by $$P^{A} \ = \ \frac{1}{4\pi}F_{Bt}F^{AB} \ .$$ In the asymptotic region $r\gg r_{c}$ the components $P^{r}$ and $P^{\theta}$ are easily estimated to be $$P^{r} \ = \ \frac{B_{0}^{2}J^{2}}{2\pi\eta r^{3}}\sin^{2}\theta\cos^{2}\theta \ ,$$ and $$P^{\theta} \ = \ -\frac{B_{0}^{2}J^{2}}{4\pi\eta r^{4}}\sin\theta\cos\theta (1+\cos^{2}\theta) \ .$$ Interestingly, the Poynting flux propagates outward from the central region and converges toward the rotation axis along the curves given by $(1+\cos^{2}\theta)/r=$ const. Though the generation of azimuthal magnetic fields is also possible through ideal and non-relativistic MHD processes, the dynamical balance between the ohmic dissipation and the induced excitation due to the frame-dragging effect can be an origin of the magnetospheric structure responsible for producing high-energy phenomena in the polar region. THE EFFECT OF CHARGE SEPARATION =============================== In the case of vanishing $u_{\phi}$ no self-excitation of magnetic fields was found, even if an artificial sharp gradient of the frame-dragging angular velocity $\omega$ was assumed. This is mainly because a sufficiently large $\omega$ is not attainable for the gravitational field of a central object rotating with a limited angular momentum. We also obtain the upper limit of the angular velocity $\Omega$ of the plasma from the requirement $(\Omega-\omega)^{2}<\alpha^{2}/g_{\phi\phi}$ (see eq. \[16\]). Therefore, even in the cases $\Omega\neq\omega$, the $\omega$-$\Omega$ coupling will remain inefficient for a self-excited dynamo, at least within the analytical framework developed here. (As was previously mentioned, the existence of growing modes claimed by Núñez (1997) may be a possible route to a self-excited dynamo, if his estimation of the diffusion effect can be justified.) 
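Returning to the Poynting flux of the previous section, the claim that the flux converges along the curves $(1+\cos^{2}\theta)/r=$ const can be checked by showing that the directional derivative of this quantity along the vector field $(P^{r},P^{\theta})$ vanishes. A minimal numerical sketch; the overall factor $B_{0}^{2}J^{2}/2\pi\eta$ is set to unity, since it does not affect the geometry of the flux lines.

```python
import math

K = 1.0   # stands in for B0^2 J^2/(2 pi eta); an overall factor, irrelevant to the flux geometry

def P_r(r, th):
    # radial Poynting component quoted in the text, in units of K
    return K * math.sin(th) ** 2 * math.cos(th) ** 2 / r ** 3

def P_th(r, th):
    # polar Poynting component quoted in the text
    return -K * math.sin(th) * math.cos(th) * (1.0 + math.cos(th) ** 2) / (2.0 * r ** 4)

def C(r, th):
    # candidate streamline invariant: (1 + cos^2 theta)/r
    return (1.0 + math.cos(th) ** 2) / r

h = 1e-6
drift = 0.0
for r in (2.0, 5.0, 10.0):
    for th in (0.3, 0.8, 1.2):
        dCdr = (C(r + h, th) - C(r - h, th)) / (2.0 * h)
        dCdth = (C(r, th + h) - C(r, th - h)) / (2.0 * h)
        # directional derivative of C along the flux vector (P^r, P^theta)
        drift = max(drift, abs(P_r(r, th) * dCdr + P_th(r, th) * dCdth))
```

The drift vanishes to finite-difference accuracy, confirming that $(1+\cos^{2}\theta)/r$ is constant along the flux lines.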
However, the situation may change crucially in the presence of the charge separation given by equation (9). For the plasma co-rotating with the angular velocity of the background magnetosphere, the value of $u_{\phi}$ can grow without limit near the light cylinder surface, and a large gradient of $u_{\phi}$ (i.e., a large proper charge density $Q$) will also be allowed to occur there. Then, the azimuthal convection current $Qu_{\phi}$ which appears in equation (10) can play a key role in the self-excitation of poloidal flux instead of the frame-dragging effect. To study the $Q$-$\Omega$ coupling as a mechanism of self-excited dynamo, let us restrict the following discussion to the case of no gravity and assume the plasma angular velocity to behave as $\Omega=\Omega(R)$. (Hereafter, we use the cylindrical coordinates $R$, $Z$ and $\phi$.) Now the modes $\Psi$ and $F$ satisfying equations (11) and (15) can have the forms $$\Psi \ = \ \psi(R)\cos(kZ)e^{\mu T} \ , \ \ F \ = \ f(R)\sin(kZ)e^{\mu T} \ ,$$ according to the assumed symmetry with respect to the equatorial plane. Using the Lorentz factor given by $\gamma=1/\sqrt{1-(R\Omega)^{2}}$, the special relativistic versions of equations (11) and (15) reduce to $$L\psi-R\Omega^{2}\gamma^{2}\frac{d\psi}{dR} \ = \ -kR\Omega\frac{d\gamma}{dR}f \ ,$$ and $$Lf \ = \ -kR\frac{d\Omega}{dR}\gamma\psi \ ,$$ where $L$ is the differential operator defined by $$L \ = \ \gamma R\frac{d}{dR}(\frac{1}{\gamma R}\frac{d}{dR})-(k^{2}+\gamma\mu) \ .$$ The plasma angular velocity $\Omega$ may be nearly constant in an inner region (i.e., $\Omega\simeq\Omega_{i}$, which is equal to the angular velocity of the rigidly rotating background stationary magnetosphere), while it should decrease as $R\rightarrow1/\Omega_{i}$. For mathematical simplicity, we represent such behavior as a sharp gradient in $\Omega$ at $R=R_{c}<1/\Omega_{i}$. 
(Núñez (1997) has also studied this case in terms of the $\omega$-$\Omega$ coupling without considering charge separation.) By virtue of the discontinuous gap $\Delta\Omega$ the gradients $d\psi/dR$ and $df/dR$ should also have the discontinuous gaps $\Delta(d\psi/dR)$ and $\Delta(df/dR)$ at $R=R_{c}$. Then, using the equality $$\frac{R\Omega}{\gamma}d\gamma \ = \ -d(R\Omega+\frac{1}{2}\ln\frac{1-R\Omega}{1+R\Omega}) \ ,$$ the integrations of equations (47) and (48) over the narrow region with the sharp gradient in $\Omega$ lead to $$\Delta(\frac{1}{\gamma}\frac{d\psi}{dR}) \ = \ kf_{c}\Delta(R\Omega+\frac{1}{2}\ln\frac{1-R\Omega}{1+R\Omega}) \ ,$$ and $$\Delta(\frac{1}{\gamma}\frac{df}{dR}) \ = \ -k\psi_{c}\Delta(R\Omega) \ ,$$ where $\psi=\psi_{c}$ and $f=f_{c}$ at $R=R_{c}$. For further analysis we consider the discontinuous change from $\Omega=\Omega_{i}$ (i.e., $\gamma=1/\sqrt{1-R_{c}^{2}\Omega_{i}^{2}}\equiv\gamma_{i}$) to $\Omega=0$ (i.e., $\gamma=1$). Then, for the modes with $k\gg 1/R_{c}$, the amplitudes of $\psi$ and $f$ should decrease with the forms $\exp(-k_{o}(R-R_{c}))$ at $R>R_{c}$ and $\exp(k_{i}(R-R_{c}))$ at $R<R_{c}$, where $$k_{o} \ = \ \sqrt{k^{2}+\mu} \ ,$$ and $$k_{i} \ = \ \sqrt{k^{2}+\gamma_{i}\mu} \ .$$ From these behaviors of $\psi$ and $f$ we can easily estimate the discontinuous gaps $\Delta(d\psi/dR)$ and $\Delta(df/dR)$ at $R=R_{c}$, and by solving the eigenvalue problem of equations (51) and (52) the growth rate $\mu$ is found to be $$\frac{\mu}{k^{2}} \ = \ \frac{a(\gamma_{i})}{b(\gamma_{i})} \ ,$$ where $$a \ = \ \sqrt{\frac{\gamma_{i}-1}{\gamma_{i}+1}}\ln(\gamma_{i}+\sqrt{\gamma_{i}^{2}-1}) -2 \ ,$$ and $$b \ = \ 1+\frac{2}{\gamma_{i}+1}(\gamma_{i}+\frac{(\gamma_{i}-1)^{3/2}}{\sqrt{\gamma_{i}+1}\ln(\gamma_{i}+\sqrt{\gamma_{i}^{2}-1})-2\sqrt{\gamma_{i}-1}})^{1/2} \ .$$ The stationary mode with $\mu=0$ corresponds to the case $\gamma_{i}=\gamma_{m}$ satisfying $a(\gamma_{m})=0$, and the numerical value is estimated to be 
$\gamma_{m}\simeq5.55$. If plasma rotates with a relativistic velocity giving $\gamma>\gamma_{m}$ in a region around a central object, a sufficient decrease of $\Omega$ within a range $R_{c}(1-\delta)<R<R_{c}(1+\delta)$ ($\delta\ll1$) can excite growing magnetic fields on a scale $1/k$ much smaller than the distance $R_{c}$. An interesting possibility is that this self-excitation of magnetic fields produces relativistic outflows of plasma across the light cylinder as a result of the back-reaction, though such a process is beyond the scope of the kinematic theory considered in this paper. The spatial variation $\Delta\Omega$ of the angular velocity may also become smaller as the back-reaction works. Then, the self-excited dynamo will stop, and the acceleration of outflows will occur only through a stationary MHD process. The very active phase of self-excitation of magnetic fields and violent plasma acceleration (which will be responsible for flare-like events of radiation flux) can recur only when a region with highly relativistic angular velocity appears again. CONCLUSIONS =========== In a very simplified model we have succeeded in demonstrating a self-excited dynamo through the coupling between $\Omega$ and $Q$ without any frame-dragging effect: The poloidal magnetic field generates an azimuthal magnetic field with the help of the differential rotation of the plasma, and the azimuthal field induces charge separation if the Lorentz factor of the plasma rotation is not constant. The azimuthal convection current carried with the rotating charged plasma can amplify the original poloidal field. Though in our calculation a sharp gradient of $\Omega$ has been assumed, the condition essential to the self-excited dynamo would be the existence of large spatial variations $\Delta(R\Omega)$ and $\Delta\gamma$ such that $\Delta(R\Omega)\Delta\gamma\geq4.5$ within a radial scale $\Delta R$ smaller than $R$. 
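The marginal Lorentz factor $\gamma_{m}\simeq5.55$ quoted above is the root of $a(\gamma)=0$ for the coefficient $a(\gamma_{i})$ given in the previous section; a quick bisection reproduces it:

```python
import math

def a_coef(g):
    # coefficient a(gamma_i) from the eigenvalue relation in the text:
    # sqrt((g-1)/(g+1)) * ln(g + sqrt(g^2 - 1)) - 2
    return math.sqrt((g - 1.0) / (g + 1.0)) * math.log(g + math.sqrt(g ** 2 - 1.0)) - 2.0

# bisection for the marginal Lorentz factor gamma_m with a(gamma_m) = 0;
# a(2) < 0 and a(20) > 0 bracket the root
lo, hi = 2.0, 20.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if a_coef(lo) * a_coef(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
gamma_m = 0.5 * (lo + hi)
```

The root comes out near 5.55, in agreement with the estimate in the text.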
Unlike for the angular velocity $\omega$ of the dragging of inertial frames, we can expect $\Delta\gamma\gg1$ to be an astrophysically permissible case, e.g., if we consider the rotational motion of plasma near the light cylinder in the magnetospheres of relativistic compact objects. However, a significant poloidal motion may occur near the light cylinder before the $\gamma$ of the plasma rotation becomes much larger than unity. Then, the charge separation may be suppressed in the magnetosphere. The self-excited dynamo due to the $\Omega$-$Q$ coupling will be able to work only if strong background fields balance the centrifugal force in a plasma rotating with highly relativistic speed. Hence, this dynamo action should be understood as an origin of flare-like instability permissible near the light cylinder rather than a mechanism for generating strong background fields. We have also discussed the induced excitation of magnetic fields which occurs through the frame-dragging effect acting on a uniform background magnetic field. This is a process leading to a new equilibrium configuration of magnetic field lines against the ohmic dissipation. Our analysis has been limited to the case of slow rotation, in which the higher poloidal multipoles remain very small, and the extension to rapid rotation is an important problem to be solved. Though no flare-like activity is expected for this process, the magnetospheric structure with outgoing Poynting flux converging toward the rotation axis is an interesting result. In this paper we have clarified only the fundamental aspects of the kinematic generation of magnetic fields based on relativistic Ohm’s law, and the astrophysical implications for black-hole or neutron-star magnetospheres should be confirmed in more realistic models. In particular, taking account of the presence of poloidal plasma flows will be an important task, since they may act as an antidynamo. 
Further, in the cases $\Omega\neq\omega$, the $Q$-$\Omega$ coupling can also be an important origin of the induced excitation even if the self-excited dynamo does not occur. The combined effects of the $\omega$-$\Omega$ and $Q$-$\Omega$ couplings remain unexamined in this paper. To treat the problem of charge separation more appropriately, one may need two-component plasma theories (see Khanna 1998a, 1998b). Axisymmetric dynamo action in charged plasma is an interesting problem for future investigations. The author thanks Masaaki Takahashi and Masashi Egi for valuable discussions and Ramon Khanna, the referee, for suggestions on improving the manuscript. This work was supported in part by the Grant in-aid for Scientific Research (C) of the Ministry of Education, Science, Sports and Culture of Japan (No.10640257). Brandenburg, A. 1996, , 465, L115 Camenzind, M. 1987, , 184, 341 Khanna, R. 1997, in Second International Sakharov Conf. on Physics, ed. I. M. Dremin and A. M. Semikhatov (World Scientific), 134 Khanna, R. 1998a, , 294, 673 Khanna, R. 1998b, , 295, L6 Khanna, R. 1998c, in Proceedings of the Third William Fairbank Meeting on the Lense-Thirring Effect, ed. R. Ruffini et al. (World Scientific) Khanna, R., & Camenzind, M. 1994, , 435, L129 Khanna, R., & Camenzind, M. 1996a, , 307, 665 Khanna, R., & Camenzind, M. 1996b, , 313, 1028 Núñez, M. 1996, , 54, 7506 Núñez, M. 1997, , 79, 796 Takahashi, M., Nitta, S., Tatematsu, Y., & Tomimatsu, A. 1990, , 363, 206 Thorne, K. S., Price, R. H., & Macdonald, D. A. 1986, Black Holes: The Membrane Paradigm (New Haven: Yale Univ. Press) Wald, R. 1974, , 10, 1680
--- abstract: 'We propose a combinatorial algorithm to compute the Hoffman constant of a system of linear equations and inequalities. The algorithm is based on a characterization of the Hoffman constant as the largest of a finite canonical collection of easy-to-compute Hoffman constants. Our algorithm and characterization extend to the more general context where some of the constraints are easy to satisfy as in the case of box constraints. We highlight some natural connections between our characterizations of the Hoffman constant and Renegar’s distance to ill-posedness for systems of linear constraints.' author: - 'Javier Peña[^1]' - 'Juan Vera[^2]' - 'Luis F. Zuluaga[^3]' title: An algorithm to compute the Hoffman constant of a system of linear constraints --- Introduction {#sec.intro} ============ A classical result of @Hoff52 shows that the distance between a point $u \in {{\mathbb R}}^n$ and a non-empty polyhedron $P_{A,b}:=\{x \in {{\mathbb R}}^n: Ax \le b\}$ can be bounded above in terms of the size of the [*residual*]{} vector $(Au-b)_+ := \max(0,Au-b)$. More precisely, for $A \in {{\mathbb R}}^{m\times n}$ there exists a [*Hoffman constant*]{} $H(A)$ that depends only on $A$ such that for all $b\in{{\mathbb R}}^m$ with $P_{A,b} \ne \emptyset$ and all $u\in {{\mathbb R}}^n$, $$\label{eq:simple_erro} {{\mathrm{dist}}}(u,P_{A,b}) \le H(A) \cdot \|(Au-b)_+\|.$$ Here ${{\mathrm{dist}}}(u,P_{A,b}) := \min\{\|u-x\|: x\in P_{A,b}\}.$ The bound  is a type of [*error bound*]{} for the system of inequalities $Ax \le b$, that is, an inequality bounding the distance from a point $u\in {{\mathbb R}}^n$ to a nonempty [*solution set*]{} in terms of a measure of the [*error*]{} or [*residual*]{} of the point $u$. The Hoffman bound  and more general error bounds play a fundamental role in mathematical programming [@Nguy17; @Pang97; @ZhouS17]. 
In particular, Hoffman bounds as well as other related error bounds are instrumental in establishing convergence properties of a variety of algorithms [@BeckS15; @Garb18; @GutmP18; @LacoJ15; @LeveL10; @LuoT93; @NecoNG18; @PenaR16; @WangL14]. Hoffman bounds are also used to measure the optimality and feasibility of a point generated by rounding an optimal point of the continuous relaxation of a mixed-integer linear or quadratic optimization problem [@stein2016; @granot1990]. Furthermore, Hoffman bounds are used in sensitivity analysis [@jourani2000], and to design solution methods for non-convex quadratic programs [@xia2015]. The computational task of calculating or even estimating the constant $H(A)$ is known to be notoriously challenging [@KlatT95]. The following characterization of $H(A)$ from [@guler1995; @KlatT95; @WangL14] is often used in the optimization literature $$\label{eq.popular} H(A) = \max_{J\subseteq \{1,\dots,m\}\atop A_J \text{ full row rank} } \frac{1}{{\displaystyle\min}_{v\in {{\mathbb R}}^J_+, \|v\|^*=1}\|A_J\transp v\|^*}.$$ In  and throughout the paper $A_J \in {{\mathbb R}}^{J\times n}$ denotes the submatrix of $A\in {{\mathbb R}}^{m\times n}$ obtained by selecting the rows in $J \subseteq \{1,\dots,m\}.$ A naive attempt to use  to compute or estimate $H(A)$ is evidently non-viable because, in principle, it requires scanning an enormous number of sets $J\subseteq \{1,\dots,m\}$. A major limitation of  is that it does not reflect the fact that the tractability of computing $H(A)$ may depend on certain structural features of $A$. For instance, the computation of the Hoffman constant $H(A)$ is manageable when the set-valued mapping $x \mapsto Ax + {{\mathbb R}}^m_+$ is surjective, that is, when $A{{\mathbb R}}^n + {{\mathbb R}}^m_+ = {{\mathbb R}}^m$. 
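As an illustration of why the enumeration is the expensive part, the characterization above can be evaluated by brute force for a tiny instance. The sketch below takes $A$ to be the constraint matrix of the box $0\le x\le 1$ in ${{\mathbb R}}^2$ with Euclidean norms, enumerates the full-row-rank submatrices, approximates each inner minimization by random sampling over the nonnegative part of the unit sphere (for this particular $A$ every sampled value equals 1, so the estimate is exact and $H(A)=1$), and spot-checks the resulting error bound. This is a naive sketch for intuition, not the algorithm proposed in the paper.

```python
import itertools
import math
import random

random.seed(0)

# rows of A for the box 0 <= x <= 1 in R^2:  x <= 1, -x <= 0
A = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
b = [1.0, 1.0, 0.0, 0.0]

def full_row_rank(rows):
    if len(rows) == 1:
        return True
    if len(rows) == 2:
        (a1, a2), (b1, b2) = rows
        return abs(a1 * b2 - a2 * b1) > 1e-12
    return False          # more than 2 rows can never be independent in R^2

def min_dual_norm(rows, samples=4000):
    # min over v >= 0, ||v||_2 = 1 of ||A_J^T v||_2, by crude random sampling
    best = float("inf")
    for _ in range(samples):
        v = [abs(random.gauss(0.0, 1.0)) for _ in rows]
        nv = math.sqrt(sum(t * t for t in v)) or 1.0
        v = [t / nv for t in v]
        w = [sum(v[j] * rows[j][k] for j in range(len(rows))) for k in (0, 1)]
        best = min(best, math.hypot(w[0], w[1]))
    return best

H = 0.0
for m in (1, 2):
    for Jset in itertools.combinations(range(4), m):
        rows = [A[j] for j in Jset]
        if full_row_rank(rows):
            H = max(H, 1.0 / min_dual_norm(rows))

# spot-check the Hoffman bound dist(u, P) <= H * ||(Au - b)_+|| for random u
ok = True
for _ in range(200):
    u = (random.uniform(-2.0, 3.0), random.uniform(-2.0, 3.0))
    proj = tuple(min(max(t, 0.0), 1.0) for t in u)   # Euclidean projection onto the box
    dist = math.hypot(u[0] - proj[0], u[1] - proj[1])
    res = math.sqrt(sum(max(sum(a[k] * u[k] for k in (0, 1)) - bi, 0.0) ** 2
                        for a, bi in zip(A, b)))
    ok = ok and dist <= H * res + 1e-9
```

Even for this $4\times2$ system the scan visits every candidate subset $J$; the number of such subsets grows combinatorially with $m$, which is exactly the obstacle the paper's algorithm is designed to avoid.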
In this case, as shown in [@KlatT95; @RamdP16; @robinson1973], the sharpest constant $H(A)$ satisfying  is $$H(A) = {\displaystyle\max}_{y\in {{\mathbb R}}^m\atop \|y\| = 1} \min_{x\in {{\mathbb R}}^n \atop Ax\le y}\|x\| = \frac{1}{{\displaystyle\min}_{v\ge 0,\, \|v\|^* = 1} \|A\transp v\|^*}.$$ This value is computable via convex optimization for suitable norms in ${{\mathbb R}}^n$ and ${{\mathbb R}}^m$. Furthermore, when the set-valued mapping $x \mapsto Ax + {{\mathbb R}}^m_+$ is surjective, the system of linear inequalities $Ax < 0$ is [*well-posed,*]{} that is, it is feasible and remains feasible for small perturbations on $A$. In this case, the value $1/H(A)$ is precisely Renegar’s [*distance to ill-posedness*]{} [@Rene95a; @Rene95b] of $Ax < 0$, that is, the size of the smallest perturbation on $A$ that destroys the well-posedness of $Ax < 0$. We propose a combinatorial algorithm that computes the sharpest Hoffman constant $H(A)$ for any matrix $A$ by leveraging the above well-posedness property. The algorithm is founded on the following characterization $$H(A) = \max_{J \in {\mathcal S}(A) } \frac{1}{{\displaystyle\min}_{v\in {{\mathbb R}}^J_+, \|v\|^*=1}\|A_J\transp v\|^*},$$ where ${\mathcal S}(A)$ is the collection of subsets $J\subseteq \{1,\dots,m\}$ such that the mapping $x\mapsto A_Jx + {{\mathbb R}}^J_+$ is surjective. As we detail in Section \[sec.algo\], this characterization readily enables the computation of $H(A)$ by computing $\min_{v\in {{\mathbb R}}^J_+, \|v\|^*=1}\|A_J\transp v\|^*$ over a much smaller collection ${\mathcal F}\subseteq {\mathcal S}(A)$. The identification of such a collection ${\mathcal F}\subseteq {\mathcal S}(A)$ is the main combinatorial challenge that our algorithm tackles.
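For a concrete sense of how the minimization above becomes tractable, fix the $\ell_\infty$ norm on both ${{\mathbb R}}^n$ and ${{\mathbb R}}^m$, so that both dual norms are $\ell_1$ and $\min_{v\ge 0,\,\|v\|_1=1}\|A\transp v\|_1$ is a linear program. The following sketch (function name and example matrix are ours; SciPy assumed) applies only when $x\mapsto Ax + {{\mathbb R}}^m_+$ is surjective:

```python
import numpy as np
from scipy.optimize import linprog

def hoffman_surjective(A):
    # H(A) = 1 / min{ ||A^T v||_1 : v >= 0, ||v||_1 = 1 }, valid when the
    # mapping x -> Ax + R^m_+ is surjective.  With v >= 0 the constraint
    # ||v||_1 = 1 is linear, and ||A^T v||_1 is linearized via t >= |A^T v|.
    m, n = A.shape
    c = np.concatenate([np.zeros(m), np.ones(n)])     # minimize sum(t)
    A_ub = np.block([[A.T, -np.eye(n)],               #  A^T v - t <= 0
                     [-A.T, -np.eye(n)]])             # -A^T v - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.concatenate([np.ones(m), np.zeros(n)])[None, :]   # sum(v) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + n))
    return 1.0 / res.fun

A = np.array([[2.0, 0.0], [0.0, 3.0]])   # x -> Ax + R^2_+ is surjective
print(hoffman_surjective(A))             # 0.5 = 1/min(2, 3)
```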
Our characterization and algorithm to compute the Hoffman constant also extend to the more general context involving both linear equations and linear inequalities and, perhaps most interestingly, to the case where some equations or inequalities are easy to satisfy. The latter situation arises naturally when some of the constraints are of the form $x \le u$ or $-x \le -\ell$. Our interest in characterizing the Hoffman constant in the more general case that includes easy-to-satisfy constraints is motivated by the recent articles [@BeckS15; @LacoJ15; @Garb18; @GutmP18; @PenaR16; @xia2015]. In each of these articles, suitable Hoffman constants for systems of linear constraints that include easy-to-satisfy constraints play a central role in establishing key properties of modern optimization algorithms. In particular, we show that the [*facial distance*]{} or [*pyramidal width*]{} introduced in [@LacoJ15; @PenaR16] is precisely a Hoffman constant of this kind. The paper makes the following main contributions. First, we develop a novel algorithmic approach to compute or estimate Hoffman constants (see Algorithm \[alg:bb\], Algorithm \[alg:bb.v2\], and Algorithm \[alg:bb.gral\]). Second, our algorithmic developments are supported by a fresh perspective on Hoffman error bounds based on a generic Hoffman constant for polyhedral sublinear mappings (see Theorem \[thm.main\]). This perspective readily yields a characterization of the classical Hoffman constant $H(A)$ for systems of linear inequalities (see Proposition \[prop.Hoffman.gral\]) and a similar characterization of the Hoffman constant for systems including both linear equations and linear inequalities (see Proposition \[prop.Hoffman\]). Third, we develop characterizations of Hoffman constants in the more general context when some of the constraints are easy to satisfy (see Proposition \[prop.Hoffman.gral.rest\], Proposition \[prop.Hoffman.std\], Proposition \[prop.equal.easy\], and Proposition \[prop.facial.dist\]).
Throughout the paper we highlight the interesting and natural but somewhat overlooked connection between the Hoffman constant and Renegar’s distance to ill-posedness [@Rene95a; @Rene95b], which is a cornerstone of condition measures in continuous optimization. The paper is entirely self-contained and relies only on standard convex optimization techniques. We make extensive use of the one-to-one correspondence between the class of sublinear set-valued mappings $\Phi:{{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m$ and the class of convex cones $K\subseteq {{\mathbb R}}^n \times {{\mathbb R}}^m$ defined via $\Phi \mapsto {{\mathrm{graph}}}(\Phi):=\{(x,y)\in {{\mathbb R}}^n \times {{\mathbb R}}^m: y\in \Phi(x)\}$. Our results are related to a number of previous developments in the rich literature on error bounds [@AzeC02; @burke1996; @guler1995; @Li93; @MangS87; @robinson1973; @VanNT09; @Zali03] and on condition measures for continuous optimization [@BurgC13; @EpelF02; @FreuV99a; @FreuV03; @Freu04; @Lewi99; @Pena00; @Pena03; @Rene95a; @Rene95b]. In particular, the expressions for the Hoffman constants in Proposition \[prop.Hoffman.gral\] and Proposition \[prop.Hoffman\] have appeared, albeit in slightly different form or under more restrictive conditions, in the work of Klatte and Thiere [@KlatT95], Li [@Li93], Robinson [@robinson1973], and Wang and Lin [@WangL14]. More precisely, Klatte and Thiere [@KlatT95] state and prove a version of Proposition \[prop.Hoffman\] under the more restrictive assumption that ${{\mathbb R}}^n$ is endowed with the $\ell_2$ norm. Klatte and Thiere [@KlatT95] also propose an algorithm to compute the Hoffman constant which is fairly different from ours. 
Li [@Li93], Robinson [@robinson1973], and Wang and Lin [@WangL14] give characterizations of Hoffman constants that are equivalent to Proposition \[prop.Hoffman.gral\] and Proposition \[prop.Hoffman\] but where the maximum is taken over a different, and typically much larger, collection of index sets. As we detail in Section \[sec.Hoffman\], the expression for $H(A)$ in Proposition \[prop.Hoffman.gral\] can readily be seen to be at least as sharp as some bounds on $H(A)$ derived by Güler et al. [@guler1995] and Burke and Tseng [@burke1996]. We also note that weaker versions of Theorem \[thm.main\] can be obtained from results on error bounds in Asplund spaces such as those developed in the article by Van Ngai and Th[é]{}ra [@VanNT09]. Our goal to devise algorithms to compute Hoffman constants is in the spirit of and draws on the work by Freund and Vera [@FreuV99a; @FreuV03] to compute the distance to ill-posedness of a system of linear constraints. Our approach to Hoffman bounds based on the correspondence between sublinear set-valued mappings and convex cones is motivated by the work of Lewis [@Lewi99]. The characterizations of Hoffman constants when some constraints are easy to satisfy use ideas and techniques introduced by the first author in [@Pena00; @Pena03] and further developed by Lewis [@Lewi05]. The contents of the paper are organized as follows. Section \[sec.Hoffman\] presents a characterization of the Hoffman constant $H(A)$ for $A\in {{\mathbb R}}^{m\times n}$ as the largest of a finite canonical collection of easy-to-compute Hoffman constants of submatrices of $A$. We also give characterizations of similar Hoffman constants for more general cases that include both equality and inequality constraints, and where some of these constraints are easy to satisfy.
Section \[sec.algo\] leverages the results of Section \[sec.Hoffman\] to devise an algorithm that computes the Hoffman constant $H(A)$ for $A\in {{\mathbb R}}^{m\times n}$ as well as other analogous Hoffman constants. Section \[sec.proof\] contains our main theoretical result, namely a characterization of the Hoffman constant ${\mathcal H}(\Phi \vert {{\mathcal L}})$ for a polyhedral sublinear mapping $\Phi:{{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m$ when the residual is known to intersect a particular linear subspace ${{\mathcal L}}\subseteq{{\mathbb R}}^m$. The constant ${\mathcal H}(\Phi \vert {{\mathcal L}})$ is the maximum of the norms of a canonical set of polyhedral sublinear mappings associated to $\Phi$ and ${{\mathcal L}}$. Section \[sec.proof.Hoffman\] presents the proofs of the main statements in Section \[sec.Hoffman\]. Each of these statements is an instantiation of the generic characterization of the Hoffman constant ${\mathcal H}(\Phi \vert {{\mathcal L}})$ for suitable choices of $\Phi$ and ${{\mathcal L}}$. Throughout the paper whenever we work with a Euclidean space ${{\mathbb R}}^d$, we will assume that it is endowed with a norm $\|\cdot\|$ and inner product ${\langle \cdot , \cdot \rangle}$. Unless we explicitly state otherwise, our results apply to arbitrary norms. Hoffman constants for systems of linear constraints {#sec.Hoffman} =================================================== This section describes a characterization for the Hoffman constant $H(A)$ in  for systems of linear inequalities $$Ax \le b.$$ We subsequently consider analogous Hoffman constants for systems of linear equations and inequalities $$\begin{array}{l} Ax = b \\ Cx \le d. \end{array}$$ Although the latter case with equations and inequalities subsumes the former case, for exposition purposes we discuss separately the case with inequalities only. The notation and main ideas in this case are simpler and easier to grasp. 
The crux of the characterization of $H(A)$ based on a canonical collection of submatrices of $A$ is more apparent. We defer the proofs of all propositions in this section to Section \[sec.proof.Hoffman\], where we show that they follow from a characterization of a generic Hoffman constant for polyhedral sublinear mappings (Theorem \[thm.main\]). We will rely on the following terminology. Recall that a set-valued mapping $\Phi: {{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m$ assigns a set $\Phi(x) \subseteq {{\mathbb R}}^m$ to each $x\in {{\mathbb R}}^n.$ A set-valued mapping $\Phi: {{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m$ is [*surjective*]{} if $\Phi({{\mathbb R}}^n) = \bigcup_{x\in {{\mathbb R}}^n} \Phi(x)= {{\mathbb R}}^m$. More generally, $\Phi$ is [*relatively surjective*]{} if $\Phi({{\mathbb R}}^n)$ is a linear subspace. The case of inequalities only ----------------------------- Proposition \[prop.Hoffman.gral\] below gives a characterization of the [*sharpest*]{} Hoffman constant $H(A)$ such that holds. The characterization is stated in terms of a canonical collection of submatrices of $A$ that define surjective sublinear mappings. Let $A \in {{\mathbb R}}^{m\times n}$. We shall say that a set $J\subseteq \{1,\dots,m\}$ is [*$A$-surjective*]{} if the set-valued mapping $x\mapsto Ax + \{s\in {{\mathbb R}}^m: s_J \ge 0\}$ is surjective. Equivalently, $J$ is $A$-surjective if $A_J{{\mathbb R}}^n + {{\mathbb R}}^J_+ = {{\mathbb R}}^J,$ where $A_J$ denotes the submatrix of $A$ determined by the rows in $J$. 
For $A \in {{\mathbb R}}^{m\times n}$ let ${\mathcal S}(A)$ denote the following collection of subsets of $\{1,\dots,m\}$: $${\mathcal S}(A):= \{J \subseteq \{1,\dots,m\}: J \text{ is $A$-surjective}\}.$$ For $A \in {{\mathbb R}}^{m\times n}$ let $$\label{eq.Hoffman.matr} H(A):={\displaystyle\max}_{J\in {\mathcal S}(A)} H_J(A),$$ where $$H_J(A):= {\displaystyle\max}_{v\in {{\mathbb R}}^m \atop \|v\|\le 1} {\displaystyle\min}_{x\in {{\mathbb R}}^n \atop A_Jx \le v_J} \|x\|$$ for each $J\in {\mathcal S}(A)$. By convention $H_J(A) = 0$ if $J = \emptyset$. Observe that the set ${\mathcal S}(A)$ is independent of the particular norms in ${{\mathbb R}}^n$ and ${{\mathbb R}}^m$. On the other hand, the values of $H_J(A), J \in {\mathcal S}(A)$ and $H(A)$ certainly depend on these norms. The constant $H(A)$ defined in  is the sharpest constant satisfying . \[prop.Hoffman.gral\] Let $A\in {{\mathbb R}}^{m\times n}$. Then for all $b \in {{\mathbb R}}^m$ such that $P_{A,b} := \{x\in{{\mathbb R}}^n: Ax\le b\}\ne \emptyset$ and all $u\in {{\mathbb R}}^n$ $$\label{eq.Hoffman.bound.matr} {{\mathrm{dist}}}(u,P_{A,b}) \le H(A)\cdot {{\mathrm{dist}}}(b,Au + {{\mathbb R}}^m_+) \le H(A)\cdot\|(Au-b)_+\|.$$ Furthermore, the first bound is tight: If $H(A)>0$ then there exist $b\in {{\mathbb R}}^m$ such that $P_{A,b} \ne \emptyset$ and $u \not \in P_{A,b}$ such that $${{\mathrm{dist}}}(u,P_{A,b}) = H(A)\cdot {{\mathrm{dist}}}(b,Au + {{\mathbb R}}^m_+).$$ The following proposition complements Proposition \[prop.Hoffman.gral\] and yields a procedure to compute $H_J(A)$ for $J\in {\mathcal S}(A)$. \[prop.Hoffman.A.surj\] Let $A\in {{\mathbb R}}^{m\times n}$. 
Then for all $J\in {\mathcal S}(A)$ $$\label{eq.HA.J} H_J(A) = {\displaystyle\max}_{y\in {{\mathbb R}}^m \atop \|y\|\le 1} {\displaystyle\min}_{x\in {{\mathbb R}}^n \atop A_Jx \le y_J} \|x\| = {\displaystyle\max}_{v \in {{\mathbb R}}^J_+ \atop \|A_J\transp v\|^*\le 1} \|v\|^* = \frac{1}{{\displaystyle\min}_{v \in {{\mathbb R}}^J_+, \; \|v\|^* =1 } \|A_J\transp v\|^*}.$$ If the mapping $x\mapsto Ax+{{\mathbb R}}^m_+$ is surjective then $$\label{eq.HA} H(A) = {\displaystyle\max}_{y\in {{\mathbb R}}^m \atop \|y\|\le 1} {\displaystyle\min}_{x\in {{\mathbb R}}^n \atop Ax \le y} \|x\| = {\displaystyle\max}_{v \in {{\mathbb R}}^m_+ \atop \|A\transp v\|^*\le 1} \|v\|^* = \frac{1}{{\displaystyle\min}_{v \in {{\mathbb R}}^m_+, \; \|v\|^* =1 } \|A\transp v\|^*}.$$ The identity  in Proposition \[prop.Hoffman.A.surj\] has the following geometric interpretation. By Gordan’s theorem, the mapping $x \mapsto Ax + {{\mathbb R}}^m_+$ is surjective if and only if $0 \not\in \{A\transp v: v\ge 0, \, \|v\|^* = 1\}$. When this is the case, the quantity $1/H(A)$ is precisely the distance (in the dual norm $\|\cdot\|^*$) from the origin to $\{A\transp v: v\ge 0, \, \|v\|^* = 1\}$. The latter quantity in turn equals the [*distance to non-surjectivity*]{} of the mapping $x \mapsto Ax + {{\mathbb R}}^m_+$, that is, the norm of the smallest perturbation matrix $\Delta A \in {{\mathbb R}}^{m\times n}$ such that $x \mapsto (A+\Delta A)x + {{\mathbb R}}^m_+$ is not surjective as it is detailed in [@Lewi99]. This distance to non-surjectivity is the same as Renegar’s [*distance to ill-posedness*]{} of the system of linear inequalities $Ax < 0$ defined by $A$. The distance to ill-posedness provides the main building block for Renegar’s concept of [*condition number*]{} for convex optimization introduced in the seminal papers [@Rene95a; @Rene95b] that has been further extended in  [@AmelB12; @BurgC13; @EpelF02; @FreuV99a; @FreuV03; @Freu04; @Pena00; @Pena03] among many other articles. 
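The Gordan condition just mentioned also yields a practical surjectivity test: since $\{v\ge 0: A\transp v = 0\}$ is a cone, it contains a nonzero point for some (equivalently, every) choice of norm if and only if it contains a point with ${{\mathbf 1}}\transp v = 1$, which is an LP feasibility problem. A minimal sketch (function name and example matrices are illustrative; SciPy assumed):

```python
import numpy as np
from scipy.optimize import linprog

def is_surjective(A):
    # x -> Ax + R^m_+ is surjective  iff  no nonzero v >= 0 has A^T v = 0
    # (Gordan's theorem).  The cone {v >= 0 : A^T v = 0} contains a nonzero
    # point iff it contains one with sum(v) = 1, an LP feasibility test.
    m, n = A.shape
    A_eq = np.vstack([A.T, np.ones((1, m))])
    b_eq = np.concatenate([np.zeros(n), [1.0]])
    res = linprog(np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * m)
    return not res.success        # infeasible  <=>  surjective

print(is_surjective(np.eye(2)))                     # True
print(is_surjective(np.array([[1.0], [-1.0]])))     # False: v = (1/2, 1/2)
```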
The identities  and  readily yield the following bound on $H(A)$ previously established in [@burke1996; @guler1995] $$\begin{aligned} H(A) &= \max_{J \in {\mathcal S}(A)} \max\{\|v\|^*: v \in{{\mathbb R}}^J_+, \|A_J\transp v\|^* \le 1\}\\ &= \max_{J \in {\mathcal S}(A)} \max\{\|\tilde v\|^*: \tilde v \in {{\mathrm{ext}}}\{v \in{{\mathbb R}}^J_+, \|A_J\transp v\|^* \le 1\}\} \\ &\le \max\{\|\tilde v\|^*: \tilde v \in {{\mathrm{ext}}}\{v \in{{\mathbb R}}^m_+, \|A\transp v\|^* \le 1\}\}.\end{aligned}$$ In the above expressions ${{\mathrm{ext}}}(C)$ denotes the set of extreme points of a closed convex set $C$. Let $A\in {{\mathbb R}}^{m\times n}.$ Observe that if $J \subseteq F \subseteq\{1,\dots,m\}$ and $F$ is $A$-surjective then $J$ is $A$-surjective. In other words, if $J \subseteq F \in {\mathcal S}(A)$ then $F$ provides a [*certificate of surjectivity*]{} for $J$. Equivalently, if $I \subseteq J$ and $I$ is not $A$-surjective then $J$ is not $A$-surjective, that is, $I$ provides a [*certificate of non-surjectivity*]{} for $J$. The following corollary of Proposition \[prop.Hoffman.gral\] takes this observation a bit further and provides the crux of our combinatorial algorithm to compute $H(A)$. \[corol.sets\] Let $A \in {{\mathbb R}}^{m\times n}$. Suppose ${\mathcal F}\subseteq {\mathcal S}(A)$ and ${\mathcal I}\subseteq 2^{\{1,\dots,m\}}\setminus {\mathcal S}(A)$ provide joint certificates of surjectivity and non-surjectivity for all subsets of $\{1,\dots,m\}$. In other words, for all $J \subseteq \{1,\dots,m\}$ either $J\subseteq F$ for some $F \in {\mathcal F},$ or $I\subseteq J$ for some $I \in {\mathcal I}.$ Then $$H(A) = {\displaystyle\max}_{F \in {\mathcal F}} H_F(A).$$ The conditions on ${\mathcal F}$ and ${\mathcal I}$ imply that for all $J \in {\mathcal S}(A)$ there exists $F \in {\mathcal F}$ such that $J\subseteq F$. The latter condition implies that $H_J(A) \le H_F(A)$. 
Therefore Proposition \[prop.Hoffman.gral\] yields $$H(A) = {\displaystyle\max}_{J \in {\mathcal S}(A)} H_J(A) = {\displaystyle\max}_{F \in {\mathcal F}} H_F(A).$$ Proposition \[prop.Hoffman.gral\], Proposition \[prop.Hoffman.A.surj\], and Corollary \[corol.sets\] extend to the more general context when some of the inequalities in $Ax \le b$ are easy to satisfy. This occurs in particular when some of the inequalities $Ax \le b$ are of the form $ x \le u$ or $ -x \le -\ell$. It is thus natural to consider a refinement of the Hoffman constant $H(A)$ that reflects the presence of this kind of easy-to-satisfy constraints. Let $A\in {{\mathbb R}}^{m\times n}$ and $L\subseteq \{1,\dots,m\}$. Let $L^c:= \{1,\dots,m\}\setminus L$ denote the complementary set of $L$. Define $$\label{eq.Hoffman.matr.rest} H(A\vert L):={\displaystyle\max}_{J\in {\mathcal S}(A)} H_J(A\vert L)$$ where $$H_J(A\vert L):= {\displaystyle\max}_{y\in {{\mathbb R}}^L \atop \|y\|\le 1} {\displaystyle\min}_{x\in {{\mathbb R}}^n \atop A_Jx \le y_J} \|x\| $$ for each $J\in {\mathcal S}(A)$. For ease of notation, the latter expression uses the convention that $y_j = 0$ whenever $j \in J\setminus L$. In particular, observe that $H_J(A\vert L) = 0$ if $J\cap L = \emptyset.$ For $b\in {{\mathbb R}}^m$ and $S\subseteq {{\mathbb R}}^m$ let $${{\mathrm{dist}}}_L(b,S):= \inf\{\|b-y\|: y\in S, (b-y)_{L^c} = 0\}.$$ Evidently ${{\mathrm{dist}}}_L(b,S) < \infty$ if and only if $(S-b) \cap \{y \in {{\mathbb R}}^m: y_{L^c} = 0\}\ne \emptyset$. Proposition \[prop.Hoffman.gral\], Proposition \[prop.Hoffman.A.surj\], and Corollary \[corol.sets\] extend to a system of inequalities of the form $Ax \le b$ where the subset of inequalities $A_{L^c} x \le b_{L^c}$ is easy to satisfy. 
\[prop.Hoffman.gral.rest\] Let $A\in {{\mathbb R}}^{m\times n}$ and $L\subseteq\{1,\dots,m\}.$ Then for all $b \in {{\mathbb R}}^m$ such that $P_{A,b} := \{x\in{{\mathbb R}}^n: Ax\le b\}\ne \emptyset$ and all $u\in \{x \in {{\mathbb R}}^n: A_{L^c} x \le b_{L^c}\}$ $$\label{eq.Hoffman.bound.matr} {{\mathrm{dist}}}(u,P_{A,b}) \le H(A\vert L)\cdot {{\mathrm{dist}}}_L(b,Au + {{\mathbb R}}^m_+) \le H(A\vert L)\cdot\|(A_Lu-b_L)_+\|.$$ Furthermore, the first bound is tight: If $H(A\vert L) >0$ then there exist $b\in {{\mathbb R}}^m$ such that $P_{A,b} \ne \emptyset$ and $u \in \{x \in {{\mathbb R}}^n: A_{L^c} x \le b_{L^c}\} \setminus P_{A,b}$ such that $${{\mathrm{dist}}}(u,P_{A,b}) = H(A\vert L)\cdot {{\mathrm{dist}}}_L(b,Au + {{\mathbb R}}^m_+).$$ \[prop.Hoffman.A.surj.rest\] Let $A\in {{\mathbb R}}^{m\times n}\setminus\{0\}$ and $L\subseteq\{1,\dots,m\}$. Then for all $J\in {\mathcal S}(A)$ $$\label{eq.HA.J.L} H_J(A\vert L) = {\displaystyle\max}_{y\in {{\mathbb R}}^L \atop \|y\|\le 1} {\displaystyle\min}_{x\in {{\mathbb R}}^n\atop A_J x \le y_J} \|x\| = {\displaystyle\max}_{v \in {{\mathbb R}}^J_+ \atop \|A_J\transp v\|^*\le 1} \|v_{J\cap L}\|^* = \frac{1}{{\displaystyle\min}_{v \in {{\mathbb R}}^{J}_+, \; \|v_{J\cap L}\|^* =1 } \|A_{J}\transp v\|^*}$$ with the convention that the denominator in the last expression is $+\infty$ when $J\cap L = \emptyset.$ If the mapping $x\mapsto Ax+{{\mathbb R}}^m_+$ is surjective then $$\label{eq.HA.L} H(A\vert L) = {\displaystyle\max}_{y\in {{\mathbb R}}^L \atop \|y\|\le 1} {\displaystyle\min}_{x\in {{\mathbb R}}^n \atop A_L x \le y} \|x\| = {\displaystyle\max}_{v \in {{\mathbb R}}^m_+ \atop \|A\transp v\|^*\le 1} \|v_L\|^* = \frac{1}{{\displaystyle\min}_{v \in {{\mathbb R}}^m_+, \; \|v_L\|^* =1 } \|A\transp v\|^*}.$$ \[corol.sets.rest\] Let $A \in {{\mathbb R}}^{m\times n}$ and $L\subseteq \{1,\dots,m\}$. 
Suppose ${\mathcal F}\subseteq {\mathcal S}(A)$ and ${\mathcal I}\subseteq 2^{\{1,\dots,m\}}\setminus {\mathcal S}(A)$ are such that for all $J \subseteq \{1,\dots,m\}$ either $J\subseteq F$ for some $F \in {\mathcal F},$ or $I\subseteq J$ for some $I \in {\mathcal I}.$ Then $$H(A\vert L) = {\displaystyle\max}_{F \in {\mathcal F}} H_F(A\vert L).$$ The case of equations and inequalities -------------------------------------- The previous statements extend to linear systems of equations and inequalities combined. Proposition \[prop.Hoffman\] below gives a bound analogous to  for the distance from a point $u\in {{\mathbb R}}^n$ to a nonempty polyhedron of the form $$\{x\in {{\mathbb R}}^n: Ax = b, \, Cx \le d\} = A^{-1}(b) \cap P_{C,d}.$$ Here and throughout this section $A^{-1}:{{\mathbb R}}^m \rightrightarrows {{\mathbb R}}^n$ denotes the inverse set-valued mapping of the linear mapping $x\mapsto Ax$ defined by a matrix $A \in {{\mathbb R}}^{m\times n}$. Let $A \in {{\mathbb R}}^{m\times n}, \, C \in {{\mathbb R}}^{p \times n}$. 
For $J\subseteq\{1,\dots,p\}$ let $[A,C,J]:{{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m \times {{\mathbb R}}^p$ be the set-valued mapping defined by $$x \mapsto {\begin{bmatrix} Ax\\Cx \end{bmatrix}} + \left\{{\begin{bmatrix} 0\\s \end{bmatrix}}: s\in {{\mathbb R}}^p, \, s_J \ge 0 \right\}.$$ Define $${\mathcal S}(A;C):=\{J \subseteq \{1,\dots,p\}: [A,C,J] \text{ is relatively surjective}\}.$$ \[prop.Hoffman\] Let $A \in {{\mathbb R}}^{m\times n}, \, C \in {{\mathbb R}}^{p \times n}$, and $H:={\displaystyle\max}_{J\in {\mathcal S}(A;C)} H_J$ where $$\label{eq.Hoffman} H_J:={\displaystyle\max}_{(y,w)\in (A{{\mathbb R}}^n)\times {{\mathbb R}}^p \atop \|(y,w)\|\le 1} {\displaystyle\min}_{x\in {{\mathbb R}}^n \atop Ax = y, C_Jx \le w_J} \|x\| = {\displaystyle\max}_{(v,z)\in (A{{\mathbb R}}^n)\times{{\mathbb R}}^p_+ \atop z_{J^c} = 0,\|A\transp v + C\transp z\|^* \le 1} \|(v,z)\|^* = \frac{1}{{\displaystyle\min}_{v\in A{{\mathbb R}}^n, z\in {{\mathbb R}}^p_+ \atop z_{J^c} = 0,\|(v,z)\|^* = 1} \|A\transp v + C\transp z\|^*}.$$ Then for all $b\in {{\mathbb R}}^m, d\in {{\mathbb R}}^p$ such that $A^{-1}(b) \cap P_{C,d} := \{x\in {{\mathbb R}}^n: Ax = b, \, Cx \le d\}\ne \emptyset$ and all $u\in {{\mathbb R}}^n$ $${{\mathrm{dist}}}(u,A^{-1}(b)\cap P_{C,d}) \le H\cdot {{\mathrm{dist}}}\left({\begin{bmatrix} b\\d \end{bmatrix}},{\begin{bmatrix} Au\\Cu \end{bmatrix}} + \{0\}\times{{\mathbb R}}^p_+\right) \le H\cdot\left\|{\begin{bmatrix} Au-b\\(Cu-d)_+ \end{bmatrix}}\right\|.$$ The first bound is tight: If $H>0$ then there exist $b\in {{\mathbb R}}^m, \, d\in{{\mathbb R}}^p$ such that $A^{-1}(b)\cap P_{C,d} \ne \emptyset$ and $u \not \in A^{-1}(b)\cap P_{C,d}$ such that $${{\mathrm{dist}}}(u,A^{-1}(b)\cap P_{C,d}) = H\cdot {{\mathrm{dist}}}\left({\begin{bmatrix} b\\d \end{bmatrix}},{\begin{bmatrix} Au\\Cu \end{bmatrix}} + \{0\}\times{{\mathbb R}}^p_+\right).$$ If $[A,C,\{1,\dots,p\}] : {{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m \times {{\mathbb R}}^p$ is surjective 
then $$\label{eq.Hoffman.surj} H = {\displaystyle\max}_{v\in {{\mathbb R}}^m, z\in {{\mathbb R}}^p_+ \atop \|A\transp v + C\transp z\|^* \le 1} \|(v,z)\|^* = \frac{1}{{\displaystyle\min}_{v\in {{\mathbb R}}^m, z\in {{\mathbb R}}^p_+ \atop \|(v,z)\|^* = 1} \|A\transp v + C\transp z\|^*}.$$ Proposition \[prop.Hoffman\] also extends to the case when some of the equations or inequalities in $Ax = b, \; Cx \le d$ are easy to satisfy. We next detail several special but particularly interesting cases. The next proposition considers systems of equations and inequalities when the inequalities are easy. This case plays a central role in [@Garb18]. \[prop.Hoffman.std\] Let $A \in {{\mathbb R}}^{m\times n}, \, C \in {{\mathbb R}}^{p \times n},$ and $H:={\displaystyle\max}_{J\in {\mathcal S}(A;C)} H_J$ where $$\label{eq.Hoffman.std} H_J:= {\displaystyle\max}_{y\in \{Ax: C_Jx\le 0\}\atop \|y\|\le 1} {\displaystyle\min}_{x\in {{\mathbb R}}^n \atop Ax = y, C_Jx \le 0} \|x\| = {\displaystyle\max}_{(v,z)\in (A{{\mathbb R}}^n)\times{{\mathbb R}}^p_+ \atop z_{J^c} = 0, \|A\transp v + C\transp z\|^* \le 1} \|v\|^* = \frac{1}{{\displaystyle\min}_{(v,z)\in (A{{\mathbb R}}^n)\times{{\mathbb R}}^p_+ \atop z_{J^c} = 0,\|v\|^* = 1} \|A\transp v + C\transp z\|^*}.$$ Then for all $b\in {{\mathbb R}}^m, d\in {{\mathbb R}}^p$ such that $A^{-1}(b) \cap P_{C,d}\ne \emptyset$ and all $u\in P_{C,d}$ $${{\mathrm{dist}}}(u,A^{-1}(b)\cap P_{C,d}) \le H\cdot\|Au-b\|.$$ This bound is tight: If $H>0$ then there exist $b\in {{\mathbb R}}^m, \, d\in{{\mathbb R}}^p$ such that $A^{-1}(b)\cap P_{C,d} \ne \emptyset$ and $u \in P_{C,d} \setminus A^{-1}(b)$ such that $${{\mathrm{dist}}}(u,A^{-1}(b)\cap P_{C,d}) = H \cdot \|Au-b\|.$$ If $[A,C,\{1,\dots,p\}] : {{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m \times {{\mathbb R}}^p$ is surjective then $$\label{eq.Hoffman.std.surj} H={\displaystyle\max}_{(v,z)\in {{\mathbb R}}^m\times {{\mathbb R}}^p_+ \atop \|A\transp v + C\transp z\|^* \le 1} \|v\|^* = 
\frac{1}{{\displaystyle\min}_{(v,z)\in {{\mathbb R}}^m\times {{\mathbb R}}^p_+ \atop \|v\|^* = 1} \|A\transp v + C\transp z\|^*}. $$ Notice the analogy between Proposition \[prop.Hoffman.std\] and the following classical error bound for systems of linear equations. Let $A \in {{\mathbb R}}^{m\times n}$ be full row rank. Then for all $b \in {{\mathbb R}}^m$ and $u\in {{\mathbb R}}^n$ $${{\mathrm{dist}}}(u,A^{-1}(b)) \le \|A^{-1}\| \cdot \|Au - b\|$$ where $$\|A^{-1}\| = {\displaystyle\max}_{y\in{{\mathbb R}}^m\atop \|y\|\le 1} \min_{x\in A^{-1}(y)} \|x\|={\displaystyle\max}_{v \in {{\mathbb R}}^m \atop \|A\transp v\|^*\le 1} \|v\|^* = \frac{1}{ {\displaystyle\min}_{v \in {{\mathbb R}}^m \atop \|v\|^* = 1} \|A\transp v\|^* }$$ is the norm of the inverse mapping $A^{-1}:{{\mathbb R}}^m \rightrightarrows {{\mathbb R}}^n$ defined by $A$. Next, consider the case when the equations are easy. This case plays a central role in [@xia2015]. \[prop.equal.easy\] Let $A \in {{\mathbb R}}^{m\times n}, \, C \in {{\mathbb R}}^{p \times n}$, and $H:={\displaystyle\max}_{J\in {\mathcal S}(A;C)} H_J$ where $$\label{eq.equal.easy} H_J := {\displaystyle\max}_{w\in {{\mathbb R}}^p\atop \|w\|\le 1} {\displaystyle\min}_{x\in {{\mathbb R}}^n \atop Ax = 0, C_Jx \le w_J} \|x\| = {\displaystyle\max}_{(v,z)\in (A{{\mathbb R}}^n)\times{{\mathbb R}}^p_+ \atop z_{J^c} = 0, \|A\transp v + C\transp z\|^* \le 1} \|z\|^* = \frac{1}{{\displaystyle\min}_{(v,z)\in (A{{\mathbb R}}^n)\times{{\mathbb R}}^p_+ \atop z_{J^c} = 0, \|z\|^* = 1} \|A\transp v + C\transp z\|^*}.$$ Then for all $b\in {{\mathbb R}}^m, d\in {{\mathbb R}}^p$ such that $A^{-1}(b) \cap P_{C,d}\ne \emptyset$ and all $u\in A^{-1}(b)$ $${{\mathrm{dist}}}(u,A^{-1}(b) \cap P_{C,d}) \le H \cdot {{\mathrm{dist}}}\left(d,Cu + {{\mathbb R}}^p_+\right) \le H\cdot\|(Cu-d)_+\|.$$ The first bound is tight: If $H>0$ then there exist $b\in {{\mathbb R}}^m, \, d\in{{\mathbb R}}^p$ such that $A^{-1}(b) \cap P_{C,d} \ne \emptyset$ and $u \in P_{C,d} 
\setminus A^{-1}(b)$ such that $${{\mathrm{dist}}}(u,A^{-1}(b) \cap P_{C,d}) = H \cdot {{\mathrm{dist}}}\left(d,Cu + {{\mathbb R}}^p_+\right).$$ If $[A,C,\{1,\dots,p\}] : {{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m \times {{\mathbb R}}^p$ is surjective then $$\label{eq.equal.easy.surj} H= {\displaystyle\max}_{(v,z)\in {{\mathbb R}}^m\times {{\mathbb R}}^p_+ \atop \|A\transp v + C\transp z\|^* \le 1} \|z\|^* = \frac{1}{{\displaystyle\min}_{(v,z)\in {{\mathbb R}}^m\times {{\mathbb R}}^p_+ \atop \|z\|^* = 1} \|A\transp v + C\transp z\|^*}.$$ When the mapping $[A,C,\{1,\dots,p\}]$ is surjective, the quantity $1/H$ defined in each of Proposition \[prop.Hoffman\], Proposition \[prop.Hoffman.std\], or Proposition \[prop.equal.easy\] equals a certain kind of [*block-structured distance to non-surjectivity*]{} of $[A,C,\{1,\dots,p\}]$. More precisely, when $[A,C,\{1,\dots,p\}]$ is surjective, the quantity $1/H$ defined by  equals the size of the smallest $ (\Delta A,\Delta C)\in {{\mathbb R}}^{(m+p)\times n}$ such that $[A+\Delta A,C+\Delta C,\{1,\dots,p\}]$ is not surjective. Similarly, when $[A,C,\{1,\dots,p\}]$ is surjective, the quantity $1/H$ defined by  equals the size of the smallest $\Delta A \in {{\mathbb R}}^{m\times n}$ such that $[A+\Delta A, C,\{1,\dots,p\}]$ is not surjective. Finally, when $[A,C,\{1,\dots,p\}]$ is surjective, the quantity $1/H$ defined by  equals the size of the smallest $\Delta C \in {{\mathbb R}}^{{p}\times n}$ such that $[A,C+\Delta C,\{1,\dots,p\}]$ is not surjective. Each of these block-structured distances to non-surjectivity is the same as the analogous block-structured distances to ill-posedness of the system $Ax = 0, \, Cx < 0$. For a more detailed discussion on the block-structured distance to non-surjectivity and the block-structured distance to ill-posedness, we refer the reader to [@Lewi05; @Pena00; @Pena03; @Pena05]. Next, consider a special case when one of the equations and all inequalities are easy. 
This case underlies the construction of some measures of conditioning for polytopes developed in [@BeckS15; @GarbH13; @LacoJ15; @GutmP18; @PenaR16] to establish the linear convergence of some variants of the Frank-Wolfe Algorithm. Recall some notation from [@GutmP18]. Let $A\in {{\mathbb R}}^{m\times n}$ and consider the polytope $ {{\mathrm{conv}}}(A) := A\Delta_{n-1}, $ where $\Delta_{n-1} := \{x\in {{\mathbb R}}^n_+: \|x\|_1 = 1\}$. Observe that $v \in {{\mathrm{conv}}}(A)$ if and only if the following system of constraints has a solution $$\label{eq.polytope} Ax = v, \; x \in \Delta_{n-1}.$$ It is natural to consider $x \in \Delta_{n-1} $ in  as an [*easy-to-satisfy*]{} constraint. Following the notation in [@GutmP18], for $v\in {{\mathrm{conv}}}(A)$ let $$Z(v) := \{z\in \Delta_{n-1}: Az = v\}.$$ The Hoffman constant $H$ in Proposition \[prop.facial.dist\] below plays a central role in [@GutmP18; @LacoJ15; @PenaR16]. In particular, when ${{\mathbb R}}^n$ is endowed with the $\ell_1$ norm, $1/H$ is the same as the [*facial distance*]{} or [*pyramidal width*]{} of the polytope ${{\mathrm{conv}}}(A)$ as detailed in [@GutmP18; @PenaR16]. We will rely on the following notation. For $A\in {{\mathbb R}}^{m\times n}$ let $L_A:=\{Ax: {{\mathbf 1}}\transp x = 0\}$ and for $J\subseteq \{1,\dots,n\}$ let $K_J:= \{x\in {{\mathbb R}}^n: {{\mathbf 1}}\transp x = 0, \; x_J \ge 0\}$. Let $\Pi_{L_A}:{{\mathbb R}}^m \rightarrow L_A$ denote the orthogonal projection onto $L_A$. \[prop.facial.dist\] Let $A\in {{\mathbb R}}^{m\times n}$. 
Let $\tilde A := {\begin{bmatrix} A \\ {{\mathbf 1}}\transp \end{bmatrix}} \in {{\mathbb R}}^{(m+1)\times n},\; C:=-I_n \in {{\mathbb R}}^{n\times n},$ and $H:={\displaystyle\max}_{J\in {\mathcal S}(\tilde A;C)} H_J$ where $$\label{eq.facial.dist} H_J:= \max_{y \in A K_J\atop \|y\| \le 1} {\displaystyle\min}_{x\in K_J \atop Ax = y} \|x\| = \max_{(v,t) \in \tilde A{{\mathbb R}}^n,z\in{{\mathbb R}}^n_+ \atop z_{J^c} = 0, \|A\transp v + t{{\mathbf 1}}- z\|^* \le 1} \|\Pi_{L_A}(v)\|^* = \frac{1}{{\displaystyle\min}_{(v,t) \in \tilde A{{\mathbb R}}^n, z\in {{\mathbb R}}^n_+ \atop z_{J^c} = 0, \|\Pi_{L_A}(v)\|^* = 1} \|A\transp v + t{{\mathbf 1}}- z\|^*}.$$ Then for all $x \in \Delta_{n-1}$ and $v\in {{\mathrm{conv}}}(A)$ $${{\mathrm{dist}}}(x,Z(v)) \le H \cdot \|Ax - v\|.$$ Furthermore, this bound is tight: If $H>0$ then there exist $v\in {{\mathrm{conv}}}(A)$ and $x\in \Delta_{n-1}\setminus Z(v)$ such that $${{\mathrm{dist}}}(x,Z(v)) = H \cdot \|Ax - v\| > 0.$$ The following analogue of Corollary \[corol.sets\] and Corollary \[corol.sets.rest\] also holds. \[corol.sets.gral\] Let $A \in {{\mathbb R}}^{m\times n},\, C\in {{\mathbb R}}^{p\times n}$. Suppose ${\mathcal F}\subseteq {\mathcal S}(A;C)$ and ${\mathcal I}\subseteq 2^{\{1,\dots,p\}}\setminus {\mathcal S}(A;C)$ are such that for all $J \subseteq \{1,\dots,p\}$ either $J\subseteq F$ for some $F \in {\mathcal F},$ or $I\subseteq J$ for some $I \in {\mathcal I}.$ Then the expression $H:=\max_{J\in {\mathcal S}(A;C)} H_J$ in each of Proposition \[prop.Hoffman\], Proposition \[prop.Hoffman.std\], and Proposition \[prop.equal.easy\] can be replaced with $H=\max_{F\in {\mathcal F}} H_F$. The same holds for Proposition \[prop.facial.dist\] with $\tilde A = {\begin{bmatrix} A \\ {{\mathbf 1}}\transp \end{bmatrix}}$ in lieu of $A$. 
An algorithm to compute the Hoffman constant {#sec.algo}
============================================

We next describe an algorithm to compute the Hoffman constant of a system of linear equations and inequalities. We first describe the computation of the Hoffman constant $H(A)$ in Proposition \[prop.Hoffman.gral\]. We subsequently describe the computation of the Hoffman constant $H(A;C) := H$ defined in Proposition \[prop.Hoffman\]. The algorithms described below have straightforward extensions to the more general case when some equations or inequalities are easy to satisfy.

Computation of $H(A)$ {#sec.algo.ineq}
---------------------

Let $A\in {{\mathbb R}}^{m\times n}.$ Corollary \[corol.sets\] suggests the following algorithmic approach to compute $H(A)$: Find collections of sets ${\mathcal F}\subseteq {\mathcal S}(A)$ and ${\mathcal I}\subseteq 2^{\{1,\dots,m\}} \setminus {\mathcal S}(A)$ that provide joint certificates of surjectivity and non-surjectivity for all subsets of $\{1,\dots,m\}$ and then compute $H(A) = \max_{F\in {\mathcal F}} H_F(A)$. A naive way to construct ${\mathcal F}$ and ${\mathcal I}$ would be to scan the subsets of $\{1,\dots,m\}$ in monotonically decreasing order as follows. Starting with $J = \{1,\dots,m\}$, check whether $J$ is surjective. If $J$ is surjective, then place $J$ in ${\mathcal F}$. Otherwise, place $J$ in ${\mathcal I}$ and continue by scanning each $J\setminus\{i\}$ for $i\in J$. Algorithm \[alg:bb\] and its variant, Algorithm \[alg:bb.v2\], refine the above naive approach to construct ${\mathcal F},{\mathcal I}$ more efficiently. We next describe both algorithms.
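Before turning to them, the naive scan just described can be sketched in a few lines. The sketch below is ours (0-based indices, SciPy in place of MATLAB) and assumes the characterization used later in this section: $J$ fails to be $A$-surjective exactly when some $v \ge 0$, $v \ne 0$, satisfies $A_J\transp v = 0$, which we test by an LP feasibility solve:

```python
import numpy as np
from scipy.optimize import linprog

def nonsurjectivity_certificate(A, J):
    """Look for v >= 0, v != 0 with A_J^T v = 0 (so J is not A-surjective).
    The normalization 1^T v = 1 rules out v = 0.  Returns v or None."""
    rows = sorted(J)
    AJ = A[rows, :]
    k, n = AJ.shape
    res = linprog(c=np.zeros(k),
                  A_eq=np.vstack([AJ.T, np.ones((1, k))]),
                  b_eq=np.concatenate([np.zeros(n), [1.0]]),
                  bounds=[(0, None)] * k, method="highs")
    return res.x if res.status == 0 else None

def naive_scan(A):
    """Naive monotone scan: start at J = {0,...,m-1}; whenever J is not
    A-surjective, recurse on the sets obtained by deleting one index."""
    m = A.shape[0]
    Fs, Is = [], []                  # surjective / non-surjective sets
    stack, seen = [frozenset(range(m))], set()
    while stack:
        J = stack.pop()
        if not J or J in seen:
            continue
        seen.add(J)
        if nonsurjectivity_certificate(A, J) is None:
            Fs.append(J)
        else:
            Is.append(J)
            stack.extend(J - {i} for i in J)
    return Fs, Is
```

This visits every subset below a non-surjective one, which is exactly the exponential behavior the two refined algorithms avoid.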
The central idea of Algorithm \[alg:bb\] is to maintain three collections ${\mathcal F},{\mathcal I},{\mathcal J}\subseteq 2^{\{1,\dots,m\}}$ such that the following invariant holds at the beginning of each main iteration (Step 3 in Algorithm \[alg:bb\]):

> The collections ${\mathcal F}\subseteq {\mathcal S}(A)$ and ${\mathcal I}\subseteq 2^{\{1,\dots,m\}}\setminus {\mathcal S}(A)$ provide joint certificates of surjectivity and non-surjectivity for all subsets of $\{1,\dots,m\}$ except possibly those included in some subset in the collection ${\mathcal J}$.

This invariant evidently holds for ${\mathcal F}= {\mathcal I}= \emptyset$ and ${\mathcal J}=\{\{1,\dots,m\}\}.$ At each main iteration, Algorithm \[alg:bb\] scans a set $J\in {\mathcal J}$ to either detect that $J$ is $A$-surjective or find a certificate of non-surjectivity $I\subseteq J$. If $J$ is $A$-surjective then the above invariant continues to hold after adding $J$ to ${\mathcal F}$ and removing all $\tilde J\in {\mathcal J}$ such that $\tilde J \subseteq J$. On the other hand, if $I$ is a certificate of non-surjectivity for $J$, then the invariant continues to hold if $I$ is added to ${\mathcal I}$ and ${\mathcal J}$ is updated as follows. Replace each $\hat J\in {\mathcal J}$ that contains $I$ with the sets $\hat J\setminus \{i\}, \; i\in I$ that are not included in any set in ${\mathcal F}$. Algorithm \[alg:bb\] terminates when ${\mathcal J}$ is empty. This must happen eventually: at each main iteration the algorithm either removes at least one subset from ${\mathcal J}$ outright, or replaces at least one subset of ${\mathcal J}$ with finitely many proper subsets of it, and the latter can occur only finitely many times since the subsets strictly decrease in size. The most time-consuming operation in Algorithm \[alg:bb\] (Step 4) is the step that detects whether a subset $J\in {\mathcal J}$ is $A$-surjective or finds a certificate of non-surjectivity $I\subseteq J$.
This step requires solving the following problem $$\label{eq.lp} \min\{\|A_J\transp v\|^*: v\in {{\mathbb R}}^J_+, \|v\|^* = 1\}.$$ Observe that $J$ is $A$-surjective if and only if the optimal value of \[eq.lp\] is positive. More precisely, by Proposition \[prop.Hoffman.A.surj\], the minimization problem \[eq.lp\] either detects that $J$ is $A$-surjective and computes $1/H_J(A)$ when its optimal value is positive, or detects that $J$ is not $A$-surjective and finds $v\in {{\mathbb R}}^J_+\setminus\{0\}$ such that $A_J\transp v = 0$. In the latter case, the set $I(v):=\{i\in J: v_i > 0\}$ is a certificate of non-surjectivity for $J$. When $J$ is not $A$-surjective, the certificate of non-surjectivity $I(v)\subseteq J$ obtained from \[eq.lp\] is typically smaller than $J$. The tractability of problem \[eq.lp\] depends on the norms in ${{\mathbb R}}^n$ and ${{\mathbb R}}^m$. In particular, when ${{\mathbb R}}^m$ is endowed with the $\ell_\infty$-norm we have $\|v\|^* = \|v\|_1 = {{\mathbf 1}}\transp v$ for $v\in {{\mathbb R}}^J_+$ and thus \[eq.lp\] becomes the following convex optimization problem $$\min\{\|A_J\transp v\|^*: v\in {{\mathbb R}}^J_+, {{\mathbf 1}}\transp v = 1\}.$$ Furthermore, \[eq.lp\] is a linear program if both ${{\mathbb R}}^m$ and ${{\mathbb R}}^n$ are endowed with the $\ell_\infty$-norm or if ${{\mathbb R}}^m$ is endowed with the $\ell_\infty$-norm and ${{\mathbb R}}^n$ is endowed with the $\ell_1$-norm. Problem \[eq.lp\] is a second-order conic program if ${{\mathbb R}}^m$ is endowed with the $\ell_\infty$-norm and ${{\mathbb R}}^n$ is endowed with the $\ell_2$-norm. In our MATLAB prototype implementation described below, ${{\mathbb R}}^n$ and ${{\mathbb R}}^m$ are endowed with the $\ell_\infty$ norm and \[eq.lp\] is solved via linear programming. Problem \[eq.lp\] can also be solved by solving $|J|$ convex optimization problems when ${{\mathbb R}}^m$ is endowed with the $\ell_1$-norm. This is suggested by the characterizations of Renegar’s distance to ill-posedness in [@FreuV99a; @FreuV03].
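For the $\ell_\infty/\ell_\infty$ choice of norms, \[eq.lp\] can be written out as an explicit linear program. The sketch below is ours (SciPy in place of the MATLAB prototype, 0-based indices); when the optimal value is positive its reciprocal equals $H_J(A)$, and otherwise the support of the minimizer gives the certificate $I(v)$:

```python
import numpy as np
from scipy.optimize import linprog

def check_surjective(A, J, tol=1e-9):
    """Solve (eq.lp) for the l_inf/l_inf norm choice as an LP:
         min ||A_J^T v||_1   subject to   v >= 0, 1^T v = 1.
    A positive optimal value means J is A-surjective (reciprocal = H_J(A));
    otherwise the support I(v) of the minimizer certifies non-surjectivity."""
    rows = sorted(J)
    AJt = A[rows, :].T                     # n x k
    n, k = AJt.shape
    # Variables (v, t): t models |A_J^T v| coordinate-wise via
    # -t <= A_J^T v <= t, and the objective is sum(t).
    c = np.concatenate([np.zeros(k), np.ones(n)])
    A_ub = np.block([[AJt, -np.eye(n)], [-AJt, -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.concatenate([np.ones(k), np.zeros(n)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * k + [(None, None)] * n,
                  method="highs")
    if res.fun > tol:
        return "surjective", 1.0 / res.fun     # 1/value = H_J(A)
    v = res.x[:k]
    return "certificate", {rows[i] for i in range(k) if v[i] > tol}
```

For instance, with $A = I_2$ every row subset is $A$-surjective, while for the rows $(1)$ and $(-1)$ of a $2\times 1$ matrix the full set yields the certificate $\{0,1\}$.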
When ${{\mathbb R}}^m$ is endowed with the $\ell_1$-norm we have $\|v\|^* = \|v\|_\infty = {\displaystyle\max}_{j\in J} v_j$ for $v\in {{\mathbb R}}^J_+$ and thus $$\min\{\|A_J\transp v\|^*: v\in {{\mathbb R}}^J_+, \|v\|^* = 1\} = {\displaystyle\min}_{j\in J} \; \min\{\|A_J\transp v\|^*: v\in {{\mathbb R}}^J_+, v \le {{\mathbf 1}}, \, v_j = 1\}.$$ Section \[sec.euclidean\] below describes a more involved approach to estimate \[eq.lp\] when both ${{\mathbb R}}^n$ and ${{\mathbb R}}^m$ are endowed with the $\ell_2$ norm. We should note that although the specific value of the Hoffman constant $H(A)$ evidently depends on the norms in ${{\mathbb R}}^n$ and ${{\mathbb R}}^m$, the $A$-surjectivity of a subset $J\subseteq\{1,\dots,m\}$ does not. In particular, the collections ${\mathcal F},{\mathcal I}$ found in Algorithm \[alg:bb\] could be used to compute or estimate $H(A)$ for any arbitrary norms provided each $H_F(A)$ can be computed or estimated when $F\subseteq\{1,\dots,m\}$ is $A$-surjective. A potential drawback of Algorithm \[alg:bb\] is the size of the collection ${\mathcal J}$, which could potentially become large even if the sets ${\mathcal F},{\mathcal I}$ do not. This drawback suggests an alternate approach. Given ${\mathcal F}\subseteq {\mathcal S}(A)$ and ${\mathcal I}\subseteq 2^{\{1,\dots,m\}}\setminus {\mathcal S}(A)$ consider the feasibility problem $$\label{eq.ip} \begin{array}{rl} & |J^c\cap I| \ge 1, \; I\in {\mathcal I}\\ & |J\cap F^c| \ge 1, \; F\in {\mathcal F}\\ & J \subseteq \{1,\dots,m\}. \end{array}$$ Observe that ${\mathcal F},{\mathcal I}$ jointly provide certificates of surjectivity or non-surjectivity for all subsets of $\{1,\dots,m\}$ if and only if \[eq.ip\] is infeasible. This suggests the variant of Algorithm \[alg:bb\] described in Algorithm \[alg:bb.v2\]. The main difference is that Algorithm \[alg:bb.v2\] does not maintain ${\mathcal J}$ and instead relies on \[eq.ip\] at each main iteration.
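For small $m$ the feasibility problem \[eq.ip\] can be checked by brute-force enumeration instead of integer programming. The sketch below is ours (0-based indices, function name our own); it returns an uncertified subset $J$ when \[eq.ip\] is feasible and `None` when ${\mathcal F},{\mathcal I}$ jointly certify every subset:

```python
from itertools import combinations

def uncertified_subset(m, Fs, Is):
    """Brute-force version of (eq.ip): return J <= {0,...,m-1} with
    I not contained in J for every I in Is and J not contained in F for
    every F in Fs, or None if no such J exists (i.e. (eq.ip) is
    infeasible).  Exponential in m; for illustration only."""
    for k in range(m + 1):
        for J in map(set, combinations(range(m), k)):
            if all(not I <= J for I in Is) and all(not J <= F for F in Fs):
                return J
    return None
```

For example, with $m = 3$, ${\mathcal F} = \{\{0,2\},\{1,2\}\}$ and ${\mathcal I} = \emptyset$, the subset $\{0,1\}$ is uncertified; adding $\{0,1\}$ to ${\mathcal I}$ makes \[eq.ip\] infeasible.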
Algorithm \[alg:bb.v2\] trades off the memory cost of maintaining ${\mathcal J}$ for the computational cost of solving the feasibility problem \[eq.ip\] at each main iteration.

Algorithm \[alg:bb\]:

- Input: $A \in {{\mathbb R}}^{m \times n}$
- Let ${\mathcal F}:= \emptyset, \;{\mathcal I}:= \emptyset, \; {\mathcal J}:=\{\{1,\dots,m\}\}, H(A):= 0$
- While ${\mathcal J}\ne \emptyset$:
  - Pick $J \in {\mathcal J}$ and let $v$ solve \[eq.lp\] to detect whether $J$ is $A$-surjective
  - If $J$ is $A$-surjective, let ${\mathcal F}:= {\mathcal F}\cup \{J\}$, $\hat {\mathcal J}:= \{\hat J \in {\mathcal J}: \hat J \subseteq J\}$, $H(A) := \max\left\{H(A),\frac{1}{\|A_J\transp v\|^*}\right\}$, and ${\mathcal J}:={\mathcal J}\setminus\hat {\mathcal J}$
  - Otherwise, let ${\mathcal I}:= {\mathcal I}\cup\{I(v)\}$, $\hat {\mathcal J}:= \left\{\hat J\in {\mathcal J}: I(v) \subseteq \hat J\right\}$, $\bar {\mathcal J}:= \left\{\hat J\setminus \{i\}: \hat J \in \hat {\mathcal J}, i\in I(v), \hat J\setminus \{i\} \not \subseteq F \text{ for all } F\in {\mathcal F}\right\}$, and ${\mathcal J}:= ({\mathcal J}\setminus \hat {\mathcal J}) \cup \bar {\mathcal J}$
- Output: ${\mathcal F}, \, {\mathcal I}, \, H(A)$

Algorithm \[alg:bb.v2\]:

- Input: $A \in {{\mathbb R}}^{m \times n}$
- Let ${\mathcal F}:= \emptyset, \;{\mathcal I}:= \emptyset, H(A):= 0$
- While \[eq.ip\] is feasible:
  - Let $J\subseteq \{1,\dots,m\}$ solve \[eq.ip\] and let $v$ solve \[eq.lp\] to detect whether $J$ is $A$-surjective
  - If $J$ is $A$-surjective, let ${\mathcal F}:= {\mathcal F}\cup \{J\}$ and $H(A) := \max\left\{H(A),\frac{1}{\|A_J\transp v\|^*}\right\}$
  - Otherwise, let ${\mathcal I}:= {\mathcal I}\cup\{I(v)\}$
- Output: ${\mathcal F}, \, {\mathcal I}, \, H(A)$

We tested prototype MATLAB implementations of Algorithm \[alg:bb\] and Algorithm \[alg:bb.v2\] on collections of randomly generated matrices $A$ of various sizes (with $m >n$, where the analysis is interesting). The entries in each matrix were drawn from independent standard normal distributions. Figure \[the.figure\] summarizes our results.
It displays boxplots for the sizes of the sets ${\mathcal F}, {\mathcal I}$ at termination for the [*non-surjective*]{} instances in the sample, that is, the matrices $A$ such that $0\in {{\mathrm{conv}}}(A\transp)$. We excluded the [*surjective*]{} instances, that is, the ones with $0\not\in {{\mathrm{conv}}}(A\transp)$ because for those instances the collections ${\mathcal F}= \{\{1,\dots,m\}\}$ and ${\mathcal I}= \emptyset$ provide certificates of surjectivity for all subsets of $\{1,\dots,m\}$ and are identified at the first iteration of the algorithm when $J=\{1,\dots,m\}$ is scanned. Thus the non-surjective instances are the interesting ones. As a reality check on our implementation of both algorithms, for every instance that we tested, we used \[eq.ip\] to verify that the final sets ${\mathcal F}, {\mathcal I}$ indeed provide certificates of surjectivity and non-surjectivity for all subsets of $\{1,\dots,m\}.$ ![Box plots of the distributions of the sizes of the sets ${\mathcal I}$ and ${\mathcal F}$ for the non-surjective instances obtained after randomly sampling 1000 matrices with $m$ rows and $n$ columns.[]{data-label="the.figure"}](boxplots){width="\textwidth"} The MATLAB code and scripts used for our experiments are publicly available at the following website [http://www.andrew.cmu.edu/user/jfp/hoffman.html]{} The reader can readily use these files to replicate numerical results similar to those summarized in Figure \[the.figure\]. It is interesting to note that the size of the collection ${\mathcal F}$ in our experiments does not grow too rapidly. This is reassuring in light of the characterization $H(A) = \max_{F\in {\mathcal F}} H_F(A)$. Our prototype implementations are fairly basic. In particular, our prototype implementation of Algorithm \[alg:bb\] maintains an explicit representation of the collections ${\mathcal F}, {\mathcal I}, {\mathcal J}$. Our prototype implementation of Algorithm \[alg:bb.v2\] solves \[eq.ip\] via integer programming.
Neither of them uses warm-starts. It is evident that the collections ${\mathcal F}, {\mathcal I}, {\mathcal J}$ as well as the feasibility problem \[eq.ip\] could all be handled more efficiently via more elaborate combinatorial structures such as binary decision diagrams [@Aker78; @BergCVHH16]. A clever use of warm-starts would likely boost efficiency since the algorithms need to solve many similar linear and integer programs. The results of the prototype implementations of Algorithm \[alg:bb\] and Algorithm \[alg:bb.v2\] are encouraging and suggest that more sophisticated implementations could compute the Hoffman constant $H(A)$ for much larger matrices.

Computation of $H(A;C)$
-----------------------

Throughout this subsection we let $H(A;C)$ denote the Hoffman constant $H$ defined in Proposition \[prop.Hoffman\]. Let $A\in {{\mathbb R}}^{m\times n}$ and $C\in {{\mathbb R}}^{p\times n}$. In parallel to the observation in Section \[sec.algo.ineq\] above, Corollary \[corol.sets.gral\] suggests the following approach to compute $H(A;C)$: Find collections of sets ${\mathcal F}\subseteq {\mathcal S}(A;C)$ and ${\mathcal I}\subseteq 2^{\{1,\dots,p\}} \setminus {\mathcal S}(A;C)$ such that for all $J\subseteq \{1,\dots,p\}$ either $J\subseteq F$ for some $F \in {\mathcal F},$ or $I\subseteq J$ for some $I \in {\mathcal I}$. Then compute $H(A;C):={\displaystyle\max}_{J\in {\mathcal F}} H_J(A;C)$ where $$H_J(A;C) = \frac{1}{{\displaystyle\min}_{v\in A{{\mathbb R}}^n, z\in {{\mathbb R}}^J_+ \atop \|(v,z)\|^* = 1} \|A\transp v + C_J\transp z\|^*}.$$ Algorithm \[alg:bb\] has the straightforward extension described in Algorithm \[alg:bb.gral\] to find ${\mathcal F}\subseteq {\mathcal S}(A;C)$ and ${\mathcal I}\subseteq 2^{\{1,\dots,p\}} \setminus {\mathcal S}(A;C)$ as above. Algorithm \[alg:bb.v2\] has a similar straightforward extension.
The most time-consuming operation in Algorithm \[alg:bb.gral\] (Step 4) is the step that detects whether a subset $J\in {\mathcal J}$ satisfies $J\in {\mathcal S}(A;C)$ or finds a certificate of non-relative-surjectivity, that is, a set $I\in 2^{\{1,\dots,p\}}\setminus {\mathcal S}(A;C)$ such that $I\subseteq J$. This step requires solving the following problem $$\label{eq.lp.gral} \min\{\|A\transp v + C_J\transp z\|^*: v\in A{{\mathbb R}}^n, z\in {{\mathbb R}}^J_+, \|(v,z)\|^* = 1\}.$$ Observe that $J\in {\mathcal S}(A;C)$ if and only if the optimal value of \[eq.lp.gral\] is positive. Thus, the minimization problem \[eq.lp.gral\] either detects that $J\in {\mathcal S}(A;C)$ and computes $1/H_J(A;C)$ when its optimal value is positive, or detects that $J\not \in {\mathcal S}(A;C)$ and finds $z\in {{\mathbb R}}^J_+\setminus\{0\}$ such that $A\transp v + C_J\transp z = 0$. In the latter case, the set $I(z):=\{i\in J: z_i > 0\}$ is a certificate of non–relative-surjectivity for $J$. The tractability of \[eq.lp.gral\] is a bit more nuanced than that of \[eq.lp\] due to the presence of the unconstrained variables $v\in {{\mathbb R}}^m$. The following easier problem allows us to determine whether the optimal value of \[eq.lp.gral\] is positive, that is, whether $J\in {\mathcal S}(A;C)$. This is the most critical information about \[eq.lp.gral\] used in Algorithm \[alg:bb.gral\] $$\label{eq.lp.gral.easier} \min\{\|A\transp v + C_J\transp z\|^*: v\in A{{\mathbb R}}^n, z\in {{\mathbb R}}^J_+, \|z\|^* = 1\}.$$ Problem \[eq.lp.gral.easier\] is a convex optimization problem when ${{\mathbb R}}^{p}$ is endowed with the $\ell_\infty$ norm. It is evident that the optimal value of \[eq.lp.gral.easier\] is zero if and only if the optimal value of \[eq.lp.gral\] is zero. Thus for the purpose of solving the main computational challenge in computing $H(A;C)$, that is, finding the collections ${\mathcal F}$ and ${\mathcal I}$, Algorithm \[alg:bb.gral\] can rely on the easier problem \[eq.lp.gral.easier\] in place of \[eq.lp.gral\]. Nonetheless, \[eq.lp.gral\] needs to be solved or estimated for the purpose of computing or estimating the value $H(A;C)$.
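When ${{\mathbb R}}^p$ carries the $\ell_\infty$ norm, the easier problem \[eq.lp.gral.easier\] becomes a linear program once $v \in A{{\mathbb R}}^n$ is parametrized as $v = Aw$. The sketch below is ours (0-based indices, SciPy, $\ell_\infty$ norms also on ${{\mathbb R}}^n$ so the objective is an $\ell_1$ norm); it either certifies $J \in {\mathcal S}(A;C)$ or returns the certificate $I(z)$:

```python
import numpy as np
from scipy.optimize import linprog

def check_rel_surjective(A, C, J, tol=1e-9):
    """Sketch of (eq.lp.gral.easier) for the l_inf case:
         min ||A^T v + C_J^T z||_1   s.t.  v = A w, z >= 0, 1^T z = 1.
    A positive optimal value certifies J in S(A;C); otherwise the support
    I(z) of the minimizer certifies non-relative-surjectivity."""
    rows = sorted(J)
    m, n = A.shape
    CJ = C[rows, :]
    k = len(rows)
    G = A.T @ A                      # A^T v = G w for v = A w
    # Variables (w, z, t): t models |A^T A w + C_J^T z| coordinate-wise.
    c = np.concatenate([np.zeros(n + k), np.ones(n)])
    A_ub = np.block([[G, CJ.T, -np.eye(n)],
                     [-G, -CJ.T, -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.concatenate([np.zeros(n), np.ones(k),
                           np.zeros(n)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] * n + [(0, None)] * k
                         + [(None, None)] * n,
                  method="highs")
    if res.fun > tol:
        return "in S(A;C)", res.fun
    z = res.x[n:n + k]
    return "certificate", {rows[i] for i in range(k) if z[i] > tol}
```

As \[eq.lp.gral.easier\] only normalizes $z$, the positive value returned here is not $1/H_J(A;C)$; it merely decides on which side of ${\mathcal S}(A;C)$ the set $J$ lies.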
When ${{\mathbb R}}^{m+p}$ is endowed with the $\ell_1$-norm, \[eq.lp.gral\] can be solved by solving $2m+|J|$ convex optimization problems. In this case $\|(v,z)\|^* = \|(v,z)\|_\infty = \max\left(\max_{i=1,\dots,m} |v_i|,\max_{j\in J}|z_j|\right)$ and so $$\begin{aligned} \min &\{\|A\transp v + C_J\transp z\|^*: v\in A{{\mathbb R}}^n, z\in {{\mathbb R}}^J_+, \|(v,z)\|^* = 1\} \\ & = \min\left(\begin{array}{l} {\displaystyle\min}_{i =1,\dots,m} \; \min\{\|A\transp v + C_J\transp z\|^*: v \in A{{\mathbb R}}^n, z\in {{\mathbb R}}^J_+, \|(v,z)\|_\infty \le 1, \, v_i = 1\}, \\ {\displaystyle\min}_{i =1,\dots,m} \; \min\{\|A\transp v + C_J\transp z\|^*: v \in A{{\mathbb R}}^n, z\in {{\mathbb R}}^J_+, \|(v,z)\|_\infty \le 1, \, v_i = -1\},\\ {\displaystyle\min}_{j\in J} \; \min\{\|A\transp v + C_J\transp z\|^*: v \in A{{\mathbb R}}^n, z\in {{\mathbb R}}^J_+, \|(v,z)\|_\infty\le 1, \, z_j = 1\} \end{array} \right).\end{aligned}$$ Section \[sec.euclidean\] describes a more involved approach to estimate the optimal value of \[eq.lp.gral\] when ${{\mathbb R}}^n$ and ${{\mathbb R}}^m$ are endowed with the $\ell_2$ norm.
Algorithm \[alg:bb.gral\]:

- Input: $A \in {{\mathbb R}}^{m \times n}, \, C\in {{\mathbb R}}^{p\times n}$
- Let ${\mathcal F}:= \emptyset, \;{\mathcal I}:= \emptyset, \; {\mathcal J}:=\{\{1,\dots,p\}\}, H(A;C):= 0$
- While ${\mathcal J}\ne \emptyset$:
  - Pick $J \in {\mathcal J}$ and let $(v,z)$ solve \[eq.lp.gral\] to detect whether $J\in {\mathcal S}(A;C)$
  - If $J\in {\mathcal S}(A;C)$, let ${\mathcal F}:= {\mathcal F}\cup \{J\}$, $\hat {\mathcal J}:= \{\hat J \in {\mathcal J}: \hat J \subseteq J\}$, $H(A;C) := \max\left\{H(A;C),\frac{1}{\|A\transp v+C_J\transp z\|^*}\right\}$, and ${\mathcal J}:={\mathcal J}\setminus\hat {\mathcal J}$
  - Otherwise, let ${\mathcal I}:= {\mathcal I}\cup\{I(z)\}$, $\hat {\mathcal J}:= \left\{\hat J\in {\mathcal J}: I(z) \subseteq \hat J\right\}$, $\bar {\mathcal J}:= \left\{\hat J\setminus \{i\}: \hat J \in \hat {\mathcal J}, i\in I(z), \hat J\setminus \{i\} \not \subseteq F \text{ for all } F\in {\mathcal F}\right\}$, and ${\mathcal J}:= ({\mathcal J}\setminus \hat {\mathcal J}) \cup \bar {\mathcal J}$
- Output: ${\mathcal F}, \, {\mathcal I}, \, H(A;C)$

Estimating \[eq.lp.gral\] for Euclidean norms {#sec.euclidean}
-------------------------------

Throughout this subsection suppose that ${{\mathbb R}}^n$ and ${{\mathbb R}}^{m+p}$ are endowed with the $\ell_2$ norm and $J\subseteq\{1,\dots,p\}$ is fixed. We next describe a procedure to compute lower and upper bounds on \[eq.lp.gral\] within a factor $(4p+9)$ of each other by relying on a suitably constructed self-concordant barrier function. We concentrate on the case when $J\in {\mathcal S}(A;C)$ as otherwise \[eq.lp.gral.easier\] can easily detect that $J \not \in {\mathcal S}(A;C)$. By Proposition \[prop.Hoffman\] the optimal value of \[eq.lp.gral\] equals $$\label{eq.HJ.dist} \frac{1}{H_J(A;C)} = \max\{r: (y,w) \in (A{{\mathbb R}}^n)\times {{\mathbb R}}^J, \|(y,w)\|_2 \le r \Rightarrow (y,w) \in {{\mathcal D}}\}$$ where ${{\mathcal D}}= \{(Ax,C_Jx + s): x\in {{\mathbb R}}^n, \, s\in {{\mathbb R}}^J_+, \|x\|_2 \le 1\}$. Equation \[eq.HJ.dist\] has the following geometric interpretation: $1/H_J(A;C)$ is the distance from the origin to the relative boundary of ${{\mathcal D}}$.
Let $f(x,s):= -\log(1-\|x\|_2^2) - {\displaystyle\sum}_{j=1}^p \log(s_j)$ and define $F: {{\mathrm{ri}}}({{\mathcal D}}) \rightarrow {{\mathbb R}}$ as follows $$\label{eq.implicit} \begin{array}{rl}F(y,w):= {\displaystyle\min}_{x,s} & f(x,s) \\ & Ax = y \\ & C_Jx + s = w. \end{array}$$ From [@NestN94 Proposition 5.1.5] it follows that the function $F$ constructed in \[eq.implicit\] is a $(p+2)$-self-concordant barrier function for ${{\mathcal D}}$. A straightforward calculus argument shows that $$\label{eq.dykin} \mathcal E := \{d \in(A{{\mathbb R}}^n) \times {{\mathbb R}}^J: {\langle \nabla^2F(0,0) d , d \rangle} \le 1 \} = \{M^{1/2}d: d \in (A{{\mathbb R}}^n) \times {{\mathbb R}}^J, \|d\|\le 1\},$$ where $$\label{eq.hessian} M:= {\begin{bmatrix} A & 0 \\ C_J & I \end{bmatrix}} \nabla^2 f(\bar x, \bar s)^{-1} {\begin{bmatrix} A & 0 \\ C_J & I \end{bmatrix}}\transp$$ and $(\bar x, \bar s)$ is the solution to \[eq.implicit\] for $(y,w):= (0,0) \in {{\mathrm{ri}}}({{\mathcal D}})$. The ellipsoid $\mathcal E$ in \[eq.dykin\] is the Dikin ellipsoid in $(A{{\mathbb R}}^n)\times {{\mathbb R}}^J$ associated to $F$ and centered at $(0,0)$. Therefore from the properties of self-concordant barriers [@NestN94; @Rene01] it follows that $\mathcal E \subseteq {{\mathcal D}}$ and $\{d\in {{\mathcal D}}: {\langle \nabla F(0,0) , d \rangle} \ge 0 \} \subseteq (4p+9) \cdot \mathcal E $. These two properties and \[eq.HJ.dist\] imply that $$\sigma_{\min}(M^{1/2}) \le \frac{1}{H_J(A;C)} \le (4p+9) \cdot \sigma_{\min}(M^{1/2})$$ where $\sigma_{\min}(M^{1/2})$ denotes the smallest positive singular value of $M^{1/2}$. We thus have the following procedure to estimate \[eq.lp.gral\]: First, solve \[eq.lp.gral.easier\]. If this optimal value is zero then the optimal value of \[eq.lp.gral\] is zero as well. Otherwise, let $(\bar x, \bar s)$ solve \[eq.implicit\] for $(y,w) := (0,0)$ and let $M$ be as in \[eq.hessian\]. The values $\sigma_{\min}(M^{1/2})$ and $(4p+9)\cdot \sigma_{\min}(M^{1/2})$ are respectively a lower bound and an upper bound on the optimal value $1/H_J(A;C)$ of \[eq.lp.gral\].
A Hoffman constant for polyhedral sublinear mappings {#sec.proof}
====================================================

We next present a characterization of the Hoffman constant for polyhedral sublinear mappings when the residual is known to intersect a predefined linear subspace. To that end, we will make extensive use of the following correspondence between polyhedral sublinear mappings and polyhedral cones. A set-valued mapping $\Phi:{{\mathbb R}}^n \rightrightarrows{{\mathbb R}}^m$ is a [*polyhedral sublinear mapping*]{} if $${{\mathrm{graph}}}(\Phi) = \{(x,y): y\in \Phi(x)\} \subseteq {{\mathbb R}}^n \times {{\mathbb R}}^m$$ is a polyhedral cone. Conversely, if $K \subseteq {{\mathbb R}}^n \times {{\mathbb R}}^m $ is a polyhedral convex cone then the set-valued mapping $\Phi_K: {{\mathbb R}}^n \rightrightarrows{{\mathbb R}}^m$ defined via $$y \in \Phi_K(x) \Leftrightarrow (x,y) \in K$$ is a polyhedral sublinear mapping since ${{\mathrm{graph}}}(\Phi_K) = K$ by construction. Let $\Phi:{{\mathbb R}}^n \rightrightarrows{{\mathbb R}}^m$ be a polyhedral sublinear mapping. The domain, image, and norm of $\Phi$ are defined as follows: $$\begin{aligned} {{\mathrm{dom}}}(\Phi) &= \{x\in {{\mathbb R}}^n: (x,y) \in {{\mathrm{graph}}}(\Phi) \text{ for some } y\in {{\mathbb R}}^m\},\\ \operatorname{Im}(\Phi) &= \{y\in {{\mathbb R}}^m: (x,y) \in {{\mathrm{graph}}}(\Phi) \text{ for some } x\in {{\mathbb R}}^n\},\\ \|\Phi\| &= {\displaystyle\max}_{x\in {{\mathrm{dom}}}(\Phi) \atop \|x\|\le 1} \min_{y\in \Phi(x)} \|y\|.\end{aligned}$$ In particular, the norm of the inverse mapping $\Phi^{-1}:{{\mathbb R}}^m\rightrightarrows {{\mathbb R}}^n$ is $$\|\Phi^{-1}\| = {\displaystyle\max}_{y\in {{\mathrm{dom}}}(\Phi^{-1})\atop \|y\|\le 1} \min_{x\in \Phi^{-1}(y)} \|x\|= {\displaystyle\max}_{y\in \operatorname{Im}(\Phi)\atop \|y\|\le 1} \min_{x\in \Phi^{-1}(y)} \|x\|.$$ We will rely on the following more general concept of norm.
Let $\Phi:{{\mathbb R}}^n \rightrightarrows{{\mathbb R}}^m$ be a polyhedral sublinear mapping and ${{\mathcal L}}\subseteq{{\mathbb R}}^m$ be a linear subspace. Let $$\| \Phi^{-1} \vert {{{\mathcal L}}}\| := {\displaystyle\max}_{y\in \operatorname{Im}(\Phi) \cap {{\mathcal L}}\atop \|y\|\le 1} \min_{x\in \Phi^{-1}(y)} \|x\|.$$ It is easy to see that $\|\Phi^{-1}\vert {{{\mathcal L}}}\|$ is finite if $\Phi:{{\mathbb R}}^n \rightrightarrows{{\mathbb R}}^m$ is a polyhedral sublinear mapping and ${{\mathcal L}}\subseteq {{\mathbb R}}^m$ is a linear subspace. For $b\in {{\mathbb R}}^m$ and $S\subseteq {{\mathbb R}}^m$ define $${{\mathrm{dist}}}_{{{\mathcal L}}}(b,S) = \inf\{\|b-y\|: y\in S, b-y\in {{\mathcal L}}\}.$$ Observe that ${{\mathrm{dist}}}_{{{\mathcal L}}}(b,S) < \infty$ if and only if $(S-b)\cap {{\mathcal L}}\ne \emptyset$. Furthermore, observe that $ \| \Phi^{-1} \vert {{{\mathcal L}}}\| = \| \Phi^{-1} \|$ and ${{\mathrm{dist}}}_{{{\mathcal L}}}(b,S)={{\mathrm{dist}}}(b,S)$ when ${{\mathcal L}}={{\mathbb R}}^m$. Let $K\subseteq {{\mathbb R}}^n \times {{\mathbb R}}^m $ be a polyhedral convex cone. Let $\mathcal T(K):=\{T_K(u,v): (u,v)\in K\}$ where $T_K(u,v)$ denotes the [*tangent*]{} cone to $K$ at the point $(u,v)\in K$, that is, $$T_K(u,v) = \{(x,y) \in {{\mathbb R}}^n\times {{\mathbb R}}^m: (u,v) + t(x,y) \in K \; \text{ for some } t > 0\}.$$ Observe that since $K$ is polyhedral the collection of tangent cones $\mathcal T(K)$ is finite. Recall that a polyhedral sublinear mapping $\Phi:{{\mathbb R}}^n \rightrightarrows{{\mathbb R}}^m$ is [*relatively surjective*]{} if $\operatorname{Im}(\Phi) = \Phi({{\mathbb R}}^n)\subseteq{{\mathbb R}}^m$ is a linear subspace. 
Given a polyhedral sublinear mapping $\Phi:{{\mathbb R}}^n \rightrightarrows{{\mathbb R}}^m$ let $${{\mathfrak S}}(\Phi):=\{T\in {\mathcal T}({{\mathrm{graph}}}(\Phi)): \Phi_T \; \text{ is relatively surjective}\}$$ and $${\mathcal H}(\Phi \vert {{\mathcal L}}):=\max_{T \in {{\mathfrak S}}(\Phi)} \|\Phi_T^{-1}\vert {{{\mathcal L}}}\|.$$  \[thm.main\] Let $\Phi:{{\mathbb R}}^n \rightrightarrows{{\mathbb R}}^m$ be a polyhedral sublinear mapping and ${{\mathcal L}}\subseteq{{\mathbb R}}^m$ be a linear subspace. Then for all $b \in \operatorname{Im}(\Phi)$ and $u\in{{\mathrm{dom}}}(\Phi)$ $$\label{eq.Hoffman.bound.symm} {{\mathrm{dist}}}(u,\Phi^{-1}(b))\le {\mathcal H}(\Phi \vert {{\mathcal L}})\cdot{{\mathrm{dist}}}_{{{\mathcal L}}}(b,\Phi(u)).$$ Furthermore, the bound \[eq.Hoffman.bound.symm\] is tight: If ${\mathcal H}(\Phi \vert {{\mathcal L}}) > 0$ then there exist $b \in \operatorname{Im}(\Phi)$ and $u\in{{\mathrm{dom}}}(\Phi)$ such that $0 < {{\mathrm{dist}}}_{{{\mathcal L}}}(b,\Phi(u)) < \infty$ and $${{\mathrm{dist}}}(u,\Phi^{-1}(b)) = {\mathcal H}(\Phi \vert {{\mathcal L}})\cdot{{\mathrm{dist}}}_{{{\mathcal L}}}(b,\Phi(u)).$$ The following lemma is the main technical component in the proof of Theorem \[thm.main\]. We defer its proof to the end of this section. \[lemma.rel.surj\] Let $\Phi:{{\mathbb R}}^n \rightrightarrows{{\mathbb R}}^m$ be a polyhedral sublinear mapping and ${{\mathcal L}}\subseteq {{\mathbb R}}^m$ be a linear subspace. Then $${\displaystyle\max}_{T\in {\mathcal T}({{\mathrm{graph}}}(\Phi))}\|\Phi_T^{-1}\vert {{{\mathcal L}}}\| = \max_{T \in {{\mathfrak S}}(\Phi)} \|\Phi_T^{-1}\vert {{{\mathcal L}}}\|.$$ Assume that $b-v\in {{\mathcal L}}$ for some $v\in \Phi(u)$ as otherwise the right-hand side in \[eq.Hoffman.bound.symm\] is $+\infty$ and the bound trivially holds.
We will prove the following equivalent statement to \[eq.Hoffman.bound.symm\]: For all $b \in \operatorname{Im}(\Phi)$ and $(u,v)\in{{\mathrm{graph}}}(\Phi)$ with $b-v \in {{\mathcal L}}$ $$\label{eq.Hoffman.bound} {{\mathrm{dist}}}(u,\Phi^{-1}(b))\le {\mathcal H}(\Phi \vert {{\mathcal L}})\cdot\|b-v\|.$$ To ease notation, let $K:={{\mathrm{graph}}}(\Phi)$ so in particular $\Phi = \Phi_K$. We will use the following consequence of Lemma \[lemma.rel.surj\]: $\|\Phi_T^{-1}\vert {{{\mathcal L}}}\| \le {\mathcal H}(\Phi \vert {{\mathcal L}})$ for all $T\in {\mathcal T}(K)$. Assume $b-v\ne 0$ as otherwise there is nothing to show. We proceed by contradiction. Suppose $b \in \operatorname{Im}(\Phi)$ and $(u,v) \in K$ are such that $b-v\in {{\mathcal L}}$ and $$\label{eq.contra} {\| x-u \|} > {\mathcal H}(\Phi \vert {{\mathcal L}}) \cdot \|b-v\|$$ for all $x$ such that $(x,b)\in K$. Let $ d:= \frac{b-v}{\|b-v\|} \in {{\mathcal L}}$ and consider the optimization problem $$\label{eq.opt.prob} \begin{array}{rl} \displaystyle\max_{w,t} & t \\ & (u+w,v+td) \in K, \\ & \|w\| \le {\mathcal H}(\Phi \vert {{\mathcal L}}) \cdot t. \end{array}$$ Since $b\in \operatorname{Im}(\Phi)=\operatorname{Im}(\Phi_K)$ it follows that $d = (b-v)/\|b-v\| \in \operatorname{Im}(\Phi_{T_K(u,v)})\cap {{\mathcal L}}$. Hence there exists $(z,d) \in T_K(u,v)$ with $\|z\|\le \|\Phi_{T_K(u,v)}^{-1}\vert {{{\mathcal L}}} \| \le {\mathcal H}(\Phi \vert {{\mathcal L}})$. Since $K$ is polyhedral, for $t > 0$ sufficiently small $(u+tz,v+td) \in K$ and so $(w,t) := (tz,t)$ is feasible for problem \[eq.opt.prob\]. Let $$C:=\{(w,t) \in {{\mathbb R}}^n \times {{\mathbb R}}_+: (w,t) \text{ is feasible for }~\eqref{eq.opt.prob} \}.$$ Assumption \[eq.contra\] implies that $t < \|b-v\|$ for all $(w,t)\in C$. In addition, since $K$ is polyhedral, it follows that $C$ is compact. Therefore \[eq.opt.prob\] has an optimal solution $(\bar w,\bar t)$ with $0<\bar t < \|b-v\|.$ Let $(u',v'):= (u + \bar w,v+\bar t d) \in K$.
Consider the modification of \[eq.opt.prob\] obtained by replacing $(u,v)$ with $(u',v')$, namely $$\label{eq.opt.prob.mod} \begin{array}{rl} \displaystyle\max_{w' ,t'} & t' \\ & (u'+w',v'+t'd) \in K, \\ & {\| w' \|} \le {\mathcal H}(\Phi \vert {{\mathcal L}})\cdot t'. \end{array}$$ Observe that $ b - v' = b-v-\bar t d = (\|b-v\| - \bar t)d \ne 0. $ Again since $b\in \operatorname{Im}(\Phi)$ it follows that $d= \frac{b-v'}{\|b-v'\|} \in\operatorname{Im}(\Phi_{T_K(u',v')})\cap {{\mathcal L}}$. Hence there exists $(z',d)\in T_K(u',v')$ such that $\|z'\|\le \|\Phi_{T_K(u',v')}^{-1}\vert {{{\mathcal L}}}\| \le {\mathcal H}(\Phi \vert {{\mathcal L}})$. Therefore, \[eq.opt.prob.mod\] has a feasible point $(w',t') = (t'z',t')$ with $t' > 0$. In particular $(u'+w',v'+t'd) = (u+\bar w + w', v + (\bar t + t')d) \in K$ with $\|\bar w + w'\| \le \|\bar w\| + \|w'\| \le {\mathcal H}(\Phi \vert {{\mathcal L}}) \cdot(\bar t +t')$ and $\bar t + t' >\bar t$. This contradicts the optimality of $(\bar w,\bar t)$ for \[eq.opt.prob\]. To show that the bound is tight, suppose ${\mathcal H}(\Phi \vert {{\mathcal L}}) = \|\Phi_T^{-1}\vert {{{\mathcal L}}}\| > 0$ for some $T\in {{\mathfrak S}}(\Phi) \subseteq {\mathcal T}(K)$. The construction of ${\| \Phi_T^{-1}\vert {{{\mathcal L}}} \|}$ implies that there exists $d \in {{\mathcal L}}$ with $\|d\|=1$ such that the problem $$\label{eq.tight} \begin{array}{rl} \displaystyle\min_{z} & {\| z \|} \\ & (z,d) \in T \end{array}$$ is feasible and has an optimal solution $\bar z$ with $\|\bar z\| = \|\Phi_T^{-1}\vert {{{\mathcal L}}} \| = {\mathcal H}(\Phi \vert {{\mathcal L}})>0$. Let $(u,v)\in K$ be such that $T = T_K(u,v)$. Let $b:=v+td$ where $t > 0$ is small enough so that $(u,v) + t(\bar z,d)\in K$. Observe that $b \in \operatorname{Im}(\Phi)$ and $b - v = t d \ne 0$. To finish, notice that if $x\in \Phi^{-1}(b)$ then $(x-u,b-v) = (x-u,td) \in T_K(u,v) = T$.
The optimality of $\bar z$ then implies that $$\|x-u\| \ge {\mathcal H}(\Phi \vert {{\mathcal L}}) \cdot t = {\mathcal H}(\Phi \vert {{\mathcal L}})\cdot \|b-v\|.$$ Since this holds for all $x\in \Phi^{-1}(b)$ and $b-v\in {{\mathcal L}}\setminus \{0\}$, it follows that ${{\mathrm{dist}}}(u,\Phi^{-1}(b)) \ge {\mathcal H}(\Phi \vert {{\mathcal L}})\cdot \|b-v\| \ge {\mathcal H}(\Phi \vert {{\mathcal L}})\cdot{{\mathrm{dist}}}_{{{\mathcal L}}}(b,\Phi(u))>0.$ The proof of Lemma \[lemma.rel.surj\] relies on a convex duality construction. In each of ${{\mathbb R}}^n$ and ${{\mathbb R}}^m$ let $\|\cdot\|^*$ denote the dual norm of $\|\cdot\|$, that is, for $u\in {{\mathbb R}}^n$ and $v\in{{\mathbb R}}^m$ $$\|u\|^*:={\displaystyle\max}_{x\in {{\mathbb R}}^n \atop \|x\|\le 1} {\langle u , x \rangle}\, \text{ and } \, \|v\|^*:={\displaystyle\max}_{y\in {{\mathbb R}}^m \atop \|y\|\le 1} {\langle v , y \rangle}.$$ Given a cone $K\subseteq{{\mathbb R}}^n \times {{\mathbb R}}^m$, let $K^* \subseteq{{\mathbb R}}^n \times {{\mathbb R}}^m$ denote its dual cone, that is, $$K^*:= \{(u,v) \in {{\mathbb R}}^n\times {{\mathbb R}}^m: {\langle u , x \rangle}+{\langle v , y \rangle} \ge 0 \text{ for all } (x,y) \in K\}.$$ Given a sublinear mapping $\Phi: {{\mathbb R}}^n\rightrightarrows{{\mathbb R}}^m$, let $\Phi^*: {{\mathbb R}}^m\rightrightarrows {{\mathbb R}}^n$ denote its [*upper adjoint,*]{} that is $$u\in \Phi^*(v) \Leftrightarrow {\langle u , x \rangle}\le {\langle v , y \rangle} \text{ for all } (x,y) \in {{\mathrm{graph}}}(\Phi).$$ Equivalently, $u \in \Phi^*(v) \Leftrightarrow (-u,v) \in {{\mathrm{graph}}}(\Phi)^*$.
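As a concrete instance (our worked example, not from the original text), consider the polyhedral sublinear mapping $\Phi(x) = \{Ax\} + \mathbb{R}^m_+$ for a fixed $A \in \mathbb{R}^{m\times n}$, whose graph $\{(x,y) : y - Ax \in \mathbb{R}^m_+\}$ is a polyhedral cone. Its upper adjoint can be computed directly from the definition:

```latex
u \in \Phi^*(v)
  \;\Leftrightarrow\;
  \langle u, x\rangle \le \langle v, Ax + s\rangle
  \quad \text{for all } x \in \mathbb{R}^n,\; s \in \mathbb{R}^m_+.
% Taking the infimum over s >= 0 forces v >= 0 (otherwise the right-hand
% side is unbounded below), and then <u - A^T v, x> <= 0 for all x gives
% u = A^T v.  Hence
\Phi^*(v) =
  \begin{cases}
    \{A^{\top} v\} & \text{if } v \in \mathbb{R}^m_+,\\
    \emptyset      & \text{otherwise.}
  \end{cases}
```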
Observe that for a polyhedral convex cone $T\subseteq{{\mathbb R}}^n \times {{\mathbb R}}^m$ and a linear subspace ${{\mathcal L}}\subseteq {{\mathbb R}}^m$ $$\begin{array}{rl} \|\Phi_T^{-1}\vert {{{\mathcal L}}}\| = {\displaystyle\max}_{y} & {\| \Phi_T^{-1}(y) \|}\\ & y \in \operatorname{Im}(\Phi_T) \cap {{\mathcal L}},\\ & \|y\|\le 1, \end{array}$$ where $$\label{primal.Hoffman} \begin{array}{rl} {\| \Phi_T^{-1}(y) \|} := {\displaystyle\min}_{x} & \|x\| \\ & (x,y) \in T. \end{array}$$ By convex duality it follows that $$\label{dual.Hoffman} \begin{array}{rl} {\| \Phi_T^{-1}(y) \|} = {\displaystyle\max}_{u,v} & -{\langle v , y \rangle},\\ & \|u\|^* \le 1, \\ & (u,v)\in T^*. \end{array}$$ Therefore when $T$ is a polyhedral cone $$\label{eq.norm.Hoffman} \begin{array}{rl} {\| \Phi_T^{-1}\vert {{{\mathcal L}}} \|} = {\displaystyle\max}_{u,v,y} & -{\langle v , y \rangle} \\ & y \in \operatorname{Im}(\Phi_T) \cap {{\mathcal L}},\\ & \|y\|\le 1, \\ & \|u\|^* \le 1, \\ & (u,v)\in T^*. \end{array}$$ For a linear subspace ${{\mathcal L}}\subseteq {{\mathbb R}}^m$ let $\Pi_{{{\mathcal L}}}: {{\mathbb R}}^m \rightarrow {{\mathcal L}}$ denote the orthogonal projection onto ${{\mathcal L}}$. The following proposition is in the same spirit as Borwein’s norm-duality Theorem [@Borw83]. \[prop.Hoffman.surj\] Let $\Phi:{{\mathbb R}}^n\rightrightarrows{{\mathbb R}}^m$ be a polyhedral sublinear mapping and ${{\mathcal L}}\subseteq {{\mathbb R}}^m$ be a linear subspace. 
If $\Phi$ is relatively surjective then $${\mathcal H}(\Phi \vert {{\mathcal L}}) = \|\Phi^{-1}\vert {{{\mathcal L}}}\| = {\displaystyle\max}_{u\in \Phi^*(v)\atop \|u\|^*\le 1} \|\Pi_{\operatorname{Im}(\Phi) \cap {{\mathcal L}}}(v)\|^* = \frac{1} {{\displaystyle\min}_{u\in \Phi^*(v)\atop\|\Pi_{\operatorname{Im}(\Phi) \cap {{\mathcal L}}}(v)\|^*=1}\|u\|^*}.$$ Since ${{\mathrm{graph}}}(\Phi) \subseteq T$ for all $T\in \mathcal T({{\mathrm{graph}}}(\Phi))$ and $\Phi$ is relatively surjective, it follows that ${\| \Phi_T^{-1}\vert {{{\mathcal L}}} \|} \le {\| \Phi^{-1}\vert {{{\mathcal L}}} \|}$ for all $T\in \mathcal T({{\mathrm{graph}}}(\Phi))$. Consequently $ {\mathcal H}(\Phi \vert {{\mathcal L}}) = \|\Phi^{-1}\vert {{{\mathcal L}}}\|.$ Furthermore, since $\Phi$ is relatively surjective, from  it follows that $$\begin{array}{rl} {\| \Phi^{-1}\vert {{{\mathcal L}}} \|} = {\displaystyle\max}_{u,v} & \|\Pi_{\operatorname{Im}(\Phi) \cap {{\mathcal L}}}(v)\|^* \\ & \|u\|^* \le 1, \\ & u\in \Phi^*(v). \end{array}$$ The latter quantity is evidently the same as $\dfrac{1} {{\displaystyle\min}_{u\in \Phi^*(v)\atop\|\Pi_{\operatorname{Im}(\Phi) \cap {{\mathcal L}}}(v)\|^*=1}\|u\|^*}.$ We will rely on the following equivalence between [*surjectivity*]{} and [*non-singularity*]{} of sublinear mappings. A standard convex separation argument shows that a closed sublinear mapping $\Phi:{{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m$ is surjective if and only if $$\label{eq.non.sing} (0,v) \in {{\mathrm{graph}}}(\Phi)^* \Rightarrow v=0.$$ Condition  is a kind of [*non-singularity*]{} of $\Phi^*$ as it can be rephrased as $0 \in \Phi^*(v) \Rightarrow v=0.$ Without loss of generality assume ${{\mathrm{span}}}(\operatorname{Im}(\Phi)) = {{\mathbb R}}^m$ as otherwise we can work with the restriction of $\Phi$ as a mapping from ${{\mathbb R}}^n$ to ${{\mathrm{span}}}(\operatorname{Im}(\Phi))$. To ease notation let $K:={{\mathrm{graph}}}(\Phi)$. 
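In the single-valued linear case $\Phi(x)=\{Ax\}$, the upper adjoint is $\Phi^*(v) = \{A^{\sf T} v\}$, so the non-singularity condition $0 \in \Phi^*(v) \Rightarrow v=0$ reduces to the familiar full-row-rank criterion for surjectivity. A small numerical sketch of this special case (added for illustration, not from the paper):

```python
import numpy as np

# Phi(x) = Ax has Phi*(v) = {A^T v}, so condition (eq.non.sing),
#   0 in Phi*(v)  =>  v = 0,
# says ker(A^T) = {0}, i.e. A has full row rank -- exactly the classical
# criterion for Ax = y to be solvable for every y (surjectivity).

def is_surjective(A):
    m = A.shape[0]
    return np.linalg.matrix_rank(A) == m

A_onto = np.array([[1.0, 0.0, 1.0],
                   [0.0, 1.0, 1.0]])   # rank 2 = m: surjective
A_not  = np.array([[1.0, 1.0, 0.0],
                   [2.0, 2.0, 0.0]])   # rank 1 < m: not surjective

assert is_surjective(A_onto)
assert not is_surjective(A_not)
# explicit singularity witness: A_not^T v = 0 with v != 0
v = np.array([2.0, -1.0])
assert np.allclose(A_not.T @ v, 0)
```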
We need to show that $${\displaystyle\max}_{T\in {\mathcal T}(K)} \|\Phi_T^{-1}\vert {{{\mathcal L}}}\| = {\displaystyle\max}_{T\in {{\mathfrak S}}(\Phi)} \|\Phi_T^{-1}\vert {{{\mathcal L}}}\|.$$ By construction, it is immediate that $${\displaystyle\max}_{T\in \mathcal T(K)} \|\Phi_T^{-1}\vert {{{\mathcal L}}}\| \ge {\displaystyle\max}_{T\in {{\mathfrak S}}(\Phi)} \|\Phi_T^{-1}\vert {{{\mathcal L}}}\|.$$ To prove the reverse inequality let $T \in \mathcal T(K)$ be fixed and let $(\bar u,\bar v, \bar y)$ attain the optimal value $\|\Phi_T^{-1}\vert {{{\mathcal L}}}\|$ in . Let $\bar F$ be the minimal face of $K^*$ containing $(\bar u, \bar v)$ and $\bar T := \bar F^* \in \mathcal T(K)$. As we detail below, $(\bar u,\bar v, \bar y)$ can be chosen so that $\Phi_{\bar T}$ is surjective. If $\|\Phi_T^{-1}\vert {{{\mathcal L}}}\| = 0$ then it trivially follows that $\|\Phi_T^{-1}\vert {{{\mathcal L}}}\| \le \|\Phi_{\bar T}^{-1}\vert {{{\mathcal L}}}\|.$ Otherwise, since $\|\bar y\|\le 1$ and $\bar y \in {{\mathcal L}}$ we have $$\|\Phi_T^{-1}\vert {{{\mathcal L}}}\| = -{\langle \bar v , \bar y \rangle} \le \|\Pi_{{{\mathcal L}}}(\bar v)\|^*.$$ Since $(\bar u,\bar v) \in T^* = {{\mathrm{graph}}}(\Phi_{\bar T})^*$ and $\|\bar u\|^* \le 1$, Proposition \[prop.Hoffman.surj\] yields $$\|\Phi_T^{-1}\vert {{{\mathcal L}}}\| \le \|\Pi_{{{\mathcal L}}}(\bar v)\|^* \le \|\Phi_{\bar T}^{-1}\vert {{{\mathcal L}}}\|.$$ In either case $\|\Phi_T^{-1}\vert {{{\mathcal L}}}\| \le \|\Phi_{\bar T}^{-1}\vert {{{\mathcal L}}}\|$ where $\bar T \in {{\mathfrak S}}(\Phi)$. 
Since this holds for any fixed $T \in\mathcal T(K)$, it follows that $${\displaystyle\max}_{T\in \mathcal T(K)} {\| \Phi_T^{-1}\vert {{{\mathcal L}}} \|} \le {\displaystyle\max}_{\bar T\in {{\mathfrak S}}(\Phi)} \|\Phi_{\bar T}^{-1}\vert {{{\mathcal L}}}\|.$$ It remains to show that $(\bar u,\bar v, \bar y)$ can be chosen so that $\Phi_{\bar T}$ is surjective, where $\bar T = \bar F^*$ and $\bar F$ is the minimal face of $K^*$ containing $(\bar u,\bar v)$. To that end, pick a solution $(\bar u,\bar v,\bar y)$ to  and consider the set $$V:=\{v \in {{\mathbb R}}^m: {\langle v , \bar y \rangle} = {\langle \bar v , \bar y \rangle}, \, (\bar u,v)\in T^*\}.$$ In other words, $V$ is the projection of the set of optimal solutions to  of the form $(\bar u, v, \bar y)$. Since $T$ is polyhedral, so is $T^*$ and thus $V$ is a polyhedron. Furthermore, $V$ must have at least one extreme point. Otherwise there exist $\hat v\in V$ and a nonzero $\tilde v\in {{\mathbb R}}^m$ such that $\hat v + t\tilde v \in V$ for all $t \in {{\mathbb R}}$. In particular, $(\bar u, \hat v + t\tilde v) \in T^*$ for all $t \in {{\mathbb R}}$ and thus both $(0,\tilde v) \in T^*\subseteq K^*$ and $-(0,\tilde v)\in T^*\subseteq K^*$. The latter in turn implies $\operatorname{Im}(\Phi) \subseteq \{y\in {{\mathbb R}}^m: {\langle \tilde v , y \rangle} =0\}$ contradicting the assumption ${{\mathrm{span}}}(\operatorname{Im}(\Phi)) ={{\mathbb R}}^m$. By replacing $\bar v$ if necessary, we can assume that $\bar v$ is an extreme point of $V$. We claim that the minimal face $\bar F$ of $K^*$ containing $(\bar u,\bar v)$ satisfies $$(0,v') \in \bar F = \bar T^* \Rightarrow v' = 0$$ thereby establishing the surjectivity of $\Phi_{\bar T}$ (cf., ). To prove this claim, proceed by contradiction. Assume $(0,v') \in \bar F$ for some nonzero $v'\in {{\mathbb R}}^m$. 
The choice of $\bar F$ ensures that $(\bar u,\bar v)$ lies in the relative interior of $\bar F$ and thus for $t>0$ sufficiently small both $(\bar u,\bar v + tv')\in \bar F\subseteq T^*$ and $(\bar u,\bar v - tv')\in \bar F\subseteq T^*$. The optimality of $(\bar u,\bar v,\bar y)$ implies that both ${\langle \bar v+tv' , \bar y \rangle} \ge {\langle \bar v , \bar y \rangle}$ and ${\langle \bar v -tv' , \bar y \rangle} \ge {\langle \bar v , \bar y \rangle}$ and so ${\langle v' , \bar y \rangle} = 0$. Thus both $\bar v + tv' \in V$ and $\bar v -tv'\in V$ with $tv'\ne 0$ thereby contradicting the assumption that $\bar v$ is an extreme point of $V$. Proofs of propositions in Section \[sec.Hoffman\] {#sec.proof.Hoffman} ================================================= Let $\Phi:{{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m$ be defined by $\Phi(x):= Ax+{{\mathbb R}}^m_+$ and $\mathcal L:=\{y\in {{\mathbb R}}^m: y_{L^c} = 0\}$. Observe that for this $\Phi$ we have $${{\mathrm{graph}}}(\Phi) = \{(x,Ax+s) \in {{\mathbb R}}^n \times {{\mathbb R}}^m: s \ge 0\}.$$ Hence ${\mathcal T}({{\mathrm{graph}}}(\Phi)) = \{T_J: J \subseteq \{1,\dots,m\}\}$ where $$T_J = \{(x,Ax+s) \in {{\mathbb R}}^n \times {{\mathbb R}}^m: s_J \ge 0\}.$$ Furthermore, for $T_J$ as above the mapping $\Phi_{T_J}:{{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m$ is defined by $$\Phi_{T_J}(x) = \{Ax + s: s_J \ge 0\}.$$ Therefore, $\Phi_{T_J}$ is relatively surjective if and only if $J$ is $A$-surjective. In other words, $T_J \in {{\mathfrak S}}(\Phi) \Leftrightarrow J \in {\mathcal S}(A)$ and in that case $$\|\Phi_{T_J}^{-1}\vert {\mathcal L}\| = {\displaystyle\max}_{y \in {{\mathbb R}}^L \atop \|y\| = 1} \min_{x\in{{\mathbb R}}^n\atop A_J x\le y_J}\|x\| = H_J(A\vert L).$$ To finish, apply Theorem \[thm.main\] to $\Phi$ and $\mathcal L$. Let $\mathcal L:=\{y\in {{\mathbb R}}^m: y_{L^c} = 0\}$. 
If $J \in {\mathcal S}(A)$ then the polyhedral sublinear mapping $\Phi:{{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m$ defined via $$x\mapsto Ax + \{s\in {{\mathbb R}}^m: s_J \ge 0\}$$ is surjective. Thus Proposition \[prop.Hoffman.surj\] yields $$H_J(A\vert L) = \|\Phi^{-1}\vert {\mathcal L}\| = \frac{1}{{\displaystyle\min}_{(u,v) \in {{\mathrm{graph}}}(\Phi)^*\atop\|\Pi_{\mathcal L}(v)\|^*=1} \|u\|^*}.$$ To get , observe that $u\in \Phi^*(v)$ if and only if $u = A\transp v, v_J \ge 0,$ and $v_{J^c} = 0$, and when that is the case $\Pi_{\mathcal L}(v) = v_{J\cap L}$. Finally observe that  readily follows from . Proposition \[prop.Hoffman.gral\] and Proposition \[prop.Hoffman.A.surj\] follow as special cases of Proposition \[prop.Hoffman.gral.rest\] and Proposition \[prop.Hoffman.A.surj.rest\] by taking $L = \{1,\dots,m\}$. The proofs of the remaining propositions are similar to the proofs of Proposition \[prop.Hoffman.gral.rest\] and Proposition \[prop.Hoffman.A.surj.rest\]. Let $\Phi:{{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m\times {{\mathbb R}}^p$ be defined as $$\Phi(x) := {\begin{bmatrix} Ax\\Cx \end{bmatrix}} + \{0\}\times {{\mathbb R}}^p_+$$ and ${{\mathcal L}}:= {{\mathbb R}}^m \times {{\mathbb R}}^p$. 
Then ${\mathcal T}({{\mathrm{graph}}}(\Phi)) = \{T_J: J\subseteq\{1,\dots,p\}\}$ where $$T_J := \{(x,Ax,Cx+s)\in {{\mathbb R}}^n\times ({{\mathbb R}}^m\times {{\mathbb R}}^p): s_J \ge 0\}.$$ Furthermore, $T_J\in {{\mathfrak S}}(\Phi) \Leftrightarrow J \in {\mathcal S}(A;C)$ and in that case $$\|\Phi_{T_J}^{-1}\vert {{{\mathcal L}}}\|={\displaystyle\max}_{(y,w)\in A{{\mathbb R}}^m\times {{\mathbb R}}^p \atop \|(y,w)\|\le 1} {\displaystyle\min}_{x\in {{\mathbb R}}^n \atop Ax = y, C_Jx \le w_J} \|x\|.$$ Observe that $ u \in \Phi_{T_J}^*(v,z) \Leftrightarrow (-u,v,z) \in T_J^* \Leftrightarrow u = A\transp v + C_J\transp z_J, \; z_J \ge 0,$ and $z_{J^c} = 0.$ Thus for $J\in{\mathcal S}( A;C)$ Proposition \[prop.Hoffman.surj\] yields $$\begin{aligned} \|\Phi_{T_J}^{-1}\vert {{{\mathcal L}}}\| &= {\displaystyle\max}_{(v,z)\in {{\mathbb R}}^m\times{{\mathbb R}}^p_+ \atop z_{J^c} = 0, \|A\transp v +C\transp z\|^* \le 1} \|\Pi_{\operatorname{Im}(\Phi_{T_J})\cap{{\mathcal L}}}(v,z)\|^* \\ &= {\displaystyle\max}_{(v,z)\in {{\mathbb R}}^m\times{{\mathbb R}}^p_+ \atop z_{J^c} = 0, \|A\transp v +C\transp z\|^* \le 1} \|\Pi_{A({{\mathbb R}}^n)\times {{\mathbb R}}^p}(v,z)\|^* \\&= {\displaystyle\max}_{(v,z)\in (A{{\mathbb R}}^n)\times{{\mathbb R}}^p_+ \atop z_{J^c} = 0, \|A\transp v +C\transp z\|^* \le 1} \|(v,z)\|^*\end{aligned}$$ To finish, apply Theorem \[thm.main\]. The proof is identical to the proof of Proposition \[prop.Hoffman\] if we take ${{\mathcal L}}= {{\mathbb R}}^m \times \{0\}$ instead. The proof is identical to the proof of Proposition \[prop.Hoffman\] if we take ${{\mathcal L}}= \{0\} \times {{\mathbb R}}^p$ instead. The proof is similar to the proof of Proposition \[prop.Hoffman\]. 
Let $\Phi:{{\mathbb R}}^n \rightrightarrows {{\mathbb R}}^m \times {{\mathbb R}}\times {{\mathbb R}}^n$ be defined as $$\Phi(x) = {\begin{bmatrix} \tilde A x\\ Cx \end{bmatrix}} + \{0\} \times {{\mathbb R}}^n_+$$ and ${{\mathcal L}}= {{\mathbb R}}^m\times \{0\} \times\{0\} \subseteq {{\mathbb R}}^m \times {{\mathbb R}}\times {{\mathbb R}}^n$. Then ${\mathcal T}({{\mathrm{graph}}}(\Phi)) = \{T_J: J\subseteq\{1,\dots,p\}\}$ where $$T_J := \{(x,\tilde Ax,-x+s)\in {{\mathbb R}}^n\times ({{\mathbb R}}^m\times {{\mathbb R}}\times{{\mathbb R}}^n): s_J \ge 0\}.$$ Furthermore, $T_J\in {{\mathfrak S}}(\Phi) \Leftrightarrow J \in {\mathcal S}(\tilde A;C)$ and in that case $$\operatorname{Im}(\Phi_{T_J})\cap{{\mathcal L}}= \left\{(y,0,0) : y = Ax, 0 = {{\mathbf 1}}\transp x, \; 0=-x+s \; \text{for} \; s\in {{\mathbb R}}^n \; \text{with}\; s_J \ge 0\right\}.$$ Thus $$\|\Phi_{T_J}^{-1}\vert {{{\mathcal L}}}\|={\displaystyle\max}_{y\in AK_J \atop \|y\|\le 1} {\displaystyle\min}_{x\in K_J \atop Ax = y} \|x\|.$$ Observe that $ u \in \Phi_{T_J}^*(v,t,z) \Leftrightarrow u = A\transp v + t{{\mathbf 1}}-z, \; z_J \ge 0,$ and $z_{J^c} = 0.$ Thus for $J\in{\mathcal S}(\tilde A;C)$ Proposition \[prop.Hoffman.surj\] yields $$\begin{aligned} \|\Phi_{T_J}^{-1}\vert {{{\mathcal L}}}\| &= {\displaystyle\max}_{(v,t,z)\in {{\mathbb R}}^m\times {{\mathbb R}}\times {{\mathbb R}}^n_+ \atop z_{J^c} = 0,\|A\transp v +t{{\mathbf 1}}-z\|^* \le 1} \|\Pi_{\operatorname{Im}(\Phi_{T_J})\cap{{\mathcal L}}}(v,t,z)\|^* \\ &= {\displaystyle\max}_{(v,t,z)\in {{\mathbb R}}^m\times {{\mathbb R}}\times {{\mathbb R}}^n_+ \atop z_{J^c} = 0, \|A\transp v +t{{\mathbf 1}}-z\|^* \le 1} \|\Pi_{(\tilde A({{\mathbb R}}^n)\times {{\mathbb R}}^n)\cap ({{\mathbb R}}^m\times \{0\}\times \{0\})}(v,t,z)\|^* \\ &= {\displaystyle\max}_{(v,t)\in\tilde A{{\mathbb R}}^n, z\in {{\mathbb R}}^n_+ \atop z_{J^c} = 0, \|A\transp v +t{{\mathbf 1}}-z\|^* \le 1} \|\Pi_{\tilde A({{\mathbb R}}^n)\cap ({{\mathbb R}}^m\times \{0\})}(v,t)\|^* \\ 
&= {\displaystyle\max}_{(v,t)\in\tilde A{{\mathbb R}}^n, z\in {{\mathbb R}}^n_+ \atop z_{J^c} = 0, \|A\transp v + t{{\mathbf 1}}-z\|^* \le 1} \|\Pi_{L_A}(v)\|^*.\end{aligned}$$ To finish, apply Theorem \[thm.main\]. Acknowledgements {#acknowledgements .unnumbered} ================ Javier Peña’s research has been funded by NSF grant CMMI-1534850. S. Akers. Binary decision diagrams. *IEEE Trans. Computers*, 27 (6): 509–516, 1978. D. Amelunxen and P. Bürgisser. A coordinate-free condition number for convex programming. *SIAM J. on Optim.*, 22 (3): 1029–1041, 2012. D. Az[é]{} and J. Corvellec. On the sensitivity analysis of [H]{}offman constants for systems of linear inequalities. *SIAM Journal on Optimization*, 12 (4): 913–927, 2002. A. Beck and S. Shtern. Linearly convergent away-step conditional gradient for non-strongly convex functions. *Mathematical Programming*, 164: 1–27, 2017. D. Bergman, A. Cire, W. van Hoeve, and J. Hooker. *Decision diagrams for optimization*. Springer, 2016. J. Borwein. Adjoint process duality. *Mathematics of Operations Research*, 8 (3): 403–434, 1983. P. Bürgisser and F. Cucker. *Condition*. Springer Berlin Heidelberg, 2013. J. Burke and P. Tseng. A unified analysis of [H]{}offman’s bound via [F]{}enchel duality. *SIAM Journal on Optimization*, 6 (2): 265–282, 1996. M. Epelman and R. Freund. A new condition measure, preconditioners, and relations between different measures of conditioning for conic linear systems. *SIAM J. on Optim.*, 12: 627–655, 2002. R. Freund. Complexity of convex optimization using geometry-based measures and a reference point. *Math Program.*, 99: 197–221, 2004. R. Freund and J. Vera. Some characterizations and properties of the “distance to ill-posedness” and the condition measure of a conic linear system. *Math Program.*, 86: 225–260, 1999. R. Freund and J. Vera. 
On the complexity of computing estimates of condition measures of a conic linear system. *Mathematics of Operations Research*, 28 (4): 625–648, 2003. D. Garber. Fast rates for online gradient descent without strong convexity via [H]{}offman’s bound. *arXiv preprint arXiv:1802.04623*, 2018. D. Garber and E. Hazan. A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization. *SIAM J. on Optim.*, 26: 1493–1528, 2016. F. Granot and J. Skorin-Kapov. Some proximity and sensitivity results in quadratic integer programming. *Mathematical Programming*, 47 (1-3): 259–268, 1990. O. G[ü]{}ler, A. Hoffman, and U. Rothblum. Approximations to solutions to systems of linear inequalities. *SIAM Journal on Matrix Analysis and Applications*, 16 (2): 688–696, 1995. D. Gutman and J. Pe[ñ]{}a. The condition number of a function relative to a polytope. *arXiv preprint arXiv:1802.00271*, 2018. A. Hoffman. On approximate solutions of systems of linear inequalities. *Journal of Research of the National Bureau of Standards*, 49 (4): 263–265, 1952. A. Jourani. Hoffman’s error bound, local controllability, and sensitivity analysis. *SIAM Journal on Control and Optimization*, 38 (3): 947–970, 2000. D. Klatte and G. Thiere. Error bounds for solutions of linear equations and inequalities. *Zeitschrift f[ü]{}r Operations Research*, 41 (2): 191–214, 1995. S. Lacoste-Julien and M. Jaggi. On the global linear convergence of [F]{}rank-[W]{}olfe optimization variants. In *Advances in Neural Information Processing Systems (NIPS)*, 2015. D. Leventhal and A. Lewis. Randomized methods for linear constraints: Convergence rates and conditioning. *Math. Oper. Res.*, 35: 641–654, 2010. A. Lewis. Ill-conditioned convex processes and linear inequalities. *Math. Oper. Res.*, 24: 829–834, 1999. A. Lewis. The structured distance to ill-posedness for conic systems. *Math. Oper. Res.*, 29: 776–785, 2005. W. Li. 
The sharp [L]{}ipschitz constants for feasible and optimal solutions of a perturbed linear program. *Linear algebra and its applications*, 187: 15–40, 1993. Z. Luo and P. Tseng. Error bounds and convergence analysis of feasible descent methods: a general approach. *Annals of Operations Research*, 46 (1): 157–178, 1993. O. Mangasarian and T-H Shiau. Lipschitz continuity of solutions of linear inequalities, programs and complementarity problems. *SIAM Journal on Control and Optimization*, 25 (3): 583–595, 1987. I. Necoara, Y. Nesterov, and F. Glineur. Linear convergence of first order methods for non-strongly convex optimization. *To Appear in Mathematical Programming*, 2018. Y. Nesterov and A. Nemirovskii. *Interior-Point Polynomial Algorithms in Convex Programming*. SIAM Studies in Applied Mathematics. SIAM, 1994. T. Nguyen. A stroll in the jungle of error bounds. *arXiv preprint arXiv:1704.06938*, 2017. J. S. Pang. Error bounds in mathematical programming. *Math. Program.*, 79: 299–332, 1997. J. Pe[ñ]{}a. Understanding the geometry of infeasible perturbations of a conic linear system. *SIAM J. on Optim.*, 10: 534–550, 2000. J. Pe[ñ]{}a. A characterization of the distance to infeasibility under block-structured perturbations. *Linear algebra and its applications*, 370: 193–216, 2003. J. Pe[ñ]{}a. On the block-structured distance to non-surjectivity of sublinear mappings. *Mathematical programming*, 103 (3): 561–573, 2005. J. Pe[ñ]{}a and D. Rodríguez. Polytope conditioning and linear convergence of the [F]{}rank-[W]{}olfe algorithm. *To Appear in Mathematics of Operations Research*, 2018. A. Ramdas and J. Pe[ñ]{}a. Towards a deeper geometric, analytic and algorithmic understanding of margins. *Optimization Methods and Software*, 31 (2): 377–391, 2016. J. Renegar. Incorporating condition measures into the complexity theory of linear programming. *SIAM J. on Optim.*, 5: 506–524, 1995. J. Renegar. 
Linear programming, complexity theory and elementary functional analysis. *Math. Program.*, 70: 279–351, 1995. J. Renegar. *A Mathematical View of Interior-Point Methods in Convex Optimization*, volume 3 of *MPS/SIAM Ser. Optim.* SIAM, 2001. S. Robinson. Bounds for error in the solution set of a perturbed linear program. *Linear Algebra and its applications*, 6: 69–81, 1973. O. Stein. Error bounds for mixed integer linear optimization problems. *Mathematical Programming*, 156 (1-2): 101–123, 2016. H. Van Ngai and M. Th[é]{}ra. Error bounds for systems of lower semicontinuous functions in [A]{}splund spaces. *Mathematical Programming*, 116 (1-2): 397–427, 2009. P. Wang and C. Lin. Iteration complexity of feasible descent methods for convex optimization. *Journal of Machine Learning Research*, 15 (1): 1523–1548, 2014. W. Xia, J. Vera, and L. F. Zuluaga. Globally solving non-convex quadratic programs via linear integer programming techniques. *arXiv preprint arXiv:1511.02423*, 2015. C. Zalinescu. Sharp estimates for [H]{}offman’s constant for systems of linear inequalities and equalities. *SIAM Journal on Optimization*, 14 (2): 517–533, 2003. Z. Zhou and A. So. A unified approach to error bounds for structured convex optimization problems. *Mathematical Programming*, 165 (2): 689–728, 2017. [^1]: Tepper School of Business, Carnegie Mellon University, USA, [jfp@andrew.cmu.edu]{} [^2]: Department of Econometrics and Operations Research, Tilburg University, The Netherlands, [j.c.veralizcano@uvt.nl]{} [^3]: Department of Industrial and Systems Engineering, Lehigh University, USA, [luis.zuluaga@lehigh.edu]{}
February 2008 [**Niels A. Obers**]{} *The Niels Bohr Institute\ Blegdamsvej 17, 2100 Copenhagen Ø, Denmark*\ [obers@nbi.dk]{} [**Abstract**]{} These lectures review some of the recent progress in uncovering the phase structure of black hole solutions in higher-dimensional vacuum Einstein gravity. The two classes on which we focus are Kaluza-Klein black holes, static solutions with an event horizon in asymptotically flat spaces with compact directions, and stationary solutions with an event horizon in asymptotically flat space. Highlights include the recently constructed multi-black hole configurations on the cylinder and thin rotating black rings in dimensions higher than five. The phase diagram that is emerging for each of the two classes will be discussed, including an intriguing connection that relates the phase structure of Kaluza-Klein black holes with that of asymptotically flat rotating black holes. Introduction and motivation \[obesec:intr\] =========================================== The study of the phase structure of black objects in higher-dimensional gravity (see the reviews [@obeKol:2004ww; @obeEmparan:2006mm; @obeHarmark:2007md; @obeEmparan:2008eg]) is interesting for a wide variety of reasons. First of all, it is of intrinsic interest in gravity where the spacetime dimension can be viewed as a tunable parameter. In this way one may discover which properties of black holes are universal and which ones show a dependence on the dimension. We know for example that the laws of black hole mechanics are of the former type, while, as will be illustrated in this lecture, properties such as uniqueness and horizon topology are of the latter type. In particular, recent research has revealed that as the dimension increases the phase structure becomes increasingly intricate and diverse. 
In this context, another interesting phenomenon that has been observed is the existence of critical dimensions, above which certain properties of black holes can change drastically. Uncovering the phases of black holes is also relevant for the issue of classical stability of black hole solutions as well as gravitational phase transitions between different solutions, such as those that involve a change of topology of the event horizon. Furthermore, information about the full structure of the static or stationary phases of the theory can provide important clues about the time-dependent trajectories that interpolate between different phases. Going beyond pure Einstein gravity, there are also important motivations originating from String Theory. String/M-Theory at low energies is described by higher-dimensional theories of gravity, namely various types of supergravities. As a consequence, black objects in pure gravity are often intimately related to black hole/brane solutions in string theory. These charged cousins and their near-extremal limits play an important role in the microscopic understanding of black hole entropy [@obeStrominger:1996sh] and other physical properties (see also the reviews [@obeMathur:2005zp; @obeMathur:2005ai]) . A related application is in the gauge/gravity correspondence [@obeMaldacena:1997re; @obeAharony:1999ti], where the near-extremal limits of these black branes give rise to phases in the corresponding dual non-gravitational theories at finite temperature. In this way, finding new black objects can lead to the prediction of new phases in these thermal non-gravitational theories (see [@obeAharony:2004ig; @obeHarmark:2004ws]). Finally, if large extra dimensions [@obeArkaniHamed:1998rs; @obeAntoniadis:1998ig] are realized in Nature, higher-dimensional black holes would be important as possible objects to be produced in accelerators or observed in the Universe (see the review [@obeKanti:2004nr]). 
In the past seven years, the two classes that have been studied most intensely are:

- stationary solutions with an event horizon in asymptotically flat space
- static solutions with an event horizon in asymptotically flat spaces with compact directions

For brevity, we will often refer in this lecture to the first type as [*rotating black holes*]{} and the second type as [*Kaluza-Klein black holes*]{}. In this nomenclature, the term “black hole” stands for any object with an event horizon, regardless of its horizon topology (not necessarily spherical). We also allow for the possibility of multiple disconnected event horizons, to which we refer as [*multi-black hole solutions*]{}. For rotating black holes most progress in recent years has been in five dimensions. Here, it has been found that in addition to the Myers-Perry (MP) black holes [@obeMyers:1986un], there exist rotating black rings [@obeEmparan:2001wn; @obeEmparan:2006mm] and multi-black hole solutions like black Saturns and multi-black rings [@obeElvang:2007rd; @obeElvang:2007hg; @obeIguchi:2007is; @obeEvslin:2007fv] including those with two independent angular momenta [@obePomeransky:2006bd; @obeIzumi:2007qx; @obeElvang:2007hs]. All of these are exact solutions which have been obtained with the aid of special ansätze [@obeEmparan:2001wk; @obeHarmark:2004rm] based on symmetries and inverse-scattering techniques [@obeBelinsky:1971nt; @obeBelinsky:1979; @obeBelinski:2001ph; @obePomeransky:2005sj]. We refer in particular to the review [@obeEmparan:2006mm] for further details on the black ring in five dimensions and Ref. [@obeElvang:2007hg] for a discussion of the phase diagram in five dimensions for the case of rotating black holes with a single angular momentum. Moreover, the very recent review [@obeEmparan:2008eg] provides a pedagogical overview of black holes in higher dimensions, including the more general phase structure of five-dimensional stationary black holes and solution generating techniques. 
Only recently has there been significant progress in exploring the phase structure of stationary solutions in six and more dimensions [@obeEmparan:2007wm]. This includes the explicit construction of thin black rings in six and higher dimensions [@obeEmparan:2007wm] based on a perturbative technique known as matched asymptotic expansion [@obeHarmark:2003yz; @obeGorbonos:2004uc; @obeKarasik:2004ds; @obeGorbonos:2005px; @obeDias:2007hg]. Furthermore, in Ref. [@obeEmparan:2007wm] the correspondence between ultra-spinning black holes [@obeEmparan:2003sy] and black membranes on a two-torus was exploited, to take steps towards qualitatively completing the phase diagram of rotating blackfolds with a single angular momentum. That has led to the proposal that there is a connection between MP black holes and black rings, and between MP black holes and black Saturns, through merger transitions involving two kinds of ‘pinched’ black holes. More generally, this analogy suggests an infinite number of pinched black holes of spherical topology leading to a complicated pattern of connections and mergers between phases. The proposed phase diagram was obtained by importing the present knowledge of the phases of Kaluza-Klein black holes on a two-torus. For Kaluza-Klein (KK) black holes, most progress has been for the simplest KK space, namely Minkowski space times a circle. The simplest static solution of Einstein gravity (in five or more dimensions) in this case is the uniform black string, which has a factorized form consisting of a Schwarzschild-Tangherlini black hole and an extra flat (compactified) direction. But there are many more phases of KK black holes, which in recent years have been uncovered by a combination of perturbative techniques (matched asymptotic expansion), numerical methods and exact solutions. 
These phases include non-uniform black strings (see [@obeGubser:2001ac; @obeWiseman:2002zc; @obeSorkin:2004qq; @obeKleihaus:2006ee; @obeSorkin:2006wp; @obeKleihaus:2007cf] for numerical results), localized black holes (see [@obeHarmark:2002tr; @obeHarmark:2003yz; @obeGorbonos:2004uc; @obeGorbonos:2005px; @obeKarasik:2004ds; @obeChu:2006ce; @obeDias:2007hg; @obeKol:2007rx] for analytical results and [@obeSorkin:2003ka; @obeKudoh:2003ki; @obeKudoh:2004hs] for numerical solutions) and bubble-black hole sequences [@obeElvang:2004iz]. Here recent progress [@obeDias:2007hg] includes the construction of small mass multi-black hole configurations localized on the circle which in some sense parallel the multi-black hole configurations obtained for rotating black holes. All of these static, uncharged phases can be depicted in a two-dimensional phase diagram [@obeHarmark:2003dg; @obeKol:2003if; @obeHarmark:2003eg] parameterized by the mass and tension. Mapping out this phase structure has consequences for the endpoint of the Gregory-Laflamme instability [@obeGregory:1993vy; @obeGregory:1994bj] of the neutral black string, which is a long wavelength instability that involves perturbations with an oscillating profile along the direction of the string. The non-uniform black string phase emerges from the uniform black string phase at the Gregory-Laflamme point, which is determined by the (time-independent) threshold mode where the instability sets in. An interesting property that has been found in this context is the existence of a critical dimension [@obeSorkin:2004qq] where the transition of the uniform black string into the non-uniform black string changes from first order into second order. Moreover, it has been shown [@obeKol:2002xz; @obeWiseman:2002ti; @obeKol:2003ja; @obeSorkin:2006wp] that the localized black hole phase meets the non-uniform black string phase in a horizon-topology changing merger point. 
Turning to more recent developments, we note that the new multi-black hole configurations of Ref. [@obeDias:2007hg] raise the question of existence of new non-uniform black strings. Furthermore, analysis of the three-black hole configuration  [@obeDias:2007hg] suggests the possibility of a new class of static lumpy black holes in Kaluza-Klein space. Many of the insights obtained in this simplest case are expected to carry over as we go to Kaluza-Klein spaces with higher-dimensional compact spaces [@obeKol:2004pn; @obeKol:2006vu; @obeHarmark:2007md], although the degree of complexity in these cases will increase substantially. In summary, recent research has shown that in going from four to higher dimensions in vacuum Einstein gravity a very rich phase structure of black holes is observed with fascinating new properties, such as symmetry breaking, new horizon topologies, merger points and in some cases infinite non-uniqueness. Obviously one of the reasons for this richer phase structure is that as the dimension increases there are many more degrees of freedom for the metric. Furthermore, for stationary solutions every time the dimension increases two units, there is one more orthogonal rotation plane available. Another reason is the existence of extended objects in higher dimensions, such as black $p$-branes (including the uniform black string for $p=1$). Finally, allowing for compact directions introduces extra scales, and hence more dimensionless parameters in the problem. The reasons that make the phase structure so rich, such as the increase of the degrees of freedom and the appearance of fewer symmetries, are those that also make it hard to uncover. As the overview above illustrates, there has been remarkable progress in recent years, but we have probably only seen a glimpse of the full phase structure of black holes in higher-dimensional gravity. 
However, the cases considered so far will undoubtedly provide essential clues towards a more complete picture and will form the basis for further developments into this fascinating subject. The outline of these lectures is as follows. To set the stage, we first give in Sec. \[obesec:uniq\] a brief introduction to known uniqueness theorems for black holes in pure gravity and some prominent cases of non-uniqueness in higher dimensions. We also give a short overview of some of the most important techniques that have been used to obtain black hole solutions beyond four dimensions. Then we review the current status for Kaluza-Klein black holes in Secs. \[obesec:kkbh\] and \[obesec:mubh\]. In particular, Sec. \[obesec:kkbh\] presents the main results for black objects on the cylinder with one event horizon as well as results for Kaluza-Klein black holes on a two-torus, which will be relevant in the sequel. Sec. \[obesec:mubh\] discusses the recently constructed multi-black hole configurations on the cylinder. Then the focus will be turned to rotating black holes in Secs. \[obesec:robh\] and \[obesec:phas\]. The five-dimensional case will be very briefly reviewed, but most attention will be given to the recent progress for six and higher dimensions, including the construction of thin black rings in Sec. \[obesec:robh\]. We then discuss in Sec. \[obesec:phas\] the proposed phase structure for rotating black holes in six and higher dimensions with a single angular momentum. The resulting picture builds on an interesting connection to the phase structure of Kaluza-Klein black holes discussed in the first part. We end with a future outlook for the subject in Sec. \[obesec:outl\]. 
Uniqueness theorems and going beyond four dimensions \[obesec:uniq\] ===================================================================== In this section we first review known black hole uniqueness theorems in Einstein gravity as well as the most prominent cases of non-uniqueness of black holes in higher dimensions. We also give an overview of some of the most important techniques that have been used in finding black hole solutions beyond four dimensions. Black hole (non-)uniqueness --------------------------- The purpose of this lecture is to explore possible black hole solutions of the vacuum Einstein equations $R_{\mu\nu}=0$ in dimensions $D \geq 4$. In four-dimensional vacuum gravity, a black hole in an asymptotically flat space-time is uniquely specified by the ADM mass $M$ and angular momentum $J$ measured at infinity [@obeIsrael:1967wq; @obeCarter:1971; @obeHawking:1972vc; @obeRobinson:1975]. In particular, in the static case the unique solution is the four-dimensional Schwarzschild black hole solution, and for the stationary case it is the Kerr black hole \[obeKerr\] $$\begin{aligned} ds^2 & = & - dt^2 + \frac{\mu r}{\Sigma} (dt + a \sin^2 \theta d \phi)^2 + \frac{\Sigma}{\Delta} dr^2 + \Sigma d \theta^2 + (r^2+a^2) \sin^2 \theta d \phi^2 , \\ & & \Sigma = r^2 + a^2 \cos^2 \theta {\ , \ \ }\Delta = r^2 - \mu r + a^2 {\ , \ \ }\mu = 2 G M {\ , \ \ }a = \frac{J}{M} \ .\end{aligned}$$ For $J=0$ this clearly reduces to the Schwarzschild black hole, and the angular momentum is bounded by a critical value $J \leq GM^2 $ (the Kerr bound), beyond which there appears a naked singularity. The bound is saturated for the extremal Kerr solution, which is non-singular. The uniqueness in four dimensions fits nicely with the fact that black holes in four dimensions are known to be classically stable [@obeRegge:1957td; @obeZerilli:1971wd; @obeTeukolsky:1973ha] (for further references see also the lecture [@obeKodama:2007ph] at this school). 
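The Kerr bound can be made concrete in a small numerical aside (not part of the lectures; units with $G=1$): the horizon radii are the real roots of $\Delta = r^2 - \mu r + a^2 = 0$, and they exist precisely when $J \leq G M^2$.

```python
import math

def kerr_horizons(M, J, G=1.0):
    """Horizon radii of the Kerr solution: roots of Delta = r^2 - mu r + a^2,
    with mu = 2 G M and a = J / M. Returns None past the Kerr bound J > G M^2."""
    mu, a = 2 * G * M, J / M
    disc = mu ** 2 - 4 * a ** 2          # real roots iff (2GM)^2 >= 4 (J/M)^2
    if disc < 0:
        return None                      # naked singularity: no horizon
    return ((mu + math.sqrt(disc)) / 2, (mu - math.sqrt(disc)) / 2)

assert kerr_horizons(M=1.0, J=0.5) is not None   # under the bound J <= G M^2
assert kerr_horizons(M=1.0, J=1.5) is None       # over the bound: no horizon
r_plus, r_minus = kerr_horizons(M=1.0, J=1.0)    # extremal case: degenerate horizon
assert abs(r_plus - r_minus) < 1e-12
```

The discriminant condition $\mu^2 \geq 4a^2$ is just $G^2 M^4 \geq J^2$, i.e. the Kerr bound quoted above.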
The generalization of the Schwarzschild black hole to arbitrary dimension $D$ was found by Tangherlini [@obeTangherlini:1963], and is given by the metric $$\label{obeneutbh} ds^2 = - f dt^2 + f^{-1} dr^2 + r^2 d \Omega_{D-2}^2 {\ , \ \ }f = 1 - \frac{r_0^{D-3}}{r^{D-3}} \ .$$ Here $d\Omega_{D-2}^2$ is the metric element of a $(D-2)$-dimensional unit sphere with volume $\Omega_{D-2} = 2 \pi^{(D-1)/2}/\Gamma [ (D-1)/2]$. Since the Newtonian potential $\Phi$ in the weak-field regime $r \rightarrow \infty$ can be obtained from $g_{tt} = - 1 - 2 \Phi$, this shows that $\Phi = - r_0^{D-3}/(2 r^{D-3})$. The mass of the black hole is then easily obtained as $$M = \frac{\Omega_{D-2} (D-2)}{16 \pi G} r_0^{D-3} \ ,$$ by using $ \nabla^2 \Phi = 8 \pi G \frac{D-3}{D-2} T_{tt} $ and $ M = \int dx^{D-1} T_{tt} $, where $T_{tt} $ is the energy density. Uniqueness theorems [@obeGibbons:2002bh; @obeGibbons:2002av] for $D$-dimensional ($D > 4$) asymptotically flat space-times state that the Schwarzschild-Tangherlini black hole solution is the only static black hole in pure gravity. The classical stability of these higher-dimensional black hole solutions was addressed in Refs. [@obeKodama:2003jz; @obeIshibashi:2003ap; @obeKodama:2003kk]. The generalization of the Kerr black hole to arbitrary dimension $D$ was found by Myers and Perry [@obeMyers:1986un], who obtained the metric of a rotating black hole with angular momenta in an arbitrary number of orthogonal planes. The Myers-Perry (MP) black hole is thus specified by the mass and angular momenta $J_k$, where $k=1 \ldots r$ with $r = {\rm rank} (SO(D-2))$. For MP black holes with a single angular momentum there is again a Kerr bound in the five-dimensional case, $J^2 < 32 G M^3/(27\pi)$, but in six and more dimensions the angular momentum is unbounded and the black hole can be ultra-spinning. This fact will be important in Secs. \[obesec:robh\] and \[obesec:phas\]. 
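A quick numerical sanity check of the mass formula above (an illustrative sketch, not part of the lectures; units with $G=1$): for $D=4$ it must reduce to the familiar relation $r_0 = 2 G M$.

```python
import math

def sphere_volume(n):
    """Volume of the unit n-sphere, Omega_n = 2 pi^((n+1)/2) / Gamma((n+1)/2)."""
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

def tangherlini_mass(D, r0, G=1.0):
    """Mass of the D-dimensional Schwarzschild-Tangherlini black hole with
    horizon radius r0:  M = Omega_{D-2} (D-2) r0^{D-3} / (16 pi G)."""
    return sphere_volume(D - 2) * (D - 2) * r0 ** (D - 3) / (16 * math.pi * G)

# D = 4 check: Omega_2 = 4 pi, so M = r0 / (2 G), i.e. the familiar r0 = 2 G M.
assert abs(sphere_volume(2) - 4 * math.pi) < 1e-12
assert abs(tangherlini_mass(4, r0=2.0) - 1.0) < 1e-12
```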
When several angular momenta are turned on, at least one or two of them must vanish in order to have an ultra-spinning regime, depending on whether the dimension is even or odd [@obeEmparan:2003sy]. Despite the absence of a Kerr bound in six and higher dimensions, it was argued in [@obeEmparan:2003sy] that the Myers-Perry black hole becomes unstable above some critical angular momentum, thus recovering a dynamical Kerr bound. The instability was identified as a Gregory-Laflamme instability by showing that in the large angular momentum limit the black hole geometry becomes that of an unstable black membrane. This result is also an indication of the existence of new rotating black holes with spherical topology, where the horizon is distorted by ripples along the polar direction. This will be discussed in more detail in Sec. \[obesec:phas\]. Finally, we note that all of the black hole solutions discussed so far in this section have an event horizon of spherical topology $S^{D-2}$. Contrary to the static case, there are no uniqueness theorems for non-static black holes in pure gravity with $D > 4$.[^1] On the contrary, there are known cases of non-uniqueness. The first example of this was found by Emparan and Reall [@obeEmparan:2001wn] and occurs in five dimensions for stationary solutions in asymptotically flat space-time: for a certain range of mass and angular momentum there exist both a rotating MP black hole with $S^3$ horizon [@obeMyers:1986un] and rotating black rings with $S^2 \times S^1$ horizons [@obeEmparan:2001wn]. As mentioned in the introduction, following the discovery of the rotating black ring [@obeEmparan:2001wn], further generalizations of these to black Saturns and multi-black rings have been found in five dimensions. 
It is possible that essentially all five-dimensional black holes (up to iterations of multi-black rings) with two axial Killing vectors have been found by now[^2], but the study of non-uniqueness for rotating black holes in six and higher dimensions has only recently begun (see Secs. \[obesec:robh\] and \[obesec:phas\]). Another case where non-uniqueness has been observed is for Kaluza-Klein black holes, in particular for black hole solutions that asymptote to Minkowski space ${\mathcal{M}}^{D-1} $ times a circle $S^1$. Here, the simplest solution one can construct is the uniform black string, which is the $(D-1)$-dimensional Schwarzschild-Tangherlini black hole plus a flat direction and has horizon topology $S^{D-3} \times S^1$. However, at least for a certain range of masses, there are also non-uniform black strings and black holes that are localized on the circle, neither of which is translationally invariant along the circle direction. All of these solutions, which have in common that they possess an $SO(D-2)$ symmetry, will be further discussed in Sec. \[obesec:kkbh\]. If one allows for disconnected horizons, then multi-black hole configurations localized on the circle are also possible, giving rise to an infinite non-uniqueness. These will be discussed in Sec. \[obesec:mubh\]. In addition there are more exotic black hole solutions, called bubble-black hole sequences [@obeElvang:2004iz], but for simplicity these will not be further dealt with in this lecture. More generally, for black hole solutions that asymptote to Minkowski space ${\mathcal{M}}^{D-p} $ times a torus ${\mathbb{T}}^p$, the simplest class of solutions with an event horizon are black $p$-branes. The metric is that of a $(D-p)$-dimensional Schwarzschild-Tangherlini black hole plus $p$ flat directions. Beyond that there will exist many more phases, which have only been partially explored. As an example, we discuss in Sec. 
\[obesec:torp\] the phases of KK black holes on ${\mathbb{T}}^2$ that follow by adding a flat direction to the phases of KK black holes on $S^1$. These turn out to be intimately related to the phase structure of rotating black holes for $D \geq 6$, as we will see in Sec. \[obesec:phas\]. Overview of solution methods \[obesec:solm\] -------------------------------------------- We briefly describe here the available methods that have been employed in order to find the new solutions that are the topic of this lecture. The main techniques for finding new solutions are as follows. [.2cm [**Symmetries and ansätze.**]{}]{} It is often advantageous to use symmetries and other physical input to constrain the form of the metric for the putative solution. In this way one may be able to find an ansatz for the metric that enables one to solve the vacuum Einstein equations exactly. This often also involves a clever choice of coordinate system, adapted to the symmetries of the problem. This ingredient is also important in cases where the Einstein equations can only be solved perturbatively around a known solution (see below). As an example we note the generalized Weyl ansatz [@obeEmparan:2001wk; @obeHarmark:2004rm] for static and stationary solutions with $D-2$ commuting Killing vectors, in which the Einstein equations simplify considerably. For the static case, this ansatz is for example relevant for bubble-black hole sequences [@obeElvang:2004iz] in five- and six-dimensional KK space. For the stationary case, it is relevant for rotating black ring solutions in five-dimensional asymptotically flat space. Another example, relevant for black holes and strings on cylinders, is the $SO(D-2)$-symmetric ansatz of [@obeHarmark:2002tr; @obeWiseman:2002ti; @obeHarmark:2003eg] based on coordinates that interpolate between spherical and cylindrical coordinates [@obeHarmark:2002tr]. This has been used to obtain the metric of small black holes on the cylinder [@obeHarmark:2003yz; @obeDias:2007hg]. 
[.2cm [**Solution generating techniques.**]{}]{} Given an exact solution, there are cases where one can use solution generating techniques, such as the inverse scattering method, to generate new solutions. See for example Refs. [@obeBelinsky:1971nt; @obeBelinsky:1979; @obeBelinski:2001ph; @obePomeransky:2005sj], where this method was first used for stationary black hole solutions in five dimensions, and [@obeGiusto:2007fx] for a further solution generating mechanism. [.2cm [**Matched asymptotic expansion.**]{}]{} In some cases one knows the exact form of the solution in some corner of the moduli space. One may then attempt to find the solution in a perturbative expansion around this (limiting) known solution. This method, called matched asymptotic expansion [@obeHarmark:2003yz; @obeGorbonos:2004uc; @obeKarasik:2004ds; @obeGorbonos:2005px; @obeDias:2007hg; @obeEmparan:2007wm], has been very successful. It applies to problems that contain two (or more) widely separated scales. In particular for black holes, this means that one solves the Einstein equations perturbatively in two different zones, the asymptotic zone and the near-horizon zone, and thereafter matches the solutions in the overlap region. One example is that of small black holes on a circle, where the horizon radius of the black holes is much smaller than the size of the circle (see in particular Sec. \[obesec:mubh\]). Another example is that of thin black rings, where the thickness of the ring is much smaller than the radius of the ring (see Sec. \[obesec:robh\]). [.2cm [**Numerical techniques.**]{}]{} Since in many cases the Einstein equations become too complicated to be amenable to analytical methods, even after using symmetries and ansätze, the only way to proceed in the non-linear regime is to try to solve them numerically. 
Especially for KK black holes these techniques have been successfully applied for non-uniform black strings [@obeGubser:2001ac; @obeWiseman:2002zc; @obeSorkin:2004qq; @obeKleihaus:2006ee; @obeSorkin:2006wp; @obeKleihaus:2007cf] and localized black holes [@obeSorkin:2003ka; @obeKudoh:2003ki; @obeKudoh:2004hs] (see Sec. \[obesec:kkbh\]). [.2cm [**Classical effective field theory.**]{}]{} There exists also a classical effective field theory approach for extended objects in gravity [@obeGoldberger:2004jt]. This can be used as a systematic low-energy (long-distance) effective expansion. It gives results only in the region away from the black hole, and so does not provide the corrections to the metric near the horizon, but it enables one to compute perturbatively corrected asymptotic quantities. This has been successfully applied in [@obeChu:2006ce] to obtain the second-order correction to the thermodynamics of small black holes on a circle. Recently, it was shown [@obeKol:2007rx] that this method is equivalent to matched asymptotic expansion where the near-horizon zone is replaced by an effective theory. Ref. [@obeKol:2007rx] also contains an interesting new application of the method to the corrected thermodynamics of small MP black holes on a circle. Kaluza-Klein black holes \[obesec:kkbh\] ======================================== In this section we give a general description of the phases of Kaluza-Klein (KK) black holes (see also the reviews [@obeHarmark:2007md; @obeHarmark:2005pp]). A $(d+1)$-dimensional Kaluza-Klein black hole will be defined here as a pure gravity solution with at least one event horizon that asymptotes to $d$-dimensional Minkowski space times a circle (${\mathcal{M}}^d \times S^1$) at infinity. We will discuss only static and neutral solutions, i.e. solutions without charges or angular momenta. Obviously, the uniform black string is an example of a Kaluza-Klein black hole, but many more phases are known to exist. 
In particular, we discuss here the non-uniform black string and the localized black hole phase. Finally, in anticipation of the connection with the phase structure of rotating black holes (discussed in Sec. \[obesec:phas\]) we also discuss part of the phases of KK black holes on Minkowski space times a torus (${\mathcal{M}}^{D-2} \times {\mathbb{T}}^2$). Setup and physical quantities \[obesec:prel\] ---------------------------------------------- For any space-time which asymptotes to ${\mathcal{M}}^d \times S^1$ we can define the mass $M$ and the tension ${\mathcal{T}}$. These two asymptotic quantities can be used to parameterize the various phases of Kaluza-Klein black holes in a $(\mu,n)$ phase diagram, as we review below. The Kaluza-Klein space ${\mathcal{M}}^d \times S^1$ consists of the time $t$ and a spatial part which is the cylinder ${\mathbb{R}}^{d-1}\times S^1$. The coordinates of ${\mathbb{R}}^{d-1}$ are $x^1,...,x^{d-1}$ and the radius $r =\sqrt{\sum_i (x^i)^2 }$. The coordinate of the $S^1$ is denoted by $z$ and its circumference is $L$. It is well known that for static and neutral mass distributions in flat space ${\mathbb{R}}^d$ the leading correction to the metric at infinity is given by the mass. For a cylinder ${\mathbb{R}}^{d-1}\times S^1$ we instead need two independent asymptotic quantities to characterize the leading correction to the metric at infinity. [.2cm [**Mass and tension.**]{}]{} Consider a static and neutral distribution of matter which is localized on a cylinder ${\mathbb{R}}^{d-1} \times S^1$. Assume a diagonal energy momentum tensor with components $T_{tt}$, $T_{zz}$ and $T_{ii}$. Here $T_{tt}$ depends on $(x^i,z)$ while $T_{zz}$ depends only on $x^i$ because of momentum conservation. 
We can then write the mass and tension as $$M = \int {\rm d} x^d T_{tt} {\ , \ \ }{\mathcal{T}}= - \frac{1}{L} \int {\rm d} x^d T_{zz} \ .$$ From these definitions and the method of equivalent sources, one can obtain expressions for $M$ and ${\mathcal{T}}$ in terms of the leading $1/r^{d-3}$ behavior of the metric components $g_{tt}$ and $g_{zz}$ around flat space [@obeHarmark:2003dg; @obeKol:2003if]. See also Refs. [@obeHarmark:2004ch; @obeHarmark:2004ws; @obeMyers:1999ps; @obeTraschen:2001pb; @obeTownsend:2001rg; @obeKastor:2006ti] for more on the gravitational tension of black holes and branes. For a neutral Kaluza-Klein black hole with a single connected horizon, we can find the temperature $T$ and entropy $S$ directly from the metric. Together with the mass $M$ and tension ${\cal{T}}$, these quantities obey the Smarr formula [@obeHarmark:2003dg; @obeKol:2003if] $$\label{obeSmarr1} (d-1) TS = (d-2)M - L {\cal{T}} \ ,$$ and the first law of thermodynamics [@obeTownsend:2001rg; @obeKol:2003if; @obeHarmark:2003eg] $$\label{obefirstlaw} \delta M = T \delta S + {\cal{T}} \delta L \ .$$ This equation includes a “work” term (analogous to $p \delta V$) for variations with respect to the size of the circle at infinity. It is important to note that there are also examples of Kaluza-Klein black hole solutions with more than one connected event horizon [@obeHarmark:2003eg; @obeElvang:2004iz; @obeDias:2007hg]. The Smarr formula and first law of thermodynamics generalize also to these cases. 
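The Smarr formula can be checked numerically on the simplest example. The sketch below (not part of the lectures; units with $G=1$) uses the standard thermodynamics of the uniform black string, i.e. the $d$-dimensional Schwarzschild-Tangherlini solution times a circle of circumference $L$, and verifies both the Smarr formula and the value $n = 1/(d-2)$ of its relative tension.

```python
import math

def omega(n):
    """Volume of the unit n-sphere, Omega_n = 2 pi^((n+1)/2) / Gamma((n+1)/2)."""
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

def uniform_string(d, r0, L, G=1.0):
    """Mass, temperature, entropy and tension of the uniform black string in
    D = d+1 dimensions (d-dimensional Schwarzschild-Tangherlini times a circle
    of circumference L)."""
    M = omega(d - 2) * (d - 2) * r0 ** (d - 3) * L / (16 * math.pi * G)
    T = (d - 3) / (4 * math.pi * r0)                 # Hawking temperature
    S = omega(d - 2) * r0 ** (d - 2) * L / (4 * G)   # horizon area / 4G
    tension = omega(d - 2) * r0 ** (d - 3) / (16 * math.pi * G)
    return M, T, S, tension

for d in range(4, 11):
    M, T, S, tension = uniform_string(d, r0=0.7, L=3.0)
    # Smarr formula: (d-1) T S = (d-2) M - L * tension
    assert abs((d - 1) * T * S - ((d - 2) * M - 3.0 * tension)) < 1e-10
    # relative tension n = tension * L / M = 1/(d-2) for the uniform string
    assert abs(tension * 3.0 / M - 1 / (d - 2)) < 1e-12
```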
[.2cm [**Dimensionless quantities.**]{}]{} Since for KK black holes we have an intrinsic scale $L$, it is natural to use it to define dimensionless quantities, which we take as $$\label{obethemu} \mu = \frac{16\pi G}{L^{d-2}} M {\ , \ \ }{\mathfrak{s}}= \frac{16 \pi G}{L^{d-1}} S {\ , \ \ }{\mathfrak{t}}= L T {\ , \ \ }n = \frac{{\mathcal{T}}L}{M} \ .$$ Here $\mu$, ${\mathfrak{s}}$ and ${\mathfrak{t}}$ are the rescaled mass, entropy and temperature respectively, and $n$ is the relative tension. The relative tension satisfies the bound $0 \leq n \leq d-2$ [@obeHarmark:2003dg]. The upper bound is due to the Strong Energy Condition, whereas the lower bound was found in [@obeTraschen:2003jm; @obeShiromizu:2003gc]. The upper bound can also be understood physically in a more direct way from the fact that we expect gravity to be an attractive force: for a test particle at infinity it is easy to see that the gravitational force on the particle is attractive when $n < d-2$ but repulsive when $n > d-2$. The program set forth in [@obeHarmark:2003dg; @obeHarmark:2003eg] is to plot all phases of Kaluza-Klein black holes in a $(\mu,n)$ diagram. Note that it follows from the Smarr formula and the first law of thermodynamics that given a curve $n(\mu)$ in the phase diagram, the entire thermodynamics ${\mathfrak{s}}(\mu)$ of a phase can be obtained [@obeHarmark:2003dg]. We also note that the $ (\mu,n)$ phase diagram appears to be divided into two separate regions [@obeElvang:2004iz]. Here, the region $0 \leq n \leq 1/(d-2)$ contains solutions without Kaluza-Klein bubbles; these solutions have a local $SO(d-1)$ symmetry and are described by the ansatz proposed in [@obeHarmark:2002tr; @obeHarmark:2003fz] and proven in [@obeWiseman:2002ti; @obeHarmark:2003eg]. Solutions of this type, also referred to as black holes and strings on cylinders, will be reviewed in Sec. \[obesec:bhcy\]. 
Because of the $SO(d-1)$ symmetry there are only two types of event horizon topologies: $S^{d-1}$ for the black hole on a cylinder branch and $S^{d-2} \times S^1$ for the black string. The region $1/(d-2) < n \leq d-2$ contains solutions with Kaluza-Klein bubbles. This part of the phase diagram, which is much more densely populated with solutions than the lower part, is the subject of [@obeElvang:2004iz]. [.2cm [**Alternative dimensionless quantities.**]{}]{} The typical dimensionless quantities used for KK black holes in $D$ dimensions are those defined in \eqref{obethemu}. Instead of these, Ref. [@obeEmparan:2007wm] introduced the following new dimensionless quantities, more suitable for the analogy with rotating black holes (see Sec. \[obesec:phas\]), by defining $$\label{obeeladef} \ell^{D-3} \propto \frac{L^{D-3}}{G M} {\ , \ \ }a_H^{D-3} \propto \frac{S^{D-3}}{(G M)^{D-2}} {\ , \ \ }{\mathfrak{t}}_H \propto (G M)^{\frac{1}{D-3}} T \,.$$ In particular, the relation to the dimensionless quantities in \eqref{obethemu} is given by $$\label{obeelladef} \ell = \mu^{-\frac{1}{D-3}} {\ , \ \ }a_H = \mu^{-\frac{D-2}{D-3}} {\mathfrak{s}}{\ , \ \ }{\mathfrak{t}}_H = \mu^{\frac{1}{D-3} } {\mathfrak{t}}\ .$$ In the KK black hole literature, entropy plots are typically given as ${\mathfrak{s}}(\mu)$. Instead, one can also use \eqref{obeelladef} to consider the area function $a_H (\ell)$, which is obtained as $$\label{obeafroms} a_H (\ell) = \ell^{D-2} {\mathfrak{s}}(\ell^{-D+3})\ .$$ We will employ these alternative quantities when we discuss KK black holes on a torus in Sec. \[obesec:torp\]. Black holes and strings on cylinders \[obesec:bhcy\] ---------------------------------------------------- We now discuss the three main types of KK black holes that have $SO(d-1)$ symmetry, to which we commonly refer as black holes and strings on cylinders. These are the uniform black string, the non-uniform black string and the localized black hole. In Sec. 
\[obesec:mubh\] we will discuss in more detail the recently obtained multi-black hole configurations on the cylinder. ### Uniform black string and Gregory-Laflamme instability {#uniform-black-string-and-gregory-laflamme-instability .unnumbered} The metric for the uniform black string in $D=d+1$ space-time dimensions is $$\label{obeublstr} ds^2 = - f dt^2 + f^{-1} dr^2 + r^2 d\Omega_{d-2}^2 + dz^2 {\ , \ \ }f=1-\frac{r_0^{d-3}}{r^{d-3}} \ ,$$ where $d\Omega_{d-2}^2$ is the metric element of a $(d-2)$-dimensional unit sphere. The metric is found by taking the $d$-dimensional Schwarzschild-Tangherlini static black hole solution [@obeTangherlini:1963] and adding a flat $z$ direction, which is the direction parallel to the string. The event horizon is located at $r=r_0$ and has topology $S^{d-2}\times {\mathbb{R}}$. [.2cm [**Gregory-Laflamme instability.**]{}]{} Gregory and Laflamme found in 1993 a long wavelength instability for black strings in five or more dimensions [@obeGregory:1993vy; @obeGregory:1994bj]. The mode responsible for the instability propagates along the direction of the string, and develops an exponentially growing time-dependent part when its wavelength becomes sufficiently long. The Gregory-Laflamme mode is a linear perturbation of the metric \eqref{obeublstr} that can be written as $$\label{obepertmet} g_{\mu\nu} + \epsilon h_{\mu\nu} \ .$$ Here $g_{\mu\nu}$ stands for the components of the unperturbed black string metric \eqref{obeublstr}, $\epsilon$ is a small parameter and $h_{\mu\nu}$ is the metric perturbation $$\label{obeGLmode1} h_{\mu\nu} = \Re \left\{ \exp \left( \frac{\Omega t}{r_0} + i \frac{kz}{r_0} \right) P_{\mu\nu} ( r/r_0) \right\} \ ,$$ where the symbol $\Re$ denotes the real part. 
The statement that the perturbation $h_{\mu\nu}$ of $g_{\mu\nu}$ satisfies the Einstein equations of motion can be stated as the differential operator equation $$\label{obelicheq} \Delta_L h_{\mu\nu} = 0 \ ,$$ where $(\Delta_L)_{\mu \nu\rho \sigma} = - g_{\mu \rho}g_{\nu\sigma} D_\kappa D^\kappa + 2 R_{\mu\nu\rho\sigma}$ is the Lichnerowicz operator for the background metric $g_{\mu\nu}$. The resulting Einstein equations for the GL mode can be found in the appendix of the review [@obeHarmark:2007md].[^3] Solution of these equations [@obeGregory:1993vy; @obeGregory:1994bj] shows that there is an unstable mode for any wavelength larger than the critical wavelength $$\label{obelambgl} \lambda_{\rm GL} = \frac{2\pi r_0 }{k_c} \ ,$$ where $k_c$ is the wavenumber for which $\Omega=0$ in \eqref{obeGLmode1}. The values of $k_c$ for $d=4,...,14$, as obtained in [@obeGregory:1993vy; @obeGubser:2001ac; @obeSorkin:2004qq], are listed in Table 1 of [@obeHarmark:2007md]. The critical wavenumber $k_c$ thus corresponds to the lower bound $\lambda_{\rm GL}$ on the wavelengths of unstable modes; the time-independent mode at this wavenumber, of the form $ h_{c,\mu\nu} \sim \exp ( i k_c z/r_0 )$, is called the threshold mode. In particular, its existence suggests the existence of a static non-uniform black string. [.2cm [**GL mode of the compactified uniform black string.**]{}]{} Since we wish to consider the uniform black string in KK space, we now discuss what happens to the GL instability when $z$ is a periodic coordinate with period $L$. The Gregory-Laflamme mode cannot obey the correct periodic boundary condition in $z$ if $L < \lambda_{\rm GL}$, with $\lambda_{\rm GL}$ given by \eqref{obelambgl}. On the other hand, for $L > \lambda_{\rm GL}$ we can fit the Gregory-Laflamme mode into the compact direction, with the frequency $\Omega$ and wavenumber $k$ in \eqref{obeGLmode1} determined by the ratio $r_0/L$. 
Translating this into the mass of the neutral black string, one finds the critical Gregory-Laflamme mass $$\label{obemugl} \mu_{\rm GL} = (d-2)\Omega_{d-2} \left( \frac{k_c}{2\pi} \right)^{d-3} ~.$$ For $\mu < \mu_{\rm GL}$ the Gregory-Laflamme mode can be fitted into the circle, and the compactified neutral uniform black string is unstable. For $\mu > \mu_{\rm GL}$, on the other hand, the Gregory-Laflamme mode is absent, and the neutral uniform black string is stable. For $\mu=\mu_{\rm GL}$ there is a marginal mode which signals the emergence of a new branch of black string solutions that are non-uniformly distributed along the circle. See Table 2 in [@obeHarmark:2007md] for the values of $\mu_{\rm GL}$ for $ 4 \leq d \leq 14$. The large $d$ behavior of $\mu_{\rm GL}$ was examined numerically in [@obeSorkin:2004qq] and analytically in [@obeKol:2004pn]. We also note that there is an interesting correspondence between the Rayleigh-Plateau instability of long fluid cylinders and the Gregory-Laflamme instability of black strings [@obeCardoso:2006ks; @obeCardoso:2006sj]. In particular, the critical wave numbers $k_{RP}$ and $k_{c}$ agree exactly at large dimension $d$ (both scaling as $\sqrt d$ for $d \gg 1$). ### Non-uniform black string {#non-uniform-black-string .unnumbered} It was realized in [@obeGubser:2001ac] (see also [@obeGregory:1988nb]) that the classical instability of the uniform black string for $\mu < \mu_{\rm GL}$ implies the existence of a marginal (threshold) mode at $\mu =\mu_{\rm GL}$, which again suggests the existence of a new branch of solutions. The new branch, which is called the non-uniform string branch, has been found numerically in [@obeGubser:2001ac; @obeWiseman:2002zc; @obeSorkin:2004qq]. This branch of solutions has the same horizon topology $S^1 \times S^{d-2}$ as the uniform string, which is expected since the non-uniform string is continuously connected to the uniform black string. 
In particular, it emerges from the uniform black string at the point $(\mu,n) = (\mu_{\rm GL},1/(d-2))$ and has $n < 1/(d-2)$. Moreover, the solution is non-uniformly distributed in the circle direction $z$, since the marginal mode depends explicitly on this direction. More concretely, considering the non-uniform black string branch for $|\mu-\mu_{\rm GL}| \ll 1$ one obtains for the relative tension the behavior $$\label{obenofmu} n(\mu) = \frac{1}{d-2} - \gamma ( \mu - \mu_{\rm GL}) + {\mathcal{O}}( ( \mu - \mu_{\rm GL})^2 ) \ .$$ Here $\gamma$ is a number representing the slope of the curve that describes the non-uniform string branch near $\mu=\mu_{\rm GL}$ (see Table 3 in [@obeHarmark:2007md] for the values of $\gamma$ for $4 \leq d \leq 14$ obtained from the data of [@obeGregory:1993vy; @obeGregory:1994bj; @obeGubser:2001ac; @obeWiseman:2002zc; @obeSorkin:2004qq]). The qualitative behavior of the non-uniform string branch depends on the sign of $\gamma$. If $\gamma$ is positive, then the branch emerges at the mass $\mu=\mu_{\rm GL}$ with increasing $\mu$ and decreasing $n$. If instead $\gamma$ is negative, the branch emerges at $\mu=\mu_{\rm GL}$ with decreasing $\mu$ and decreasing $n$. To see what this means for the entropy, we note that from \eqref{obenofmu} and the first law of thermodynamics one finds $$\label{obenuentro} \frac{{\mathfrak{s}}_{\rm nu} ( \mu )}{{\mathfrak{s}}_{\rm u} ( \mu )} = 1 - \frac{(d-2)^2}{2(d-1)(d-3)^2} \frac{\gamma}{\mu_{\rm GL}} (\mu-\mu_{\rm GL})^2 + {\mathcal{O}}( (\mu - \mu_{\rm GL})^3 ) \ ,$$ where ${\mathfrak{s}}_{\rm u} ( \mu )$ (${\mathfrak{s}}_{\rm nu} ( \mu )$) refers to the rescaled entropy of the uniform (non-uniform) black string branch. It turns out that $\gamma$ is positive for $d \leq 12$ and negative for $d \geq 13$ [@obeSorkin:2004qq]. 
Therefore, as discovered in [@obeSorkin:2004qq], the non-uniform black string branch behaves qualitatively differently for small and large $d$: the system exhibits a critical dimension between $D=13$ and $D=14$. In particular, for $d \leq 12$ the non-uniform branch near the GL point has $\mu > \mu_{\rm GL}$ and lower entropy than that of the uniform phase, while for $d \geq 13$ it has $\mu < \mu_{\rm GL}$ and higher entropy. It also follows from \eqref{obenuentro} that for all $d$ the curve ${\mathfrak{s}}_{\rm nu} (\mu)$ is tangent to the curve ${\mathfrak{s}}_{\rm u}(\mu)$ at the GL point. A large set of numerical data for the non-uniform branch, extending into the strongly non-linear regime, has been obtained in Refs. [@obeWiseman:2002zc; @obeKudoh:2004hs] for six dimensions ($d=5$), in Ref. [@obeKleihaus:2006ee] for five dimensions ($d=4$), and for the entire range $5 \leq d \leq 10$ in Ref. [@obeSorkin:2006wp]. For $d=5$, these data are displayed in the $(\mu,n)$ phase diagram [@obeHarmark:2003dg] of Fig. \[obefig1\]. ### Localized black holes {#localized-black-holes .unnumbered} On physical grounds, it is natural to expect a branch of neutral black holes in the space-time ${\mathcal{M}}^d \times S^1$ with event horizon of topology $S^{d-1}$. This branch is called the localized black hole branch, because the $ S^{d-1}$ horizon is localized on the $S^1$ of the Kaluza-Klein space. Neutral black hole solutions in the space-time ${\mathcal{M}}^3 \times S^1$ were found and studied in [@obeMyers:1987rx; @obeBogojevic:1991hv; @obeKorotkin:1994dw; @obeFrolov:2003kd]. However, the study of black holes in the space-time ${\mathcal{M}}^{d} \times S^1$ for $d \geq 4$ is relatively new. The complexity of the problem stems from the fact that such black holes are not algebraically special [@obeDeSmet:2002fv], and moreover the solution cannot be found using a Weyl ansatz since the number of Killing vectors is too small. 
In [@obeHarmark:2003yz; @obeGorbonos:2004uc; @obeGorbonos:2005px] the metric of small black holes, i.e. black holes with mass $\mu \ll 1$, was found analytically using the method of matched asymptotic expansion. The starting point in this construction is the fact that as $\mu \rightarrow 0$, one has $n \rightarrow 0$, so that the localized black hole solution becomes more and more like a $(d+1)$-dimensional Schwarzschild black hole in this limit. For $d=4$, the second order corrections to the metric and thermodynamics have been studied in [@obeKarasik:2004ds]. More generally, the second order correction to the thermodynamics was obtained in Ref. [@obeChu:2006ce] (see also [@obeKol:2007rx]) for all $d$ using an effective field theory formalism in which the structure of the black hole is encoded in the coefficients of operators in an effective worldline Lagrangian. The first order result of [@obeHarmark:2003yz] and second order result of [@obeChu:2006ce] can be summarized by giving the first and second order corrections to the relative tension $n$ of the localized black hole branch as a function of $\mu$: $$\label{obebhslope} n = \frac{(d-2)\zeta(d-2)}{2(d-1)\Omega_{d-1}} \mu - \left( \frac{(d-2)\zeta(d-2)}{2(d-1)\Omega_{d-1}} \mu \right)^2 + {\mathcal{O}}(\mu^3) \ ,$$ where $\zeta(p) = \sum_{n=1}^\infty n^{-p}$ is the Riemann zeta function. The corresponding corrections to the thermodynamics can be found in Eq. (3.18) of [@obeHarmark:2007md]. The black hole branch has been studied numerically for $d=4$ in [@obeSorkin:2003ka; @obeKudoh:2004hs] and for $d=5$ in [@obeKudoh:2003ki; @obeKudoh:2004hs]. For small $\mu$, the impressively accurate data of [@obeKudoh:2004hs] are consistent with the analytical results of [@obeHarmark:2003yz; @obeGorbonos:2004uc; @obeKarasik:2004ds]. The results of [@obeKudoh:2004hs] for $d=5$ are displayed in a $(\mu,n)$ phase diagram in Figure \[obefig1\]. 
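The closed-form expressions collected in the last few paragraphs (the critical GL mass, the near-GL entropy ratio, and the small-$\mu$ tension of the localized branch) are simple enough to evaluate in a short sketch (not part of the lectures; the $\gamma$ and $\mu_{\rm GL}$ inputs in the sign checks are illustrative placeholders, while $\mu_{\rm GL} \approx 2.31$ is the $d=5$ value quoted below for Fig. \[obefig1\]).

```python
import math

def omega(n):
    """Volume of the unit n-sphere, Omega_n = 2 pi^((n+1)/2) / Gamma((n+1)/2)."""
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

def mu_GL(d, k_c):
    """Critical Gregory-Laflamme mass, mu_GL = (d-2) Omega_{d-2} (k_c/(2 pi))^(d-3)."""
    return (d - 2) * omega(d - 2) * (k_c / (2 * math.pi)) ** (d - 3)

def entropy_ratio_nu(mu, d, gamma, muGL):
    """Leading behavior of s_nu / s_u near the GL point."""
    c = (d - 2) ** 2 / (2 * (d - 1) * (d - 3) ** 2)
    return 1 - c * (gamma / muGL) * (mu - muGL) ** 2

def n_localized(mu, d, terms=100000):
    """Relative tension of the localized branch to second order, n = x - x^2
    with x = (d-2) zeta(d-2) mu / (2 (d-1) Omega_{d-1})."""
    zeta = sum(j ** (-(d - 2)) for j in range(1, terms + 1))
    x = (d - 2) * zeta * mu / (2 * (d - 1) * omega(d - 1))
    return x - x ** 2

# Round trip for d = 5, where mu_GL ~ 2.31: recover the threshold wavenumber.
k_c5 = 2 * math.pi * math.sqrt(2.31 / (3 * omega(3)))
assert abs(mu_GL(5, k_c5) - 2.31) < 1e-9

# The sign of gamma decides the entropy balance at the GL point: gamma > 0
# (d <= 12) gives less entropy than the uniform string at equal mass,
# gamma < 0 (d >= 13) gives more. (gamma and muGL values are placeholders.)
assert entropy_ratio_nu(1.1, d=5, gamma=+0.3, muGL=1.0) < 1
assert entropy_ratio_nu(0.9, d=13, gamma=-0.3, muGL=1.0) > 1

# Small localized black holes approach Schwarzschild-Tangherlini: n -> 0.
assert 0 < n_localized(1e-4, 5) < 1e-4
```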
Phase diagram and copied phases \[obesec:pdia\] ----------------------------------------------- In Figure \[obefig1\] the $(\mu,n)$ diagram for $d=5$ is displayed, which is one of the cases for which the most information is known. We have shown the complete non-uniform branch, as obtained numerically by Wiseman [@obeWiseman:2002zc], which emanates at $\mu_{\rm GL} = 2.31$ from the uniform branch that has $n=1/3$. These data were first incorporated into the $(\mu,n)$ diagram in Ref. [@obeHarmark:2003dg]. For the black hole branch we have plotted the numerical data of Kudoh and Wiseman [@obeKudoh:2004hs]. It is evident from the figure that this branch has an approximately linear behavior for a fairly large range of $\mu$ close to the origin, and the numerically obtained slope agrees very well with the analytic result \eqref{obebhslope}. [.2cm [**Merger point.**]{}]{} The figure strongly suggests that the localized black hole branch meets the non-uniform black string branch at a topology changing transition point, which is the scenario suggested earlier by Kol [@obeKol:2002xz] (see [@obeHarmark:2003eg] for a list of scenarios). For this reason, it seems reasonable to expect that the localized black hole branch is connected with the non-uniform string branch in any dimension. This means that we can go from the uniform black string branch to the localized black hole branch through a connected series of static classical geometries. The point at which the two branches are conjectured to meet is called the merger point. [.2cm [**Copied phases.**]{}]{} In [@obeHarmark:2003eg] it was shown that one can generate new solutions by copying solutions on the circle several times, following an idea of Horowitz [@obeHorowitz:2002dc]. This works for solutions which vary along the circle direction (the $z$ direction), so it applies both to the black hole branch and to the non-uniform string branch. Let $k$ be a positive integer. 
Then if we copy a solution $k$ times along the circle we get a new solution with the following parameters: $$\label{obecoptrans} \tilde{\mu} = \frac{\mu}{k^{d-3}}{\ , \ \ }\tilde{{\mathfrak{s}}} = \frac{{\mathfrak{s}}}{k^{d-2}} {\ , \ \ }\tilde{{\mathfrak{t}}} = k {\mathfrak{t}}{\ , \ \ }\tilde{n} = n \ .$$ See Ref. [@obeHarmark:2003eg] for the corresponding expression for the metric of the copies in the $SO(d-1)$-symmetric ansatz. Using the transformation , one easily sees that the non-uniform and localized black hole branches depicted in Fig. \[obefig1\] are copied infinitely many times in the $(\mu,n)$ phase diagram, and we have depicted the $k=2$ copy in this figure. [.2cm [**General dimension.**]{}]{} The six-dimensional phase diagram displayed in Fig. \[obefig1\] is believed to be representative of the black string/localized black hole phases on ${\mathcal{M}}^{D-1} \times S^1$ for all $ 5 \leq D \leq 13$. Here the upper bound follows from the fact that, as mentioned above, there is a critical dimension $D=13$ above which the behavior of the non-uniform black string phase is qualitatively different [@obeSorkin:2004qq]. The phase diagram for $D \geq 14$ is much more poorly known in comparison, since no data like those of Fig. \[obefig1\] are available for the localized and non-uniform phases, only the asymptotic behaviors. However, we do know from that the non-uniform branch will extend to the left (lower values of $\mu$) as it emerges from the GL point, and on general grounds it is expected to merge again with the localized black hole branch. KK phases on ${\mathbb{T}}^2$ from phases on $S^1$ \[obesec:torp\] ------------------------------------------------------------------- We show here how one can translate the known results for KK black holes on the circle (on ${\mathcal{M}}^{D-2} \times S^1$) to results for KK black holes on the torus ($\ie$ on ${\mathcal{M}}^{D-2} \times {\mathbb{T}}^2$).
The resulting phases are relevant in connection with the phases of rotating black holes in asymptotically flat spacetime, as shown in Sec. \[obesec:phas\]. We recall first the definitions of dimensionless quantities in . While these quantities were originally introduced in [@obeHarmark:2003dg; @obeHarmark:2004ws] for black holes on a KK circle of circumference $L$, we may similarly use these definitions for KK black holes in $D$ dimensions with a square torus of side length $L$, to which case we restrict ourselves in the following. Likewise, we can use the alternative dimensionless quantities for that case. [.2cm [**Map from circle to torus compactification.**]{}]{} We first want to establish a map for these dimensionless quantities from KK black holes on ${\mathcal{M}}^{D-2} \times S^1$ (denoted with hatted quantities) to those for KK black holes on ${\mathcal{M}}^{D-2} \times {\mathbb{T}}^2$ (denoted with unhatted quantities), obtained by adding an extra compact direction of size $L$. Suppose we are given an entropy function $\hat {\mathfrak{s}}(\hat \mu)$ for a phase of KK black holes on ${\mathcal{M}}^{D-2} \times S^1$. Any such phase lifts trivially to a phase of KK black holes on ${\mathcal{M}}^{D-2} \times {\mathbb{T}}^2$ that is uniform in one of the torus directions. We show below how to obtain the function $a_H(\ell)$ for the latter in terms of $\hat {\mathfrak{s}}(\hat \mu)$ of the former. In the following we will use the notation $D = n + 4$.
It is not difficult to see that in terms of the original dimensionless quantities we have the simple mapping $$\label{obemsmap} \mu = \hat \mu {\ , \ \ }{\mathfrak{s}}= \hat {\mathfrak{s}}{\ , \ \ }{\mathfrak{t}}= \hat {\mathfrak{t}}\ .$$ It then follows from and that the area function $a_H (\ell)$ of KK black holes on ${\mathcal{M}}^{D-2} \times {\mathbb{T}}^2$ is obtained via the mapping relation $$\label{obeaHmap} a_H (\ell) = \ell^{n+2} \hat {\mathfrak{s}}(\ell^{-n-1} ) \ .$$ ![ $a_H (\ell)$ phase diagram in seven dimensions (${\mathcal{M}}^5 \times {\mathbb{T}}^2$) for Kaluza-Klein black hole phases with one uniform direction. Shown are the uniform black membrane phase (dotted), the non-uniform black membrane phase (solid) and the localized black string phase (dashed). For the latter two phases, we have also shown their $k=2$ copy. The non-uniform black membrane phase emanates from the uniform black membrane phase at the GL point $\ell_{\rm GL} = 0.811 $, while the $k=2$ copy starts at the 2-copied GL point $\ell_{\rm GL}^{(2)} = \sqrt{2} \ell_{\rm GL} =1.15 $. This figure is representative of the phase diagram of phases on ${\mathcal{M}}^{D-2} \times {\mathbb{T}}^2$ for all $ 6 \leq D \leq 14$. Reprinted from Ref. [@obeEmparan:2007wm]. \[obefig:KKphases7\]](7dKKbh.eps){width="9cm"} [.2cm [**Application to known phases.**]{}]{} Using now the entropy function $\hat {\mathfrak{s}}_{\rm uni} (\hat \mu) \sim \hat \mu^{ \frac{n}{n-1}}$ of the uniform black string in ${\mathcal{M}}^{n+2} \times S^1$ we get from the result $$\label{obeaHmem} a_H^{\rm ubm} (\ell) \sim \ell^{- \frac{2}{n-1}} \ ,$$ for the uniform black membrane (ubm) in $ 4 +n$ dimensions.
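The mapping relation can be checked numerically. The sketch below (helper names ours, proportionality constants set to one) applies $a_H(\ell) = \ell^{n+2}\,\hat{\mathfrak{s}}(\ell^{-n-1})$ to the uniform-string entropy function and confirms the $\ell^{-2/(n-1)}$ scaling of the uniform black membrane:

```python
def a_H_from_circle(s_hat, ell, n):
    # Mapping relation: a_H(l) = l^{n+2} * s_hat(l^{-n-1})
    return ell ** (n + 2) * s_hat(ell ** (-n - 1))

n = 3  # seven-dimensional example (D = n + 4), as in the figure

def s_uniform(mu_hat):
    # Uniform black string entropy, s_hat ~ mu_hat^{n/(n-1)}, constant set to 1
    return mu_hat ** (n / (n - 1))

for ell in (1.0, 2.0, 5.0):
    ratio = a_H_from_circle(s_uniform, ell, n) / ell ** (-2 / (n - 1))
    print(ell, ratio)  # the ratio is constant in ell, i.e. a_H ~ l^{-2/(n-1)}
```

The same one-line map, applied to any other entropy function $\hat{\mathfrak{s}}(\hat\mu)$, produces the corresponding torus phase with one uniform direction.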
Furthermore, using that for small $\mu$ (or equivalently large $\ell $) the entropy of the localized black hole in ${\mathcal{M}}^{n+2} \times S^1$ is $ \hat {\mathfrak{s}}_{\rm loc} (\hat \mu) \sim \hat \mu^{ \frac{n+1}{n}}$, we find via the map the result $$\label{obeaHstr} a_H^{\rm lbs} (\ell) \sim \ell^{- \frac{1}{n}} \qquad (\ell \rightarrow \infty) \ ,$$ for the large $\ell$ limit of the localized black string (lbs) in $4+n$ dimensions. Finally, for the non-uniform string in ${\mathcal{M}}^{n+2} \times S^1$ we use to obtain $$\label{obeahnum} a_H^{\rm nubm} (\ell) = a_H^{\rm ubm} (\ell) \left[ 1 - \frac{n^2 (n+1)}{2 (n-1)^2} \frac{ \gamma_{n+2} }{\ell_{\rm GL}^{n+4}} ( \ell - \ell_{\rm GL})^2 + {\mathcal{O}}( (\ell - \ell_{\rm GL})^3 ) \right] \ ,$$ for the non-uniform black membrane (nubm). Here, $\ell_{\rm GL} = (\mu_{{\rm GL},n+2})^{-\frac{1}{n+1}}$ is the critical GL wavelength in terms of the dimensionless GL mass $\mu_{{\rm GL},d}$ given in , and $\gamma_d$ is the coefficient in . [.2cm [**Copies.**]{}]{} As remarked in Sec. \[obesec:pdia\], the localized black hole and non-uniform black string phases on ${\mathcal{M}}^{n+2} \times S^1$ have copied phases with multiple non-uniformities or multiple localized black objects. From the map we then find, using and the definitions , that the corresponding copied phases of KK black holes on the torus obey the transformation rule $$\label{obecopy} \tilde \ell = k^{\frac{n-1}{n+1}} \ell {\ , \ \ }\tilde a_H = k^{- \frac{2}{n+1}} a_H {\ , \ \ }\tilde{{\mathfrak{t}}}_H = k {\mathfrak{t}}_H \ .$$ [.2cm [**Seven-dimensional phase diagram.**]{}]{} As an explicit example, we give the mapping that can be used to convert the known results for KK black holes on ${\mathcal{M}}^5 \times S^1$ to KK black holes on ${\mathcal{M}}^5 \times {\mathbb{T}}^2$.
$$(\ell , a_H ) = ( \hat \mu^{-1/4}, \hat \mu^{-5/4} \hat {\mathfrak{s}}) \ .$$ This can be used to convert plots of points $(\hat \mu , \hat {\mathfrak{s}})$ (see [@obeHarmark:2007md]) for six-dimensional KK black holes on a circle to the phase diagram in Fig. \[obefig:KKphases7\] for seven-dimensional KK black holes (with one uniform direction) on a torus. It includes the uniform black membrane, the black membrane with one uniform and one non-uniform direction, and the black string localized in one of the circles of ${\mathbb{T}}^2$. The figure also includes the $k=2$ copies obtained from these data and the map . Both the uniform black membrane phase and the localized black string phase extend to $\ell \rightarrow \infty$, where they obey the behaviors and , respectively, with $n=3$. Multi-black hole configurations on the cylinder \[obesec:mubh\] ================================================================ We now turn to the construction of multi-black hole configurations on the cylinder, recently obtained in Ref. [@obeDias:2007hg]. In Sec. \[obesec:pdia\] we already encountered a special subset of these, namely the copied phases of the localized black hole branch, corresponding to multi-black hole configurations in which all black holes have the same mass. Here, we describe the main points of the construction of more general multi-black hole configurations [@obeDias:2007hg] using matched asymptotic expansion. We also show how the thermodynamics of these configurations can be understood from a Newtonian point of view. Finally, we comment on the consequences of these configurations for the phase diagram of KK black holes. Construction of multi-black hole solutions \[obesec:cons\] ----------------------------------------------------------- The copies of the single black hole localized on the circle correspond to multi-black hole configurations of equal-mass black holes spread at equal distances from each other around the circle.
Beyond these, there exist more general multi-black hole configurations, which have recently been considered in Ref. [@obeDias:2007hg]. These solutions correspond to having several localized black holes of different sizes located at different points along the circle direction of the cylinder ${\mathbb{R}}^{d-1} \times S^1$. The location of each black hole is such that the total force on it is zero, ensuring that the configuration is in equilibrium. Equilibrium moreover requires that the black holes all be located at the same point in the ${\mathbb{R}}^{d-1}$ part of the cylinder. The metrics constructed in Ref. [@obeDias:2007hg] are solutions to the Einstein equations to first order in the mass. More precisely, they are valid in a regime where the gravitational interaction between any one of the black holes and the others (and their images on the circle) is small. The solutions in Ref. [@obeDias:2007hg] thus describe the small mass limit of these multi-black hole configurations on the cylinder; equivalently, they can be said to describe the situation where the black holes are far apart. The method used for solving the Einstein equations is that of matched asymptotic expansion [@obeHarmark:2003yz; @obeGorbonos:2004uc; @obeKarasik:2004ds; @obeGorbonos:2005px; @obeDias:2007hg]. The particular construction follows the approach of [@obeHarmark:2003yz], where it was used to find the metric of a small black hole on the cylinder based on an ansatz for the metric found in [@obeHarmark:2002tr]. [.2cm [**General idea and starting point.**]{}]{} We describe here the general idea behind constructing the new solutions for multi-black hole configurations on the $d$-dimensional cylinder ${\mathbb{R}}^{d-1} \times S^1$. The configuration under consideration is that of $k$ black holes placed at different locations $z_i^*$, $i=1, \ldots, k$, along the circle, all at the same point of the ${\mathbb{R}}^{d-1}$ part of the cylinder.
We write $M$ for the total mass of all of the black holes and define $\nu_i$ as the fraction of mass of the $i^{\rm th}$ black hole, $$\label{obemu} M_i = \nu_i M \,,\qquad \sum_{i=1}^k \nu_i = 1 \,,$$ where $M_i$ is the mass of the $i^{\rm th}$ black hole. Note that $0 < \nu_i \leq 1$. The matched asymptotic expansion is suitable when there are two widely separated scales in the problem. Here they are the size (mass) of each of the black holes (all of which are taken to be of the same order) and the length of the circle direction. In particular, we assume that all black holes have a horizon radius (of the same order) which is small compared to the length of the circle. The construction of the solution then proceeds in the following steps[^4] - Step 1: Find a metric corresponding to the Newtonian gravitational potential sourced by a configuration of small black holes on the cylinder. This metric is valid in the region $R \gg R_0$. - Step 2: Consider the Newtonian solution close to the sources, in the overlap region $R_0 \ll R \ll L $. - Step 3: Find a general solution near a given event horizon and match this solution to the metric in the overlap region found in Step 2. The resulting solution is valid in the region $R_0 \leq R \ll L$. With these three steps implemented, we have a complete solution for all of the spacetime outside the event horizons. We refer to Ref. [@obeDias:2007hg] for further details, including the explicit form of the first-order corrected metric and thermodynamics of the resulting multi-black hole configurations, but present some of the simpler steps here. [.2cm [**Newtonian potential.**]{}]{} Following the discussion in Sec. \[obesec:prel\], for static solutions on the cylinder the two relevant components of the stress tensor are $T_{tt}$ and $T_{zz}$.
These components source the two gravitational potentials [@obeHarmark:2003dg] $$\label{obepots} \nabla^2 \Phi = 8\pi G \frac{d-2}{d-1} T_{tt} {\ , \ \ }\nabla^2 B = \frac{8\pi G}{d-1} T_{zz} \ ,$$ where $G$ is the $(d+1)$-dimensional Newton constant. In the limit of small total mass, we have that $B/(G M) \rightarrow 0$ for $M \rightarrow 0$, which means [@obeHarmark:2003yz; @obeDias:2007hg] that we can neglect the binding energy potential $B$ as compared to the mass density potential $\Phi$. One thus only needs to consider the potential $\Phi$, $\ie$ Newtonian gravity. For the multi-black hole configuration described above, it is not difficult to find the solution for $\Phi$ using the method of images in terms of the $(r,z)$ coordinates of the cylinder. One finds $$\begin{aligned} \Phi (r,z) = - \frac{8 \pi G M }{(d-1) \Omega_{d-1}} F (r,z) \,, \label{obePhi}\end{aligned}$$ with $$\begin{aligned} F (r,z) = \sum_{i=1}^k \sum_{m=-\infty}^\infty \frac{\nu_i}{ [ r^2 + (z - z_i^* - L m )^2]^{\frac{d-2}{2}}} \, , \label{obedefF}\end{aligned}$$ so that $\Phi$ is the Newtonian gravitational potential sourced by the multi-black hole configuration. One can now study the behavior of the potential $\Phi$ near the sources. To this end it is useful to define, for the $i^{\rm th}$ black hole, the spherical coordinates $\rho$ and $\theta$ by $$\label{oberhotheta} r = \rho \sin \theta {\ , \ \ }z-z^*_i = \rho \cos \theta \, ,$$ where $\theta$ is defined in the interval $[0,\pi]$.
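The image sum defining $F(r,z)$ converges rapidly and is easy to evaluate by truncation. The sketch below (function names and the truncation cutoff are ours) evaluates $F$ for a two-black hole configuration, illustrating its periodicity in $z$ and the $\nu_i \rho^{-(d-2)}$ divergence at a source:

```python
def F(r, z, nus, zstars, L, d, images=2000):
    # Truncated image sum for the potential F(r, z) on the cylinder R^{d-1} x S^1
    total = 0.0
    for nu, zs in zip(nus, zstars):
        for m in range(-images, images + 1):
            total += nu / (r ** 2 + (z - zs - L * m) ** 2) ** ((d - 2) / 2)
    return total

# two black holes with mass fractions 2/3 and 1/3, diametrically opposite; L = 1, d = 5
nus, zstars, L, d = (2 / 3, 1 / 3), (0.0, 0.5), 1.0, 5
print(F(0.3, 0.1, nus, zstars, L, d) - F(0.3, 0.1 + L, nus, zstars, L, d))  # ~0: periodic in z
print(F(1e-3, 0.0, nus, zstars, L, d) * (1e-3) ** (d - 2))  # ~ nu_1 = 2/3 near the first source
```

The second print illustrates that close to the $i^{\rm th}$ source the sum is dominated by the flat-space term $\nu_i \rho^{-(d-2)}$.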
In terms of these coordinates one finds that $F(r,z)$ in can be expanded as $$\begin{aligned} F (\rho,\theta) = \nu_i \rho^{-(d-2)} + \Lambda^{(i)} + \Lambda_1^{(i)} \cos\theta \,\rho + {\cal O}\left(\rho^2 \right) \,, \label{obelim F}\end{aligned}$$ for $\rho \ll L$, where $$\begin{aligned} \Lambda^{(i)} & = & \frac{1}{L^{d-2}} \Big( \nu_i \,2\zeta (d-2)\nonumber \\ & & + \sum_{\substack{j=1 \\j\neq i } }^k \nu_j \left[ \tilde{z}_{ij}^{-(d-2)} + \zeta ( d-2,1-\tilde{z}_{ij})+\zeta ( d-2,1+\tilde{z}_{ij}) \right] \Big) \ . \label{obelambda}\end{aligned}$$ Here $ \zeta(s,1+a)= \sum_{m=1}^{\infty}(m+a)^{-s}$ is the generalized Riemann zeta function and $\tilde{z}_{ij} \equiv z_{ij}/L$ labels the distance in the $z$ direction between the $j^{\rm th}$ and $i^{\rm th}$ black holes (see Eq. (2.24) of [@obeDias:2007hg] for precise definitions). Using now with , one obtains the behavior of the Newtonian potential $\Phi$ near the $i^{\rm th}$ black hole. This shows that the first term in corresponds to the flat space gravitational potential due to the $i^{\rm th}$ mass $M_i = \nu_i M$, while the second term is a constant potential due to its images and the presence of the other masses and their images. The quantity $\Lambda^{(i)}$ plays a crucial role in the explicit construction of the first-order corrected metric of multi-black hole configurations on the cylinder and also enters the first-order corrected thermodynamics (see Sec. \[obesec:ther\]). [.2cm [**Equilibrium conditions.**]{}]{} The third term in is proportional to $\rho \cos \theta = z - z_i^*$ and therefore gives a non-zero constant term in $\partial_z \Phi$ if $\Lambda_1^{(i)}$ is non-zero. It corresponds to an external force on the $i^{\rm th}$ black hole due to the other $k-1$ black holes.
Indeed, $\Lambda_1^{(i)}$ can be written as a sum of the potential gradients corresponding to the gravitational force exerted on the $i^{\rm th}$ black hole by each of the $k-1$ other black holes, $$\begin{aligned} \Lambda_1^{(i)} = \sum_{\substack{j=1,j\neq i } }^k \nu_j V_{i j} \,, \label{obeLambdaPotent}\end{aligned}$$ where $V_{ij}$ corresponds to the gravitational field on the $i^{\rm th}$ black hole from the $j^{\rm th}$ black hole, given by $$\begin{aligned} V_{ij} = \frac{(d-2)}{L^{d-1}} \left\{ \tilde{z}_{ij}^{-(d-1)} - \zeta ( d-1,1-\tilde{z}_{ij} ) + \zeta (d-1,1+\tilde{z}_{ij} ) \right\} \,, \label{obeVij}\end{aligned}$$ for $j \neq i$. Defining $F_{ij} \equiv \nu_i \nu_j V_{ij}$ as the Newtonian force on the $i^{\rm th}$ mass due to the $j^{\rm th}$ mass (and its images as seen in the covering space of the circle), the condition $\Lambda_1^{(i)}=0$ can be written as the condition of zero external force on each of the $k$ masses, $$\label{obenoforce} \sum_{j=1,j \neq i}^k F_{ij} = 0 \ ,$$ for $i=1,\ldots,k$. As a check, note that Newton’s third law $F_{ij} = - F_{ji}$ can be verified using an appropriate identity for the generalized zeta function (see Eq. (3.6) of [@obeDias:2007hg]). We thus conclude that for static solutions one needs to impose the equilibrium condition $\Lambda_1^{(i)}=0$ for all $i$, since otherwise the $i^{\rm th}$ black hole would accelerate along the $z$ axis. This gives conditions relating the positions $z_i^*$ and the mass ratios $\nu_i$, which are examined in detail in Ref. [@obeDias:2007hg]. There it is shown how to build such equilibrium configurations, and a general copying mechanism is described that builds new equilibrium configurations by copying any given equilibrium configuration a number of times around the cylinder.
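The antisymmetry $F_{ij} = -F_{ji}$ can also be checked numerically. In the sketch below (function names and truncation ours) we assume that the separation seen from the other black hole is the complementary one on the circle, $\tilde z_{ji} = 1 - \tilde z_{ij}$ in units of $L$, so that antisymmetry amounts to $V(\tilde z) + V(1 - \tilde z) = 0$:

```python
def hzeta(s, a, terms=4000):
    # Generalized zeta, zeta(s, 1 + a) = sum_{m >= 1} (m + a)^{-s}, by truncation
    return sum((m + a) ** -s for m in range(1, terms + 1))

def V(ztil, d, L=1.0):
    # Gravitational field of one black hole on another; ztil = z_ij / L
    s = d - 1
    return (d - 2) / L ** s * (ztil ** -s - hzeta(s, -ztil) + hzeta(s, ztil))

# V(z) + V(1 - z) = 0 realizes Newton's third law F_ij = -F_ji
for d in (5, 6):
    print(d, V(0.3, d) + V(0.7, d))  # ~0 up to truncation error
```

The cancellation is an instance of the zeta-function identity referred to above: the image sums of the two holes rearrange into each other with opposite sign.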
Note that this equilibrium is unstable: generic small disturbances of the position of one of the black holes will disturb the balance of the configuration and result in the merger of all of the black holes into a single black hole. As also argued in Ref. [@obeDias:2007hg], it is expected that these equilibrium conditions are a consequence of regularity of the solution, since with a non-zero Newtonian force acting on a black hole the only way to keep it static is to introduce a counter-balancing force supported by a singularity. It turns out that the irregularity of the solution cannot be seen at leading order, since the binding energy, which accounts for the self-interaction of the solution, is neglected there. It is therefore expected that singularities will appear at second order in the total mass for solutions that do not obey the equilibrium condition mentioned above. Newtonian derivation of the thermodynamics \[obesec:ther\] ----------------------------------------------------------- It turns out that there is a quick route to determining the first-order corrected thermodynamics of the multi-black hole configurations, as explained in Ref. [@obeDias:2007hg] following the method first introduced in Ref. [@obeGorbonos:2005px]. Here one assumes the equilibrium condition to be satisfied, and all one needs is the quantity $\Lambda^{(i)}$ defined in ; one does not need to compute the first-order corrected metric. To start, we define for each black hole an “areal” radius $\hat{\rho}_{0(i)}$, $i=1, \ldots, k$, such that the individual mass, entropy and temperature of each black hole are given by $$\label{obeSi} M_{0(i)} = \frac{(d-1)\Omega_{d-1}}{16\pi G}\hat{\rho}_{0(i)}^{d-2} {\ , \ \ }S_{0(i)} = \frac{\Omega_{d-1} \hat{\rho}_{0(i)}^{d-1}}{4 G} {\ , \ \ }T _{0(i)} = \frac{d-2}{4\pi\hat{\rho}_{0(i)}} \ .$$ These are the intrinsic thermodynamic quantities associated with each black hole if it were isolated in empty flat $(d+1)$-dimensional space.
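As a consistency check, the intrinsic quantities above satisfy the flat-space first law $dM_0 = T_0\, dS_0$. A quick finite-difference sketch (helper names ours, $G=1$):

```python
import math

def omega(k):
    # Volume of the unit k-sphere
    return 2 * math.pi ** ((k + 1) / 2) / math.gamma((k + 1) / 2)

def intrinsic(rho0, d, G=1.0):
    # Mass, entropy and temperature of an isolated black hole in (d+1) dimensions
    M = (d - 1) * omega(d - 1) / (16 * math.pi * G) * rho0 ** (d - 2)
    S = omega(d - 1) * rho0 ** (d - 1) / (4 * G)
    T = (d - 2) / (4 * math.pi * rho0)
    return M, S, T

d, rho, h = 5, 1.0, 1e-6
M1, S1, _ = intrinsic(rho - h, d)
M2, S2, T = intrinsic(rho, d)
print((M2 - M1) / (S2 - S1), T)  # dM/dS matches T = (d-2)/(4 pi rho)
```

This is the zeroth-order input; the interaction corrections discussed next only enter through the potential $\Phi_i$.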
If we now imagine placing the black holes on a circle at locations $z_i^*$, each of them will experience a gravitational potential $\Phi_i$. In particular, this is the Newtonian potential created by all images of the $i^{\rm th}$ black hole as well as by the other $k-1$ masses (and their images), as seen from the location of the $i^{\rm th}$ black hole. It is not difficult to show that $\Phi_i$ is given by $$\label{obePhii} \Phi_i = - \frac{\Lambda^{(i)}}{2 \nu_i} \hat{\rho}_{0(i)}^{d-2} \ ,$$ in terms of $\Lambda^{(i)}$ defined in Eq. . Taking into account this potential, we can now determine the thermodynamic quantities of the interacting system to leading order. By definition, the entropy $S_i=S_{0(i)}$ is unchanged. The temperature of each black hole, however, receives a redshift contribution from the gravitational potential $\Phi_i$, so that $$\label{obetempi} T_i = T_{0(i)} ( 1 + \Phi_i ) \ .$$ The total mass of the configuration is equal to the sum of the individual masses the black holes would have in isolation plus the (negative) Newtonian gravitational potential energy arising from the interactions between the black holes and their images. We thus have that the total mass is given by $$\label{obeMareal} M =M_{0} +U_{\rm Newton} \ ,$$ where $$M_{0} \equiv \sum_{i=1}^k M_{0(i)} {\ , \ \ }U_{\rm Newton} \equiv \frac{1}{2}\sum_{i=1}^k M_{0(i)} \Phi_i \ .$$ From these Newtonian results one can then derive the formula for the relative tension simply by using the (generalized) first law of thermodynamics (see Eq. ) $$\delta M = \sum_{i=1}^k T_i \delta S_i + \frac{n M }{L} \delta L \ ,$$ from which one finds $$\label{obender} n = \frac{L}{M} \left( \frac{ \partial M}{\partial L} \right)_{S_i} \ .$$ The condition of keeping $S_i$ fixed means that we should keep fixed the mass $M_{0(i)}$ of each black hole, and hence also the total intrinsic mass $M_0$.
It thus follows from and that $$\label{obender2} n = \frac{L}{M_0} \left( \frac{ \partial U_{\rm Newton}}{\partial L} \right)_{M_{0(i)}} = -\frac{1}{4M_0} \sum_{i=1}^k M_{0(i)} \frac{\hat{\rho}_{0(i)}^{d-2}}{\nu_i} L \frac{ \partial \Lambda^{(i)}}{\partial L} = \frac{d-2}{4} \sum_{i=1}^k \Lambda^{(i)}\hat{\rho}_{0(i)}^{d-2} \ ,$$ where we used that $\Lambda^{(i)} \propto L^{-(d-2)}$ for fixed locations $z_i^*$ (see Eq. ) and $M_{0(i)} = \nu_i M_0$. As shown in [@obeDias:2007hg], the thermodynamics above agrees with the thermodynamic quantities computed explicitly from the first-order corrected metric. We emphasize that these results are correct only to first order in the mass, and note that in terms of the reduced mass the expression shows that $n$ as a function of $\mu$ is given for the multi-black hole configurations by $$\begin{aligned} \label{obenofmu2} n (\mu) = \frac{(d-2)(2\pi)^{d-2} }{4(d-1) \Omega_{d-1}} \sum_{i=1}^k \nu_i \Lambda^{(i)} \mu + {\mathcal{O}}(\mu^2) \ ,\end{aligned}$$ which generalizes the single-black hole result given in . In terms of the phase diagram of Fig. \[obefig1\], it follows from this result that (at least for small masses) the $k$-black hole configurations correspond to points lying above the single-black hole phase and below the $k$-copied phase. From the first-order corrected temperatures one can show that the multi-black hole configurations are in general not in thermal equilibrium. The only configurations that are in thermal equilibrium to this order are the copies of the single-black hole solution studied previously [@obeHorowitz:2002dc; @obeHarmark:2003eg; @obeHarmark:2003yz]. As a further comment, we note that Hawking radiation will seed the mechanical instabilities of the multi-black hole configurations. The reason for this is that in a generic configuration the black holes have different rates of energy loss, and hence the mass ratios required for mechanical equilibrium are not maintained.
This happens even in the special configurations where the temperatures are equal, because the thermal radiation is only statistically uniform. Hence asymmetries in the real-time emission process will introduce disturbances driving these special configurations away from their equilibrium positions. Consequences for the phase diagram \[obesec:copd\] --------------------------------------------------- The existence of the multi-black hole solutions has striking consequences for the phase structure of black hole solutions on ${\mathcal{M}}^d \times S^1$. It means that one can, for example, start from a solution with two equal-size black holes placed opposite each other on the cylinder, and then continuously deform the solution to be arbitrarily close to a solution with only one black hole (the other black hole being arbitrarily small in comparison). Thus, we get a continuous span of classical static solutions for a given total mass. In particular, a multi-black hole configuration with $k$ black holes has $k$ independent parameters. This implies a continuous non-uniqueness in the $(\mu,n)$ phase diagram (or for a given mass), much like the one observed for bubble-black hole sequences [@obeElvang:2004iz] and for other classes of black hole solutions [@obeEmparan:2004wy; @obeElvang:2007rd; @obeIguchi:2007is; @obeElvang:2007hg] (see also Sec. \[obesec:phas\]). In particular, this has the consequence that if we lived on ${\mathcal{M}}^4 \times S^1$, then from a four-dimensional point of view one would have an infinite non-uniqueness for static black holes of size similar to the size of the extra dimension, thus severely breaking the uniqueness of the Schwarzschild black hole. [.2cm [**New non-uniform strings ?**]{}]{} Another consequence of the new multi-black hole configurations concerns the connection to uniform and non-uniform strings on the cylinder. As discussed in Sec.
\[obesec:pdia\], there is evidence that the black hole on the cylinder phase merges with the non-uniform black string phase in a topology changing transition point. It follows from this that the copies of the black hole on the cylinder solution merge with the copies of the non-uniform black string. However, due to the multi-black hole configurations we now have a continuous span of solutions connected to the copies of the black hole on the cylinder. Therefore, it is natural to ask whether the new solutions also merge with non-uniform black string solutions in a topology changing transition point. If so, this raises the question of whether there exist, in addition to the new black hole on the cylinder solutions, also new non-uniform black string solutions. Thus, these new solutions present a challenge for the current understanding of the phase diagram for black holes and strings on the cylinder. For a detailed discussion see Ref. [@obeDias:2007hg]. Another connection with strings and black holes on the cylinder is that a Gregory-Laflamme unstable uniform black string is believed to decay to a black hole on the cylinder (when the number of dimensions is less than the critical one [@obeSorkin:2004qq]). The new multi-black hole solutions mean that one can imagine such configurations as intermediate steps in this decay. [.2cm [**Lumpy black holes.**]{}]{} Ref. [@obeDias:2007hg] also examines in detail configurations with two and three black holes. For two black holes this confirms the expectation that one maximizes the entropy by transferring all the mass to one of the black holes, and also that if the two black holes are not in mechanical equilibrium then the entropy increases as the black holes move closer to each other.
These two facts are both in accordance with the general argument that the multi-black hole configurations are in an unstable equilibrium, and that generic perturbations of one of the positions will result in all the black holes merging into a single black hole on the cylinder. A detailed examination of the three-black hole solution suggests the possibility of further new types of black hole solutions in Kaluza-Klein spacetimes. In particular, this analysis suggests that new static configurations may exist that consist of a lumpy black hole, where the non-uniformities are supported by the gravitational stresses imposed by an external field. These new solutions were argued for by considering a symmetric configuration of three black holes, with one of mass $M_1$ and two others of equal mass $M_2=M_3$ at equal distances from the first one. Increasing the total mass of the system shows that it is possible for the two black holes (2 and 3) to merge before merging with black hole 1. In this way one could end up with a static solution consisting of a lumpy black hole (a ‘peanut-like’ shaped black object) together with an ellipsoidal black hole. [.2cm [**Analogue fluid model.**]{}]{} Finally, we note that one may consider the multi-black hole configurations in relation to an analogue fluid model for the Gregory-Laflamme (GL) instability, recently proposed in Ref. [@obeCardoso:2006ks]. There it was pointed out that the GL instability of a black string has a natural analogue description in terms of the Rayleigh-Plateau (RP) instability of a fluid cylinder. It turns out that many known properties of the gravitational instability have an analogous manifestation in the fluid model. These include the behavior of the threshold mode with $d$, dispersion relations, the existence of critical dimensions and the initial stages of the time evolution (see Refs. [@obeCardoso:2006ks; @obeCardoso:2006sj; @obeCardoso:2007ka] for details).
In the context of this analogue fluid model, Ref. [@obeDias:2007hg] discusses a possible, but more speculative, relation of the multi-black hole configurations to configurations observed in the time evolution of fluid cylinders. Thin black rings in higher dimensions \[obesec:robh\] ====================================================== In this and the next section we turn our attention to rotating black holes. We start by reviewing the recent construction [@obeEmparan:2007wm] of an approximate solution for an asymptotically flat neutral thin rotating black ring in any dimension $D \geq 5$, with horizon topology $S^{D-3}\times S^1$. As in Sec. \[obesec:mubh\], this construction uses the method of matched asymptotic expansion, and we only present the main points. We discuss in particular the equilibrium condition necessary for balancing the ring, and how this enables one to obtain the leading order thermodynamics of thin rotating black rings. We also compare the thermodynamics of the thin black ring to that of the MP black hole. In this and the following section we denote the number of spacetime dimensions by $D=4+n$. Thin black rings from boosted black strings \[obesec:boos\] ----------------------------------------------------------- Black rings in $(n+4)$-dimensional asymptotically flat spacetime are solutions of Einstein gravity with an event horizon of topology $S^1\times S^{n+1}$. As we briefly reviewed in Secs. \[obesec:intr\] and \[obesec:uniq\], explicit solutions with this topology in five dimensions $(n=1)$ were first presented in Ref. [@obeEmparan:2001wn] (see also [@obeEmparan:2006mm] for a review). In five dimensions there is, beyond the MP black hole and the black ring, one more phase of rotating black holes if one restricts to phases with a single angular momentum that are in thermal equilibrium. This is the black Saturn phase, consisting of a central MP black hole and one black ring around it, having equal temperature and angular velocity.
If one abandons the condition of thermal equilibrium, there are many more black Saturn phases with multiple rings as well as multi-black ring solutions. We refer to [@obeElvang:2007hg] and the recent review [@obeEmparan:2008eg] for details on the more general phase structure in the five-dimensional case. The construction of analogous solutions in more than five dimensions is considerably more involved, since for $D \geq 6$ these solutions are not contained in the generalized Weyl ansatz [@obeEmparan:2001wk; @obeHarmark:2004rm; @obeHarmark:2005vn], because they do not have $D-2$ commuting Killing symmetries. Furthermore, the inverse scattering techniques of [@obeBelinsky:1971nt; @obeBelinsky:1979; @obeBelinski:2001ph; @obePomeransky:2005sj] do not extend to the asymptotically flat case in any $D\geq 6$. Therefore, one way to make progress on this problem is to first construct thin black ring solutions in arbitrary dimensions as a perturbative expansion around circular boosted black strings. The idea that rotating thin black rings should be well approximated by boosted black strings is intuitively clear and already appears in earlier works [@obeElvang:2003mj; @obeHovdebo:2006jy; @obeElvang:2006dd]. This was used as a starting point in the explicit construction of [@obeEmparan:2007wm]. [.2cm [**Boosted black string.**]{}]{} The zeroth order solution is that of a *straight* boosted black string. The metric of this can easily be obtained from by applying a boost in the $(t,z)$ plane. The result is $$\begin{aligned} \label{obeappab} ds^2 &=& -\left( 1 - \cosh^2 \alpha \frac{r_0^{n}}{r^{n}} \right) dt^2 - 2 \frac{r_0^{n}}{r^{n}} \cosh \alpha \sinh \alpha\, dt dz + \left( 1 + \sinh^2 \alpha \frac{r_0^{n}}{r^{n}} \right) dz^2 {\nonumber}\\ && + \left( 1- \frac{r_0^{n}}{r^{n}} \right)^{-1} dr^2 + r^2 d\Omega_{n+1}^2 \,,\end{aligned}$$ where $r_0$ is the horizon radius and $\alpha$ is the boost parameter.
In general, we will take the $z$ direction to be along an $S^1$ with circumference $2\pi R$, which means we can write $z$ in terms of an angular coordinate $\psi$ defined by $ \psi=z/R$ ($ 0\leq \psi <2\pi$). At distances $r\ll R$, the solution is the approximate metric of a thin black ring to zeroth order in $1/R$. By definition, a thin black ring has an $S^1$ radius $R$ that is much larger than its $S^{n+1}$ radius $r_0$. In this limit, the mass of the black ring is small and the gravitational attraction between diametrically opposite points of the ring is very weak. So, in regions away from the black ring, the linearized approximation to gravity will be valid, and the metric will be well-approximated if we substitute the ring by an appropriate delta-like distributional source of energy-momentum. The source has to be chosen so that the metric it produces is the same as that expected from the full exact solution in the region far away from the ring. Since the thin black ring is expected to approach locally the solution for a boosted black string, it is sensible to choose distributional sources that reproduce the metric in the weak-field regime, \[obesource\] $$\begin{aligned} T_{tt}&=&\frac{r_0^{n}}{16\pi G}\,\left(n\cosh^2\alpha+1\right)\,\delta^{(n+2)}(r)\,,\\ \label{obedistsourcea} T_{tz}&=&\frac{r_0^{n}}{16\pi G}\,n\cosh\alpha\sinh\alpha\,\delta^{(n+2)}(r)\,,\\ \label{obedistsourceb} T_{zz}&=&\frac{r_0^{n}}{16\pi G}\,\left(n\sinh^2\alpha-1\right)\,\delta^{(n+2)}(r)\,. \label{obedistsourcec}\end{aligned}$$ The location $r=0$ corresponds to a circle of radius $R$ in the $(n+3)$-dimensional Euclidean flat space, parameterized by the angular coordinate $\psi$. In this construction the mass and angular momentum of the black ring are obtained by integrating the energy and momentum densities, $$\label{obeMJa} M=2\pi R \int_{S^{n+1}} T_{tt} {\ , \ \ }J =2\pi R^2 \int_{S^{n+1}} T_{tz} \ ,$$ where $S^{n+1}$ links the ring once. 
[.2cm [**Dynamical equilibrium condition.**]{}]{} We first show that the boost parameter $\alpha$ gets fixed by a dynamical equilibrium condition ensuring that the string tension is balanced against the centrifugal repulsion. To this end, note that we are approximating the black ring by a distributional source of energy-momentum. The equation of motion for probe brane-like objects in the absence of external forces takes the form [@obeCarter:2000wv] $$\label{obeKT} {K_{\mu\nu}}^{\rho}T^{\mu\nu}=0\,,$$ where the indices $\mu,\nu$ are tangent to the brane and $\rho$ is transverse to it. The second fundamental tensor ${K_{\mu\nu}}^{\rho}$ extends the notion of extrinsic curvature to submanifolds of codimension possibly larger than one. The extrinsic curvature of the circle is $1/R$, so a circular linear distribution of energy-momentum of radius $R$ will be in equilibrium only if $$\label{obenotzz} \frac{T_{zz}}{R}=0\,,$$ i.e., at finite radius the pressure tangential to the circle must vanish. Hence, for the thin black ring with the source above, the condition that the ring be in equilibrium translates into a very specific value for the boost parameter, $$\label{obeeqboost} \sinh^2\alpha=\frac{1}{n}\,,$$ which we will also refer to as the critical boost. For $D=5$ ($n=1$) this was already observed in Ref. [@obeElvang:2003mj], where the thin black string limit of five-dimensional black rings was first made explicit, but the connection with the brane equation of motion was first noticed in [@obeEmparan:2007wm].

[.2cm [**Thermodynamics.**]{}]{} Using the critical boost, it is not difficult to obtain the physical quantities of the critically boosted black string, and hence the leading-order thermodynamics of thin black rings (see also Refs. [@obeHovdebo:2006jy; @obeKastor:2007wr] for further details on boosted black strings and their thermodynamics).
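As a quick numerical illustration (ours, not part of the original construction), one can insert the critical boost into the source components $T_{tt}\propto n\cosh^2\alpha+1$, $T_{tz}\propto n\cosh\alpha\sinh\alpha$, $T_{zz}\propto n\sinh^2\alpha-1$ and confirm that the tension vanishes exactly while the energy and momentum densities stay finite:

```python
import math

def source_components(n, alpha):
    # Coefficients of the boosted-black-string source T_mu_nu,
    # in units of r0^n / (16 pi G) (overall prefactor stripped).
    ch, sh = math.cosh(alpha), math.sinh(alpha)
    return n * ch**2 + 1, n * ch * sh, n * sh**2 - 1

for n in range(1, 6):                          # D = n + 4 = 5, ..., 9
    alpha_c = math.asinh(1 / math.sqrt(n))     # critical boost: sinh^2(alpha) = 1/n
    T_tt, T_tz, T_zz = source_components(n, alpha_c)
    assert abs(T_zz) < 1e-12                   # tension balanced: ring in equilibrium
    assert T_tt > 0 and T_tz > 0               # energy/momentum densities stay positive
```

For $n=1$ this reproduces the five-dimensional result $\sinh^2\alpha=1$.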
We find for the mass $M$, entropy $S$, temperature $T$, angular momentum $J$ and angular velocity $\Omega$ the expressions [@obeEmparan:2007wm] \[obetbrthermo\] $$\label{obetbrthermo1} M=\frac{\Omega_{n+1}}{8 G}\,R\, r_0^{n}(n+2) {\ , \ \ }S=\frac{\pi\,\Omega_{n+1}}{2G} R\,r_0^{n+1} \sqrt{\frac{n+1}{n}} {\ , \ \ }T = \frac{n}{4\pi} \sqrt{ \frac{n}{n+1}} \frac{1}{r_0}\,,$$ $$\label{obetbrthermo2} J=\frac{\Omega_{n+1}}{8 G}\,R^2\, r_0^{n}\sqrt{n+1}\,,\qquad \Omega = \frac{1}{\sqrt{n+1}} \frac{1}{R}\,.$$ We also note that an equivalent but more physical form of the equilibrium condition in terms of these quantities is $$\label{obeRJM} R=\frac{n+2}{\sqrt{n+1}}\frac{J}{M}\,.$$ We thus see that the radius grows linearly with $J$ at fixed mass. It is remarkable that with the above reasoning one can already obtain the correct limiting thermodynamics of thin black rings to leading order, without having to solve for any metric. One finds from these expressions that the entropy of thin black rings behaves as $$\label{obesmjring} S^{\rm ring}(M,J) \propto J^{-\frac{1}{D-4}}\;M^{\frac{D-2}{D-4}} \,,$$ whereas that of ultra-spinning MP black holes in $D \geq 6$ is given by [@obeEmparan:2003sy] $$\label{obesmjhole} S^{\rm hole} (M,J) \propto J^{-\frac{2}{D-5}}\;M^{\frac{D-2}{D-5}} \,.$$ This already shows the non-trivial fact that in the ultra-spinning regime of large $J$ at fixed mass $M$ the rotating black ring has higher entropy than the MP black hole (see also Sec. \[obesec:mpbh\]). Moreover, as will be explained in Sec. \[obesec:maex\], it turns out that for $D \geq 6$ these results are actually valid up to and including the next order in $r_0/R$, i.e. they receive only $O(r_0^2/R^2)$ corrections. This conclusion could already be drawn once one has convinced oneself that the first-order $1/R$ correction terms in the metric only involve dipole contributions, which can easily be argued to give zero contribution to all thermodynamic quantities [@obeEmparan:2007wm].
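The algebra behind these statements is easy to verify numerically. The sketch below (our own check, with $G=1$ and $\Omega_{d}=2\pi^{(d+1)/2}/\Gamma(\tfrac{d+1}{2})$) confirms that the leading-order ring thermodynamics satisfies the Smarr formula $(n+1)M=(n+2)(TS+\Omega J)$, the equilibrium radius relation, and the fixed-mass scaling $S\propto J^{-1/n}$:

```python
import math

G = 1.0

def omega_sphere(d):
    # Area of the unit d-sphere: Omega_d = 2 pi^((d+1)/2) / Gamma((d+1)/2)
    return 2 * math.pi ** ((d + 1) / 2) / math.gamma((d + 1) / 2)

def ring_thermo(n, r0, R):
    # Leading-order thin-black-ring thermodynamics (critically boosted string)
    On1 = omega_sphere(n + 1)
    M = On1 / (8 * G) * R * r0**n * (n + 2)
    S = math.pi * On1 / (2 * G) * R * r0 ** (n + 1) * math.sqrt((n + 1) / n)
    T = n / (4 * math.pi) * math.sqrt(n / (n + 1)) / r0
    J = On1 / (8 * G) * R**2 * r0**n * math.sqrt(n + 1)
    Om = 1 / (math.sqrt(n + 1) * R)
    return M, S, T, J, Om

for n in (1, 2, 3):
    R = 7.0
    M, S, T, J, Om = ring_thermo(n, r0=0.1, R=R)
    # Smarr formula: (n+1) M = (n+2) (T S + Omega J)
    assert math.isclose((n + 1) * M, (n + 2) * (T * S + Om * J), rel_tol=1e-12)
    # Equilibrium radius: R = (n+2)/sqrt(n+1) * J/M
    assert math.isclose(R, (n + 2) / math.sqrt(n + 1) * J / M, rel_tol=1e-12)

    # Fixed-mass scaling S ~ J^(-1/n), i.e. J^(-1/(D-4)) with D = n + 4
    def at_fixed_mass(R, M0=1.0):
        r0 = (8 * G * M0 / (omega_sphere(n + 1) * (n + 2) * R)) ** (1 / n)
        return ring_thermo(n, r0, R)
    _, S1, _, J1, _ = at_fixed_mass(5.0)
    _, S2, _, J2, _ = at_fixed_mass(50.0)
    assert math.isclose(S2 / S1, (J2 / J1) ** (-1 / n), rel_tol=1e-10)
```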
It is important to stress that the above reasoning relies crucially on the assumption that when the boosted black string is curved, the horizon remains regular. To verify this point, and also to obtain a metric for the thin black ring, Ref. [@obeEmparan:2007wm] solves the Einstein equations explicitly by constructing an approximate solution for $r_0\ll R$ using a matched asymptotic expansion. In this analysis one finds that the equilibrium condition appears as a consequence of demanding the absence of singularities on the plane of the ring outside the horizon. Whenever $n\sinh^2\alpha\neq 1$ with finite $R$, the geometry backreacts creating singularities on the plane of the ring. These singularities admit a natural interpretation. Since the equilibrium condition is a consequence of the conservation of the energy-momentum tensor, when it is not satisfied there must be additional sources of energy-momentum, and these additional sources are responsible for the singularities in the geometry. Alternatively, the derivation of the equilibrium condition from the Einstein equations in Ref. [@obeEmparan:2007wm] is an example of how General Relativity encodes the equations of motion of black holes as regularity conditions on the geometry.

Matched asymptotic expansion \[obesec:maex\]
---------------------------------------------

We now review the highlights of the perturbative construction of thin black rings using matched asymptotic expansion (see also Sec. \[obesec:cons\]). In the problem at hand, the two widely separated scales are the ‘thickness’ of the ring $r_0$ and the radius of the ring $R$, and the thin limit means that $r_0 \ll R$. There are therefore two zones: an asymptotic zone at large distances from the black ring, $r\gg r_0$, where the field can be expanded in powers of $r_0$, and a near-horizon zone at scales much smaller than the ring radius, $r\ll R$, where the field is expanded in powers of $1/R$.
At each step, the solution in one of the zones is used to provide boundary conditions for the field in the other zone, by matching the fields in the ‘overlap’ zone $r_0\ll r \ll R$ where both expansions are valid. As already discussed in Sec. \[obesec:boos\], the starting point is the solution in the near-horizon zone to zeroth order in $1/R$: a boosted black string of infinite length, $R\to\infty$. The next steps in the construction are then as follows:

- Step 1: One solves the Einstein equations in the linearized approximation around flat space for a source corresponding to a circular distribution of a given mass and momentum density, as given above. This metric is valid in the region $r \gg r_0$.

- Step 2: We consider the Newtonian solution close to the sources, in the overlap region $r_0\ll r\ll R$.

- Step 3: We consider the near-horizon region of the ring and find the linear corrections to the metric of a boosted black string for a perturbation that is small in $1/R$; in other words, we analyze the geometry of a boosted black string that is now slightly curved into a circular shape. This solution is then matched to the metric in the overlap region found in Step 2. The resulting solution is valid in the region $r_0 \leq r \ll R$.

To solve Step 1 for a non-zero $T_{\psi\psi}=R^2\, T_{zz}$ is not easy. It is therefore convenient to assume from the start that the equilibrium condition $T_{\psi\psi}=0$ is satisfied. This then gives the solution of a black ring in linearized gravity [@obeEmparan:2007wm]. Finding a more general solution with a source for the tension is much easier if one restricts to the overlap zone (Step 2). In this regime we are studying the effects of locally curving a thin black string into an arc of constant curvature radius $R$. To this end it is convenient to introduce ring-adapted coordinates. These are derived in Ref.
[@obeEmparan:2007wm] and to first order in $1/R$ the flat space metric in these coordinates takes the form $$\label{obeadapted} ds^2({\mbox{${\mathbb E}^{n+3}$}}) =\left( 1 + \frac{2r\cos \theta}{R} \right) dz^2 +\left( 1 - \frac{2}{n}\frac{r \cos \theta}{R} \right) \left( dr^2 + r^2 d\theta^2 + r^2 \sin^2 \theta d\Omega_{n}^2 \right)\, .$$ In terms of these coordinates the general form of the metric in the overlap region is then $$\label{obecorover} g_{\mu \nu} \simeq \eta_{\mu \nu} + \frac{r_0^n}{r^n} \left( h_{\mu \nu}^{(0)} (r) + \frac{r \cos \theta}{ R} h_{\mu \nu}^{(1)} (r) \right) \ .$$ Solving the Einstein equations to order $1/R$ then explicitly shows that regularity of the solution enforces vanishing of the tension $T_{zz}$. The technically most difficult part of the problem is to find the near-horizon solution in Step 3. Physically, this corresponds to curving the black string into a circle of large but finite radius $R$. In effect, this means that we are placing the black string in an external potential, whose form at large distances is that of the ring-adapted flat metric above, and which changes the metric $g_{\mu \nu}^{\rm bbs}$ of the (critically) boosted black string by a small amount: $$\label{obecorrmet} g_{\mu \nu} \simeq g_{\mu \nu}^{\rm bbs} (r;r_0) + \frac{\cos \theta}{R} h_{\mu \nu} (r;r_0) \ .$$ In Ref. [@obeEmparan:2007wm] the Einstein equations to order $1/R$ are explicitly solved, showing that the perturbations $h_{\mu \nu} (r;r_0)$ can be expressed in terms of hypergeometric functions.

[.2cm [**Corrected thermodynamics.**]{}]{} One can find the corrections to the thermodynamics as follows. First, one uses the near-horizon corrected metric to find the corrections to the entropy $S$, temperature $T$, and angular velocity $\Omega$.
Then one can use the first law, $$\label{obefirstlawa} \delta M= T\, \delta S +\Omega\, \delta J \ ,$$ and the Smarr formula $$\label{obesmarr} (n+1)M=(n+2) \left( T S+\Omega J \right) \ ,$$ to deduce the corrections to the mass and angular momentum.[^5] Using now that the perturbations in the corrected metric are only of dipole type, with no monopole terms, it follows that the area, surface gravity and angular velocity receive no modifications in $1/R$. The reason is that a dipole cannot change the total area of the horizon, only its shape. This is true both of the shape of the $S^{n+1}$ and of the length of the $S^1$, which can vary with $\theta$ but on average (when integrated over the horizon) remains constant. So $S$ is not corrected. The surface gravity and angular velocity cannot be corrected either: they must remain uniform on a regular horizon, and since the dipole terms vanish at $\theta=\pi/2$, no corrections to $T$ and $\Omega$ are possible. It then follows from the first law and the Smarr formula that $M$ and $J$ are not corrected either.[^6] So the function $S(M,J)$ obtained above is indeed valid including the first order in $1/R$. It is interesting to observe that this conclusion could be drawn already when the asymptotic form of the metric in the overlap zone is seen to include only dipole terms at order $1/R$.

Black rings versus MP black holes \[obesec:mpbh\]
--------------------------------------------------

We now proceed by analyzing the thin black ring thermodynamics and comparing it to that of ultra-spinning MP black holes. Recall that the thermodynamics of the thin black ring in the ultra-spinning regime is given by the expressions above, which are valid up to $O(r_0^2/R^2)$ corrections.

[.2cm [**Myers-Perry black hole.**]{}]{} For the MP black hole, exact results can be obtained for all values of the rotation.
The two independent parameters specifying the (single-angular-momentum) solution are the mass parameter $\mu$ and the rotation parameter $a$, from which the horizon radius $r_0$ is found as the largest (real) root of the equation $$\label{obemueq} \mu = (r_0^2 + a^2) r_0^{n-1}\,.$$ In terms of these parameters the thermodynamics takes the form [@obeMyers:1986un] \[obeTMPJOm\] $$\label{obeTMP} M = \frac{ (n+2) \Omega_{n+2} \, \mu}{16 \pi G} \,,\qquad S = \frac{\Omega_{n+2} \, r_0 \, \mu}{4 G} \,,\qquad T = \frac{1}{4 \pi} \left( \frac{2 r_0^n}{\mu} + \frac{n-1}{r_0} \right)\,,$$ $$\label{obeJOm} J = \frac{ \Omega_{n+2}\, a \, \mu }{ 8 \pi G} \,,\qquad \Omega = \frac{a \, r_0^{n-1}}{\mu}\,.$$ Note the similarity between $a=\frac{n+2}{2} \frac{J}{M}$ and the black ring relation $R=\frac{n+2}{\sqrt{n+1}}\frac{J}{M}$. An important simplification occurs in the ultra-spinning regime of $J\to\infty$ at fixed $M$, which corresponds to $a \rightarrow \infty$. The horizon equation then becomes $ \mu \rightarrow a^2 r_0^{n-1}$, leading to simple expressions in terms of $r_0$ and $a$, which in this regime play roles analogous to those of $r_0$ and $R$ for the black ring. Specifically, $a$ is a measure of the size of the horizon along the rotation plane and $r_0$ a measure of the size transverse to this plane [@obeEmparan:2003sy]. In fact, in this limit $$\label{obeTMP2} M \to \frac{ (n+2) \Omega_{n+2}}{16 \pi G}\; a^2 r_0^{n-1} \,,\qquad S \to \frac{\Omega_{n+2}}{4 G}\;a^2 r_0^{n} \,,\qquad T \to \frac{n-1}{4 \pi r_0} \ ,$$ take the same form as the expressions characterizing a black membrane extended along an area $\sim a^2$ with horizon radius $r_0$. This identification lies at the core of the ideas in [@obeEmparan:2003sy], which were further developed in Ref. [@obeEmparan:2007wm] and will be summarized in Sec. \[obesec:phas\]. We note that the quantities $J$ and $\Omega$ drop out because the black membrane limit is approached in the region near the axis of rotation of the horizon, so the membrane is static in the limit.
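As a cross-check (our own sketch, with $G=1$), the exact MP expressions above satisfy the same Smarr formula $(n+1)M=(n+2)(TS+\Omega J)$ as the ring, at any value of the rotation, together with $a=\frac{n+2}{2}\frac{J}{M}$:

```python
import math

G = 1.0

def omega_sphere(d):
    # Area of the unit d-sphere
    return 2 * math.pi ** ((d + 1) / 2) / math.gamma((d + 1) / 2)

def mp_thermo(n, r0, a):
    # Exact single-spin Myers-Perry thermodynamics in D = n + 4,
    # parameterized by horizon radius r0 and rotation parameter a.
    mu = (r0**2 + a**2) * r0 ** (n - 1)
    On2 = omega_sphere(n + 2)
    M = (n + 2) * On2 * mu / (16 * math.pi * G)
    S = On2 * r0 * mu / (4 * G)
    T = (2 * r0**n / mu + (n - 1) / r0) / (4 * math.pi)
    J = On2 * a * mu / (8 * math.pi * G)
    Om = a * r0 ** (n - 1) / mu
    return M, S, T, J, Om

for n in (1, 2, 3, 4):
    for a in (0.3, 2.0, 30.0):
        M, S, T, J, Om = mp_thermo(n, r0=1.0, a=a)
        # Smarr formula holds exactly at any spin:
        assert math.isclose((n + 1) * M, (n + 2) * (T * S + Om * J), rel_tol=1e-12)
        # a = (n+2)/2 * J/M, the MP analogue of the ring radius relation:
        assert math.isclose(a, (n + 2) / 2 * J / M, rel_tol=1e-12)
```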
Note furthermore that this membrane limit is valid up to $O(r_0^2/a^2)$ corrections. Finally, we remark that the transition to the membrane-like regime is signaled by a qualitative change in the thermodynamics of the MP black holes. At $a/r_0 = \sqrt{\frac{n+1}{n-1}}$ the temperature reaches a minimum and $\left(\partial^2 S/\partial J^2\right)_M$ changes sign. For $a/r_0$ smaller than this value, the thermodynamic quantities of the MP black holes such as $T$ and $S$ behave similarly to those of the Kerr solution, and one should not expect any membrane-like behavior. However, past this point they rapidly approach the membrane results. We do not expect that the onset of thermodynamic instability at this point is directly associated with any dynamical instability. Rather, one expects a GL-like instability to set in at a larger value of $a/r_0$ [@obeEmparan:2003sy; @obeEmparan:2007wm].

[.2cm [**Dimensionless quantities.**]{}]{} Contrary to the case of KK black holes, where we could use the circle length to define dimensionless quantities, in this case we need to use one of the physical parameters of the solutions for this purpose. We choose the mass $M$ and thus introduce dimensionless quantities for the spin $j$, the area $a_H$, the angular velocity $\omega_H$ and the temperature $\mathfrak{t}_H$ via \[obejaot\] $$\label{obejaHdef} j^{n+1} \propto \frac{J^{n+1}}{GM^{n+2}} \,,\qquad a_H^{n+1} \propto \frac{S^{n+1}}{(GM)^{n+2}} ~,$$ $$\label{obeotdef} \omega_H \propto \Omega (GM)^{\frac{1}{n+1}} \,,\qquad \mathfrak{t}_H \propto (GM)^{\frac{1}{n+1}}\, T\,,$$ where convenient normalization factors can be found in Eq. (7.9) of [@obeEmparan:2007wm]. We take $j$ as our control parameter and now study and compare the functions $a_H (j)$, $\omega_H(j)$ and $\mathfrak{t}_H (j)$ for black rings and MP black holes in the ultra-spinning regime. These asymptotic phase curves can now be obtained by combining the above definitions with the thin black ring and ultra-spinning MP thermodynamics, respectively.
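The location of the temperature minimum is easy to confirm numerically (our own sketch): scanning the fixed-mass MP family (fixed $\mu$, with $a^2=\mu/r_0^{n-1}-r_0^2$), the minimum of $T$ indeed sits at $a/r_0=\sqrt{(n+1)/(n-1)}$:

```python
import math

def mp_temperature(n, r0, mu):
    # T = (1/4 pi) (2 r0^n / mu + (n-1)/r0) for a single-spin MP black hole
    return (2 * r0**n / mu + (n - 1) / r0) / (4 * math.pi)

mu = 1.0
for n in (2, 3, 4):                       # D = 6, 7, 8 (need n >= 2)
    r0_max = mu ** (1 / (n + 1))          # a = 0 endpoint of the family
    # brute-force scan of the fixed-mass family over r0 in (0, r0_max)
    T_min, r0_star = min(
        (mp_temperature(n, f * r0_max / 100000, mu), f * r0_max / 100000)
        for f in range(1, 100000)
    )
    a_star = math.sqrt(mu / r0_star ** (n - 1) - r0_star**2)
    assert abs(a_star / r0_star - math.sqrt((n + 1) / (n - 1))) < 1e-3
```

Equivalently, setting $\partial T/\partial r_0=0$ at fixed $\mu$ gives $2n\,r_0^{n+1}=(n-1)\mu$, which is the same statement.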
In the following we denote the results for the thin black ring with $^{(r)}$ and for the ultra-spinning MP black holes with $^{(h)}$, and generally omit numerical prefactors.

![Area vs spin at fixed mass, $a_H(j)$, in seven dimensions. For large $j$, the thin curve is the result for thin black rings and is extrapolated here down to $j\sim O(1)$. The thick curve is the exact result for the MP black hole. The gray line corresponds to the conjectured phase of pinched black holes (see Sec. \[obesec:phas\]), which branch off tangentially from the MP curve at a value $j_{\rm GL}> j_{\rm mem}$. At any given dimension, the phases need not display the swallowtail shown in this diagram, but could also connect more smoothly via a pinched black hole phase that starts tangentially at $j_{\rm GL}$ and has increasing $j$. Reprinted from Ref. [@obeEmparan:2007wm].[]{data-label="obefig:sevendphases"}](sevendphases.eps){width="13cm"}

[.2cm [**Comparison of the thermodynamics.**]{}]{} Starting with the reduced area function, we see that $$\label{obeaHrh} a_H^{(r)}\sim \frac{1}{{j}^{1/n}}\,,\qquad a_H^{(h)}\sim \frac{1}{{j}^{2/(n-1)}}\,,$$ so for any $D=4+n\geq 6$ the area decreases faster for MP black holes than for black rings, and black rings therefore dominate entropically in the ultra-spinning regime [@obeEmparan:2007wm]. For illustration, Fig. \[obefig:sevendphases\] shows these curves in $D=7$ ($n=3$).
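These power laws can be checked against the exact formulas (our own sketch; prefactors are dropped, since they cancel in a log-log slope). At fixed mass, the two-point slope of $\log a_H$ versus $\log j$ tends to $-1/n$ for the ring and $-2/(n-1)$ for the MP black hole:

```python
import math

def log_slope(x1, y1, x2, y2):
    return (math.log(y2) - math.log(y1)) / (math.log(x2) - math.log(x1))

n = 3  # D = 7, as in the figure

# Thin ring at fixed mass: R r0^n = const, so S ~ R r0^(n+1), J ~ R^2 r0^n.
def ring_point(R):
    r0 = R ** (-1 / n)                       # fixed mass (constants dropped)
    return R**2 * r0**n, R * r0 ** (n + 1)   # (J, S); a_H ~ S, j ~ J at fixed M

J1, S1 = ring_point(10.0)
J2, S2 = ring_point(1000.0)
assert abs(log_slope(J1, S1, J2, S2) - (-1 / n)) < 1e-12

# MP black hole at fixed mass (fixed mu): solve mu = (r0^2 + a^2) r0^(n-1).
def r0_of(a, mu=1.0):
    f = lambda r: (r**2 + a**2) * r ** (n - 1) - mu   # increasing in r > 0
    lo, hi = 1e-12, mu ** (1 / (n + 1)) + 1.0
    for _ in range(200):                              # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

def mp_point(a, mu=1.0):
    return a * mu, r0_of(a, mu) * mu   # (J, S) up to constant prefactors

J1, S1 = mp_point(1e3)
J2, S2 = mp_point(1e4)
assert abs(log_slope(J1, S1, J2, S2) - (-2 / (n - 1))) < 1e-6
```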
Including prefactors one finds for the angular velocities that $$\omega_H^{(r)} \to \frac{1}{2j} {\ , \ \ }\omega_H^{(h)} \to \frac{1}{j} \ .$$ The ratio $\omega^{(h)}_H/\omega^{(r)}_H=2$, which holds for all $D\geq 6$, is reminiscent of the factor of 2 in Newtonian mechanics between the moment of inertia of a wheel (a ring) and a disk (a pancake) of the same mass and radius, which implies that the disk must rotate twice as fast as the wheel in order to have the same angular momentum. Irrespective of whether this is an exact analogy or not, the fact that $\omega^{(r)}_H<\omega^{(h)}_H$ is clearly expected from this sort of picture. For the temperatures we find $$\label{obetempsa} \mathfrak{t}^{(r)}_H \sim j^{1/n} {\ , \ \ }\mathfrak{t}^{(h)}_H \sim j^{2/(n-1)}\, ,$$ so at large $j$ the thin black ring is colder than the MP black hole. In fact, since the temperature is inversely proportional to the thickness of the object, the picture suggested above leads to the following argument: if we put a given mass in the shape of a wheel of given radius, we get a thicker object than if we put it in the shape of a pancake of the same radius, and hence a colder one.

Completing the phase diagram \[obesec:phas\]
=============================================

In this section we discuss the phase structure of asymptotically flat neutral rotating black holes in six and higher dimensions, by exploiting a connection between black holes and black branes in KK spacetimes on one side and higher-dimensional rotating black holes on the other. Building on the basic idea in [@obeEmparan:2003sy], this phase structure was recently proposed in Ref. [@obeEmparan:2007wm]. Part of this picture is conjectural, but it is based on well-motivated analogies and appears natural from many points of view. The curve $a_H(j)$ at values of $j$ outside the domain of validity of the computations in Sec. \[obesec:robh\] corresponds to the regime where the gravitational self-attraction of the ring is important.
There are no analytical methods presently known to treat such values $j\sim O(1)$, and the precise form of the curve in this regime may require numerical solutions. However, as argued in Ref. [@obeEmparan:2007wm], it is possible to complete the black ring curve and other features of the phase diagram, at least qualitatively. This is done by combining a number of observations and reasonable conjectures about the behavior of MP black holes at large rotation, and by using as input the presently known phase structure of Kaluza-Klein black holes (see Sec. \[obesec:kkbh\]).

GL instability of ultra-spinning MP black hole \[obesec:usgl\]
--------------------------------------------------------------

In the ultra-spinning regime in $D\geq 6$, MP black holes approach the geometry of a black membrane $\approx {\mathbb{R}}^2 \times S^{D-4}$ spread out along the plane of rotation [@obeEmparan:2003sy]. In Sec. \[obesec:mpbh\] we have already observed that the extent of the black hole along the plane is approximately given by the rotation parameter $a$, while the ‘thickness’ of the membrane, the size of its $S^{D-4}$, is given by the parameter $r_0$. For $a/r_0$ larger than a critical value of order one, we expect that the dynamics of these black holes is well approximated by a black membrane compactified on a square torus ${\mathbb{T}}^2$ with side length $L\sim a$ and with $S^{D-4}$ size $\sim r_0$. The angular velocity of the black hole is always moderate, so it will not introduce large quantitative differences, but note that the rotational axial symmetry of the MP black holes translates into only one translational symmetry along the ${\mathbb{T}}^2$, the other one being broken.

![Correspondence between phases of black membranes wrapped on a ${\mathbb{T}}^2$ of side $L$ (left) and rapidly rotating MP black holes with rotation parameter $a\sim L\geq r_0$ (right; to be rotated about a vertical axis): (i) Uniform black membrane and MP black hole.
(ii) Non-uniform black membrane and pinched black hole. (iii) Pinched-off membrane and black hole. (iv) Localized black string and black ring. Reprinted from Ref. [@obeEmparan:2007wm]. []{data-label="obefig:membranes"}](membranes.eps){width="9.8cm"}

Using this analogue mapping between membranes and rapidly rotating MP black holes, Ref. [@obeEmparan:2003sy] argued that the latter should exhibit a Gregory-Laflamme-type instability. Furthermore, as reviewed in Sec. \[obesec:kkbh\], it is known that the threshold mode of the GL instability gives rise to a new branch of static non-uniform black strings and branes [@obeGregory:1988nb; @obeGubser:2001ac; @obeWiseman:2002zc]. In correspondence with this, Ref. [@obeEmparan:2003sy] argued that it is natural to conjecture the existence of new branches of axisymmetric ‘lumpy’ (or ‘pinched’) black holes, branching off from the MP solutions along the stationary axisymmetric zero-mode perturbation of the GL-like instability.

[.2cm [**Map to phases of KK black holes on the torus.**]{}]{} In Ref. [@obeEmparan:2007wm] this analogy was pushed further by drawing a correspondence between the phases of KK black holes on the torus (see Sec. \[obesec:torp\]) and the phases of higher-dimensional black holes, as illustrated in Fig. \[obefig:membranes\]. Here we have restricted to non-uniformities of the membrane along only one of the two brane directions, since non-uniformity in the second direction would have no counterpart for rotating black holes: such modes would break axial symmetry and hence would be radiated away. Other limitations of the analogy are discussed in detail in Ref. [@obeEmparan:2007wm]. Using the correspondence between the phases of the two systems, one can import, at least qualitatively, the known phase diagram of black membranes on ${\mathcal{M}}^{D-2} \times {\mathbb{T}}^2$ onto the phase diagram of rotating black objects in ${\mathcal{M}}^D$.
To this end one needs first to establish the map between quantities on each side of this correspondence. For unit mass, the quantities $\ell$ and $j$ measure the (linear) size of the horizon along the torus or rotation plane, respectively. Then $a_H(\ell)$ for KK black holes on ${\mathcal{M}}^{n+2} \times {\mathbb{T}}^2$ is analogous (up to constants) to $a_H(j)$ for rotating black holes in ${\mathcal{M}}^{n+4}$. More precisely, although the normalizations of the magnitudes on the two sides are different, the functional dependence of $a_H$ on $\ell$ or $j$ must be parametrically the same in both functions, at least in the regime where the analogy is precise. As a check on this, note that the function $a_H(\ell)$ for the uniform black membrane exhibits exactly the same functional form as $a_H(j)$ for the MP black hole in the ultra-spinning limit. Similarly, $a_H(\ell)$ for the localized black string shows the same functional form as $a_H(j)$ for the black ring in the large-$j$ limit. The most important application of the analogy, though, is to the non-uniform membrane phases, which provide information about the phases of pinched rotating black holes and how they connect to MP black holes and black rings.

Phase diagram of neutral rotating black holes on ${\mathcal{M}}^D$ \[obesec:conj\]
-----------------------------------------------------------------------------------

We present here the main points of the proposed phase diagram [@obeEmparan:2007wm] of neutral rotating black holes (with one angular momentum) in asymptotically flat space that follows from the analogy described above. To this end, we recall that the phases of KK black holes on a two-torus were discussed in Sec. \[obesec:torp\] and depicted in the representative phase diagram Fig. \[obefig:KKphases7\].
[.2cm [**Main sequence.**]{}]{} The analogy developed above suggests that the phase diagram of rotating black holes in the range $j>j_{\rm mem}$, where MP black holes behave like black membranes, is qualitatively the same as that for KK black holes on the torus (see Fig. \[obefig:KKphases7\]), with a pinched (lumpy) rotating black hole connecting the MP black hole with the black ring. This phase is depicted in Fig. \[obefig:sevendphases\] as a gray line emerging tangentially from the MP black hole curve at a critical value $j_{\rm GL}$ that is currently unknown. Arguments were given in [@obeEmparan:2003sy] to the effect that $j_{\rm GL}{\mathrel{\raisebox{-.6ex}{$\stackrel{\textstyle>}{\sim}$}}}j_{\rm mem}$, consistent with the analogy. As one moves along the gray line in Fig. \[obefig:sevendphases\] in the direction away from the MP curve, the pinch at the rotation axis of these black holes grows deeper. Eventually, as depicted in Fig. \[obefig:membranes\], the horizon pinches down to zero thickness at the axis, and the solutions then connect to the black ring phase. Note also that we may have the ‘swallowtail’ structure of first-order phase transitions (as depicted in Fig. \[obefig:sevendphases\]), or instead that of second-order phase transitions (see Fig. 4 of [@obeEmparan:2007wm]). It may not be unreasonable to expect that a swallowtail appears at least for the lowest dimensions $D=6,7,\dots$, since this is in fact the same type of phase structure that appears for $D=5$. Beyond this main sequence, Ref. [@obeEmparan:2007wm] presents arguments for further completion of the phase diagram, which is summarized in Fig. \[obefig:hidphases\]. The most important features are as follows.

![Proposal for the phase diagram of thermal equilibrium phases of rotating black holes in $D\geq 6$ with one angular momentum.
The solid lines and figures have significant arguments in their favor, while the dashed lines and figures might not exist and admit conceivable, but more complicated, alternatives. Some features have been drawn arbitrarily: at any given bifurcation and in any dimension one may have either smooth connections or swallowtails with cusps. If thermal equilibrium is not imposed, the whole semi-infinite strip $0<a_H <a_H(j=0)$, $0\leq j<\infty$ is covered, and multi-rings are possible. Reprinted from Ref. [@obeEmparan:2007wm].[]{data-label="obefig:hidphases"}](hidphasesC.eps){width="11.8cm"}

[.2cm [**Infinite sequence of lumpy (pinched) black holes.**]{}]{} Another observation based on the membrane analogy is that the phase diagram of rotating black holes should also exhibit an infinite sequence [@obeEmparan:2003sy; @obeEmparan:2007wm] of lumpy (pinched) black holes emerging from the curve of MP black holes at increasing values of $j$. These are the analogues of the $k$-copied phases in the phase diagram of KK black holes that appear at increasing $\ell$. In this connection, note that for the GL zero-modes of MP black holes one must choose axially symmetric combinations, implying a change of basis from plane waves $\exp(i k_{\rm GL} z)$ to Bessel functions; axially symmetric modes have a profile $J_0(k_{\rm GL}a\sin\theta)$ [@obeEmparan:2003sy]. The main point here is that the wavelength $\lambda_{\rm GL}$ of the GL zero-mode remains the same in the two analogue systems, to first approximation, even if the profiles are not the same. One is thus led to the existence of an infinite sequence of pinched black hole phases emanating from the MP curve at increasing values $j_{\rm GL}^{(k)}$.
[.2cm [**Black Saturn.**]{}]{} If we focus on the first copy ($k=2$), on the KK black hole side this corresponds to a non-uniform membrane on ${\mathbb{T}}^2$ with a GL zero-mode perturbation possessing two minima, which grows to merge with a configuration of two identical black strings localized on the torus. For the MP black hole, the analogue is the development of a circular pinch, which then grows deeper until the merger with a black Saturn configuration in thermal equilibrium. Thermal equilibrium, i.e. equal temperature and angular velocity on all disconnected components of the event horizon, is in fact naturally expected for solutions that merge with pinched black holes, since the temperature and angular velocity of the latter should be uniform on the horizon all the way down to the merger, and we do not expect them to jump discontinuously there. These appear to be the natural higher-dimensional generalization of the five-dimensional black Saturn [@obeElvang:2007rd], and one may invoke the same arguments as those in Ref. [@obeElvang:2007hg]. When the size of the central black hole is small compared to the radius of the black ring, the interaction between the two objects is small and, to a first approximation, one can simply combine them linearly. It follows that, under the assumption of equal temperatures and angular velocities for the two black objects in the black Saturn, as $j$ is increased a larger fraction of the total mass and angular momentum is carried by the black ring, and less by the central black hole. This black Saturn curve must therefore asymptote to the curve of a single black ring.

[.2cm [**Pancaked and pinched black Saturns.**]{}]{} The existence of these phases and their appearance in the phase diagram of Fig. \[obefig:hidphases\] (in which they appear dashed) is based on comparatively less compelling arguments.
Nevertheless, these conjectural phases provide a simple and natural way of completing the curves in the phase diagram that is consistent with the available information. We refer to Ref. [@obeEmparan:2007wm] for further details on these phases. It should also be noted that in the diagram of Fig. \[obefig:hidphases\] only the thermal equilibrium phases among the possible multi-black hole phases are represented. The existence of multi-black rings, with or without a central black hole, in thermal equilibrium is not expected. In general one does expect the existence of multi-black ring configurations, possibly with a central black hole, in which the different black objects have different surface gravities and different angular velocities. These configurations can be seen as the analogue of the multi-localized string configurations on the torus that can be obtained from multi-black hole configurations on the circle [@obeDias:2007hg] discussed in Sec. \[obesec:mubh\] by adding a uniform direction. Outlook \[obesec:outl\] ======================= We conclude by briefly presenting a number of important issues and questions for future research. See also the reviews [@obeKol:2004ww; @obeEmparan:2006mm; @obeHarmark:2007md; @obeEmparan:2008eg] for further discussion and other open problems. [.2cm [**Stability.**]{}]{} In both classes discussed in this lecture it would be interesting to further study the stability of the various solutions. For KK black holes, this includes the classical stability of the non-uniform black string and the localized black hole. For the rotating black hole case, we note that black rings at large $j$ in any $D\geq 5$ are expected to suffer from a GL instability that creates ripples along the $S^1$ and presumably fragments the black ring into black holes flying apart [@obeEmparan:2001wn; @obeHovdebo:2006jy; @obeElvang:2006dd]. This instability may switch off at $j\sim O(1)$. 
In analogy to the five-dimensional case [@obeArcioni:2004ww; @obeElvang:2006dd], one could also study turning points of $j$. If these are absent, pinched black holes would presumably be stable to radial perturbations. [.2cm [**Other compactified solutions.**]{}]{} It would also be interesting to examine the existence of other classes of solutions with a compactified direction. For example, in Ref. [@obeMaeda:2006hd] a supersymmetric rotating black hole in a compactified space-time was found, and charged black holes in compactified space-times are considered in Ref. [@obeKarlovini:2005cn]. In another direction, new solutions with Kaluza-Klein boundary conditions for anti-de Sitter spacetimes have recently been constructed in Refs. [@obeCopsey:2006br; @obeMann:2006yi]. Finally, rotating non-uniform solutions in KK space have been constructed numerically in Ref. [@obeKleihaus:2007dg] (see also Ref. [@obeKleihaus:2007kc]). [.2cm [**Numerical solutions.**]{}]{} For both classes of higher-dimensional black holes presented in this lecture, it would be interesting to attempt to further apply numerical techniques in order to construct the new solutions. For example, for multi-black hole configurations on the cylinder this could confirm whether there are multi-black hole solutions for which the temperatures converge when approaching the merger points (as discussed in Ref. [@obeDias:2007hg]). Furthermore, one could try to confirm the existence of the conjectured lumpy black holes (see Sec. \[obesec:copd\]). Similarly, for rotating black holes, numerical construction of the entire black ring phase and of the pinched black hole phase would be very interesting. [.2cm [**Effective field theory techniques.**]{}]{} As mentioned in Sec. \[obesec:solm\], an alternative to the matched expansion is the use of classical effective field theory [@obeChu:2006ce; @obeKol:2007rx] to obtain the corrected thermodynamics of new solutions in a perturbative expansion.
It would be interesting to use this method to go beyond the first order for the solutions discussed in this lecture and apply it to other extended brane-like black holes with or without rotation. [.2cm [**Other black rings.**]{}]{} The method used to construct thin black rings in asymptotically flat space can also be used to study thin black rings in external gravitational potentials, yielding black Saturns or black rings in AdS or dS spacetime.[^7] Similarly, one could study black rings with charges [@obeElvang:2003yy; @obeElvang:2004rt] and with dipoles [@obeEmparan:2004wy]. In this connection we note that the existence of small supersymmetric black rings in $D\geq 5$ was argued in [@obeDabholkar:2006za]. [.2cm [**More rotation parameters.**]{}]{} One may try to extend the analysis to black rings with horizon $S^1\times S^{n+1}$ with rotation not only along $S^1$ but also in the $S^{n+1}$. Rotation in the $S^{n+1}$ will introduce particularly rich dynamics for $n\geq 3$, since it is then possible to have ultra-spinning regimes for this rotation too, leading to pinches of the $S^{n+1}$ and further connections to phases with horizon $S^1\times S^1\times S^{n}$, and so forth. [.2cm [**Blackfolds.**]{}]{} Following the construction in Sec. \[obesec:robh\], one can envision many generalizations. In this way one could study the possible existence of more general blackfolds, obtained by taking a black $p$-brane with horizon topology ${\mathbb{R}}^{p} \times S^{q}$ and bending ${\mathbb{R}}^{p}$ to form some compact manifold. One must then find out under which conditions a curved black $p$-brane can satisfy the equilibrium equation. This method is constructive and uses dynamical information to determine possible horizon geometries. In contrast, conventional approaches based on topological considerations are non-constructive and have only found very weak restrictions in six or more dimensions [@obeHelfgott:2005jn; @obeGalloway:2005mf].
[.2cm [**Plasma balls and rings.**]{}]{} There is also a more indirect approach to higher-dimensional black rings in AdS, using the AdS/CFT correspondence. In Ref. [@obeLahiri:2007ae] stationary, axially symmetric spinning configurations of plasma in ${\cal N}=4$ SYM theory compactified to $d=3$ on a Scherk-Schwarz circle were studied. On the gravity side, these correspond to large rotating black holes and black rings in the dual Scherk-Schwarz compactified AdS$_5$ space. Interestingly, the phase diagram of these rotating fluid configurations, even if dual to black holes larger than the AdS radius, reproduces many of the qualitative features of the MP black holes and black rings in five-dimensional flat spacetime. Higher-dimensional generalizations of this setup give predictions for the phases of black holes in Scherk-Schwarz compactified AdS$_D$ with $D>5$. In this way, evidence was found [@obeLahiri:2007ae] for rotating black rings and ‘pinched’ black holes in AdS$_6$, which can be considered the AdS analogues of the phases conjectured in [@obeEmparan:2003sy; @obeEmparan:2007wm], discussed in Secs. \[obesec:robh\] and \[obesec:phas\]. [.2cm [**Microscopic entropy for three-charge black holes.**]{}]{} One could extend the work of Refs. [@obeHarmark:2006df; @obeHarmark:2007uy] by applying the boost/U-duality map of [@obeHarmark:2004ws] to the multi-black hole configurations of Ref. [@obeDias:2007hg]. In particular, this would make it possible to compute the first correction to the finite entropy of the resulting three-charge multi-black hole configurations on a circle. It would then be interesting to try to derive these expressions from a microscopic calculation following the single three-charge black hole case considered in Refs. [@obeHarmark:2006df; @obeChowdhury:2006qn].
[.2cm [**Braneworld black holes.**]{}]{} The higher-dimensional black holes and branes described in this lecture also appear naturally in the discussion of the braneworld model of large extra dimensions [@obeArkani-Hamed:1998rs; @obeAntoniadis:1998ig]. In other braneworld models such as the one proposed by Randall and Sundrum [@obeRandall:1999ee; @obeRandall:1999vf] the geometry is warped in the extra direction, and the discovery of black hole solutions in this context has proven more difficult. It would be interesting to study the higher-dimensional black hole solutions considered in this lecture in these contexts. Acknowledgements {#acknowledgements .unnumbered} ================ I would like to thank the organizers, especially Lefteris Papantonopoulos, of the Fourth Aegean Summer School on Black Holes (Sept. 17-22, 2007, Mytiline, Island of Lesvos, Greece) for a stimulating and interesting school. I also thank Oscar Dias, Roberto Emparan, Troels Harmark, Rob Myers, Vasilis Niarchos and Maria Jose Rodriguez for collaboration on the work presented here. This work is partially supported by the European Community’s Human Potential Programme under contract MRTN-CT-2004-005104 ‘Constituents, fundamental forces and symmetries of the universe’. [100]{} B. Kol, “The phase transition between caged black holes and black strings: [A]{} review,” [*Phys. Rept.*]{} [**422**]{} (2006) 119–165, [[hep-th/0411240]{}](http://arXiv.org/abs/hep-th/0411240). R. Emparan and H. S. Reall, “Black rings,” [*Class. Quant. Grav.*]{} [**23**]{} (2006) R169, [[hep-th/0608012]{}](http://arXiv.org/abs/hep-th/0608012). T. Harmark, V. Niarchos, and N. A. Obers, “Instabilities of black strings and branes,” [*Class. Quant. Grav.*]{} [**24**]{} (2007) R1–R90, [[hep-th/0701022]{}](http://arXiv.org/abs/hep-th/0701022). R. Emparan and H. S. Reall, “[Black Holes in Higher Dimensions]{},” [[arXiv:0801.3471 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0801.3471 [hep-th]). A. Strominger and C.
Vafa, “Microscopic origin of the [Bekenstein-Hawking]{} entropy,” [*Phys. Lett.*]{} [**B379**]{} (1996) 99–104, [[hep-th/9601029]{}](http://arXiv.org/abs/hep-th/9601029). S. D. Mathur, “The fuzzball proposal for black holes: [A]{}n elementary review,” [*Fortsch. Phys.*]{} [**53**]{} (2005) 793–827, [[hep-th/0502050]{}](http://arXiv.org/abs/hep-th/0502050). S. D. Mathur, “The quantum structure of black holes,” [*Class. Quant. Grav.*]{} [**23**]{} (2006) R115, [[hep-th/0510180]{}](http://arXiv.org/abs/hep-th/0510180). J. M. Maldacena, “The large [$N$]{} limit of superconformal field theories and supergravity,” [*Adv. Theor. Math. Phys.*]{} [**2**]{} (1998) 231–252, [[hep-th/9711200]{}](http://arXiv.org/abs/hep-th/9711200). O. Aharony, S. S. Gubser, J. Maldacena, H. Ooguri, and Y. Oz, “Large [$N$]{} field theories, string theory and gravity,” [*Phys. Rept.*]{} [**323**]{} (2000) 183, [[hep-th/9905111]{}](http://arXiv.org/abs/hep-th/9905111). O. Aharony, J. Marsano, S. Minwalla, and T. Wiseman, “Black hole - black string phase transitions in thermal 1+1 dimensional supersymmetric [Yang-Mills]{} theory on a circle,” [*Class. Quant. Grav.*]{} [**21**]{} (2004) 5169–5192, [[hep-th/0406210]{}](http://arXiv.org/abs/hep-th/0406210). T. Harmark and N. A. Obers, “New phases of near-extremal branes on a circle,” [*JHEP*]{} [**09**]{} (2004) 022, [[hep-th/0407094]{}](http://arXiv.org/abs/hep-th/0407094). N. Arkani-Hamed, S. Dimopoulos, and G. R. Dvali, “The hierarchy problem and new dimensions at a millimeter,” [*Phys. Lett.*]{} [**B429**]{} (1998) 263–272, [[hep-ph/9803315]{}](http://arXiv.org/abs/hep-ph/9803315). I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos, and G. R. Dvali, “New dimensions at a millimeter to a [F]{}ermi and superstrings at a [TeV]{},” [ *Phys. Lett.*]{} [**B436**]{} (1998) 257–263, [[hep-ph/9804398]{}](http://arXiv.org/abs/hep-ph/9804398). P. Kanti, “Black holes in theories with large extra dimensions: [A]{} review,” [*Int. J. Mod. 
Phys.*]{} [**A19**]{} (2004) 4899–4951, [[hep-ph/0402168]{}](http://arXiv.org/abs/hep-ph/0402168). R. C. Myers and M. J. Perry, “Black holes in higher dimensional space-times,” [*Ann. Phys.*]{} [**172**]{} (1986) 304. R. Emparan and H. S. Reall, “A rotating black ring in five dimensions,” [ *Phys. Rev. Lett.*]{} [**88**]{} (2002) 101101, [[hep-th/0110260]{}](http://arXiv.org/abs/hep-th/0110260). H. Elvang and P. Figueras, “Black saturn,” [*JHEP*]{} [**05**]{} (2007) 050, [[hep-th/0701035]{}](http://arXiv.org/abs/hep-th/0701035). H. Elvang, R. Emparan, and P. Figueras, “Phases of five-dimensional black holes,” [*JHEP*]{} [**05**]{} (2007) 056, [[hep-th/0702111]{}](http://arXiv.org/abs/hep-th/0702111). H. Iguchi and T. Mishima, “Black di-ring and infinite nonuniqueness,” [ *Phys. Rev.*]{} [**D75**]{} (2007) 064018, [[hep-th/0701043]{}](http://arXiv.org/abs/hep-th/0701043). J. Evslin and C. Krishnan, “The black di-ring: An inverse scattering construction,” [[arXiv:0706.1231 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0706.1231 [hep-th]). A. A. Pomeransky and R. A. Sen’kov, “Black ring with two angular momenta,” [[hep-th/0612005]{}](http://arXiv.org/abs/hep-th/0612005). K. Izumi, “Orthogonal black di-ring solution,” [[arXiv:0712.0902 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0712.0902 [hep-th]). H. Elvang and M. J. Rodriguez, “Bicycling black rings,” [[arXiv:0712.2425 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0712.2425 [hep-th]). R. Emparan and H. S. Reall, “Generalized [Weyl]{} solutions,” [*Phys. Rev.*]{} [**D65**]{} (2002) 084025, [[hep-th/0110258]{}](http://arXiv.org/abs/hep-th/0110258). T. Harmark, “Stationary and axisymmetric solutions of higher-dimensional general relativity,” [*Phys. Rev.*]{} [**D70**]{} (2004) 124002, [[hep-th/0408141]{}](http://arXiv.org/abs/hep-th/0408141). V. A. Belinsky and V. E. Zakharov, “Integration of the [Einstein]{} equations by the inverse scattering problem technique and the calculation of the exact soliton solutions,” [*Sov. Phys. 
JETP*]{} [**48**]{} (1978) 985–994. V. A. Belinsky and V. E. Zakharov, “Stationary gravitational solitons with axial symmetry,” [*Sov. Phys. JETP*]{} [**50**]{} (1979) 1. V. Belinski and E. Verdaguer, “Gravitational solitons,” Cambridge, UK: Univ. Pr. (2001) 258 p. A. A. Pomeransky, “Complete integrability of higher-dimensional [E]{}instein equations with additional symmetry, and rotating black holes,” [*Phys. Rev.*]{} [**D73**]{} (2006) 044004, [[hep-th/0507250]{}](http://arXiv.org/abs/hep-th/0507250). R. Emparan, T. Harmark, V. Niarchos, N. A. Obers, and M. J. Rodriguez, “The phase structure of higher-dimensional black rings and black holes,” [ *JHEP*]{} [**10**]{} (2007) 110, [[arXiv:0708.2181 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0708.2181 [hep-th]). T. Harmark, “Small black holes on cylinders,” [*Phys. Rev.*]{} [**D69**]{} (2004) 104015, [[hep-th/0310259]{}](http://arXiv.org/abs/hep-th/0310259). D. Gorbonos and B. Kol, “A dialogue of multipoles: Matched asymptotic expansion for caged black holes,” [*JHEP*]{} [**06**]{} (2004) 053, [[hep-th/0406002]{}](http://arXiv.org/abs/hep-th/0406002). D. Karasik, C. Sahabandu, P. Suranyi, and L. C. R. Wijewardhana, “Analytic approximation to 5 dimensional black holes with one compact dimension,” [ *Phys. Rev.*]{} [**D71**]{} (2005) 024024, [[hep-th/0410078]{}](http://arXiv.org/abs/hep-th/0410078). D. Gorbonos and B. Kol, “Matched asymptotic expansion for caged black holes: Regularization of the post-[N]{}ewtonian order,” [*Class. Quant. Grav.*]{} [**22**]{} (2005) 3935–3960, [[hep-th/0505009]{}](http://arXiv.org/abs/hep-th/0505009). O. J. C. Dias, T. Harmark, R. C. Myers, and N. A. Obers, “Multi-black hole configurations on the cylinder,” [*Phys. Rev.*]{} [**D76**]{} (2007) 104025, [[arXiv:0706.3645 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0706.3645 [hep-th]). R. Emparan and R. C.
Myers, “Instability of ultra-spinning black holes,” [ *JHEP*]{} [**09**]{} (2003) 025, [[hep-th/0308056]{}](http://arXiv.org/abs/hep-th/0308056). S. S. Gubser, “On non-uniform black branes,” [*Class. Quant. Grav.*]{} [ **19**]{} (2002) 4825–4844, [[hep-th/0110193]{}](http://arXiv.org/abs/hep-th/0110193). T. Wiseman, “Static axisymmetric vacuum solutions and non-uniform black strings,” [*Class. Quant. Grav.*]{} [**20**]{} (2003) 1137–1176, [[hep-th/0209051]{}](http://arXiv.org/abs/hep-th/0209051). E. Sorkin, “A critical dimension in the black-string phase transition,” [ *Phys. Rev. Lett.*]{} [**93**]{} (2004) 031601, [[hep-th/0402216]{}](http://arXiv.org/abs/hep-th/0402216). B. Kleihaus, J. Kunz, and E. Radu, “New nonuniform black string solutions,” [*JHEP*]{} [**06**]{} (2006) 016, [[hep-th/0603119]{}](http://arXiv.org/abs/hep-th/0603119). E. Sorkin, “Non-uniform black strings in various dimensions,” [*Phys. Rev.*]{} [**D74**]{} (2006) 104027, [[gr-qc/0608115]{}](http://arXiv.org/abs/gr-qc/0608115). B. Kleihaus and J. Kunz, “Interior of nonuniform black strings,” [[arXiv:0710.1726 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0710.1726 [hep-th]). T. Harmark and N. A. Obers, “Black holes on cylinders,” [*JHEP*]{} [**05**]{} (2002) 032, [[hep-th/0204047]{}](http://arXiv.org/abs/hep-th/0204047). Y.-Z. Chu, W. D. Goldberger, and I. Z. Rothstein, “Asymptotics of [$d$]{}-dimensional [Kaluza-Klein]{} black holes: Beyond the [N]{}ewtonian approximation,” [*JHEP*]{} [**03**]{} (2006) 013, [[hep-th/0602016]{}](http://arXiv.org/abs/hep-th/0602016). B. Kol and M. Smolkin, “Classical effective field theory and caged black holes,” [[arXiv:0712.2822 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0712.2822 [hep-th]). E. Sorkin, B. Kol, and T. Piran, “Caged black holes: Black holes in compactified spacetimes. [II]{}: 5d numerical implementation,” [*Phys. Rev.*]{} [**D69**]{} (2004) 064032, [[hep-th/0310096]{}](http://arXiv.org/abs/hep-th/0310096). H. Kudoh and T. 
Wiseman, “Properties of [Kaluza-Klein]{} black holes,” [ *Prog. Theor. Phys.*]{} [**111**]{} (2004) 475–507, [[hep-th/0310104]{}](http://arXiv.org/abs/hep-th/0310104). H. Kudoh and T. Wiseman, “Connecting black holes and black strings,” [ *Phys. Rev. Lett.*]{} [**94**]{} (2005) 161102, [[hep-th/0409111]{}](http://arXiv.org/abs/hep-th/0409111). H. Elvang, T. Harmark, and N. A. Obers, “Sequences of bubbles and holes: New phases of [Kaluza-Klein]{} black holes,” [*JHEP*]{} [**01**]{} (2005) 003, [[hep-th/0407050]{}](http://arXiv.org/abs/hep-th/0407050). T. Harmark and N. A. Obers, “New phase diagram for black holes and strings on cylinders,” [*Class. Quantum Grav.*]{} [**21**]{} (2004) 1709–1724, [[hep-th/0309116]{}](http://arXiv.org/abs/hep-th/0309116). B. Kol, E. Sorkin, and T. Piran, “Caged black holes: Black holes in compactified spacetimes. [I]{}: Theory,” [*Phys. Rev.*]{} [**D69**]{} (2004) 064031, [[hep-th/0309190]{}](http://arXiv.org/abs/hep-th/0309190). T. Harmark and N. A. Obers, “Phase structure of black holes and strings on cylinders,” [*Nucl. Phys.*]{} [**B684**]{} (2004) 183–208, [[hep-th/0309230]{}](http://arXiv.org/abs/hep-th/0309230). R. Gregory and R. Laflamme, “Black strings and [$p$]{}-branes are unstable,” [*Phys. Rev. Lett.*]{} [**70**]{} (1993) 2837–2840, [[hep-th/9301052]{}](http://arXiv.org/abs/hep-th/9301052). R. Gregory and R. Laflamme, “The instability of charged black strings and [$p$]{}-branes,” [*Nucl. Phys.*]{} [**B428**]{} (1994) 399–434, [[hep-th/9404071]{}](http://arXiv.org/abs/hep-th/9404071). B. Kol, “Topology change in general relativity and the black-hole black-string transition,” [[hep-th/0206220]{}](http://arXiv.org/abs/hep-th/0206220). T. Wiseman, “From black strings to black holes,” [*Class. Quant. Grav.*]{} [**20**]{} (2003) 1177–1186, [[hep-th/0211028]{}](http://arXiv.org/abs/hep-th/0211028). B. Kol and T. Wiseman, “Evidence that highly non-uniform black strings have a conical waist,” [*Class. Quant. 
Grav.*]{} [**20**]{} (2003) 3493–3504, [[hep-th/0304070]{}](http://arXiv.org/abs/hep-th/0304070). B. Kol and E. Sorkin, “On black-brane instability in an arbitrary dimension,” [*Class. Quant. Grav.*]{} [**21**]{} (2004) 4793–4804, [[gr-qc/0407058]{}](http://arXiv.org/abs/gr-qc/0407058). B. Kol and E. Sorkin, “[LG]{} ([Landau-Ginzburg]{}) in [GL]{} ([Gregory-Laflamme]{}),” [*Class. Quant. Grav.*]{} [**23**]{} (2006) 4563–4592, [[hep-th/0604015]{}](http://arXiv.org/abs/hep-th/0604015). W. Israel, “Event horizons in static vacuum space-times,” [*Phys. Rev.*]{} [**164**]{} (1967) 1776–1779. B. Carter, “Axisymmetric black hole has only two degrees of freedom,” [ *Phys. Rev. Lett.*]{} [**26**]{} (1971) 331–333. S. W. Hawking, “Black holes in [General Relativity]{},” [*Commun. Math. Phys.*]{} [**25**]{} (1972) 152–166. D. C. Robinson, “Uniqueness of the [Kerr]{} black hole,” [*Phys. Rev. Lett.*]{} [**34**]{} (1975) 905–906. T. Regge, “Stability of a Schwarzschild singularity,” [*Phys. Rev.*]{} [ **108**]{} (1957) 1063–1069. F. J. Zerilli, “Gravitational field of a particle falling in a [S]{}chwarzschild geometry analyzed in tensor harmonics,” [*Phys. Rev.*]{} [**D2**]{} (1970) 2141–2160. S. A. Teukolsky, “Perturbations of a rotating black hole. 1. [F]{}undamental equations for gravitational electromagnetic, and neutrino field perturbations,” [*Astrophys. J.*]{} [**185**]{} (1973) 635–647. H. Kodama, “Perturbations and stability of higher-dimensional black holes,” [[arXiv:0712.2703 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0712.2703 [hep-th]). F. R. Tangherlini, “Schwarzschild field in [*n*]{} dimensions and the dimensionality of space problem,” [*Nuovo Cimento*]{} [**27**]{} (1963) 636. G. W. Gibbons, D. Ida, and T. Shiromizu, “Uniqueness and non-uniqueness of static vacuum black holes in higher dimensions,” [*Prog. Theor. Phys. Suppl.*]{} [**148**]{} (2003) 284–290, [[gr-qc/0203004]{}](http://arXiv.org/abs/gr-qc/0203004). G. W. Gibbons, D. Ida, and T. 
Shiromizu, “Uniqueness and non-uniqueness of static black holes in higher dimensions,” [*Phys. Rev. Lett.*]{} [**89**]{} (2002) 041101, [[hep-th/0206049]{}](http://arXiv.org/abs/hep-th/0206049). H. Kodama and A. Ishibashi, “A master equation for gravitational perturbations of maximally symmetric black holes in higher dimensions,” [*Prog. Theor. Phys.*]{} [**110**]{} (2003) 701–722, [[hep-th/0305147]{}](http://arXiv.org/abs/hep-th/0305147). A. Ishibashi and H. Kodama, “Stability of higher-dimensional [S]{}chwarzschild black holes,” [*Prog. Theor. Phys.*]{} [**110**]{} (2003) 901–919, [[hep-th/0305185]{}](http://arXiv.org/abs/hep-th/0305185). H. Kodama and A. Ishibashi, “Master equations for perturbations of generalized static black holes with charge in higher dimensions,” [*Prog. Theor. Phys.*]{} [**111**]{} (2004) 29–73, [[hep-th/0308128]{}](http://arXiv.org/abs/hep-th/0308128). S. Hollands, A. Ishibashi, and R. M. Wald, “[A higher dimensional stationary rotating black hole must be axisymmetric]{},” [*Commun. Math. Phys.*]{} [ **271**]{} (2007) 699–722, [[gr-qc/0605106]{}](http://arXiv.org/abs/gr-qc/0605106). Y. Morisawa and D. Ida, “A boundary value problem for the five-dimensional stationary rotating black holes,” [*Phys. Rev.*]{} [**D69**]{} (2004) 124005, [[gr-qc/0401100]{}](http://arXiv.org/abs/gr-qc/0401100). S. Hollands and S. Yazadjiev, “Uniqueness theorem for 5-dimensional black holes with two axial Killing fields,” [[arXiv:0707.2775 \[gr-qc\]]{}](http://arXiv.org/abs/arXiv:0707.2775 [gr-qc]). S. Giusto and A. Saxena, “[Stationary axisymmetric solutions of five dimensional gravity]{},” [*Class. Quant. Grav.*]{} [**24**]{} (2007) 4269–4294, [[arXiv:0705.4484 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0705.4484 [hep-th]). W. D. Goldberger and I. Z. Rothstein, “An effective field theory of gravity for extended objects,” [*Phys. Rev.*]{} [**D73**]{} (2006) 104029, [[hep-th/0409156]{}](http://arXiv.org/abs/hep-th/0409156). T. Harmark and N. A. 
Obers, “Phases of [Kaluza-Klein]{} black holes: [A]{} brief review,” [[hep-th/0503020]{}](http://arXiv.org/abs/hep-th/0503020). T. Harmark and N. A. Obers, “General definition of gravitational tension,” [*JHEP*]{} [**05**]{} (2004) 043, [[hep-th/0403103]{}](http://arXiv.org/abs/hep-th/0403103). R. C. Myers, “Stress tensors and [Casimir]{} energies in the [AdS/CFT]{} correspondence,” [*Phys. Rev.*]{} [**D60**]{} (1999) 046002, [[hep-th/9903203]{}](http://arXiv.org/abs/hep-th/9903203). J. H. Traschen and D. Fox, “Tension perturbations of black brane spacetimes,” [*Class. Quant. Grav.*]{} [**21**]{} (2004) 289–306, [[gr-qc/0103106]{}](http://arXiv.org/abs/gr-qc/0103106). P. K. Townsend and M. Zamaklar, “The first law of black brane mechanics,” [*Class. Quant. Grav.*]{} [**18**]{} (2001) 5269–5286, [[hep-th/0107228]{}](http://arXiv.org/abs/hep-th/0107228). D. Kastor and J. Traschen, “Stresses and strains in the first law for [Kaluza-Klein]{} black holes,” [*JHEP*]{} [**09**]{} (2006) 022, [[hep-th/0607051]{}](http://arXiv.org/abs/hep-th/0607051). J. H. Traschen, “A positivity theorem for gravitational tension in brane spacetimes,” [*Class. Quant. Grav.*]{} [**21**]{} (2004) 1343–1350, [[hep-th/0308173]{}](http://arXiv.org/abs/hep-th/0308173). T. Shiromizu, D. Ida, and S. Tomizawa, “Kinematical bound in asymptotically translationally invariant spacetimes,” [*Phys. Rev.*]{} [**D69**]{} (2004) 027503, [[gr-qc/0309061]{}](http://arXiv.org/abs/gr-qc/0309061). T. Harmark and N. A. Obers, “Black holes and black strings on cylinders,” [*Fortsch. Phys.*]{} [**51**]{} (2003) 793–798, [[hep-th/0301020]{}](http://arXiv.org/abs/hep-th/0301020). B. Kol, “The power of action: [’The’]{} derivation of the black hole negative mode,” [[hep-th/0608001]{}](http://arXiv.org/abs/hep-th/0608001). B. Kol, “Perturbations around backgrounds with one non-homogeneous dimension,” [[hep-th/0609001]{}](http://arXiv.org/abs/hep-th/0609001). V. Cardoso and O. J. C. 
Dias, “[Rayleigh-Plateau]{} and [Gregory-Laflamme]{} instabilities of black strings,” [*Phys. Rev. Lett.*]{} [**96**]{} (2006) 181601, [[hep-th/0602017]{}](http://arXiv.org/abs/hep-th/0602017). V. Cardoso and L. Gualtieri, “Equilibrium configurations of fluids and their stability in higher dimensions,” [*Class. Quant. Grav.*]{} [**23**]{} (2006) 7151–7198, [[hep-th/0610004]{}](http://arXiv.org/abs/hep-th/0610004). R. Gregory and R. Laflamme, “Hypercylindrical black holes,” [*Phys. Rev.*]{} [**D37**]{} (1988) 305. R. C. Myers, “Higher dimensional black holes in compactified space-times,” [*Phys. Rev.*]{} [**D35**]{} (1987) 455. A. R. Bogojevic and L. Perivolaropoulos, “Black holes in a periodic universe,” [*Mod. Phys. Lett.*]{} [**A6**]{} (1991) 369–376. D. Korotkin and H. Nicolai, “A periodic analog of the [Schwarzschild]{} solution,” [[gr-qc/9403029]{}](http://arXiv.org/abs/gr-qc/9403029). A. V. Frolov and V. P. Frolov, “Black holes in a compactified spacetime,” [*Phys. Rev.*]{} [**D67**]{} (2003) 124025, [[hep-th/0302085]{}](http://arXiv.org/abs/hep-th/0302085). P.-J. De Smet, “Black holes on cylinders are not algebraically special,” [ *Class. Quant. Grav.*]{} [**19**]{} (2002) 4877–4896, [[hep-th/0206106]{}](http://arXiv.org/abs/hep-th/0206106). G. T. Horowitz, “Playing with black strings,” [[hep-th/0205069]{}](http://arXiv.org/abs/hep-th/0205069). R. Emparan, “Rotating circular strings, and infinite non-uniqueness of black rings,” [*JHEP*]{} [**03**]{} (2004) 064, [[hep-th/0402149]{}](http://arXiv.org/abs/hep-th/0402149). V. Cardoso, O. J. C. Dias, and L. Gualtieri, “The return of the membrane paradigm? [B]{}lack holes and strings in the water tap,” [[arXiv:0705.2777 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0705.2777 [hep-th]). T. Harmark and P. Olesen, “On the structure of stationary and axisymmetric metrics,” [*Phys. Rev.*]{} [**D72**]{} (2005) 124017, [[hep-th/0508208]{}](http://arXiv.org/abs/hep-th/0508208). H. Elvang and R.
Emparan, “Black rings, supertubes, and a stringy resolution of black hole non-uniqueness,” [*JHEP*]{} [**11**]{} (2003) 035, [[hep-th/0310008]{}](http://arXiv.org/abs/hep-th/0310008). J. L. Hovdebo and R. C. Myers, “Black rings, boosted strings and [Gregory-Laflamme]{},” [*Phys. Rev.*]{} [**D73**]{} (2006) 084013, [[hep-th/0601079]{}](http://arXiv.org/abs/hep-th/0601079). H. Elvang, R. Emparan, and A. Virmani, “Dynamics and stability of black rings,” [*JHEP*]{} [**12**]{} (2006) 074, [[hep-th/0608076]{}](http://arXiv.org/abs/hep-th/0608076). B. Carter, “Essentials of classical brane dynamics,” [*Int. J. Theor. Phys.*]{} [**40**]{} (2001) 2099–2130, [[gr-qc/0012036]{}](http://arXiv.org/abs/gr-qc/0012036). D. Kastor, S. Ray, and J. Traschen, “The first law for boosted [Kaluza–Klein]{} black holes,” [*JHEP*]{} [**06**]{} (2007) 026, [[arXiv:0704.0729 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0704.0729 [hep-th]). G. Arcioni and E. Lozano-Tellechea, “Stability and critical phenomena of black holes and black rings,” [*Phys. Rev.*]{} [**D72**]{} (2005) 104021, [[hep-th/0412118]{}](http://arXiv.org/abs/hep-th/0412118). K.-i. Maeda, N. Ohta, and M. Tanabe, “A supersymmetric rotating black hole in a compactified spacetime,” [*Phys. Rev.*]{} [**D74**]{} (2006) 104002, [[hep-th/0607084]{}](http://arXiv.org/abs/hep-th/0607084). M. Karlovini and R. von Unge, “Charged black holes in compactified spacetimes,” [*Phys. Rev.*]{} [**D72**]{} (2005) 104013, [[gr-qc/0506073]{}](http://arXiv.org/abs/gr-qc/0506073). K. Copsey and G. T. Horowitz, “Gravity dual of gauge theory on [$S^2 \times S^1 \times \mathbb{R}$]{},” [*JHEP*]{} [**06**]{} (2006) 021, [[hep-th/0602003]{}](http://arXiv.org/abs/hep-th/0602003). R. B. Mann, E. Radu, and C. Stelea, “Black string solutions with negative cosmological constant,” [*JHEP*]{} [**09**]{} (2006) 073, [[hep-th/0604205]{}](http://arXiv.org/abs/hep-th/0604205). B. Kleihaus, J. Kunz, and E. 
Radu, “Rotating nonuniform black string solutions,” [*JHEP*]{} [**05**]{} (2007) 058, [[hep-th/0702053]{}](http://arXiv.org/abs/hep-th/0702053). B. Kleihaus, J. Kunz, and F. Navarro-Lerida, “Rotating black holes in higher dimensions,” [[arXiv:0710.2291 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0710.2291 [hep-th]). H. K. Kunduri, J. Lucietti, and H. S. Reall, “Do supersymmetric anti-de [Sitter]{} black rings exist?,” [*JHEP*]{} [**02**]{} (2007) 026, [[hep-th/0611351]{}](http://arXiv.org/abs/hep-th/0611351). H. Elvang, “A charged rotating black ring,” [[hep-th/0305247]{}](http://arXiv.org/abs/hep-th/0305247). H. Elvang, R. Emparan, D. Mateos, and H. S. Reall, “A supersymmetric black ring,” [*Phys. Rev. Lett.*]{} [**93**]{} (2004) 211302, [[hep-th/0407065]{}](http://arXiv.org/abs/hep-th/0407065). A. Dabholkar, N. Iizuka, A. Iqubal, A. Sen, and M. Shigemori, “Spinning strings as small black rings,” [*JHEP*]{} [**04**]{} (2007) 017, [[hep-th/0611166]{}](http://arXiv.org/abs/hep-th/0611166). C. Helfgott, Y. Oz, and Y. Yanay, “On the topology of black hole event horizons in higher dimensions,” [*JHEP*]{} [**02**]{} (2006) 025, [[hep-th/0509013]{}](http://arXiv.org/abs/hep-th/0509013). G. J. Galloway and R. Schoen, “A generalization of [H]{}awking’s black hole topology theorem to higher dimensions,” [*Commun. Math. Phys.*]{} [**266**]{} (2006) 571–576, [[gr-qc/0509107]{}](http://arXiv.org/abs/gr-qc/0509107). S. Lahiri and S. Minwalla, “Plasmarings as dual black rings,” [[arXiv:0705.3404 \[hep-th\]]{}](http://arXiv.org/abs/arXiv:0705.3404 [hep-th]). T. Harmark, K. R. Kristjansson, N. A. Obers, and P. B. Ronne, “Three-charge black holes on a circle,” [*JHEP*]{} [**01**]{} (2007) 023, [[hep-th/0606246]{}](http://arXiv.org/abs/hep-th/0606246). T. Harmark, K. R. Kristjansson, N. A. Obers, and P. B. Ronne, “[Entropy of three-charge black holes on a circle]{},” [*Fortsch. Phys.*]{} [**55**]{} (2007) 748–753, [[hep-th/0701070]{}](http://arXiv.org/abs/hep-th/0701070). B. D. 
Chowdhury, S. Giusto, and S. D. Mathur, “A microscopic model for the black hole - black string phase transition,” [*Nucl. Phys.*]{} [**B762**]{} (2007) 301–343, [[hep-th/0610069]{}](http://arXiv.org/abs/hep-th/0610069). N. Arkani-Hamed, S. Dimopoulos, and G. R. Dvali, “The hierarchy problem and new dimensions at a millimeter,” [*Phys. Lett.*]{} [**B429**]{} (1998) 263–272, [[hep-ph/9803315]{}](http://arXiv.org/abs/hep-ph/9803315). L. Randall and R. Sundrum, “A large mass hierarchy from a small extra dimension,” [*Phys. Rev. Lett.*]{} [**83**]{} (1999) 3370–3373, [[hep-ph/9905221]{}](http://arXiv.org/abs/hep-ph/9905221). L. Randall and R. Sundrum, “An alternative to compactification,” [*Phys. Rev. Lett.*]{} [**83**]{} (1999) 4690–4693, [[hep-th/9906064]{}](http://arXiv.org/abs/hep-th/9906064). [^1]: See [@obeHollands:2006rj] for recent progress in this direction. [^2]: See [@obeMorisawa:2004tc; @obeHollands:2007aj] for work on how to determine uniquely the black hole solutions with two symmetry axes. [^3]: Various methods and different gauges have been employed to derive the differential equations for the GL mode. See Ref. [@obeKol:2006ga] for a nice summary of these, including a new derivation (see also [@obeKol:2006ux]). [^4]: Here we use the coordinate $R$ which is part of the two-dimensional coordinate system $(R,v)$ introduced in Ref. [@obeHarmark:2002tr] that interpolates between cylindrical coordinates $(r,z)$ and spherical coordinates $(\rho,\theta)$. In terms of $F(r,z)$ in we have $R (r,z) \propto F (r,z)^{-1/(d-3)}$. Note that Refs. [@obeHarmark:2002tr; @obeHarmark:2003yz; @obeDias:2007hg] set $L= 2\pi$, which we choose not to do here for pedagogical clarity. [^5]: This method was also used in Ref. [@obeHarmark:2003yz; @obeDias:2007hg] for small black holes and multi-black holes on the cylinder. [^6]: In five-dimensions ($n=1$) there [*are*]{} corrections to this order. Their origin is discussed in App. A of [@obeEmparan:2007wm]. 
[^7]: In [@obeKunduri:2006uh] the existence of supersymmetric black rings in AdS is considered.
--- abstract: | We investigate in-hand regrasping by pushing an object against an external constraint and allowing sliding at the fingertips. Each fingertip is modeled as attached to a multidimensional spring mounted to a position-controlled anchor. Spring compliance maps contact forces to spring compressions, ensuring the fingers remain in contact, and sliding “compliance” governs the relationship between sliding motions and tangential contact forces. A spring-sliding compliant regrasp is achieved by controlling the finger anchor motions. We derive the fingertip sliding mechanics for multifingered sliding regrasps and analyze robust regrasping conditions in the presence of finger contact wrench uncertainties. The results are verified in simulation and experiment with a two-fingered sliding regrasp designed to maximize robustness of the operation. author: - 'Jian Shi and Kevin M. Lynch [^1] [^2] [^3]' bibliography: - 'InhandSSCBibtex.bib' title: 'In-hand Sliding Regrasp with Spring-Sliding Compliance' --- Introduction {#sec:intro} ============ In-hand manipulation, and specifically regrasping an object within the hand, offers the promise of increased manipulator dexterity [@Shi2017; @chavan-dafle2014]. Regrasp can be achieved purely by forces applied by the fingers themselves, or it can be achieved by taking advantage of external forces on the object. As one example, in our previous work regrasp is achieved by accelerating the object such that the inertial load can no longer be resisted by friction with the fingers, causing sliding of the object [@Shi2017]. Short bursts of such motion can be used to achieve controllable dynamic in-hand sliding regrasps. In this paper, we focus on quasistatic sliding regrasps taking advantage of contacts between the object and a rigid environment. An example is shown in Figure \[fig:chopsticks\]. After picking up a pair of chopsticks, often the ends of the chopsticks are misaligned, making the chopsticks difficult to use. 
One strategy is to push the chopsticks against a constraint, bringing the ends into alignment. During this operation, one (or both) of the chopsticks slides within the grasp. ![The extended (top) chopstick is pushed against a constraint, bringing it into alignment with the other chopstick by a sliding regrasp.[]{data-label="fig:chopsticks"}](chopsticks.pdf){width="\colwidth"} We model each finger as a frictional point contact connected by a three-dimensional linear spring to an anchor point whose motion is controlled in three linear directions. Given the stiffness matrix governing the multidimensional spring, by position-controlling the anchor we can control the force applied to the object and initiate sliding when the contact force reaches the boundary of its friction cone. External contacts provide forces that maintain object force balance during the quasistatic sliding regrasp. Similar to spring compliance that governs the relationship between contact forces and displacements, frictional sliding is a kind of nonlinear damping “compliance” that governs the relationship between tangential frictional forces and tangential sliding velocities. Sliding compliance is a passive dissipative mechanical effect, requiring no active feedback control. Spring-sliding compliance models are simple and compact (e.g., no finite-element elastic models) but can approximate many real-world contact interactions. Features of this model of contact interaction include: - Spring compliance ensures that fingers remain in contact while sliding over general surfaces. Spring compliance may be mechanically programmable and passive, ensuring stability [@Hanafusa1977; @Howard1996]. - With spring compliance, contact forces are determined by finger compressions, so contact force control can be achieved by controlling finger anchor motions and sensing the compression. - Sliding compliance bounds the possible tangential contact forces and allows sliding for in-hand regrasp. 
Figure \[fig:CCexample\] shows an example of an in-hand sliding regrasp of a trapezoid. When the anchors move down, at first the fingertips remain stationary, the finger springs compress, and the contact forces move toward the boundaries of the friction cone. Once the contact forces reach the friction cone boundaries, the fingertips begin to slide and the springs continue to compress. The grasped object is in quasistatic wrench balance if the sum of the gravitational wrench and contact wrench with the table balances the sum of the finger contact wrenches. For a given object and rigid environment, we define a grasp configuration as the configuration of the object, the configuration of the fingertips relative to the object, and the configuration of the finger anchors. The goal of this work is to design a quasistatically consistent (force-balanced at all times) set of finger anchor motions and the object motion relative to the rigid environment such that the fingertips achieve a desired new configuration relative to the object. In the general formulation, the object could slide or roll at its contacts with the environment during the sliding regrasp, but in this paper we focus particularly on the case where the object remains stationary against the environment. This allows us to design sliding regrasps that are robust to force disturbances, in a sense to be defined in Section \[sec:robustness\]. After reviewing related work and the problem description, this paper has the following structure: - *Finger spring compliance model* (Section \[sec:finger\_compliance\_model\]): This section describes finger designs and controls that fit the spring-compliance model. - *Finger contact mechanics* (Section \[sec:contact\_mechanics\]): In this section we derive the mechanics of spring-sliding contact. In particular, given a grasp configuration and the object’s motion, this section derives the relationship between anchor velocities and fingertip velocities. 
- *Object mechanics* (Section \[sec:obj\_mechanics\]): This section describes the quasistatic wrench-balance conditions considering the object’s motion and wrenches due to the external contacts, fingertips, and gravity. - *Robustness analysis* (Section \[sec:robustness\]): A planned sliding regrasp is robust to finger contact wrench uncertainty if the planned regrasp succeeds in the face of this uncertainty. - *Sliding regrasp planning* (Section \[sec:motion\_planning\]): This section describes a general approach to finding feasible and robust object and finger anchor trajectories that realize a desired regrasp. - *Implementation* (Section \[sec:implementation\]): We describe a particular spring-sliding regrasp motion planner for the case of a two-fingered regrasp, where the objective is to maximize robustness of the spring-sliding regrasp. Simulation and experimental results validating the approach are given. Section \[sec:conclusion\] concludes with directions for future research. Related Work {#sec:related-work} ============ In-hand Manipulation -------------------- As described in pioneering early work, in-hand manipulation involves adjusting finger contacts relative to an object using rolling [@fearing1986; @Cole1989; @Cherif1998], gaiting [@Rus1999], or sliding [@cole1992]. Li et al. [@li1989] and Yoshikawa and Nagai [@yoshikawa1991] used rigid, rolling finger contacts to calculate grasp stability, manipulability, and to develop controllers for tracking a position trajectory while maintaining a desired grasp force. Trinkle and Hunter extended the dexterous manipulation planning problem to consider rolling and slipping contact modes [@trinkle1991]. The hybrid planning problem was further developed by Yashima et al. [@yashima2003]. Brock addressed the problem of controlled in-hand sliding by first generating a constraint state map which outlines constraints on a grasped object due to the contact types and forces [@brock1988]. 
By varying contact forces, controlled sliding was achieved in desired directions for a grasped cylinder. Sundaralingam and Hermans demonstrated in-hand rolling manipulation using only kinematic models [@Sundaralingam2018]. To address inevitable errors or uncertainties in purely model-based approaches, iterative learning control [@Yashima2018] and model-based reinforcement learning [@Kumar2016] have been applied to learn a specific in-hand manipulation task over a series of trials. Expanding in-hand manipulation to include dynamics, Furukawa et al. demonstrated regrasping by tossing a foam cylinder and catching it [@furukawa2006]. Chavan-Dafle et al. tested hand-coded regrasps that take advantage of external forces such as gravity, dynamic forces, and contact with the environment to regrasp objects using a simple manipulator [@chavan-dafle2014]. Hou et al. studied dynamic planar pivoting of a pinched object driven by hand swing motion and contact normal force control [@yifan2016]. Viña et al. showed that by using adaptive control with vision and tactile feedback, monodirectional pivoting of an object pinched by a pair of fingers can be achieved by changing the gripping forces [@vina2016]. Cruciani et al. derived a Dexterous Manipulation Graph to plan paths for a parallel-jaw gripper to slide along parallel surfaces of an object from one stable grasp to another [@Cruciani2018]. Sintov and Shapiro developed an algorithm to swing up a rod by generating gripper motions, where the contact point was modeled as a pivot joint that can apply frictional torques [@Sintov2016]. In our prior work, we used inertial loads to achieve in-hand sliding regrasps [@Shi2017]. Chavan-Dafle et al. explored in-hand manipulation of an object by external contacts with environmental constraints, as in this paper [@chavan-dafle2015; @chavan-dafle2018; @Chavan-Dafle2018b]. A laminar object is squeezed between two fingers and pushed against a constraint to cause sliding at the fingers. 
They showed that such actions are similar to pushing an object sliding on a planar surface [@Lynch96c], and that sequences of pushes can be planned to achieve an in-hand regrasp. In this paper, we explicitly model spring compliance so that in-hand sliding regrasp is possible with more complex grasp configurations, where the object is not laminar and any number of fingers can be in contact. In recent work, Dollar et al. demonstrated a simple and robust type of in-hand manipulation based on the clever use of fingers that can switch between two different friction coefficients: high, for rolling or sticking contact, and low, for sliding manipulation [@Dollar2019]. A laminar object, such as a square, is supported by a table and manipulated in the plane by two flat one-joint fingers. Depending on the friction coefficient employed at each finger, the object can be made to slide or roll in the two-finger hand, achieving in-hand manipulation. In this paper, the mechanics of spring-sliding compliance for in-hand regrasp are derived for generic 3D object geometries with no restriction on the number of fingers in contact. Compliant Grasps ---------------- Spring-compliant grasps are a subset of spring-sliding-compliant grasps, as studied in this paper. Hanafusa and Asada modeled the spring compliance of frictionless elastic fingers and formulated a notion of grasp stability [@Hanafusa1977]. In their definition, a stable grasp means that the grasp restores the object to its initial configuration after a small configuration disturbance. Grasp stability is determined by finger stiffness and local contact geometry. Baker [et al.]{} further developed the stability conditions under the same assumptions [@Baker1985]. More generally, Howard and Kumar classified categories of equilibrium grasps and derived conditions for stability [@Howard1996]. Odhner and Dollar demonstrated in-hand rolling with an underactuated compliant hand [@Odhner2015]. 
Cutkosky and Kao achieved a desired grasp stiffness by controlling finger joint stiffness [@Cutkosky1989]. Cutkosky and Kao also modeled sliding manipulation with spring compliance and limit surface frictional contacts [@Kao1992]. The motions of the contact points were solved by assuming infinitesimal motions while the magnitude of the sliding velocity is fixed. In this paper we allow finite sliding velocities and solve for the sliding velocity using the constraint that sliding contact forces are on the boundary of the friction cone. Spring-compliant grasps have applications in assembly. The remote center of compliance (RCC) device is a mechanical solution to reduce mating forces and the chance of jamming in certain assembly operations [@whitney1982]. Goswami and Peshkin generalized the idea by outlining a design strategy for passive devices to implement desired spring characteristics [@Goswami1993]. Schimmels and Peshkin derived conditions for accommodation control to yield error-corrective assembly with frictional contacts [@Schimmels1992; @Schimmels1994]. Ji and Xiao explored methods to plan compliant assembly based on a contact state graph [@Ji2001]. Meeussen [et al.]{} developed an approach to convert a contact path into a force-based task specification for executing the compliant path via hybrid position and force control [@Meeussen2005]. Park [et al.]{} developed a procedure and a controller that yield compliant behavior using neither force feedback nor passive compliance mechanisms to solve the peg-in-hole assembly problem [@Park2017]. Problem Description {#sec:definition} =================== An $n$-fingered hand grasps an object with $n$ point contacts. Each finger consists of an individually motion-controlled anchor point that is connected by a three-dimensional linear spring to a point fingertip. 
The object contacts a rigid stationary environment with a total of $m$ frictional point contacts.[^4] A grasp configuration is defined by the positions of the finger anchors, the finger contact points, and the object’s configuration. The problem can be described as: given (1) an initial grasp configuration where the object is in force balance and (2) a desired new grasp configuration, find quasistatically-consistent anchor and object motions that realize the regrasp. Assumptions {#subsec:assumptions} ----------- 1. Gravity and contact wrenches are always balanced (quasistatic assumption). \[enum:quasistatic\_assumption\] 2. Fingers contact the object at point fingertips. \[enum:finger\_assumption\] 3. Each finger is linearly springy and the stiffness is known. Each $3\times 3$ stiffness matrix is symmetric and positive definite. \[enum:stiff\_assumption\] 4. Each finger maintains a positive contact normal force. \[enum:fc\_assumption\] 5. The object is rigid, smooth, and of known geometry. 6. Dry Coulomb friction applies at each point contact. During sliding contact, the tangential friction force $\mathbf{f}_t$ is aligned with the tangential sliding direction and has a magnitude $\mu f_N$, where $\mu \geq 0$ is the friction coefficient and $f_N > 0$ is the magnitude of the normal force; and during sticking contact, the total contact force is confined to a friction cone satisfying $\|\mathbf{f}_t\| \leq \mu f_N$. The friction coefficients at all contacts are known, though this assumption is relaxed in our robustness analysis. For convenience, we assume that finger contacts with the object have a friction coefficient $\mu$ and environment contacts with the object have a friction coefficient $\mu_e$. 7. The $m$ external contact points are known and the environment is assumed rigid and stationary. \[enum:ex\_contact\_assumption\] ![Finger notation. 
The contact friction cone is indicated in green.[]{data-label="fig:config"}](config.pdf){width="3.4in"} Notation -------- Vectors are written in bold lowercase letters, matrices are in bold capital letters, scalars are italicized, and coordinate frames are denoted with calligraphic letters. All variables are expressed in a world frame $\mathcal{W}$ unless noted otherwise in the superscripts. For example, ${\mathbf{p}_{fi}}$ is the fingertip position of the $i$th finger in the world frame ${\mathcal{W}}$ and ${\mathbf{p}^\mathcal{B}_{fi}}$ is the fingertip position in the object frame ${\mathcal{B}}$. Frames of reference are typically chosen to simplify the mathematical expressions; standard transformations are used to move between frames. Figure \[fig:config\] illustrates some of the quantities for a single finger. ### Object Notation

  Symbol             Description
  ------------------ -------------------------------------------------------------------------------------------------------
  $\mathcal{B}$      Frame attached to the object.
  $\mathbf{p}_o$     The position of the origin of $\mathcal{B}$, ${\mathbf{p}_o}= [x_o, y_o, z_o]^{T}$.
  ${\mathbf{R}_o}$   Rotation matrix representing the orientation of the object, ${\mathbf{R}_o}\in SO(3)$.
  ${\mathbf{T}_o}$   Object configuration constructed of ${\mathbf{p}_o}$ and ${\mathbf{R}_o}$, ${\mathbf{T}_o}\in SE(3)$.
  $\bm \omega_o$     Object angular velocity, $\bm \omega_o \in {\mathbb{R}}^3$.
  ------------------ -------------------------------------------------------------------------------------------------------

### Finger Notation

  Symbol                Description
  --------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------
  $\mathcal{F}_i$       Finger frame attached to the $i$th ($i=1,...,n$) fingertip. The $z$-axis of $\mathcal{F}_i$ is aligned with the contact normal pointing into the object.
  $\mathbf{p}_{fi}$     The $i$th fingertip position, ${\mathbf{p}_{fi}}= [x_{fi}, y_{fi}, z_{fi}]^{T}$.
  ${\mathbf{R}_{fi}}$   Rotation matrix representing the orientation of $\mathcal{F}_i$.
  $\mathbf{p}_{ai}$     The $i$th anchor position, ${\mathbf{p}_{ai}}= [x_{ai}, y_{ai}, z_{ai}]^{T}$.
  ${\mathbf{d}_{0i}}$   The equilibrium position of the $i$th fingertip.
  $\mathbf{d}_i$        Compression of the $i$th finger, $\mathbf{d}_i= {\mathbf{p}_{fi}}- {\mathbf{d}_{0i}}-{\mathbf{p}_{ai}}$.
  $\mathbf{K}_i$        Stiffness matrix of the $i$th finger, $\mathbf{K}_i \in \mathbb{R}^{3 \times 3}$, which may or may not depend on the finger joint configuration or other parameters.
  --------------------- -----------------------------------------------------------------------------------------------------------------------------------------------------

### Contact Forces The contact force applied to the object by the $i$th finger is $$\mathbf{f}_{ci} = -\mathbf{K}_i \mathbf{d}_i = -{\mathbf{K}_i}({\mathbf{p}_{fi}}- {\mathbf{p}_{ai}}- {\mathbf{d}_{0i}}). \label{eq:fc}$$ The contact normal into the object is a function of the finger contact position in ${\mathcal{B}}$, $$\hat{\mathbf{n}}_i({\mathbf{p}_{fi}}^{\mathcal{B}}) = \mathbf{R}_{fi} [0, 0, 1]^{T}, \label{eq:n^hat}$$ where the hat means the vector is a unit vector. The contact normal force is the projection of ${\mathbf{f}_{ci}}$ to the normal direction, $$\mathbf{f}_{Ni} = (\mathbf{f}_{ci} \cdot \hat{\mathbf{n}}_i)\hat{\mathbf{n}}_i = {\mathbf{f}_{ci}}^{T}{\hat{\mathbf{n}}_i}{\hat{\mathbf{n}}_i}, \label{eq:fN}$$ and the contact tangential force is $$\mathbf{f}_{ti} = \mathbf{f}_{ci} -\mathbf{f}_{Ni}. \label{eq:ft}$$ Problem Description {#problem-description} ------------------- We define ${\mathbf{p}_{f}}= [\mathbf{p}_{f1}^{T}, \mathbf{p}_{f2}^{T}, ...
, {\mathbf{p}_{fn}}^{T}]^{T}$ to be the stacked vector of all the fingertip positions, and similarly ${\mathbf{p}^\mathcal{B}_{f}}$ to be all the fingertip positions relative to the object and ${\mathbf{p}_{a}}$ to be all the finger anchor positions. The duration of the regrasp is $T$. **Given:** the initial grasp configuration $\{{\mathbf{T}_o}(0)$, ${\mathbf{p}_{f}}(0)$, ${\mathbf{p}_{a}}(0)\}$, the finger stiffness properties, the geometry of the rigid object and stationary environment, and the goal fingertip relative positions $\mathbf{p}^\mathcal{B}_{f, \,\text{goal}}$, **Find:** motions of the object ${\mathbf{T}_o}(t)$ and finger anchors ${\mathbf{p}_{a}}(t)$ such that ${\mathbf{p}^\mathcal{B}_{f}}(T) = \mathbf{p}^\mathcal{B}_{f, \,\text{goal}}$ and the rigid-body conditions and quasistatic force-balance conditions are satisfied at all times, $0\leq t \leq T$. If the task involves carrying the object away from the rigid environment after the regrasp, the goal fingertip and anchor positions $\mathbf{p}^\mathcal{B}_{f, \,\text{goal}}$ and $\mathbf{p}_{a,\text{goal}}$ should be chosen to achieve force closure on the object, or at least to balance the object’s gravitational wrench, without the benefit of the environmental contacts. Note also that the robot itself can provide all or a portion of the stationary, rigid environment, e.g., using its palm or another link of the robot arm. Because we assume quasistatic mechanics, the time variable $t$ in the problem formulation can be rescaled without affecting the spring-sliding regrasp. Finger Spring Compliance Model {#sec:finger_compliance_model} ============================== The springy-finger model can represent several different mechanical finger designs and control strategies. For example, Figure \[fig:finger\_springmodel\] shows two different types of fingers.
In Figure \[fig:finger\_springmodel\](a), there is a spring-mounted fingertip attached to the end of a position-controlled finger (e.g., a stiff, highly geared finger). The anchor point is at the attachment of the spring to the finger. This design directly matches our model provided the 3D stiffness of the spring is known. Figure \[fig:finger\_springmodel\](b) represents the case where the fingertip is rigidly mounted to the finger. The effective stiffness may come from an active stiffness control law or from passive compliance at the joints (as with series elastic actuators) or at the links. Another interesting case occurs when passive compliance derives from open-loop torque-controlled joints of the finger. In this case, the anchor is the base of the finger and the entire finger acts as a nonlinear spring. Under certain circumstances, the linearized passive compliance at the contact is positive definite, as required by the assumptions. This case is examined in more detail in Appendix \[app:torque\]. Finger Contact Mechanics {#sec:contact_mechanics} ======================== This section answers the following question: given the object’s motion and the $i$th finger anchor and contact positions, what is the relationship between the finger anchor velocity ${\dot{\mathbf{p}}_{ai}}$ and the corresponding fingertip velocity ${\dot{\mathbf{p}}_{fi}}$? Given the anchor and contact locations, the contact force is determined by the spring compliance. The fingertip sticks to the object when (1) the contact force is in the interior of the friction cone or (2) the contact force is on the boundary of the friction cone but the anchor velocity results in a rate of change of the contact force that keeps it within the friction cone under the assumption of a stationary contact. If these conditions do not hold, the fingertip contact force is on the boundary of the friction cone and the tangential sliding velocity is aligned with the tangential contact force. 
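The stick/slide classification just described can be sketched as a simple friction-cone membership test. The helper below is our own (hypothetical) naming; it assumes the contact normal $\hat{\mathbf{n}}_i$ and friction coefficient $\mu$ are given, and uses the normal/tangential decomposition of Equations (fN) and (ft).

```python
import numpy as np

def contact_mode(f_c, n_hat, mu, tol=1e-9):
    """Classify a contact as sticking or sliding from its force decomposition."""
    f_N = (f_c @ n_hat) * n_hat        # normal component of the contact force
    f_t = f_c - f_N                    # tangential component
    if np.linalg.norm(f_t) < mu * np.linalg.norm(f_N) - tol:
        return "stick"                 # strictly inside the friction cone
    return "slide"                     # on the cone boundary: sliding along f_t

n_hat = np.array([0.0, 0.0, 1.0])      # contact normal pointing into the object
mu = 0.5
print(contact_mode(np.array([0.1, 0.0, 1.0]), n_hat, mu))  # → stick
print(contact_mode(np.array([0.5, 0.0, 1.0]), n_hat, mu))  # → slide
```

In the full mechanics, a force on the cone boundary may still correspond to sticking if the anchor velocity drives the force back into the cone; the sketch omits that rate condition for brevity.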
For the sliding case, the *forward mechanics* problem is to find the contact point velocity ${\dot{\mathbf{p}}_{fi}}$ given the anchor velocity ${\dot{\mathbf{p}}_{ai}}$, and the *inverse mechanics* problem is to find the set of anchor velocities ${\dot{\mathbf{p}}_{ai}}$ corresponding to a contact point velocity ${\dot{\mathbf{p}}_{fi}}$. Forward mechanics is useful for simulation, and inverse mechanics is useful for motion planning. The contact mechanics problems are illustrated by a simple example in Figure \[fig:1fingerEx\]. Sticking Case ------------- When fingertip $i$ sticks to the object, the fingertip follows the object’s motion, i.e., $$\dot{\mathbf{p}}_{fi}^{\mathcal{B}} = \mathbf{0}. \label{eq:relative_vel=0}$$ The transformations of the contact position and velocity from $\mathcal{B}$ to $\mathcal{W}$ can be written as $$\begin{aligned} \mathbf{p}_{fi} &= \mathbf{p}_o + \mathbf{R}_o\mathbf{p}_{fi}^{\mathcal{B}} \;, \\ \dot{\mathbf{p}}_{fi} &= \dot{\mathbf{p}}_o + \dot{\mathbf{R}}_o\mathbf{p}_{fi}^{\mathcal{B}} +\mathbf{R}_o \dot{\mathbf{p}}_{fi}^{\mathcal{B}} . \label{eq:finger_vel_trans}\end{aligned}$$ Substituting Equation  into Equation , the fingertip velocity in $\mathcal{W}$ is $$\dot{\mathbf{p}}_{fi} = \dot{\mathbf{p}}_o + \dot{\mathbf{R}}_o\mathbf{p}_{fi}^{\mathcal{B}} = \dot{\mathbf{p}}_o + \bm\omega_o \times \mathbf{R}_o\mathbf{p}_{fi}^{\mathcal{B}}. \label{eq:sticking_finger_vel}$$ Sliding Case ------------ ### Forward Mechanics {#subsec:contact_mechanics_forward} When sliding, the contact forces of the $i$th finger satisfy $$\|\mathbf{f}_{ti}\| = \mu \|\mathbf{f}_{Ni}\|. 
\label{eq:sliding_condition}$$ We define the finger sliding velocity relative to $\mathcal{B}$ as $$\dot{\mathbf{p}}_{fi}^{\mathcal{B}} = \lambda_i \mathbf{f}_{ti}^{\mathcal{B}} = \lambda_i \mathbf{R}_o^{T}\mathbf{f}_{ti}, \label{eq:sliding_vel_relation}$$ which enforces the Coulomb friction assumption that the sliding velocity is in the direction of the tangential frictional force applied by the finger to the object. The positive scalar $\lambda_i$, which must be solved for, relates the magnitudes of the friction force and the sliding velocity. Substituting Equation  into , we have $$\begin{aligned} \dot{\mathbf{p}}_{fi} =& ~{\dot{\mathbf{p}}_o}+ \dot{\mathbf{R}}_o \mathbf{p}_{fi}^{\mathcal{B}} + \mathbf{R}_o \lambda_i \mathbf{R}_o^{T}\mathbf{f}_{ti} \nonumber\\ =& ~\mathbf{c}_{fi} + \lambda_i \mathbf{f}_{ti}, \label{eq:finger_vel}\end{aligned}$$ where $\mathbf{c}_{fi} = {\dot{\mathbf{p}}_o}+ [\bm \omega_o] \mathbf{R}_o \mathbf{p}_{fi}^{\mathcal{B}}$ reflects the change of the contact point position due to the object motion, without sliding. From Equation , we find $$\begin{aligned} \|\mathbf{f}_{ti}\| \|\mathbf{f}_{ti}\| &= \mu^2 \|\mathbf{f}_{Ni}\| \|\mathbf{f}_{Ni}\| \nonumber \\ \rightarrow \mathbf{f}_{ti} \cdot \mathbf{f}_{ti} &= \mu^2 \: \mathbf{f}_{Ni} \cdot \mathbf{f}_{Ni} \nonumber \\ \xrightarrow[]{\frac{d}{dt}} \dot{\mathbf{f}}_{ti} \cdot \mathbf{f}_{ti} + \mathbf{f}_{ti} \cdot \dot{\mathbf{f}}_{ti} &= \mu^2( \dot{\mathbf{f}}_{Ni} \cdot \mathbf{f}_{Ni} + \mathbf{f}_{Ni} \cdot \dot{\mathbf{f}}_{Ni}) \nonumber \\ \rightarrow \mathbf{f}^{T}_{ti} \dot{\mathbf{f}}_{ti} &= \mu^2 \: \mathbf{f}^{T}_{Ni} \dot{\mathbf{f}}_{Ni}. 
\label{eq:sliding_condition_d}\end{aligned}$$ Then from Equation  we have $${\dot{\hat{\mathbf{n}}}_i}= \frac{\partial {\hat{\mathbf{n}}_i}}{\partial {\mathbf{p}^\mathcal{B}_{fi}}}{\dot{\mathbf{p}}^\mathcal{B}_{fi}}= \frac{\partial {\hat{\mathbf{n}}_i}}{\partial {\mathbf{p}^\mathcal{B}_{fi}}} \lambda_i {\mathbf{f}^\mathcal{B}_{ti}}= \lambda_i {\mathbf{g}_{ni}}, \label{eq:Dnhat}$$ where ${\mathbf{g}_{ni}}= \frac{\partial {\hat{\mathbf{n}}_i}}{\partial {\mathbf{p}^\mathcal{B}_{fi}}} {\mathbf{R}_o}^{T}{\mathbf{f}_{ti}}$ and $\frac{\partial {\hat{\mathbf{n}}_i}}{\partial {\mathbf{p}^\mathcal{B}_{fi}}}$ represents the curvature of the object at the contact point. In some cases, such as a linear-spring-mounted fingertip as in Figure \[fig:finger\_springmodel\](a), the finger’s stiffness matrix ${\mathbf{K}_i}$ is constant. In general, the stiffness matrix may be a function of the finger contact location ${\mathbf{p}_{fi}}$ and other parameters $\bm\sigma$ used to control the stiffness (as in variable-stiffness actuators). 
In this case, the stiffness can be written ${\mathbf{K}_i}({\mathbf{p}_{fi}}, \bm\sigma)$, and taking the derivative of Equation  and combining with Equation  gives $$\begin{aligned} {\dot{\mathbf{f}}_{ci}}&= -{\dot{\mathbf{K}}_i}{\mathbf{d}_i}- {\mathbf{K}_i}({\dot{\mathbf{p}}_{fi}}- {\dot{\mathbf{p}}_{ai}}) \nonumber \\ &= -\left(\frac{\partial {\mathbf{K}_i}}{\partial {\mathbf{p}_{fi}}} {\dot{\mathbf{p}}_{fi}}+ \frac{\partial {\mathbf{K}_i}}{\partial \bm\sigma} \dot{\bm\sigma} \right) {\mathbf{d}_i}- {\mathbf{K}_i}({\mathbf{c}_{fi}}+ \lambda_i {\mathbf{f}_{ti}}) + {\mathbf{K}_i}{\dot{\mathbf{p}}_{ai}}\nonumber \\ &= \lambda_i {\mathbf{g}_{ci}}+ {\mathbf{c}_{ci}}, \label{eq:Dfc}\end{aligned}$$ where $ {\mathbf{g}_{ci}}= -{\mathbf{K}_i}{\mathbf{f}_{ti}}- \frac{\partial {\mathbf{K}_i}}{\partial {\mathbf{p}_{fi}}} {\mathbf{f}_{ti}}{\mathbf{d}_i}$, and ${\mathbf{c}_{ci}}= {\mathbf{K}_i}{\dot{\mathbf{p}}_{ai}}- ( \frac{\partial {\mathbf{K}_i}}{\partial \bm\sigma}\dot{\bm\sigma} + \frac{\partial {\mathbf{K}_i}}{\partial {\mathbf{p}_{fi}}} {\mathbf{c}_{fi}}) {\mathbf{d}_i}- {\mathbf{K}_i}{\mathbf{c}_{fi}}$. By denoting ${\mathbf{h}_i}= \left(\frac{\partial {\mathbf{K}_i}}{\partial \bm\sigma}\dot{\bm\sigma} + \frac{\partial {\mathbf{K}_i}}{\partial {\mathbf{p}_{fi}}} {\mathbf{c}_{fi}}\right){\mathbf{d}_i}+ {\mathbf{K}_i}{\mathbf{c}_{fi}}$, we have ${\mathbf{c}_{ci}}= {\mathbf{K}_i}{\dot{\mathbf{p}}_{ai}}- {\mathbf{h}_i}$. 
In the case that ${\mathbf{K}_i}$ is constant, Equation  simplifies to $${\dot{\mathbf{f}}_{ci}}= {\mathbf{K}_i}({\dot{\mathbf{p}}_{ai}}- {\dot{\mathbf{p}}_{fi}}) = -{\mathbf{K}_i}\dot{\mathbf{d}}_i.$$ Taking the derivative of Equations  and and combining with Equations  and yields $$\begin{aligned} {\dot{\mathbf{f}}_{Ni}}&= {\dot{\mathbf{f}}_{ci}}^{T}{\hat{\mathbf{n}}_i}{\hat{\mathbf{n}}_i}+ {\mathbf{f}_{ci}}^{T}{\dot{\hat{\mathbf{n}}}_i}{\hat{\mathbf{n}}_i}+ {\mathbf{f}_{ci}}^{T}{\hat{\mathbf{n}}_i}{\dot{\hat{\mathbf{n}}}_i}\nonumber \\ &= (\lambda_i {\mathbf{g}_{ci}}+ {\mathbf{c}_{ci}})^{T}{\hat{\mathbf{n}}_i}{\hat{\mathbf{n}}_i}+ {\mathbf{f}_{ci}}^{T}\lambda_i {\mathbf{g}_{ni}}{\hat{\mathbf{n}}_i}+ {\mathbf{f}_{ci}}^{T}{\hat{\mathbf{n}}_i}\lambda_i {\mathbf{g}_{ni}}\nonumber \\ &= \lambda_i {\mathbf{g}_{Ni}}+ {\mathbf{c}_{Ni}}\label{eq:f_Ni_dot} \end{aligned}$$ where ${\mathbf{g}_{Ni}}= {\mathbf{g}_{ci}}^{T}{\hat{\mathbf{n}}_i}{\hat{\mathbf{n}}_i}+ {\mathbf{f}_{ci}}^{T}{\mathbf{g}_{ni}}{\hat{\mathbf{n}}_i}+ {\mathbf{f}_{ci}}^{T}{\hat{\mathbf{n}}_i}{\mathbf{g}_{ni}}$, ${\mathbf{c}_{Ni}}= {\mathbf{c}_{ci}}^{T}{\hat{\mathbf{n}}_i}{\hat{\mathbf{n}}_i}$, and $$\begin{aligned} {\dot{\mathbf{f}}_{ti}}= {\dot{\mathbf{f}}_{ci}}- {\dot{\mathbf{f}}_{Ni}}= \lambda_i{\mathbf{g}_{ti}}+ {\mathbf{c}_{ti}}, \label{eq:f_ti_dot}\end{aligned}$$ where ${\mathbf{g}_{ti}}= {\mathbf{g}_{ci}}- {\mathbf{g}_{Ni}}$ and ${\mathbf{c}_{ti}}= {\mathbf{c}_{ci}}- {\mathbf{c}_{Ni}}$. Substituting Equations  and into we can solve for $\lambda_i$ as $$\lambda_i = \frac{\mu^2 {\mathbf{f}_{Ni}}^{T}{\mathbf{c}_{Ni}}- {\mathbf{f}_{ti}}^{T}{\mathbf{c}_{ti}}}{{\mathbf{f}_{ti}}^{T}{\mathbf{g}_{ti}}- \mu^2 {\mathbf{f}_{Ni}}^{T}{\mathbf{g}_{Ni}}}. 
\label{eq:lambda1}$$ In the numerator, since ${\mathbf{c}_{Ni}}= {\mathbf{c}_{ci}}^{T}{\hat{\mathbf{n}}_i}{\hat{\mathbf{n}}_i}= ({\mathbf{c}_{ci}}\cdot {\hat{\mathbf{n}}_i}) {\hat{\mathbf{n}}_i}$ is along the contact normal, the term ${\mathbf{f}_{Ni}}^{T}{\mathbf{c}_{Ni}}$ is equivalent to ${\mathbf{f}_{Ni}}^{T}{\mathbf{c}_{ci}}$. By plugging in ${\mathbf{c}_{ti}}= {\mathbf{c}_{ci}}- {\mathbf{c}_{Ni}}$, Equation  simplifies to $$\begin{aligned} \lambda_i &= \frac{\mu^2 {\mathbf{f}_{Ni}}^{T}{\mathbf{c}_{ci}}- {\mathbf{f}_{ti}}^{T}{\mathbf{c}_{ci}}+ \cancelto{0\text{ (orthogonal)}}{{\mathbf{f}_{ti}}^{T}{\mathbf{c}_{Ni}}} }{{\mathbf{f}_{ti}}^{T}{\mathbf{g}_{ti}}- \mu^2 {\mathbf{f}_{Ni}}^{T}{\mathbf{g}_{Ni}}} = \frac{{\mathbf{a}_i}^{T}{\mathbf{c}_{ci}}}{{\lambda_{\text{den},i}}} \nonumber \\ &= \frac{{\mathbf{a}_i}^{T}{\mathbf{K}_i}}{{\lambda_{\text{den},i}}}{\dot{\mathbf{p}}_{ai}}- \frac{{\mathbf{a}_i}^{T}{\mathbf{h}_i}}{{\lambda_{\text{den},i}}} \nonumber \\ &= {\mathbf{g}_{\lambda_i}}{\dot{\mathbf{p}}_{ai}}- {c_{\lambda_i}}, \label{eq:lambda2}\end{aligned}$$ where ${\mathbf{a}_i}= \mu^2 {\mathbf{f}_{Ni}}- {\mathbf{f}_{ti}}\in {\mathbb{R}}^{3 \times 1}$ , $\lambda_{\text{den},i} = {\mathbf{f}_{ti}}^{T}{\mathbf{g}_{ti}}- \mu^2 {\mathbf{f}_{Ni}}^{T}{\mathbf{g}_{Ni}}$, ${\mathbf{g}_{\lambda_i}}= {\mathbf{a}_i}^{T}{\mathbf{K}_i}/{\lambda_{\text{den},i}}\in {\mathbb{R}}^{1 \times 3}$, and ${c_{\lambda_i}}= {\mathbf{a}_i}^{T}{\mathbf{h}_i}/{\lambda_{\text{den},i}}$. The finger contact sliding velocity ${\dot{\mathbf{p}}_{fi}}$ can be solved for by substituting $\lambda_i$ into Equation . ### Inverse Mechanics {#subsec:contact_mechanics_inverse} The result of the forward mechanics, Equation , gives the finger sliding velocity for a given finger anchor velocity. For the inverse mechanics problem, we solve for the anchor motions ${\dot{\mathbf{p}}_{ai}}$ that cause a desired finger contact sliding velocity ${\dot{\mathbf{p}}_{fi}}$. 
Since the object motion and the contact force are known, a desired finger contact sliding velocity ${\dot{\mathbf{p}}_{fi}}$ is equivalent to a desired $\lambda_i$ from Equation . Therefore we can write all solutions to the inverse problem as $$\begin{aligned} & {\dot{\mathbf{p}}_{ai}}= {\dot{\mathbf{p}}_{ai}}^{*} + {\dot{\mathbf{p}}_{ai}}^\perp, \label{eq:Dpa} \\ \text{where } & {\dot{\mathbf{p}}_{ai}}^{*} = {\mathbf{g}_{\lambda_i}^\dagger}(\lambda_i + {c_{\lambda_i}}) \text{ and } \nonumber \\ & {\dot{\mathbf{p}}_{ai}}^\perp \in \{(\mathbf{I}^{3\times3} - {\mathbf{g}_{\lambda_i}^\dagger}{\mathbf{g}_{\lambda_i}}) \mathbf{v} \; | \; \mathbf{v} \in {\mathbb{R}}^3\}. \nonumber\end{aligned}$$ The vector ${\dot{\mathbf{p}}_{ai}}^*$ is a particular solution for ${\dot{\mathbf{p}}_{ai}}$ found using the pseudoinverse ${\mathbf{g}_{\lambda_i}^\dagger}= {\mathbf{g}^{{T}}_{\lambda_i}}({\mathbf{g}_{\lambda_i}}{\mathbf{g}^{{T}}_{\lambda_i}})^{-1} = {\mathbf{g}^{{T}}_{\lambda_i}}/ \|{\mathbf{g}_{\lambda_i}}\|^2$ and ${\dot{\mathbf{p}}_{ai}}^\perp$ is any vector in the two-dimensional space spanned by $\mathbf{I} - {\mathbf{g}_{\lambda_i}^\dagger}{\mathbf{g}_{\lambda_i}}$, the space of anchor velocities that have no impact on the fingertip sliding velocity. Figure \[fig:1fingerEx3d\] illustrates the space of anchor velocity solutions for a 3D version of Figure \[fig:1fingerEx\](c). ![A 3D version of Figure \[fig:1fingerEx\](c). The direction of fingertip sliding ${\dot{\mathbf{p}}_{fi}}$ is determined by the current force on the boundary of the friction cone, and the magnitude $\|{\dot{\mathbf{p}}_{fi}}\|$ of the desired sliding velocity places one constraint on the anchor velocity, resulting in a plane of anchor velocities ${\dot{\mathbf{p}}_{ai}}$ that achieve the desired fingertip sliding velocity ${\dot{\mathbf{p}}_{fi}}$. 
This plane is defined by the sum of a particular solution ${\dot{\mathbf{p}}_{ai}}^*$ and any ${\dot{\mathbf{p}}_{ai}}^\perp$ in the two-dimensional space spanned by $\mathbf{I} - {\mathbf{g}_{\lambda_i}^\dagger}{\mathbf{g}_{\lambda_i}}$ (Equation ).[]{data-label="fig:1fingerEx3d"}](1fingerEx3d.jpg){width="2.5in"} For different solutions of ${\dot{\mathbf{p}}_{ai}}$, all the corresponding contact sliding velocities are the same but the changes of the contact force ${\dot{\mathbf{f}}_{ci}}$ are different. By Equation  we can solve the corresponding contact force change ${\dot{\mathbf{f}}_{ci}}$ for each ${\dot{\mathbf{p}}_{ai}}$. The redundancy resolution in the choice of ${\dot{\mathbf{p}}_{ai}}$ could be based on additional constraints on the anchor motions or optimization of desired contact force properties. ### Degenerate Cases {#subsec:sliding_ill_condition} In quasistatic sliding, Equations (coupled with Equation ) and describe the relationship between the anchor motion and the contact point motion. Two degeneracies are possible, when (I) ${\mathbf{g}_{\lambda_i}}= 0$ or (II) ${\lambda_{\text{den},i}}= 0$. For a degeneracy of type I, the anchor velocity has no impact on the sliding velocity of the fingertip. For a degeneracy of type II, the fingertip velocity becomes unbounded and the quasistatic assumption is violated. An example of a degeneracy of type II is shown in Figure \[fig:degeneracy\]. ![A springy finger dragged over a rounded ledge may suddenly slide dynamically before quasistatic motion resumes.[]{data-label="fig:degeneracy"}](degeneracy.pdf) As shown in Proposition \[prop:nonzero\_glan\], a degeneracy of type I cannot occur under our assumptions. The 3-vector ${\mathbf{g}_{\lambda_i}}$ is nonzero under Assumptions \[enum:stiff\_assumption\]) and \[enum:fc\_assumption\]) (the finger has a positive-definite stiffness matrix ${\mathbf{K}_i}$ and maintains a positive contact normal force). 
\[prop:nonzero\_glan\] The vector ${\mathbf{g}_{\lambda_i}}$ is proportional to the product ${\mathbf{a}_i}^{T}{\mathbf{K}_i}$. By Assumption \[enum:fc\_assumption\]) the term ${\mathbf{a}_i}= \mu^2 {\mathbf{f}_{Ni}}- {\mathbf{f}_{ti}}$ will be nonzero. Proposition \[prop:nonzero\_glan\] holds since the matrix ${\mathbf{K}_i}$ is full rank when it is positive definite. Considering degeneracies of type II, many factors affect the value of ${\lambda_{\text{den},i}}$, including the local curvature of the object and variations in the finger stiffness. In the particular case that the stiffness ${\mathbf{K}_i}$ is constant and the object surface is flat, however, this type of degeneracy cannot occur under our assumptions. When the finger stiffness matrix ${\mathbf{K}_i}$ is constant and the local curvature of the object at the contact is zero, ${\lambda_{\text{den},i}}$ will be nonzero under Assumptions \[enum:stiff\_assumption\]) and \[enum:fc\_assumption\]) (the finger has a positive-definite stiffness matrix ${\mathbf{K}_i}$ and maintains a positive contact normal force). \[prop:nonzero\_lambda\] When $\frac{\partial {\mathbf{K}_i}}{\partial {\mathbf{p}_{fi}}}=0$, $\frac{\partial {\mathbf{K}_i}}{\partial \bm\sigma}=0$ and $\frac{\partial {\hat{\mathbf{n}}_i}}{\partial {\mathbf{p}^\mathcal{B}_{fi}}}=0$, the key variables in Equations  and are $${\mathbf{g}_{ci}}= -{\mathbf{K}_i}{\mathbf{f}_{ti}}\text{ and } ~{\mathbf{g}_{Ni}}= -{\mathbf{f}_{ti}}^{T}{\mathbf{K}_i}^{T}{\hat{\mathbf{n}}_i}{\hat{\mathbf{n}}_i}. \label{eq:gcN_simp}$$ Because ${\mathbf{g}_{Ni}}$ and ${\mathbf{f}_{Ni}}$ are both vectors in the direction of ${\hat{\mathbf{n}}_i}$, we have ${\mathbf{f}_{Ni}}^{T}{\mathbf{g}_{Ni}}= -\|{\mathbf{f}_{Ni}}\| {\mathbf{f}_{ti}}^{T}{\mathbf{K}_i}^{T}{\hat{\mathbf{n}}_i}$. 
Plugging Equation  into , we have $$\begin{aligned} {\lambda_{\text{den},i}}&= {\mathbf{f}_{ti}}^{T}({\mathbf{g}_{ci}}-{\mathbf{g}_{Ni}}) - \mu^2 {\mathbf{f}_{Ni}}^{T}{\mathbf{g}_{Ni}}\\ &= -{\mathbf{f}_{ti}}^{T}{\mathbf{K}_i}{\mathbf{f}_{ti}}+ \mu^2 {\mathbf{f}_{ti}}^{T}{\mathbf{K}_i}^{T}{\mathbf{f}_{Ni}}. \end{aligned}$$ Since ${\mathbf{K}_i}$ is symmetric, $$\begin{aligned} {\lambda_{\text{den},i}}&= {\mathbf{f}_{ti}}^{T}{\mathbf{K}_i}(\mu^2{\mathbf{f}_{Ni}}-{\mathbf{f}_{ti}}) = {\mathbf{f}_{ti}}^{T}{\mathbf{K}_i}{\mathbf{a}_i}, {\addtocounter{equation}{1}\tag{\theequation}}\end{aligned}$$ where ${\mathbf{f}_{ti}}$ and ${\mathbf{a}_i}$ are both nonzero due to Assumption \[enum:fc\_assumption\]). Similar to the proof of Proposition \[prop:nonzero\_glan\], since ${\mathbf{K}_i}$ is positive definite, ${\lambda_{\text{den},i}}$ is nonzero. Object Mechanics {#sec:obj_mechanics} ================ The grasped object has $m$ point contacts with the rigid stationary environment, and according to the planned object motion $\mathbf{T}_o(t)$, $t \in [0,T]$, each contact could be sliding (relative motion at the point of contact) or rolling/sticking (no sliding at the contact). At each sliding contact, the total contact force applied to the object lies on a one-dimensional line on the boundary of the friction cone, such that the tangential frictional force is opposite the direction that the object slides relative to the environment and has magnitude $\mu f_N$ (where $f_N$ is the magnitude of the normal force). At each sticking or rolling contact, the contact force lies somewhere inside the three-dimensional circular friction cone. In other words, a sliding contact offers one force freedom and a sticking contact offers three force freedoms to satisfy quasistatic wrench balance, which requires the wrenches from the environmental contacts, the finger contacts, and gravity to sum to zero.
If the $j$th external contact with the environment is sticking or rolling, the friction cone can be approximated as an $n_c$-sided polyhedral cone, i.e., the nonnegative linear combination of $n_c$ unit forces on the boundary of the circular friction cone, $\hat{\mathbf{f}}_{jk}, k = 1, \ldots, n_c$. Given the contact location ${\mathbf{p}_{ej}}$ expressed in $\mathcal{W}$, each of these forces corresponds to a wrench ${\mathbf{w}_{jk}}= [({\mathbf{p}_{ej}}\times {\hat{\mathbf{f}}_{jk}})^{T}, {\hat{\mathbf{f}}_{jk}}^{T}]^{T}\in {\mathbb{R}}^6$, and the nonnegative linear combination of the $n_c$ wrenches is the wrench cone $\mathcal{WC}_{ej}$. The contact wrench at the $j$th contact point can be expressed as $${\mathbf{w}_{ej}}= \sum_{k=1}^{n_c} \beta_{jk} {\mathbf{w}_{jk}},\; \beta_{jk} \geq 0,$$ where the nonnegative $\beta_{jk}$ coefficients multiply the wrench cone edges to yield the total contact wrench (see, e.g., [@Kao2016; @Lynchbook2017]). If the $j$th contact is sliding, it provides a single unit force $\hat{\mathbf{f}}_{j1}$ on the friction cone, which corresponds to a single contact wrench $\mathbf{w}_{j1}$ and a single free coefficient $\beta_{j1} \geq 0$ multiplying it, i.e., $\mathbf{w}_{ej} = \beta_{j1} \mathbf{w}_{j1}$. We denote ${\mathbf{w}_e}$ as the sum of all the external contact wrenches, $${\mathbf{w}_e}= \sum_{j=1}^m {\mathbf{w}_{ej}}= {\mathbf{W}}\bm \beta \in \mathcal{WC}_e, \label{eq:weALL}$$ where $\mathcal{WC}_e$ is the wrench cone for all external contacts, ${\mathbf{W}}\in \mathbb{R}^{6 \times p}$ consists of the $p$ column vectors of the individual contact wrench cone edges, and $\bm\beta \in \mathbb{R}^{p \times 1}$ is a column vector of the corresponding nonnegative wrench coefficients. For the finger contact force ${\mathbf{f}_{ci}}$, the corresponding wrench applied to the object is $${\mathbf{w}_{ci}}= [({\mathbf{p}_{fi}}\times {\mathbf{f}_{ci}})^{T}, {\mathbf{f}_{ci}}^{T}]^{T}. 
\label{eq:total_wc}$$ The object wrench-balance condition can be written as $${\mathbf{w}_c}+ {\mathbf{w}_e}+ {\mathbf{w}_g}= \mathbf{0}, \label{eq:ext_force_bal}$$ where ${\mathbf{w}_c}= \sum_{i=1}^n {\mathbf{w}_{ci}}$ is the total finger contact wrench and ${\mathbf{w}_g}$ is the gravitational wrench. Robustness Analysis {#sec:robustness} =================== At a given time during execution of a planned sliding regrasp, the expected finger contact wrench on the object is ${\bar{\mathbf{w}}_c}$, but due to uncertainty in friction, anchor motions, and contact geometry, the actual contact wrench is assumed to be ${\mathbf{w}_c}= {\bar{\mathbf{w}}_c}+ {\delta\mathbf{w}_{c}}$, where ${\delta\mathbf{w}_{c}}$ is a disturbance. A planned regrasp is *robust to $\varepsilon$ wrench uncertainty* (or *$\varepsilon$-robust* for short) if, for all $t\in[0,T]$, there exists a ${\mathbf{w}_e}(t) \in \mathcal{WC}_e(t)$ such that $$\bar{\mathbf{w}}_c(t) + {\delta\mathbf{w}_{c}}(t) + {\mathbf{w}_e}(t) + {\mathbf{w}_g}(t) = \mathbf{0}, \label{eq:uncertain_balance}$$ where ${\bar{\mathbf{w}}_c}(t)$ is the expected fingertip wrench during the regrasp and each of the six components of the fingertip wrench disturbance ${\delta\mathbf{w}_{c}}(t)$ can take any value in the range $[-\varepsilon,\varepsilon], \varepsilon >0$. \[def:robust\_definition\] The definition of $\varepsilon$-robustness does not differentiate between forces and moments in a wrench. Moments can be divided by a characteristic length-scale factor to have the same units as forces. Since $\varepsilon$-robustness is based on full-dimensional wrench uncertainty at the fingertips, it also implies robustness to small wrench uncertainty at the environmental contacts. A planar example is shown in Figure \[fig:2D\_wrench\_cone\]. The four external basis wrench vectors give the external wrench cone ${\mathcal{WC}_{e}}$. 
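The polyhedral wrench-cone construction and the wrench-balance condition above can be made concrete for a planar case, where a wrench is $[m_z, f_x, f_y]^{T}$ and each sticking contact contributes $n_c = 2$ edges. The contact locations and values below are invented for illustration (they echo, but do not reproduce, the experimental setup), and balance is checked by posing the search for nonnegative coefficients $\bm\beta$ as a small linear program:

```python
import numpy as np
from scipy.optimize import linprog

def cone_edges_2d(normal, mu):
    """Unit edge forces of a planar friction cone with coefficient mu."""
    n = np.asarray(normal, float) / np.linalg.norm(normal)
    t = np.array([-n[1], n[0]])              # tangent direction
    edges = [n + mu * t, n - mu * t]
    return [e / np.linalg.norm(e) for e in edges]

def wrench_2d(p, f):
    """Planar contact wrench [m_z, f_x, f_y] for force f applied at point p."""
    return np.array([p[0] * f[1] - p[1] * f[0], f[0], f[1]])

# Two sticking table contacts under the object (illustrative positions, mu_e = 1)
mu_e = 1.0
contacts = [np.array([-0.05, 0.0]), np.array([0.05, 0.0])]
normal = np.array([0.0, 1.0])                # table pushes up on the object
W = np.column_stack([wrench_2d(p, e)
                     for p in contacts
                     for e in cone_edges_2d(normal, mu_e)])

# Wrench balance with no fingertip wrench yet: find beta >= 0 with
# W beta = -(w_c + w_g), posed as a small linear program.
w_g = np.array([0.0, 0.0, -10.1])            # gravity wrench on the object (N)
w_c = np.zeros(3)
res = linprog(c=np.ones(W.shape[1]), A_eq=W, b_eq=-(w_c + w_g),
              bounds=[(0, None)] * W.shape[1])
```

With two sticking contacts the four basis wrenches span the full three-dimensional planar wrench space, so gravity alone can be balanced by nonnegative edge coefficients.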
For the nominally-required external wrench $\bar{\mathbf{w}}_e$, as long as the $\varepsilon$ wrench uncertainty cube is within the external wrench cone ${\mathcal{WC}_{e}}$, the plan is robust to $\varepsilon$ wrench uncertainty at this instant. A necessary and sufficient condition for $\varepsilon$-robustness is that each of the $2^6$ corners of the wrench disturbance hypercube lies within ${\mathcal{WC}_{e}}$. Proposition \[prop1\] gives a simple sufficient condition for $\varepsilon$-robustness. A planned regrasp is $\varepsilon$-robust if ${\mathbf{W}}(t)$ has rank six and the planned nominal fingertip wrench ${\bar{\mathbf{w}}_c}(t)$ permits a nominal environmental wrench coefficient vector $\bar{\bm \beta}(t)$ satisfying $$\begin{aligned} \text{nominal wrench balance:} &\quad {\bar{\mathbf{w}}_c}(t) + {\mathbf{W}}(t) \bar{\bm \beta}(t) + {\mathbf{w}_g}(t) = \mathbf{0}, \\ \text{robustness:} &\quad \bar{\bm \beta}(t) - \varepsilon \|{\mathbf{W}}^\dagger(t)\| \mathbf{1} \geq \mathbf{0},\end{aligned}$$ for all $t \in [0,T]$, where ${\mathbf{W}}^\dagger(t) = {\mathbf{W}^{T}}(t) ({\mathbf{W}}(t) {\mathbf{W}^{T}}(t))^{-1}$, $\mathbf{0}$ and $\mathbf{1}$ are vectors of zeros and ones, and $\|\cdot \|$ is the matrix norm induced by the vector 2-norm. \[prop1\] Since the uncertainty ${\delta\mathbf{w}_{c}}(t)$ spans all dimensions of the wrench space, the rank of ${\mathbf{W}}(t)$ must be six.
From Definition \[def:robust\_definition\] and Equation , at any given time wrench balance with uncertainty requires $${\mathbf{W}}{\delta \bm \beta}= -{\delta\mathbf{w}_{c}},$$ where $\bar{\bm \beta} + {\delta \bm \beta}= \bm \beta$ defines an environmental contact wrench ${\mathbf{W}}\bm \beta$ satisfying wrench balance when including the disturbance ${\delta\mathbf{w}_{c}}$. A particular solution to this equation is $${\delta \bm \beta}= -{\mathbf{W}}^\dagger {\delta\mathbf{w}_{c}}. \label{eq:dbeta}$$ To satisfy the Coulomb friction assumption, we have $$\bar{\bm \beta} + {\delta \bm \beta}\geq \mathbf{0}. \label{eq:friction_condi}$$ Substituting Equation  into gives $$\bar{\bm \beta} - {\mathbf{W}}^\dagger {\delta\mathbf{w}_{c}}\geq \mathbf{0}. \label{eq:beta_condi}$$ Since each component of ${\delta\mathbf{w}_{c}}$ must be in the range $[-\varepsilon,\varepsilon]$, $$\varepsilon \|{\mathbf{W}}^\dagger\| \mathbf{1} \geq {\mathbf{W}}^\dagger {\delta\mathbf{w}_{c}}\label{eq:ineq}$$ and the robustness condition in the proposition follows by substituting  into . The robustness condition in Proposition \[prop1\] implies that $\varepsilon$-robustness can be obtained for larger values of $\varepsilon$ if the environmental contact wrench coefficients $\bar{\bm \beta}$ are larger. Since larger environmental wrenches imply larger fingertip wrenches by quasistatic wrench balance, fingers with greater force-generation capability are generally capable of larger values of $\varepsilon$-robustness. $\varepsilon$-robustness requires a full-dimensional external wrench cone ${\mathcal{WC}_{e}}$, i.e., a wrench cone with a non-empty interior. This can be achieved by two frictional rolling/sticking contacts in the plane or three frictional rolling/sticking contacts in 3D. While it is possible to have a full-dimensional external wrench cone when one or more contacts roll or slide, such cases are exceptions, relying on very specific contact geometries. 
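The sufficient condition of Proposition \[prop1\] is straightforward to evaluate numerically. A minimal sketch follows; the matrix $\mathbf{W}$ and coefficients $\bar{\bm\beta}$ below are hand-built so that $\|{\mathbf{W}}^\dagger\| = 1$ and the test is transparent, and are not taken from any real grasp:

```python
import numpy as np

def eps_robust(W, beta_bar, eps):
    """Sufficient test of Proposition [prop1] at one instant:
    beta_bar - eps * ||W_dagger|| * 1 >= 0 (2-norm-induced matrix norm)."""
    W_dag = W.T @ np.linalg.inv(W @ W.T)     # right pseudoinverse; needs rank(W) = 6
    return bool(np.all(beta_bar - eps * np.linalg.norm(W_dag, 2) >= 0))

# Illustrative W: six unit wrench directions plus two extra edges; here
# sigma_min(W) = 1, so ||W_dagger|| = 1 and the margin is simply beta - eps.
W = np.hstack([np.eye(6), np.ones((6, 2))])
beta_bar = np.full(8, 2.0)                   # nominal coefficients (made up)

assert eps_robust(W, beta_bar, eps=0.5)      # margin 2.0 - 0.5 >= 0
assert not eps_robust(W, beta_bar, eps=2.5)  # margin 2.0 - 2.5 < 0
```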
For this reason, in the remainder of the paper we focus on the case where $\varepsilon$-robustness is achieved by the object remaining stationary relative to the rigid environment. Sliding Regrasp Planning {#sec:motion_planning} ======================== The finger and object mechanics of the previous sections provide constraints that must be satisfied by a sliding regrasp plan. A planning algorithm may be expressed as a constraint satisfaction problem or as a constrained optimization, as in Table \[table:generic\]. The wrench-balance constraint 4) is redundant with the optimization criterion: if the maximum $\varepsilon$ to which the plan is robust is greater than zero, then constraint 4) is automatically satisfied. The regrasp planning problem could be reformulated to encode a robustness condition in constraint 4) and to change the objective function to minimize forces applied by the fingertips, as a way of resolving the finger contact inverse mechanics redundancy. Or the objective function could be eliminated completely, turning the planning problem into a constraint satisfaction problem instead of an optimization. How to efficiently implement the planner depends on properties of the robot hand and other details that may be task-specific, and it is not the purpose of this paper to propose a single implementation for all tasks, objects, and robot hands. Choices include how to represent the trajectories using finite parametrizations; whether to use local gradient-based optimization methods based on collocation or shooting, global optimization methods, search-based methods; etc. 
Instead of solving for both ${\mathbf{p}_{f}}(t)$ and ${\mathbf{p}_{a}}(t)$ and constraining them to be consistent, we could solve only for ${\mathbf{p}_{a}}(t)$ and use forward mechanics (Section \[subsec:contact\_mechanics\_forward\]) to determine the corresponding ${\mathbf{p}_{f}}(t)$, or we could solve only for ${\mathbf{p}_{f}}(t)$ and use inverse mechanics with redundancy resolution (Section \[subsec:contact\_mechanics\_inverse\]) to solve for ${\mathbf{p}_{a}}(t)$. Also, in a typical regrasp plan, each fingertip starts out sticking while the anchor repositions itself to bring the contact force to the boundary of the friction cone; the fingertip transitions to sliding; and finally the fingertip reverts to sticking while the anchor is repositioned to bring the contact force to the interior of the friction cone, once the new grasp is achieved. The planner can treat these segments (with their different finger contact mechanics) separately, subject to continuity constraints at the transitions. Finally, we could restrict the object to be stationary ($\mathbf{T}_o(t) = \mathbf{T}_o(0)$) during the regrasp to achieve $\varepsilon$-robustness, following the discussion at the end of Section \[sec:robustness\]. In the next section we describe one way to implement the general regrasp planning approach for the specific case of a two-fingered regrasp. Implementation {#sec:implementation} ============== In this section we describe a two-fingered sliding regrasp task and an implementation of the regrasp planner of Table \[table:generic\]. First we introduce the experimental setup; then we describe our methods for experimentally extracting relevant modeling parameters; and finally we give an implementation of the planning strategy outlined in Table \[table:generic\] as well as simulation and experimental results. 
The experimental regrasp task was designed to be simple enough to yield insight into the derivations of the previous sections and to allow graphical interpretation of the robustness condition. To satisfy $\varepsilon$-robustness, the fingers keep the object stationary during the regrasp. Experimental Regrasp Task {#subsec:2Dsys_description} ------------------------- ![The ERIN manipulation system.[]{data-label="fig:ERIN"}](ERIN.jpg){width="3in"} For our experiments, we used our ERIN manipulation system, consisting of a ten-camera OptiTrack high-speed vision system, a Barrett WAM 7-dof arm, and a four-fingered Allegro robot hand with replaceable fingertips [@Shi2017] (Figure \[fig:ERIN\]). Two fingers of the hand grasp an object with smooth edges (Figure \[fig:2Dconfig\]). The object sits on a fixed table, and the motions of the hand are in the vertical plane. Figure \[fig:2Dconfig\] shows an initial configuration of the fingertips near the top of the object and a desired regrasp configuration near the bottom. The friction coefficient between the object and the table is $\mu_e = 1$ and the gravitational force acting on the object is $10.1$N in the $-y$-direction. Let $\mathcal{H}$ denote a frame attached to the hand with an origin at ${\mathbf{p}_h}$. The finger stiffnesses and anchor positions are assumed to be fixed in $\mathcal{H}$, i.e., ${\mathbf{K}^\mathcal{H}_i}$ and ${\mathbf{p}^\mathcal{H}_{ai}}$ are constant. Therefore the anchor positions are uniquely determined by the hand configuration and in-hand sliding is realized by controlling the hand motion. For simplicity, we allow only $(x,y)$ translational hand motions in the vertical plane, so the anchor velocities are identical and confined to a two-dimensional space. 
Under these constraints, the sliding inverse mechanics of Section \[subsec:contact\_mechanics\_inverse\] yields unique anchor velocities: the redundancies in the possible anchor velocities from the sliding inverse mechanics are resolved by the limited motions available to the hand (and therefore the finger anchors). While the hand moves downward (in the $-y$-direction), the normal forces, relative sliding velocities at the two fingers, and $\varepsilon$-robustness can be modulated by the hand’s motion in the $x$-direction. The WAM arm controls the hand’s motion at 500 Hz, and markers attached to the object and hand allow the vision system to track their 3D configurations at 360 Hz. Each fingertip is a cone, yielding a well-defined contact point, and each finger consists of four joints individually controlled by geared DC motors. The fingers are joint-torque controlled at $333$Hz to achieve the desired fingertip springiness ${\mathbf{K}^\mathcal{H}_i}$. The constant virtual anchor location ${\mathbf{p}^\mathcal{H}_{ai}}$ of finger $i$ relative to the hand is the controlled location of the fingertip when it applies zero force. The rest length of the virtual spring is zero (i.e., $\mathbf{d}_{0i} = 0$), so the extension of the virtual spring is given by ${\mathbf{p}^\mathcal{H}_{ai}}- {\mathbf{p}^\mathcal{H}_{fi}}$, where ${\mathbf{p}^\mathcal{H}_{fi}}$ is the actual fingertip location, and this spring extension is turned into finger reference joint torques by the equation $$\bm{\tau}_i = \mathbf{J}^{T}_i \left[ {\mathbf{K}^\mathcal{H}_i}({\mathbf{p}^\mathcal{H}_{ai}}- {\mathbf{p}^\mathcal{H}_{fi}}) \right],$$ where $\bm \tau_i$ denotes the joint torques for finger $i$ and $\mathbf{J}_i$ denotes the finger’s Jacobian matrix. Finger joint encoder feedback is used to evaluate ${\mathbf{p}^\mathcal{H}_{fi}}$ and $\mathbf{J}_i$. 
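The fingertip impedance controller reduces to a few lines. The sketch below uses an illustrative two-link planar finger with a standard 2R Jacobian (not the Allegro hand's kinematics); the anchor offset is made up, while the stiffness values match the experiments:

```python
import numpy as np

def finger_torques(J, K, p_a, p_f):
    """Joint torques realizing the virtual fingertip spring:
    tau = J^T [ K (p_a - p_f) ], zero rest length (d_0i = 0)."""
    return J.T @ (K @ (p_a - p_f))

# Illustrative two-link planar finger: link lengths 0.05 m, joint
# angles q = (pi/2, 0), standard 2R Jacobian for tip (x, y).
l1 = l2 = 0.05
q1, q2 = np.pi / 2, 0.0
J = np.array([[-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
              [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)]])
K = np.diag([150.0, 100.0])      # N/m, matching the experimental stiffnesses
p_f = np.array([0.0, 0.10])      # fingertip position at this configuration
p_a = np.array([0.01, 0.08])     # virtual anchor (made up for the example)

tau = finger_torques(J, K, p_a, p_f)
```

In the real controller this computation runs at the 333 Hz joint-torque rate, with $\mathbf{p}^\mathcal{H}_{fi}$ and $\mathbf{J}_i$ evaluated from joint encoder feedback.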
Parameter Identification {#sec:param_iden} ------------------------ To test our controlled finger stiffnesses ${\mathbf{K}^\mathcal{H}_1}$ and ${\mathbf{K}^\mathcal{H}_2}$, and to verify our estimate of friction $\mu$ between the fingertips and the object, we collected data from experiments where we manually configured the initial grasp of the object (similar to what is shown in Figure \[fig:2Dconfig\]) and commanded the hand to move in the $-y$-direction for $0.15$m. Using the forward contact mechanics from Section \[subsec:contact\_mechanics\_forward\], and using an SQP solver to adjust our estimates of ${\mathbf{K}^\mathcal{H}_1}$, ${\mathbf{K}^\mathcal{H}_2}$, and $\mu$ to minimize the sum of the absolute errors between simulated results and 5000 experimentally-measured finger contact positions, we found good agreement between our controlled finger stiffnesses and the experimentally-estimated finger stiffnesses (see Table \[table:param\_fitting\]). Figure \[2D\_fitting\] shows a comparison between experimental results and simulated results with the fitted friction coefficient and finger stiffnesses. 
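The outer loop of this identification can be sketched as follows. The forward contact mechanics are replaced here by a synthetic stand-in model so the example is self-contained, and SciPy's SLSQP (an SQP implementation) stands in for our actual solver; none of this is our identification code:

```python
import numpy as np
from scipy.optimize import minimize

def simulate_contacts(params, travel):
    """Stand-in for the forward contact mechanics: a synthetic linear model
    mapping hand travel to two contact-position coordinates (illustrative)."""
    mu, k1, k2 = params
    return np.column_stack([mu * travel, 1e-4 * (k1 * travel + k2)])

# Synthetic "measurements" generated from known ground-truth parameters
true_params = np.array([0.25, 150.0, 100.0])
travel = np.linspace(0.0, 0.15, 50)          # commanded -y hand travel (m)
measured = simulate_contacts(true_params, travel)

def cost(params):
    # Sum of absolute errors between simulated and measured contact positions
    return np.abs(simulate_contacts(params, travel) - measured).sum()

# SciPy's SLSQP serves as the SQP solver for this sketch
fit = minimize(cost, x0=np.array([0.24, 140.0, 90.0]), method="SLSQP")
```

Swapping `simulate_contacts` for the real forward mechanics of the previous sections recovers the structure of the fit reported in Table \[table:param\_fitting\].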
\begin{tabular}{lcc}
parameters & initial guess & estimated \\ \hline
$\mu$ & 0.24 & 0.2502 \\
${\mathbf{K}^\mathcal{H}_1}$ (N/m) & $\left[ \begin{array}{@{\,}cc@{\,}} 150 & 0 \\ 0 & 100 \end{array} \right]$ & $\left[ \begin{array}{@{\,}cc@{\,}} 152.06 & 0 \\ 0 & 101.1 \end{array} \right]$ \\
${\mathbf{K}^\mathcal{H}_2}$ (N/m) & $\left[ \begin{array}{@{\,}cc@{\,}} 150 & 0 \\ 0 & 100 \end{array} \right]$ & $\left[ \begin{array}{@{\,}cc@{\,}} 150.23 & 0 \\ 0 & 105.94 \end{array} \right]$
\end{tabular}

Robust Regrasp Planning ----------------------- For this regrasp task—where the finger anchors are rigidly attached to the hand, there are two velocity controls for the hand, and the object is stationary—if we know the sliding directions of each fingertip (downward or upward in this example), there exist unique one-to-one mappings between the hand configuration ${\mathbf{p}_h}$, the anchor positions ${\mathbf{p}_{a}}$, the fingertip positions ${\mathbf{p}_{f}}$, and the contact forces ${\mathbf{f}_c}$. To see this, we start by writing the mapping of anchor positions from ${\mathcal{H}}$ to ${\mathcal{W}}$ as $${\mathbf{p}_{ai}}= {\mathbf{p}_h}+ {\mathbf{R}_h}{\mathbf{p}^\mathcal{H}_{ai}}, \label{eq:pa_H2W}$$ where ${\mathbf{R}_h}$ is the rotation matrix of ${\mathcal{H}}$. Based on the previous assumptions, ${\mathbf{R}_h}$ and ${\mathbf{p}^\mathcal{H}_{ai}}$ are fixed. When the fingertips slide on the object, each ${\mathbf{f}_{ci}}$ is along an edge of the fingertip’s friction cone into the object. We denote ${{}^{\perp}\mathbf {\hat f}_{ci}}$ as the direction perpendicular to the contact force ${\mathbf{f}_{ci}}$, so $${{}^{\perp}\mathbf {\hat f}_{ci}}\cdot {\mathbf{f}_{ci}}= 0 \rightarrow {{}^{\perp}\mathbf {\hat f}_{ci}}^{T}{\mathbf{f}_{ci}}= 0.
\label{eq:fcDot}$$ Given a fingertip contact position, the direction ${{}^{\perp}\mathbf {\hat f}_{ci}}$ can be obtained from the object geometry, contact friction, and the sliding direction. Substituting Equations  and  into , we can solve for the hand position given a pair of finger contact positions $\{{\mathbf{p}_{f1}},\,{\mathbf{p}_{f2}}\}$ as $${\mathbf{p}_h}= \left[ \begin{matrix} {{}^{\perp}\mathbf {\hat f}_{c1}}^{T}{\mathbf{K}_1}\vspace{0.05in} \\ {{}^{\perp}\mathbf {\hat f}_{c2}}^{T}{\mathbf{K}_2}\end{matrix} \right]^{-1} \left[ \begin{matrix} {\Delta}_1 \vspace{0.05in} \\ {\Delta}_2 \end{matrix} \right], \label{eq:ph_2D}$$ where ${\Delta}_i = {{}^{\perp}\mathbf {\hat f}_{ci}}^{T}{\mathbf{K}_i}({\mathbf{p}_{fi}}- {\mathbf{R}_h}{\mathbf{p}^\mathcal{H}_{ai}})$. Knowing ${\mathbf{p}_h}$, the fingertip contact forces can be solved using Equation . Combined with Equations  and , we can test if the fingertip contact wrenches can be balanced by the external contacts. ### Finger Contact Position Map {#subsubsec:FCmap} For the given object, the fingertip contact positions can be parametrized by their $y$-positions in the object frame $\mathcal{B}$. Figure \[fig:feasi\_map\] shows the two-dimensional finger contact position map ([FCmap]{}), with axes defined by $y^{{\mathcal{B}}}_{f1}$ and $y^{{\mathcal{B}}}_{f2}$, when both fingers slide downward on the object. For each point $(y^{{\mathcal{B}}}_{f1}, y^{{\mathcal{B}}}_{f2})$ on the FCmap, we can uniquely calculate ${\mathbf{p}_{f}}$, ${\mathbf{p}_{a}}$, ${\mathbf{f}_c}$, and ${\mathbf{p}_h}$, as described above. Based on Equations  and , we can test if the fingertip forces can be balanced by the external contacts with a linear program: $$\underset{{\bar{\bm{\beta}}}}{\text{min}} ~ \mathbf{1}^{T}\bar{\bm{\beta}} , ~~ \text{subject to} \begin{cases} {\mathbf{W}}\bar{\bm\beta} = -\bar{{\mathbf{w}_c}} - {\mathbf{w}_g}\\ \bar{\bm\beta} \geq \mathbf{0}^{p \times 1} \end{cases}\hspace{-0.15in}.
\label{eq:linprog}$$ If a solution $\bar{\bm \beta}$ is found, the fingertip contact locations can satisfy the wrench-balance constraint. In Figure \[fig:feasi\_map\], feasible contact point positions are colored green. Figure \[fig:feasi\_map\] also shows an example regrasp task, where ${\bm{\mathsf{S}}}$ corresponds to the initial fingertip configuration and ${\bm{\mathsf{G}}}$ corresponds to the goal fingertip configuration. The regrasp is achievable by fingertips always sliding in the downward direction if and only if ${\bm{\mathsf{S}}}$ and ${\bm{\mathsf{G}}}$ are in the same green connected component. ### Planning Algorithm Sliding regrasp motion planning is divided into two phases: Phase 1 ($t \in [0,T_1]$), where the fingertips stick to the object and the anchors are repositioned to bring contact forces to the boundaries of the friction cone, and Phase 2 ($t \in [T_1,T_2]$), where the fingertips slide on the object to the desired new configuration ${\bm{\mathsf{G}}}$ in the FCmap. An optional Phase 3 would reposition the anchors again to move the contact forces away from the boundaries of the friction cones. [***Phase 1, anchor repositioning:***]{} The hand trajectory ${\mathbf{p}_h}(t), t \in [0,T_1]$, and therefore the anchor trajectories, is chosen to be a cubic polynomial of time. This polynomial is uniquely defined by the duration $T_1$, the initial and final velocities ${\dot{\mathbf{p}}_h}(0) = {\dot{\mathbf{p}}_h}(T_1) =\mathbf{0}$, the initial configuration ${\mathbf{p}_h}(0) = \mathbf{p}_{h0}$, and the final configuration at the point ${\bm{\mathsf{S}}}$ on the FCmap. The point ${\bm{\mathsf{S}}}$ is defined by the fingers’ initial contact locations and the fact that the fingers will slide downward, as described above. 
${\bm{\mathsf{S}}}$ is the unique point of intersection between the space of anchor positions that cause no sliding when the fingertips are at their initial configuration and the FCmap, in which the fingers slide downward on the object. During Phase 1 the hand translates along a straight line with a quadratic velocity profile beginning and ending at rest. Fingertip forces are guaranteed to remain within their respective friction cones during the straight-line motions of the anchors due to the convexity of the friction cones. Figure \[fig:2stageplan\] gives a conceptual representation of the hand’s motion during Phase 1, which ends when the anchors have moved so that the grasp configuration is at ${\bm{\mathsf{S}}}$, which lies in both the FCmap and the space of anchor configurations that do not cause sliding at the fingertips. ![In Phase 1 of the sliding regrasp, the anchors move but the fingertips remain stationary. At the transition to Phase 2, at the point ${\bm{\mathsf{S}}}$, the contact forces have moved to the boundary of their friction cones, and the fingertips begin to slide. Phase 2 is plotted in the FCmap corresponding to both fingers sliding downward on the object. The fingertips follow the curve of placements $\bm \xi^*$ that maximize $\varepsilon$-robustness (in red) for most of the plan. The full regrasp plan consists of the hand trajectory ${\mathbf{p}_h}(t)$, $t \in [0,T_2]$, that uniquely corresponds to the curve in black.[]{data-label="fig:2stageplan"}](2StagePlan.pdf) [***Phase 2, sliding regrasp:***]{} Since the fingertip contact positions can be described by the coordinates $(y^{{\mathcal{B}}}_{f1}, y^{{\mathcal{B}}}_{f2})$, we use $\bm\xi(t) = [y^{{\mathcal{B}}}_{f1}(t),\; y^{{\mathcal{B}}}_{f2}(t) ]^{T}, t \in [T_1,T_2]$, to represent sliding trajectories. To accomplish the desired regrasp we have $\bm\xi(T_1) = {\bm{\mathsf{S}}}$ and $\bm\xi(T_2) = {\bm{\mathsf{G}}}$.
A sliding trajectory $\bm\xi(t)$ is feasible if it always lies in the feasible region of the FCmap. Based on the findings in Section \[sec:robustness\], the further the required external contact wrench $\bar{\mathbf{w}}_e$ is from the boundaries of ${\mathcal{WC}_{e}}$, the more robust a fingertip configuration is. For this task, given the contact position of one finger, there is an optimally robust contact position of the other finger. The union of these most robust fingertip position pairs is a curve in the FCmap, denoted $\bm\xi^*$. To describe how far a wrench ${\mathbf{w}_e}$ is from the faces of the wrench cone ${\mathcal{WC}_{e}}$, we define a matrix ${{}^{\perp}\mathbf{W}}$ whose rows are unit vectors normal to the faces of ${\mathcal{WC}_{e}}$ and pointing into the cone. The curve $\bm\xi^*$ is found by the following procedure: $$\forall \,y^{{\mathcal{B}}}_{f1},~\text{find} ~ y^{{\mathcal{B}}*}_{f2} ~\text{maximizing } d \text{ such that } {{}^{\perp}\mathbf{W}}\bar{\mathbf{w}}_e \geq d\,\mathbf{1},$$ where $\bar{\mathbf{w}}_e$ is the total expected external contact wrench. The solved $\bm\xi^*$ is shown as the red curve in Figure \[fig:2stageplan\], consisting of points calculated at 1 mm increments in $y_{f1}^{\mathcal{B}}$. The entire FCmap as shown in Figures \[fig:feasi\_map\] and \[fig:2stageplan\] is not explicitly computed during planning; it is only shown to help visualize the planning space and to illustrate the notion of robustness. To maximize robustness, the principle of our planning algorithm is to plan $\bm\xi(t)$ to coincide with $\bm\xi^*$ as much as possible while satisfying the desired final regrasp.
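The construction of $\bm\xi^*$ is a one-dimensional maximin scan over candidate partner positions. A self-contained sketch follows; the face-normal matrix ${{}^{\perp}\mathbf{W}}$ and the wrench model are stand-ins invented so the example runs on its own (the toy margin peaks at $y_{f2} = y_{f1}$, which is not the real object geometry):

```python
import numpy as np

def margin(W_perp, w_e_bar):
    """Margin d of a wrench from the faces of WC_e: min over rows of perp-W w_e."""
    return np.min(W_perp @ w_e_bar)

def most_robust_partner(y1, y2_grid, w_e_model, W_perp):
    """For a given y1, the y2 maximizing the margin d (one point of xi*)."""
    d = [margin(W_perp, w_e_model(y1, y2)) for y2 in y2_grid]
    return y2_grid[int(np.argmax(d))]

# Stand-ins so the sketch is self-contained: an identity face-normal matrix
# and a toy wrench model whose margin peaks at y2 = y1.
W_perp = np.eye(3)
def w_e_model(y1, y2):
    return np.array([1.0 - (y1 - y2) ** 2, 1.0, 1.0])

y2_grid = np.linspace(0.0, 0.2, 201)          # 1 mm increments, as in the paper
xi_star = [(y1, most_robust_partner(y1, y2_grid, w_e_model, W_perp))
           for y1 in np.linspace(0.0, 0.2, 21)]
```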
By introducing a point ${\bm{\mathsf{S}}}'$ where $\bm\xi(t)$ reaches $\bm\xi^*$ from ${\bm{\mathsf{S}}}$, and a point ${\bm{\mathsf{G}}}'$ where $\bm\xi(t)$ departs $\bm\xi^*$ to go to ${\bm{\mathsf{G}}}$, the sliding trajectory $\bm\xi(t)$ is defined by three pieces: - *1st piece*(${\bm{\mathsf{S}}} \rightarrow {\bm{\mathsf{S}}}'$, $T_1 \leq t \leq T_{21} = T_1 + \Delta T_{21}$): The contact sliding trajectories $\bm\xi(t)$ are cubic time polynomials of duration $\Delta T_{21}$, solved uniquely by the four boundary conditions $\bm\xi(T_1) = {\bm{\mathsf{S}}}$, $\bm\xi(T_{21}) = {\bm{\mathsf{S}}}'$, $\dot{\bm\xi}(T_1) = \mathbf{0}$, and $\dot{\bm\xi}(T_{21}) = \mathbf{v}_{s}$, where $\mathbf{v}_s$ is determined by the initial velocity of the next piece. - *2nd piece*(${\bm{\mathsf{S}}}' \rightarrow {\bm{\mathsf{G}}}'$, $T_{21} \leq t \leq T_{22} = T_{21} + \Delta T_{22}$): The contacts slide along $\bm\xi^*$ for a duration $\Delta T_{22}$. The sliding velocities are assumed to have a constant magnitude $\|\dot{\bm\xi}\| = v_2 = L_2/\Delta T_{22}$, where $L_2$ is the arclength of $\bm\xi^*$ between ${\bm{\mathsf{S}}}'$ and ${\bm{\mathsf{G}}}'$. The initial and final velocities are $\mathbf{v}_s = v_2 \, \hat{\partial \bm\xi^*} |_{{\bm{\mathsf{S}}}'} $ and $\mathbf{v}_g = v_2 \, \hat{\partial \bm\xi^*} |_{{\bm{\mathsf{G}}}'} $, where $\hat{\partial \bm\xi^*} |_{{\bm{\mathsf{X}}}}$ is the normalized tangent vector at point ${{\bm{\mathsf{X}}}}$. - *3rd piece*(${\bm{\mathsf{G}}}' \rightarrow {\bm{\mathsf{G}}}$, $T_{22} \leq t \leq T_2 = T_{22} + \Delta T_{23}$): The contacts slide from ${\bm{\mathsf{G}}}'$ to ${\bm{\mathsf{G}}}$ following cubic time polynomials of duration $\Delta T_{23}$, solved uniquely by the four boundary conditions $\bm\xi(T_{22}) = {\bm{\mathsf{G}}}'$, $\bm\xi(T_{2}) = {\bm{\mathsf{G}}}$, $\dot{\bm\xi}(T_{22}) = \mathbf{v}_g$, and $\dot{\bm\xi}(T_{2}) = \mathbf{0}$. 
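The boundary-value cubics of the 1st and 3rd pieces are fixed by a small linear system. The sketch below solves one coordinate per call, with times measured from the start of the piece; the endpoint and velocity values used in the usage example are illustrative, not taken from the experiment.

```python
import numpy as np

# Solve a cubic xi(t) = a0 + a1 t + a2 t^2 + a3 t^3 from the four
# boundary conditions used for the 1st and 3rd pieces: positions and
# velocities at both ends of a piece of duration dT.
def cubic_piece(p0, p1, v0, v1, dT):
    A = np.array([[1.0, 0.0, 0.0,      0.0],
                  [0.0, 1.0, 0.0,      0.0],
                  [1.0, dT,  dT**2,    dT**3],
                  [0.0, 1.0, 2.0*dT,   3.0*dT**2]])
    return np.linalg.solve(A, [p0, v0, p1, v1])

def eval_cubic(a, t):
    pos = a[0] + a[1]*t + a[2]*t**2 + a[3]*t**3
    vel = a[1] + 2*a[2]*t + 3*a[3]*t**2
    return pos, vel

# e.g. a 1st-piece coordinate from 0.168 m to a (hypothetical) via point
# at 0.150 m over 3 s, starting at rest and ending at -0.005 m/s:
a = cubic_piece(0.168, 0.150, 0.0, -0.005, 3.0)
```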
The design variables for Phase 2 are the via points ${\bm{\mathsf{S}}}'$ and ${\bm{\mathsf{G}}}'$ on $\bm \xi^*$ and the durations $\Delta T_{21}$, $\Delta T_{22}$, and $\Delta T_{23}$. The objective function can be expressed as maximizing a function of robustness (e.g., how much the planned sliding trajectory coincides with $\bm \xi^*$) while penalizing large sliding velocities. One formulation of the motion planning problem is the following nonlinear program: $$\begin{aligned} \textbf{find} & \quad {\bm{\mathsf{S}}}', {\bm{\mathsf{G}}}', \Delta T_{21}, \Delta T_{22},\Delta T_{23} \nonumber \\ \textbf{maximizing} & \quad L_2(\bm\xi^*, {\bm{\mathsf{S}}}', {\bm{\mathsf{G}}}') - \kappa V_{\text{max}} \nonumber \\ \textbf{such that} & \quad \text{1) } \texttt{sgn}(\dot{\bm \xi}) = \texttt{sgn}({\bm{\mathsf{G}}}-{\bm{\mathsf{S}}}) \nonumber \\ & \quad \text{2) } \Delta T_{21} + \Delta T_{22} + \Delta T_{23} = T_2 - T_1, \nonumber\end{aligned}$$ where $\kappa$ is a positive weighting scalar and $V_{\text{max}} = \text{max}_t(|\dot{y}_{f1}^{\mathcal{B}}(t)| + |\dot{y}_{f2}^{\mathcal{B}}(t)|)$. The first constraint ensures that the sliding directions are always towards the goal, as assumed in Section \[subsubsec:FCmap\]. ### Experimental Results We defined a sliding regrasp task by ${\bm{\mathsf{S}}} = [0.168~\text{m}, 0.169~\text{m}]^{T}$ and ${\bm{\mathsf{G}}} = [0.055~\text{m}, 0.035~\text{m}]^{T}$, where the initial configuration of the hand is such that the fingertip contact forces are in the interior of the friction cone. Given $T_1 = 5~\text{s}$, $T_2 = 20~\text{s}$, and $\kappa = 0.5$, and using MATLAB’s [fmincon]{}, we find the Phase 2 sliding regrasp plan shown as the black curve in Figure \[fig:2stageplan\]. As expected, the curve $\bm \xi(t)$ coincides with the optimally robust curve $\bm\xi^*$ for much of the Phase 2 portion of the plan, to maximize robustness to force disturbances. 
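A simplified version of this nonlinear program can be prototyped with SciPy's SLSQP in place of MATLAB's fmincon. The sketch below keeps only the skeleton: via-point arclengths $s_1, s_2$ along $\bm\xi^*$, the three durations, the duration-sum constraint, and a speed penalty $\kappa L_2/\Delta T_{22}$ standing in for $V_{\text{max}}$. The curve length, bounds, and weight are illustrative values, not those of the experiment.

```python
import numpy as np
from scipy.optimize import minimize

# Toy version of the Phase-2 program: pick arclengths s1, s2 of the via
# points S', G' along xi* and the three durations, maximizing time on
# xi* (L2 = s2 - s1) while penalizing the sliding speed v2 = L2/dT22.
L_MAX, T_TOT, KAPPA = 0.15, 15.0, 0.5   # illustrative values

def cost(x):
    s1, s2, dt1, dt2, dt3 = x
    L2 = s2 - s1
    return -(L2 - KAPPA * L2 / dt2)     # minimize the negated objective

cons = ({'type': 'eq', 'fun': lambda x: x[2] + x[3] + x[4] - T_TOT},)
bnds = [(0, L_MAX), (0, L_MAX), (1, None), (1, None), (1, None)]
res = minimize(cost, x0=[0.05, 0.10, 5.0, 5.0, 5.0],
               method='SLSQP', bounds=bnds, constraints=cons)
```

For this stripped-down objective the optimum pushes $s_2 - s_1$ to the full curve length and spends as much of the total time as possible on the $\bm\xi^*$ segment, the same qualitative behavior as the planned black curve in Figure \[fig:2stageplan\].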
The full plan, showing the repositioning of the hand (and anchors) for $5$ s in Phase 1 and the Phase 2 sliding for $15$ s, is shown in snapshots in Figure \[fig:plan\_motion\]. Experimental implementations of the plan followed the expected motions closely, indicating that the robustness-maximizing regrasp planner does indeed deliver a robust motion plan. During execution of the sliding regrasp, the hand’s motion was feedback-controlled to follow the planned hand trajectory, and the stiffnesses of the fingertips were actively controlled. The fingertips were not individually motion-controlled to try to track the planned fingertip trajectories. Figure \[fig:exp\_res\] shows a typical experimental result compared to the planned regrasp. The final fingertip positions deviated from the planned positions by $2.2~\text{mm}$ and $2.6~\text{mm}$ for fingers one and two, respectively, compared to total travel distances of $114.2~\text{mm}$ and $136.3~\text{mm}$. Discussion and Future Work {#sec:conclusion} ========================== In this paper we introduced the concept of spring-sliding compliance for in-hand sliding regrasp by pushing the grasped object against environmental constraints. Sliding provides a passive mechanical nonlinear velocity “compliance” to tangential forces, and spring compliance maintains contact normal forces as the fingertips slide over the object. Spring compliance achieves contact normal force control by motion control of physical or virtual finger anchors. We derived the finger contact forward and inverse mechanics for spring-sliding compliant contacts and formulated the $\varepsilon$-robustness condition for sliding regrasps. An experimental implementation of the theory on a two-fingered robot hand shows that spring-sliding regrasps can be automatically planned and robustly executed. 
Future work may include relaxing the point-fingertip assumption to allow fingertips of more general geometry, as well as patch contacts, which can provide friction forces resisting spin about the fingertip contact normals. This increases the complexity of the analysis and, in the most general case, would require modeling fingertip compliance as a $6 \times 6$ matrix, including three rotational freedoms. These more complex models may be justified by better robot hands that reliably control contact compliance and sense contact locations and forces. In this paper we specified the environmental contact locations and finger contact mode sequences. In future work the motion planning algorithm could be expanded to judiciously choose the environmental contacts and sequences of fingertip sticking and sliding phases to add more design freedoms. Also, while we focused on stationary contact between the object and the environment, spring-sliding regrasps could be obtained with sliding or rolling contacts with the environment, even allowing tasks that assemble the object with the environment. For sliding regrasps with moving contacts with the environment, feedback control (not considered in this paper) could be employed to stabilize plans that do not meet the restrictive definition of $\varepsilon$-robustness. Finally, learning methods could be employed to account for unmodeled effects beyond contact force uncertainty. The modeling in this paper can serve to bootstrap learning, allowing more efficient use of data obtained from experiments and learning of corrections to the model rather than learning from scratch. Compliant Grasps via Open-Loop Torque-Controlled Joints {#app:torque} ======================================================= Passively compliant grasps may arise from fingers under open-loop joint-torque control (e.g., constant torques or currents at the joints). 
As one example, assume the world frame is at the finger base and $\mathbf{p}_f$ is the fingertip position relative to the anchor. Let $\bm{\theta}$ denote the finger joint angle vector, $\bm\uptau$ denote the joint torque vector, and $\mathbf{J}(\bm\theta)$ denote the Jacobian matrix satisfying $\dot{\mathbf{p}}_f = \mathbf{J}\dot{\bm \theta}$. From finger kinematics and the principle of virtual work, we have the mapping from fingertip contact forces to the joint torques $\bm\uptau = \mathbf{J}^{T}\mathbf f_c\,$. When $\mathbf J$ is invertible, we have $$\begin{aligned} \mathbf f_c &= \mathbf{J}^{-{T}} \bm\uptau \nonumber\\ \rightarrow \partial \mathbf{f}_c &= \partial(\mathbf{J}^{-{T}}) \bm\uptau + \mathbf{J}^{-{T}} \partial\bm\uptau. \label{eq:partial_fc}\end{aligned}$$ From the definition of the Jacobian we have $$\partial \mathbf{p}_f = \mathbf{J} \partial\bm\theta. \label{eq:partial_pf}$$ Combining Equations and , we can write the finger stiffness matrix as $${\mathbf{K}}= -\frac{\partial \mathbf{f}_c}{\partial \mathbf{p}_f} = -\frac{\partial(\mathbf{J}^{-{T}})}{\partial\bm\theta} \bm\uptau \mathbf{J}^{-1} - \mathbf{J}^{-{T}} \frac{\partial \bm\uptau}{\partial\bm\theta} \mathbf{J}^{-1}. \label{eq:local_stiffness}$$ The specific expression for ${\mathbf{K}}$ depends on the Jacobian and the joint torques $\bm\uptau$. Continuing the example, assume that joint torques are independent of the finger position ($\frac{\partial\bm\uptau}{\partial \bm\theta} = \mathbf 0$) for the two-joint finger shown in Figure \[fig:finger\_springmodel\](b). Assume that the links have unit length and the joint torques have a constant value of $1$. 
Then the stiffness matrix in Equation  simplifies to $${\mathbf{K}}(\bm{\theta}) = -\frac{\partial(\mathbf{J}^{-{T}})}{\partial\bm\theta} \left[\begin{array}{c}1\\1\end{array}\right] \mathbf{J}^{-1} = \left[ \begin{array}{cc} k_{11} & k_{12}\\ k_{21} & k_{22} \end{array}\right], \label{eq:2R_K}$$ where $$\begin{aligned} k_{11} = &\frac{1}{4} \csc ^3\theta _2 \left(\cos \left(2 \theta _1-\theta _2\right)+ \right. \\ &2 \left(\cos \theta _2+ \cos \left(2 \theta _1+2\theta _2\right)+1\right) \left.+\cos \left(2 \theta _1+\theta _2\right)\right), \\ k_{12} = &~k_{21} = \frac{1}{4} \left(\sin \left(2 \theta _1-\theta _2\right)+2 \sin \left(2 \theta _1+2\theta _2\right) + \right. \\ &\left. \sin \left(2 \theta _1+\theta _2\right)\right) \csc ^3\theta_2, \\ k_{22} = &-\frac{1}{4} \csc ^3\theta _2 \left(\cos \left(2 \theta _1-\theta _2\right)- \right. \\ &\left. 2 ( \cos \theta_2 - \cos \left(2 \theta _1+2\theta _2\right) + 1) +\cos \left(2 \theta _1+\theta _2\right)\right) .\end{aligned}$$ The eigenvalues of the stiffness matrix ${\mathbf{K}}$ are $$\begin{gathered} \uplambda_1 = \frac{1}{2} \csc ^3\theta _2 \left(1 + \cos \theta_2 - \sqrt{1 + \cos \left(3 \theta _2\right) + \cos \theta_2 + \cos^2\theta _2}\right) , \\ \uplambda_2 = \frac{1}{2} \csc ^3\theta _2 \left(1 + \cos \theta_2 + \sqrt{1 + \cos \left(3 \theta _2\right) + \cos \theta_2 + \cos^2\theta _2}\right).\end{gathered}$$ The eigenvalues are only related to $\theta_2$ since $\theta_1$ only changes the finger’s orientation relative to the base. The stiffness matrix ${\mathbf{K}}$ is symmetric and the two eigenvalues must both be positive to satisfy the assumption of positive-definite stiffness. We plot the eigenvalues with respect to $\theta_2$ in Figure \[fig:2R\_eigenvalues\] (Top). Figure \[fig:2R\_eigenvalues\] (Bottom) shows the finger configuration and stiffness for four values of $\theta_2$. 
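These closed-form results can be checked numerically. Since the constant joint torques derive from the potential $U = -\bm\uptau^{T}\bm\theta$, the fingertip force $\mathbf f_c = \mathbf{J}^{-T}\bm\uptau$ is a gradient field and ${\mathbf{K}}$ must come out symmetric. A finite-difference sketch (unit links and unit torques, as above):

```python
import numpy as np

# Numerical check of the 2R-finger stiffness: unit links, constant unit
# joint torques. f_c = J^{-T} tau, and K = -(d f_c / d theta) J^{-1}.
def jacobian(th):
    t1, t12 = th[0], th[0] + th[1]
    return np.array([[-np.sin(t1) - np.sin(t12), -np.sin(t12)],
                     [ np.cos(t1) + np.cos(t12),  np.cos(t12)]])

def fingertip_force(th, tau=np.array([1.0, 1.0])):
    return np.linalg.solve(jacobian(th).T, tau)   # f_c = J^{-T} tau

def stiffness(th, h=1e-6):
    # central finite differences of f_c with respect to theta
    dfdth = np.column_stack([
        (fingertip_force(th + h * e) - fingertip_force(th - h * e)) / (2 * h)
        for e in np.eye(2)])
    return -dfdth @ np.linalg.inv(jacobian(th))
```

At $\theta_2 = \pi/4$ the numerical eigenvalues are $\approx 0.682$ and $4.146$ for any $\theta_1$, matching the expressions above, and at $\theta_2 = \pi/2$ the smaller eigenvalue reaches zero, the boundary of positive definiteness.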
The finger configuration should satisfy $0<\theta_2<\frac{\pi}{2}$ to satisfy the positive-definite stiffness assumption of this paper. In cases ****, ****, and ****, the stiffness matrix is not positive definite, which may lead to “runaway” sliding where the quasistatic condition is violated. As an example, Figure \[fig:2R\_ill\_sliding\] shows case **** of Figure \[fig:2R\_eigenvalues\]. Since $\uptau_1=\uptau_2$, the fingertip force is always aligned with the first link of the finger. For the friction cone shown, the contact force with the stationary object is initially on the edge of the friction cone and the finger is force balanced. If the contact location on the object is perturbed by $\partial \mathbf{p}_f$, as shown, the change $\partial\mathbf{f}_c$ in the fingertip force generated by the joint torques causes the total force to move outside the friction cone, meaning friction forces applied by the object to the finger can no longer completely balance the finger force. The fingertip will accelerate in the sliding direction and the motion of the fingertip must be solved for using dynamics; the quasistatic equilibrium assumption is violated. Conditions where the quasistatic assumption is violated are studied further in Section \[subsec:sliding\_ill\_condition\]. ![An unstable sliding example for case **** in Figure \[fig:2R\_eigenvalues\]: since $\uptau_1=\uptau_2$ the fingertip force is always aligned with the first link. For a fingertip displacement $\partial \mathbf{p}_f$ shown as the blue vector, the force applied by the joints at the fingertip changes as shown by $\partial \mathbf{f}_c$. The green shaded area is the friction cone.[]{data-label="fig:2R_ill_sliding"}](2R_ill_sliding.pdf){width="3.in"} In summary, many models of the finger hardware and control strategy satisfy the assumptions of this paper, even certain configurations of the simple open-loop torque-controlled fingers described above. 
[^1]: Jian Shi is with Dorabot, Inc., Atlanta, GA USA (email: jian.shi@dorabot.com) [^2]: Kevin M. Lynch is with the Center for Robotics and Biosystems and Mechanical Engineering Dept., Northwestern University, Evanston, IL 60208 USA. (email: kmlynch@northwestern.edu). He is also affiliated with the Northwestern Institute on Complex Systems (NICO). [^3]: This work was supported by NSF grant IIS-1527921. We would like to thank Paul Umbanhowar, Zack Woodruff, and Nelson Rosa for their helpful suggestions and comments, and Huan Weng for his work on building the stiffness controller for the Allegro hand. [^4]: A line contact is modeled by two point contacts and a face contact is modeled by three or more points.
--- abstract: 'We study theoretically the measurement of a mechanical oscillator using a single two level system as a detector. In a recent experiment, we used a single electronic spin associated with a nitrogen vacancy center in diamond to probe the thermal motion of a magnetized cantilever at room temperature {Kolkowitz [*et al.*]{}, [*Science*]{} [**335**]{}, 1603 (2012)}. Here, we present a detailed analysis of the sensitivity limits of this technique, as well as the possibility to measure the zero point motion of the oscillator. Further, we discuss the issue of measurement backaction in sequential measurements and find that although backaction heating can occur, it does not prohibit the detection of zero point motion. Throughout the paper we focus on the experimental implementation of a nitrogen vacancy center coupled to a magnetic cantilever; however, our results are applicable to a wide class of spin-oscillator systems. Implications for preparation of nonclassical states of a mechanical oscillator are also discussed.' address: - '$^1$ Department of Physics, Harvard University, Cambridge, MA 02138, USA' - '$^2$ Institute for Quantum Optics and Quantum Information of the Austrian Academy of Science, 6020 Innsbruck, Austria' - | $^3$ Department of Physics, University of California Santa Barbara, Santa Barbara,\ CA 93106, USA - | $^4$ Departments of Physics and Applied Physics, Yale University, New Haven,\ CT 06520, USA author: - | S D Bennett$^1$, S Kolkowitz$^1$, Q P Unterreithmeier$^1$, P Rabl$^2$,\ A C Bleszynski Jayich$^3$, J G E Harris$^4$ and M D Lukin$^1$ bibliography: - 'NVcantilever1.bib' title: Measuring mechanical motion with a single spin --- Introduction ============ Recent interest in mechanical oscillators coupled to quantum systems is motivated by quantum device applications and by the goal of observing quantum behavior of macroscopic mechanical objects. 
The past decade has seen rapid progress studying mechanical oscillators coupled to quantum two-level systems such as superconducting qubits [@LaHaye:2009ja; @OConnell:2010br; @Armour:2002kta], and single electronic spins [@Rugar:2004cr], and theoretical work has explored strong mechanical coupling to collective atomic spins [@Treutlein:2007km; @Steinke:2011ig]. Recently, it was proposed that a mechanical oscillator could be strongly coupled to an individual spin qubit [@Rabl:2009fz; @Palyi:2011vla]. Experiments based on single spins coupled to mechanical systems have demonstrated scanning magnetometry [@Balasubramanian:2008ga], mechanical spin control [@Hong:2012wr], and detection of mechanical motion [@Arcizet:2011cg; @Kolkowitz:2012iw]. In parallel, pulsed spin control techniques have attracted renewed interest for decoupling a spin from low frequency noise in its environment, extending its coherence [@deLange:2010ga], while also enhancing the sensitivity of the spin for magnetometry [@Maze:2008ws; @Lange:2011jo; @Naydenov:2011eo; @Laraoui:jk]. In this paper we consider pulsed single spin measurements applied to the detection of mechanical motion at the single phonon level. We extend the analysis presented in our recent work [@Kolkowitz:2012iw], providing a detailed theoretical framework and a discussion of measurement backaction. The central concept of our measurement approach is to apply a sequence of control pulses to the spin, synchronizing its dynamics with the period of a magnetized cantilever, thereby enhancing its sensitivity to the motion. By measuring the variance of the accumulated phase imprinted on the spin by the oscillator during a measurement, we directly probe the average phonon number, despite the fact that the oscillator position is linearly coupled to the transition frequency of the spin. 
We derive the conditions for observing a single phonon using the spin as a detector, and find that these conditions coincide with that of large effective cooperativity, sufficient to perform a two-spin gate mediated by mechanical motion [@Rabl:2010gza]. Further, we consider the backaction arising from sequential measurements and show that this does not prohibit single phonon resolution. Throughout the paper, we focus on the specific spin-oscillator system of a magnetized cantilever coupled to the electronic spin associated with a nitrogen-vacancy (NV) center in diamond. For realistic experimental parameters we find that this system can reach the regime of large cooperative spin-phonon coupling, and the spin may be used to measure and manipulate mechanical motion at the quantum level. We begin by introducing the coupled system and spin control sequences, and calculate the signal due to thermal and driven motion of the oscillator. Then we derive the optimal phonon number sensitivity, and show the relation between strong cooperativity and single phonon resolution. Finally, we consider the limit of zero temperature and calculate the signal due to zero point motion, including a discussion of backaction heating for sequential measurements. Coherent sensing of mechanical motion {#sec:dd} ===================================== Model ----- We consider the setup shown schematically in , in which a magnetized cantilever is coupled to the electronic spin of a single NV center. The magnetic tip generates a field gradient at the location of the NV, and as a result its motion modulates the magnetic field seen by the spin causing Zeeman shifts of its precession frequency. 
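For a sense of scale before writing down the Hamiltonian: the coupling rate per zero-point displacement is $\lambda = g_e \mu_B G_m x_0/\hbar$ with $x_0 = \sqrt{\hbar/2m\omega_0}$ (both quantities are defined formally in the next paragraph). The numbers in the sketch below are illustrative choices, not the parameters of the experiment:

```python
import numpy as np

# Back-of-envelope estimate of the spin-oscillator coupling
# lambda = g_e * mu_B * G_m * x_0 / hbar, x_0 = sqrt(hbar / (2 m omega_0)).
# Cantilever mass, frequency, and field gradient are illustrative only.
hbar = 1.0545718e-34       # J s
mu_B = 9.274010e-24        # J/T
g_e  = 2.0
m       = 1e-12            # kg
omega_0 = 2 * np.pi * 1e6  # rad/s (1 MHz cantilever)
G_m     = 1e5              # T/m   (~1 G/nm field gradient)

x_0 = np.sqrt(hbar / (2 * m * omega_0))   # zero-point motion, m
lam = g_e * mu_B * G_m * x_0 / hbar       # coupling rate, rad/s
```

These values give $x_0 \approx 3$ fm and $\lambda/2\pi \approx 8$ Hz, i.e. a dimensionless coupling $\eta = \lambda/\omega_0 \sim 10^{-5}$, the weak-coupling regime assumed throughout.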
To lowest order in small cantilever motion, the precession frequency depends linearly on the position of the tip and is described by the Hamiltonian ($\hbar = 1$) $$\label{eq:H} \hat H = \frac{\Delta}{2} \sz + \frac{\lambda}{2} \left( \hat a + \hat a^\dagger \right) \sz + \hat H_{\rm osc},$$ where $\sz$ is the Pauli operator of the spin and $\hat a$ is the annihilation operator of the oscillator. For a spin associated with an NV center in diamond, we take $\ket{\uparrow} = \ket{m_s = 1}$ and $\ket{\downarrow} = \ket{m_s = 0}$ in the spin-1 ground state of the NV center, and safely ignore the $\ket{m_s = -1}$ state assuming it is far detuned by an applied dc magnetic field. $\Delta$ is the detuning of the microwave pulses used for spin manipulation, which plays no role in what follows and we take $\Delta = 0$ throughout the paper. The spin-oscillator coupling strength is $\lambda = g_e \mu_B G_m x_0/\hbar$, where $g_e\approx 2$ is the Landé $g$-factor, $\mu_B$ is the Bohr magneton, $G_m$ is the magnetic field gradient along the NV axis, and $x_0 = \sqrt{\hbar/ 2 m \omega_0}$ is the zero point motion of the cantilever mode of mass $m$ and frequency $\omega_0$ (we included $\hbar$ in the definitions of $\lambda$ and $x_0$ for clarity). The damped, driven oscillator is described by $$\label{eq:Hosc} \hat H_{\rm osc} = \omega_0 \hat a^\dagger \hat a + \hat H_\gamma + \hat H_{\rm dr},$$ where $$\hat H_\gamma = \sum_k g_k \left( \hat a + \hat a^\dagger \right) \left( \hat b_k + \hat b^\dagger_k \right) + \sum_k \omega_k \hat b^\dagger_k \hat b_k$$ describes dissipative coupling to a bath of oscillators $\hat b_k$, characterized by damping rate $\gamma$ and temperature $T$. Finally, $\hat H_{\rm dr}$ describes a coherent oscillator drive which we consider briefly in . Note that in we have temporarily omitted intrinsic spin decoherence due to the environment; we will include this explicitly in . ![[**(a)**]{} Schematic of the setup. 
A single spin can be used to measure mechanical motion via magnetic coupling. [**(b)**]{} Toggling sign of the interaction describing $\pi$ pulses flipping the spin. Each sequence begins and ends with $\pi/2$ pulses, and $\pi$ pulses flip the sign of the interaction at regular intervals of time $\tau$. Thin dashed line shows oscillator position, which is synchronized with pulse sequence for $\tau = \pi/\omega_0$ as shown. The total sequence time is $t = 2\tau$ for spin echo and $t = N\tau$ for CPMG. []{data-label="fig:cartoon"}](fig1a.eps "fig:"){height="4cm"} ![[**(a)**]{} Schematic of the setup. A single spin can be used to measure mechanical motion via magnetic coupling. [**(b)**]{} Toggling sign of the interaction describing $\pi$ pulses flipping the spin. Each sequence begins and ends with $\pi/2$ pulses, and $\pi$ pulses flip the sign of the interaction at regular intervals of time $\tau$. Thin dashed line shows oscillator position, which is synchronized with pulse sequence for $\tau = \pi/\omega_0$ as shown. The total sequence time is $t = 2\tau$ for spin echo and $t = N\tau$ for CPMG. []{data-label="fig:cartoon"}](fig1b.eps "fig:"){height="4cm"} Spin echo and multipulse sequences {#sec:echo} ---------------------------------- The motion of the oscillator imprints a phase on the spin as it evolves under , which can be detected using spin echo [@Armour:2002kta; @Armour:2008kg], or more generally a multiple pulse measurement. Throughout the paper we focus on Carr-Purcell-Meiboom-Gill (CPMG) type pulse sequences, consisting of equally spaced $\pi$ pulses at intervals of time $\tau$, as depicted in . After initialization in $\ket{\uparrow}$, a $\pi/2$ pulse prepares the spin in an eigenstate of $\sx$, $\ket{\psi_0} = \frac{1}{\sqrt{2}} (\ket{\uparrow} + \ket{\downarrow})$ with $\bra{\psi_0}\sx\ket{\psi_0} = 1$. 
The spin is then allowed to interact with the oscillator for time $t$, accumulating a phase, and during which time we apply a sequence of $\pi$ pulses which effectively reverse the direction of spin precession. At the end of the sequence, a final $\pi/2$ pulse converts the accumulated phase into a population in $\ket{\uparrow}$ which is then read out. By applying both initial and final $\pi/2$ rotations about the same axis, we measure the probability to find the spin in its initial state $\ket{\psi_0}$ at the end of the sequence, given by $$\label{eq:S} P(t) = \frac{1}{2} \big( 1 + \avg{\sx(t)} \big),$$ where angle brackets denote the average over spin and oscillator degrees of freedom. Our choice to measure $\sx$ probes the accumulated phase variance; this is crucial for our purpose because the average phase imprinted by an undriven fluctuating oscillator is zero. In contrast, by applying the first and final $\pi/2$ pulses about orthogonal axes one would instead measure $\sy$, which probes the average accumulated phase. The sensitivity of the spin to mechanical motion is determined by the impact of the oscillator on the spin coherence $\avg{\sx(t)}$. The key to maximizing this impact is to synchronize the spin evolution with the mechanical period using a CPMG sequence of $\pi$ pulses, increasing the accumulated phase variance and improving the sensitivity as discussed in the context of ac magnetometry [@Taylor:2008cp]. Choosing $\tau=\pi/\omega_0$ between the $\pi$ pulses, we flip the spin every half-period of the oscillator and maximize the accumulated phase variance. At the same time, these pulse sequences decouple the spin from low-frequency magnetic noise of the environment, extending the spin coherence time $T_2$ [@Lange:2011jo; @Naydenov:2011eo]. We describe the effects of the applied $\pi$ pulses using a function $f(t,\tau)$, which flips the sign of the spin-oscillator interaction at regular intervals of time $\tau$ as illustrated in . 
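That measuring $\sx$ probes the phase variance follows from Gaussian statistics: for a zero-mean Gaussian phase $\phi$, $\langle \sin\phi \rangle = 0$ while $\langle \cos\phi \rangle = e^{-\langle \phi^2 \rangle/2}$, so only the $\sx$ signal retains the fluctuating oscillator's imprint. A quick Monte Carlo check (the variance value is arbitrary):

```python
import numpy as np

# For a zero-mean Gaussian accumulated phase, <sigma_y> ~ <sin phi>
# vanishes, while <sigma_x> ~ <cos phi> = exp(-<phi^2>/2) retains the
# phase variance. Monte Carlo check with an assumed variance var = 0.8:
rng = np.random.default_rng(seed=1)
var = 0.8
phi = rng.normal(0.0, np.sqrt(var), size=1_000_000)

mean_cos = np.cos(phi).mean()   # close to exp(-var/2)
mean_sin = np.sin(phi).mean()   # close to 0
```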
In this toggling frame, the interaction Hamiltonian is $$\label{eq:Hint} \hat H_{\rm int} (t) = \frac{\lambda}{2} \sz \hat X(t) f(t,\tau),$$ where $\hat X = \hat a + \hat a^\dagger$ and $ \hat X(t) = e^{i \hat H_{\rm osc} t} \hat X e^{-i \hat H_{\rm osc} t}$. We calculate the spin coherence, $\avg{\sx(t)} = \avg{ U^\dagger(t) \sx U(t)}$, where the evolution operator is $\hat U(t) = \TT e^{-\frac{i \lambda}{2} \sz \int_0^t dt' \hat X(t') f(t',\tau)}$ and $\TT$ denotes time ordering. Since the interaction is proportional to $\sz$, it leads to pure dephasing and we obtain [@Makhlin:2004km] $$\label{eq:D} \avg{\sx(t)} = \avg{ \tilde \TT e^{-i \hat \phi/2} \TT e^{-i \hat \phi/2}}_{\rm osc},$$ where we used $\avg{\sx(0)} = 1$, the average $\avg{\cdot}_{\rm osc}$ is over oscillator degrees of freedom, $\tilde \TT$ denotes anti-time ordering, and the accumulated phase operator is $$\label{eq:phase} \hat \phi = \lambda \int_0^t dt' \hat X(t') f(t',\tau).$$ The spin coherence in can be calculated using a cumulant expansion, which is vastly simplified by noting that the full Hamiltonian in , including the oscillator drive and ohmic dissipation, is quadratic in $\hat X$. As a result, the second cumulant—which in general corresponds to a Gaussian approximation—in the present case constitutes the exact result. We use this below to calculate the coherence for both thermal and driven motion. Another consequence of the fact that $\hat H$ is quadratic in $\hat X$ is that the effect of the pulse sequence is completely characterized by its associated filter function [@Taylor:2008cp; @deSousa:2009fp], $F(\omega \tau) = \frac{\omega^2}{2} \big| \tilde f(\omega) \big|^2$ with $\tilde f(\omega) = \int dt e^{i\omega t} f(t,\tau)$. The filter function describes how two-time position correlations $\avg{\hat X(t) \hat X(t')}$ of the oscillator affect the spin coherence in the second cumulant in the expansion of . 
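Because $f(t,\tau)$ is piecewise $\pm 1$, its windowed transform can be integrated segment by segment in closed form, giving a direct numerical route to $F(\omega\tau)$ that can be checked against the standard spin-echo and CPMG expressions. A sketch:

```python
import numpy as np

# The toggling function f(t, tau) is piecewise +/-1; its windowed Fourier
# transform gives the filter function F(w tau) = (w^2 / 2) |f~(w)|^2.
# Each constant segment is integrated exactly, so no quadrature is needed.
def f_tilde(omega, edges, signs):
    # exact integral of sum_j s_j * exp(i w t) over [edges[j], edges[j+1])
    return sum(s * (np.exp(1j*omega*b) - np.exp(1j*omega*a)) / (1j*omega)
               for s, a, b in zip(signs, edges[:-1], edges[1:]))

def filter_F(omega, edges, signs):
    return 0.5 * omega**2 * abs(f_tilde(omega, edges, signs))**2

tau = 1.0
# spin echo: f = +1 on [0, tau), -1 on [tau, 2 tau)
echo = filter_F(2.5, [0.0, tau, 2*tau], [+1, -1])
# CPMG, N = 2: pi pulses at tau/2 and 3 tau/2, total time 2 tau
cpmg = filter_F(2.5, [0.0, 0.5*tau, 1.5*tau, 2*tau], [+1, -1, +1])
```

At $\omega\tau = 2.5$ the segment-wise result reproduces $8\sin^4(\omega\tau/2)$ for spin echo and $2\sin^2(N\omega\tau/2)[1-\sec(\omega\tau/2)]^2$ for CPMG with $N=2$ to machine precision.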
For the pulse sequences illustrated in , the corresponding filter functions are $$F(\omega\tau) = \cases{ 8 \sin^4(\omega\tau/2) &\quad{\rm spin echo} \\ 2 \sin^2\left(N\omega\tau / 2 \right) \left[ 1-\sec\left( \omega\tau / 2 \right)\right]^2 &\quad{\rm CPMG} }$$ Note that phase-alternated versions of CPMG, such as XY4, which vary the axis of $\pi$ pulse rotation in order to mitigate pulse errors, are also described by the above model in the limit of ideal pulses. Thermal motion {#sec:mech} --------------- As discussed above, the spin coherence in is given exactly by its second order cumulant expansion. Since the total sequence time is $t = N \tau$, the coherence depends only on the time $\tau$ between $\pi$ pulses, $$\label{eq:Dtherm} \avg{\sx(t = N\tau)} = e^{-\chi_N(\tau)},$$ where $$\chi_N(\tau) = \lambda^2 \int \frac{d\omega}{2\pi} \frac{ F(\omega \tau)}{\omega^2} \bar S_X(\omega), \label{eq:chi}$$ and $\bar S_X(\omega) = \int dt e^{i\omega t} \half \langle\{ \hat X(t), \hat X(0)\}\rangle$ is the symmetrized noise spectrum of $\hat X$. For the damped thermal oscillator described by $\hat H_{\rm osc}$ in the absence of a drive, the symmetrized spectrum is ($k_B = 1$) $$\label{eq:SX} \bar S_X(\omega) = \frac{ 2 \omega_0 \gamma \omega \coth(\omega/2T) } {(\omega^2-\omega_0^2)^2 + \gamma^2 \omega^2},$$ where $\gamma = \omega_0/Q$ is the mechanical damping rate due to coupling to the ohmic environment at temperature $T$. ![Spin coherence for CPMG sequence with $N = 8$, with undriven thermal oscillator at temperature $T = 10~\omega_0$ [**(a)**]{} and $T = 1000~\omega_0$ [**(b)**]{} and values of $Q$ shown. Solid lines show full spin coherence with collapses and revivals, and dashed lines show oscillator-induced dephasing resulting in envelope decay given by . Here we took $\lambda/\omega_0 = 0.01$ and neglected intrinsic spin decoherence, $T_1 = T_2 \rightarrow \infty$. 
[]{data-label="fig:thermal"}](fig2.eps){width="\wide"} We plot the spin coherence due to thermal motion in the classical limit $T \gg \omega_0$ in . The impact of the oscillator is greatest when the pulse sequence is synchronized with the cantilever frequency, $\tau = (2 k + 1)\pi/\omega_0$ with $k$ an integer. At times $\tau = 2k\pi/\omega_0$, the accumulated phase due to the oscillator cancels within each free precession time, so that the accumulated phase variance averages nearly to zero and the coherence revives. We stress that this structure of collapse and revival can arise from purely classical motion; it is simply a consequence of averaging the phase variance accumulated by the spin over Gaussian distributed magnetic field fluctuations with a characteristic frequency. In addition to collapses and revivals, the finite $Q$ of the cantilever also causes dephasing of the spin which leads to an exponential decay factor of the envelope as $e^{-\Gamma_\phi \tau}$. In the limit $Q \gg 1$ and $T> \omega_0$, the dephasing rate is given by $$\label{eq:Gphi} \Gamma_\phi \simeq 3 N \eta^2 \gamma \left( \nth + \half \right),$$ where $\eta = \lambda / \omega_0$ is the dimensionless coupling strength and $\bar n_{\rm th} = (e^{\omega_0 / T} - 1 )^{-1}$ is the thermal occupation number of the oscillator. We provide a derivation of in \[app:highQ\]. Increasing $Q$ not only increases the depth of the collapses in spin coherence due to the oscillator, but also decreases the overall spin dephasing resulting in more complete revivals, as shown in . We also see that increasing the temperature increases both the depth of collapse and the dephasing. Below in we use these results to calculate the lowest temperature motion that can be detected, characterized by the phonon number sensitivity at the optimal pulse timing $\tau = \pi /\omega_0$. Driven motion {#sec:driven} ------------- It is straightforward to include the effects of a classical drive through $H_{\rm dr}$ in . 
This simply adds a classical deterministic contribution to $\hat X(t)$, and we can decompose the accumulated phase in it as $\hat \phi = \phi_{\rm dr} + \hat \phi_{\rm th} $ where $$\phi_{\rm dr} = \lambda A \int dt \cos{( \omega_0 t + \theta_0 )} f(t,\tau)$$ is the classical accumulated phase due to the drive. Here, $A$ is the dimensionless amplitude of driven motion and $\theta_0$ is its phase at the start of a particular measurement. We assume that the cantilever drive is not phase-locked to the pulse sequence, so $\theta_0$ is random and uniformly distributed between 0 and $2\pi$. Using and averaging over $\theta_0$ we obtain $$\label{eq:sxBessel} \avg{\sx(t=N\tau)} = J_0[a(\tau)] e^{-\chi_N(\tau)}$$ where $J_0$ is the zeroth order Bessel function [@Lange:2011jo], $a(\tau) = \eta A \sqrt{2 F(\omega_0 \tau)}$, and $\chi_N(\tau)$ is the thermal contribution given by . For a strong drive, thermal fluctuations are unimportant and the signal is given by the Bessel function. For a weak drive, comparable to thermal motion with $\abs{A}^2 \sim \nth$, both thermal and driven contributions may be important as illustrated in and observed in experiment [@Kolkowitz:2012iw]. In we see that, unlike thermal motion (see ), driven motion can lead to dips in the spin coherence below zero. In the remainder of the paper we focus on detecting undriven thermal or quantum motion with the drive switched off. ![Spin coherence from combined thermal and driven motion for drive amplitudes shown. For a weak drive, both driven and thermal contributions are important. The dips in the spin coherence below zero arise from driven motion, described by the Bessel function in . Parameters are $\omega_0/2\pi = 1$ MHz, $T = 50\ \omega_0$, $Q = 100$. []{data-label="fig:driven"}](fig3.eps){width="\figwidth"} Phonon number sensitivity {#sec:sensitivity} ========================= In this section we discuss the sensitivity limits of the spin used as a detector of undriven mechanical motion. 
By comparing the signal from thermal motion to the relevant noise sources, we obtain the phonon number sensitivity. We then discuss the sensitivity in several limits relevant to experiments. Signal ------ The impact of an undriven thermal oscillator on the spin coherence in a spin echo or CPMG measurement sequence is described by . In addition to its coupling to the oscillator, the spin is also coupled to an environment which leads to intrinsic decoherence and degrades the signal. For an NV center, decoherence or $T_2$ processes are caused by a 1% natural abundance of $^{13}$C nuclear spins in the otherwise $^{12}$C lattice. Flip-flop processes between pairs of these nuclear spins produce low frequency magnetic noise which leads to decoherence of the form $e^{-N (\tau/T_2)^3}$ for a CPMG sequence with $N$ pulses [@Taylor:2008cp; @deSousa:2009fp]. Note that $T_2$ here refers to the decoherence time in a spin echo sequence (i.e. $N$=1), typically $\sim$ 100 $\mu$s in natural diamond and up to $\sim$ 2 ms in isotopically pure diamond [@Balasubramanian:2009jz]. An added benefit of multipulse sequences is the enhanced spin coherence time, $\tilde T_2 = N^{2/3} T_2$, due to dynamical decoupling [@deLange:2010ga]. Finally, spin-lattice relaxation due to phonon processes leads to exponential decay on a timescale $T_1$, typically $\sim$ 1 ms at room temperature and up to $\sim 200$ s at 10 K [@Jarmola:2011wf]. 
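The enhanced coherence time $\tilde T_2 = N^{2/3} T_2$ follows from holding the total time $t = N\tau$ fixed in the decay $e^{-N(\tau/T_2)^3}$: the exponent becomes $(t/T_2)^3/N^2 = (t/\tilde T_2)^3$. A few lines of Python (illustrative values only) confirm the algebra:

```python
import math

def cpmg_coherence(t_total, N, T2):
    """Intrinsic CPMG decay e^{-N (tau/T2)^3} at fixed total time t = N*tau."""
    tau = t_total / N
    return math.exp(-N * (tau / T2)**3)

def echo_equivalent(t_total, N, T2):
    """The same decay rewritten as a single echo with T2 -> N^{2/3} T2."""
    T2_eff = N**(2 / 3) * T2
    return math.exp(-(t_total / T2_eff)**3)

t, T2 = 300e-6, 100e-6  # 300 us total sequence time, T2 = 100 us
for N in (1, 4, 16, 64):
    print(N, cpmg_coherence(t, N, T2), echo_equivalent(t, N, T2))
```

At fixed total time, more pulses means less intrinsic decay, which is the dynamical-decoupling benefit quoted in the text.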
Including these intrinsic sources of spin decoherence, as well as the oscillator-induced decoherence $\Gphi$ given in , the probability to find the spin in its initial state given in is modified as $$\label{eq:Sdetect} P(t=N \tau) = \frac{1}{2} \left( 1 + e^{-N \left( \tau/T_1 + (\tau/T_2)^3\right) } e^{-\Gphi \tau} \right) - {\mathcal S}(\tau),$$ where we have isolated the coherent signal due to the oscillator, $$\label{eq:signal} {\mathcal S}(\tau) = \half e^{-N \left( \tau/T_1 + (\tau/T_2)^3\right)} e^{-\Gphi \tau} \left( 1 - e^{-\left( \chi_N(\tau) - \Gphi \tau \right)} \right).$$ Note that we have accounted for the oscillator-induced decoherence $\Gphi \tau$ which diminishes the coherent signal we are interested in. We can obtain a simple analytic expression for the signal in the limit $Q \gg 1$. In this limit the oscillator spectrum is well-approximated by Lorentzians at $\omega = \pm \omega_0$, $$\label{eq:SXapprox} \bar S_X(\omega) \simeq \frac{\gamma \left( \bar n_{\rm th} + 1/2\right)} {(\omega - \omega_0)^2 + \gamma^2/4} + \frac{\gamma \left( \bar n_{\rm th} + 1/2\right)} {(\omega + \omega_0)^2 + \gamma^2/4}.$$ Using with we obtain a compact analytic expression for $\chi$ with no further approximation, which we provide in \[app:highQ\]. We choose the pulse timing $\tau$ to maximize the impact of the oscillator motion on the spin coherence, providing optimal sensitivity. This is achieved by setting $\tau = \pi/\omega_0$, flipping the spin every half period of the oscillator and resulting in the maximum accumulated phase variance. For $N \gg1$, the filter function with $\tau = \pi/\omega_0$ is well-approximated by a Lorentzian centered at $\omega_0$ of bandwidth $b \omega_0 / N$, where $b \simeq 1.27$. Together with this yields $$\label{eq:chiLargeN} \chi_N(\pi/\omega_0) \simeq \frac{16 \eta^2 Q N} {\pi\left( 1 + b Q / N \right)} \left( \nth + \half \right),$$ and substituting this into we obtain the signal. 
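As a sketch of how the accumulated phase variance behaves, the compact expression above can be evaluated directly (pure Python; parameter values are arbitrary). It interpolates between coherent accumulation, $\chi \propto N^2$ for $N \ll bQ$, and incoherent accumulation, $\chi \propto N$ for $N \gg bQ$:

```python
import math

def chi_N(N, eta, Q, nbar, b=1.27):
    """Accumulated phase variance chi_N(tau = pi/omega0), Lorentzian-filter limit."""
    return 16 * eta**2 * Q * N / (math.pi * (1 + b * Q / N)) * (nbar + 0.5)

# Crossover: chi ~ N^2 for N << bQ (coherent), chi ~ N for N >> bQ (incoherent)
eta, Q, nbar = 0.01, 1000, 10
for N in (10, 100, 10_000, 1_000_000):
    print(N, chi_N(N, eta, Q, nbar))
```

Doubling $N$ quadruples $\chi$ in the coherent regime but only doubles it once the measurement bandwidth exceeds the oscillator linewidth.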
Sensitivity ----------- To find the sensitivity we must account for noise. We combine spin projection and photon shot noise into a single parameter $K$ so that the noise averaged over $M$ measurements is $\sigma = 1 / K\sqrt{M}$, where $M = \ttot / N\tau$ is the number of measurements of duration $N \tau$ that can be performed in a total time $\ttot$. It follows that the minimum number of phonons that we can resolve in a given time $t_{\rm tot}$ is $$\nimp = \frac{\sigma}{\abs{d {\mathcal S} / d \bar n_{\rm th}}},$$ and the corresponding phonon number sensitivity is $\sens = \nimp \sqrt{\ttot}$. Using with $\tau = \pi/\omega_0$ we obtain $$\label{eq:sens} \sens \simeq \frac{ \pi^{3/2} }{8K \eta^2 Q N} e^{ N / N_{\phi}} \left( 1 + \frac{b Q}{N} \right) \frac{1}{\sqrt{\omega_0/N}},$$ where we have expressed the total spin dephasing in terms of a single pulse number, $$\label{eq:Nphi} N_{\phi} = \left[ \frac{\pi }{\omega_0 T_1 } + \left(\frac{ \pi}{ \omega_0 T_2} \right)^3 + \frac{3\pi \eta^2}{Q} \left(\nth + \half \right) \right]^{-1},$$ which combines both intrinsic and oscillator-induced decoherence. These expressions reflect the competition between the oscillator damping rate $\gamma = \omega_0/Q$, the intrinsic decoherence times $T_1$ and $T_2$ of the spin, and the measurement bandwidth $b \omega_0 / N$. It is clear from that increasing the number of pulses increases the coherent signal due to the oscillator; however, this also leads to increased spin decoherence. As a result, the resolvable phonon number is minimized at an optimal number of pulses, $$\label{eq:Nopt} \Nopt = N_{\phi} - b Q + \sqrt{N_\phi^2 + 6 b Q N_{\phi} + \left( b Q \right)^2}.$$ Note that the optimal pulse number is always set by the spin decoherence, $\Nopt \sim N_{\phi}$, with only a prefactor of order one depending on $Q$. Neglecting pulse imperfections, the optimized sensitivity is determined by an interplay of $Q$, $T_1$ and $T_2$ in .
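A minimal numerical sketch of the last two expressions (pure Python; the parameter values are illustrative, not the paper's) evaluates $N_\phi$ and $\Nopt$ and checks that for $bQ \ll N_\phi$ the optimum approaches $2N_\phi$, consistent with $\Nopt \sim N_\phi$ up to an order-one prefactor:

```python
import math

def N_phi(omega0, T1, T2, eta, Q, nbar):
    """Dephasing pulse number N_phi combining T1, T2 and oscillator-induced terms."""
    rate = (math.pi / (omega0 * T1)
            + (math.pi / (omega0 * T2))**3
            + 3 * math.pi * eta**2 / Q * (nbar + 0.5))
    return 1.0 / rate

def N_opt(Nphi, Q, b=1.27):
    """Optimal pulse number minimizing the resolvable phonon number."""
    bQ = b * Q
    return Nphi - bQ + math.sqrt(Nphi**2 + 6 * bQ * Nphi + bQ**2)

# Example: a T2-limited spin with weak oscillator-induced dephasing
Nphi = N_phi(omega0=2 * math.pi * 1e6, T1=0.1, T2=1e-4, eta=0.001, Q=100, nbar=100)
print(Nphi, N_opt(Nphi, Q=100))
# In the limit bQ << N_phi, N_opt -> ~2*N_phi (order-one prefactor):
print(N_opt(1e6, Q=100) / 1e6)
```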
In practice, the optimal pulse number may be very large due to long spin coherence times, and pulse errors may play a role as discussed further below. ![[**(a)**]{} Phonon number sensitivity $\xi$ versus the number of pulses $N$ for values of $Q$ shown and $\lambda/\omega_0 = 0.01$. Lines show the analytic result in and points show the full numerical result using . Squares mark the sensitivity at the optimal pulse number $\Nopt$. [**(b)**]{} Sensitivity optimized with respect to $N$, versus the coupling strength $\lambda/\omega_0$ for the same values of $Q$ as in [**(a)**]{}. Squares mark the optimized sensitivity at $\lambda/\omega_0 = 0.01$, corresponding to the squares in [**(a)**]{}. The dotted lines mark a sensitivity of $\xi = 1/\sqrt{\rm Hz}$. Parameters in both plots are $\omega_0/2\pi = 1$ MHz, $T_2 = 100\ \mu$s, $T_1 = 100$ ms, $T = 4$ K and $K = 0.3$. []{data-label="fig:sens"}](fig4.eps){width="\wide"} In we plot the sensitivity as a function of pulse number $N$, and the optimized sensitivity as a function of coupling strength $\lambda$. To check the validity of the above approximations it is straightforward to calculate the phonon number sensitivity directly from . The numerically exact sensitivity is shown in in agreement with our analytic results. In the remainder of this section we discuss the sensitivity in several experimentally relevant limits. Optimal sensitivity and cooperativity ------------------------------------- An important limit for current experiments is one where the spin coherence is much longer than the oscillator coherence during the measurement, corresponding to $N_\phi>Q$. We assume that the spin coherence is dominated by intrinsic sources described by $T_1$ and $T_2$, and that the oscillator-induced spin decoherence $\Gamma_\phi$ can be neglected, well-justified in the limit of weak coupling. 
Within these limits, the optimal number of pulses is $\Nopt \sim N_\phi$ and the optimized sensitivity is $$\label{eq:sensCoop} \sens_{\rm opt} \simeq \frac{\pi^{3/2}}{8K C \sqrt{\omega_0 / N_{\phi}}},$$ where the cooperativity is $$\label{eq:C} C = \frac{\lambda^2 \tilde T_2}{\gamma},$$ and $\tilde T_2 = \Nopt^{2/3} T_2$ is the enhanced spin coherence time due to decoupling. For a large number of pulses, the enhanced spin coherence $N^{2/3}T_2$ may be very long, and ultimately the spin coherence may be limited by $T_1$, which is not suppressed by decoupling. In this case the above expressions are simply modified by $\tilde T_2 \rightarrow T_1$. The cooperativity parameter $C$ is ubiquitous in quantum optics, and marks the onset of Purcell enhancement in cavity quantum electrodynamics. In the present case, $C > 1$ is the requirement for a single phonon to strongly influence the spin coherence, leading to a measurable signal despite the relatively short coherence time of the oscillator. The condition $C>1$ to resolve a single phonon can be simply understood: if the spin coherence is much longer than the oscillator coherence, i.e. $Q \ll N_\phi$, the accumulated phase variance increases at a rate $\sim \lambda^2 / \gamma$ (see with sequence time $N \tau \sim N/\omega_0$) and the maximum interrogation time (assuming that oscillator-induced decoherence is negligible) is $\tilde T_2$. With feasible experimental parameters, $\tilde T_2\sim T_1 \sim 10$ ms, $\lambda/2\pi \sim 150$ Hz, $\omega_0/2\pi \sim 1$ MHz and $Q \sim 1000$, a cooperativity of $C \sim 1$ can be reached. In current experiments, NV centers exhibit a 30% contrast in spin-dependent fluorescence, and collection efficiencies of 5% are realistic [@Taylor:2008cp; @Robledo:2011fs]. These parameters yield $K \sim 0.3$ and an optimal phonon number sensitivity of $\sens_{\rm opt} \sim 1 / \sqrt{\rm Hz}$ with $N \sim N_\phi \sim 15000$ pulses.
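Plugging the feasible parameters quoted above into the cooperativity indeed gives $C$ of order one; a short sketch (pure Python, using the same numbers as the text):

```python
import math

def cooperativity(lam, T2_eff, omega0, Q):
    """Cooperativity C = lambda^2 * T2_eff / gamma, with gamma = omega0 / Q."""
    gamma = omega0 / Q
    return lam**2 * T2_eff / gamma

lam = 2 * math.pi * 150      # coupling strength, rad/s
omega0 = 2 * math.pi * 1e6   # oscillator frequency, rad/s
C = cooperativity(lam, T2_eff=10e-3, omega0=omega0, Q=1000)
print(f"C = {C:.2f}")        # ~1.4, consistent with C ~ 1 in the text
```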
Due to long spin coherence times $T_1$ and $T_2$, the optimal pulse number $N_\phi$ may be very large, and in practice finite pulse errors may play an important role in limiting the spin coherence. For example, if the number of pulses is limited to $N \sim 1000$, a sensitivity of $\xi \sim 3 / \sqrt{\rm Hz}$ can be reached. We discuss this further below when we calculate the signal due to zero point motion. Ideal oscillators and ideal spin qubits {#sec:idealLimits} --------------------------------------- While the cooperativity regime describes an important part of parameter space, it is useful to briefly consider two simpler limits that describe features in . First, we consider a harmonic oscillator that remains coherent for a much longer time than the entire pulse sequence, satisfying $Q \gg N$. In this limit, the long oscillator coherence time plays no role and the optimal sensitivity is limited only by the spin coherence, $\sens_{\rm opt} \sim 1 / (K \lambda^2 \tilde T_2^2 \sqrt{\omega_0 / N})$. This limit can be seen on the left side of a, where the sensitivities for different values of $Q$ fall on the same curve at low pulse numbers $N$. Finally, we consider the limit of very strong but incoherent coupling, where the spin decoherence is dominated by the oscillator, i.e. $\Gphi$ becomes larger than $1/T_1$ and $1/T_2$. This limit is reached either when the intrinsic spin decoherence is negligible or for very strong coupling, $\eta^2 \nth \gg Q / (\omega_0 T_2)^3, Q / \omega_0 T_1$. In this limit, the coherent signal is large due to strong coupling, but saturates at a low number of pulses; further increasing the coupling strength only increases the oscillator-induced decoherence, reducing the signal. This is reflected in b, where we see that increasing the coupling strength beyond $\eta^2 > 1 / \gamma \nth T_1$ no longer improves the optimized sensitivity but instead degrades it.
Detecting quantum motion {#sec:zpm} ======================== Above we found that for realistic experimental parameters, a single phonon can be resolved in one second of averaging time. This raises the intriguing question of whether a single spin can be used to sense the quantum zero point motion of an oscillator in its ground state. It also implies that we must consider the effect of measurement backaction, which we have so far ignored in our discussion. To address these questions we analyze the experimentally relevant scenario where the spin is used to detect the motion of a mechanical resonator which is externally cooled close to its ground state. Measuring a cooled oscillator ----------------------------- Even at cryogenic temperatures, a mechanical oscillator of frequency $\omega_0 / 2\pi \sim$ MHz has an equilibrium occupation number $\nth$ much larger than one. For this reason we assume that the mechanical oscillator is cooled from its equilibrium occupation $\nth$ to a much lower value $\bar n_0\sim 1$ using either optical cooling techniques [@Marquardt:2007dn] or the driven spin itself [@Rabl:2009fz; @Rabl:2010cm]. An important consequence of cooling below the environmental temperature is the effective reduction in $Q$ of the oscillator. For an oscillator coupled to both a thermal environment and an external, effective zero temperature source for cooling, the mean phonon number satisfies $$\avg{\dot n} = - (\gamma + \gamma_{\rm cool}) \avg{n} + \gamma \nth,$$ where $\gamma_{\rm cool}$ is the cooling rate. The steady state occupation number is $$\bar n_0= \avg{n}(t\rightarrow \infty) = \frac{ \gamma \nth}{\gamma +\gamma_{\rm cool}},$$ and in order to maintain $\bar n_0 < 1$ we require $\gamma_{\rm cool} > \gamma\nth$. As a result, the relevant decoherence rate of the oscillator is the rethermalization rate $\gamma \nth$. 
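The steady state of this rate equation, and the condition $\gamma_{\rm cool} > \gamma\nth$ needed for $\bar n_0 < 1$, can be sketched in a few lines (pure Python; the rates are in arbitrary units, and the explicit-Euler integrator is our own illustrative check of the closed-form steady state):

```python
def steady_state_n(gamma, gamma_cool, n_th):
    """Steady state of <n>' = -(gamma + gamma_cool)*<n> + gamma*n_th."""
    return gamma * n_th / (gamma + gamma_cool)

def evolve_n(n0, gamma, gamma_cool, n_th, dt, steps):
    """Explicit-Euler integration of the rate equation from n0."""
    n = n0
    for _ in range(steps):
        n += dt * (-(gamma + gamma_cool) * n + gamma * n_th)
    return n

gamma, n_th = 1.0, 1e4          # illustrative units
gamma_cool = 2 * gamma * n_th   # cooling rate > gamma*n_th, so n0 < 1
n_ss = steady_state_n(gamma, gamma_cool, n_th)
print(n_ss)                     # ~0.5 phonons in steady state
```

With $\gamma_{\rm cool}$ twice the rethermalization rate $\gamma\nth$, the oscillator settles at roughly half a phonon, illustrating why $\gamma\nth$ is the relevant decoherence rate.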
For this reason, to calculate the signal from a cooled oscillator we replace the equilibrium thermal occupation number $\nth$ by the effective occupation $\bar n_0 \rightarrow 0$ in all expressions, while at the same time replacing the intrinsic $Q$ by the reduced, effective quality factor $\Qeff = \omega_0/\gamma_{\rm cool}\approx Q / \nth$. Single shot readout ------------------- In we calculated the sensitivity $\xi$, which reflects the minimum detectable phonon number $n_{\rm min}$ that can be resolved in one second of averaging time. For the following discussion it is useful to convert the sensitivity to a minimum detectable phonon number per single measurement shot, $\nssr = \sens / \sqrt{(N\pi/\omega_0)}$, where we have taken the total measurement time to be $\ttot = N\tau$ and $\tau = \pi/\omega_0$. Assuming single shot spin readout ($K \rightarrow 1$), which has been demonstrated at low temperature [@Robledo:2011fs], and using we obtain $$\label{eq:SingleShotSensitivity} \nssr = \frac{\pi e^{{N/N_{\phi}}}}{8 \eta^2N\Qeff} \left( 1 + \frac{b\Qeff}{N}\right) \sim \frac{1}{C_{\rm eff}},$$ where $C_{\rm eff} = \lambda^2 \tilde T_2 / \gamma \nth$ is the reduced, effective cooperativity. We see that under the assumption $N\sim \Nopt \gg \Qeff$, the ability to resolve ground state fluctuations of a cooled oscillator within a few spin measurements requires $C_{\rm eff}>1$, which is the same strong cooperativity condition required to perform a quantum gate between two spins mediated by a mechanical oscillator [@Rabl:2010gza]. Alternatively, $\nssr$ corresponds to the occupation number required to produce a signal ${\mathcal S}$ of order one in . It provides a convenient way to directly compare the sensitivity with the backaction due to sequential measurements, as discussed below. ![[**(a)**]{} Spin coherence with $\bar n_0 \sim 0$ for increasing pulse number and $\Qeff = 100$ with $\lambda/\omega = 0.01$. 
[**(b)**]{} Optimal signal as defined in from zero point motion. Solid lines show optimal signal assuming unlimited pulse number, while dashed lines include a simple treatment of pulse errors with $N_c = 1000$ as described in the text. Parameters are $T_2 = 100$ $\mu$s, $T_1 = 100$ ms, $\omega_0/2\pi = 1$ MHz. []{data-label="fig:zpm"}](fig5.eps){width="\wide"} In we plot the calculated signal due to zero point motion, assuming that the mechanical oscillator is cooled near its ground state $\bar n_0=0$ and using the reduced quality factor $\Qeff$. These plots show that the intrinsic coherence times typical for NV centers are more than sufficient to resolve single phonons provided enough pulses can be applied to exploit the full spin coherence. In practice, the limiting factor is likely to be finite pulse errors, which limit the absolute number of pulses that can be applied before losing the spin coherence. To estimate the effect of finite pulse errors, we include the calculated signal assuming additional spin decoherence of the form $e^{-N/N_c}$ with a cutoff pulse number $N_c$. Pulse numbers of $N \sim 160$ have been demonstrated in experiment [@deLange:2010ga], and with further improvements this can be increased to more than $N \sim 1000$. Based on this we plot the modified signal using $N_c \sim 1000$ and find that even with a limited number of pulses, zero point motion results in a significant signal for realistic coupling strengths. Backaction {#sec:ba} ---------- The result that a single spin magnetometer can resolve the quantum zero point motion of a mechanical oscillator calls for a discussion of measurement backaction. We begin by noting that, despite the linear coupling of the spin to the oscillator position in , the described measurement protocol is sensitive to the *variance* of the accumulated phase $\sim \langle \hat X^2\rangle$, which we obtain by averaging independent spin measurements. 
As a result, our approach does not correspond to standard continuous position measurement [@Caves:1980jp], nor does it implement a quantum nondemolition measurement of the phonon number, since the interaction in does not commute with $\hat n$. In principle, by cooling between measurements our approach may be used to measure the phonon number with arbitrary precision. Nonetheless, the effect of the spin’s backaction on the oscillator is both a practical issue and interesting in itself, and could be used to prepare nonclassical mechanical states. We describe two possible approaches to observe the influence of measurement backaction on the oscillator. First, we consider directly probing the projective nature of the measurement. For simplicity we assume that the oscillator is initially in its ground state and decoupled from the environment, and assume single shot spin readout. In a single measurement sequence, the oscillator experiences a spin-dependent force according to . Measuring $\avg{\sx} = \pm 1$ at the end of the sequence projects the oscillator onto a superposition of coherent states [@Steinke:2011ig; @Tian:2005cu], $$\label{eq:State1} \ket{\psi_\pm}= \frac{ \ket{i\alpha} \pm \ket{-i\alpha} } {\sqrt{2\pm 2 e^{-2\alpha^2}}},$$ where $\alpha = N \lambda / 2\omega_0$ is the total displacement for a sequence of $N \gg 1$ pulses and $\tau = \pi/\omega_0$. The probabilities to measure $|\pm\rangle$ are given by $$p_\pm =\frac{1}{2}\left( 1\pm e^{-2\alpha^2}\right),$$ which shows, consistent with the discussion above, that for a measurement strength $\alpha>1$ the oscillator in its ground state can significantly affect the spin dynamics. To observe the backaction of this measurement on the oscillator we can perform a second spin measurement, which is sensitive to the state of the oscillator conditioned on the first measurement. 
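The coherent-state overlap $\langle i\alpha|-i\alpha\rangle = e^{-2\alpha^2}$ underlying both the normalization of $\ket{\psi_\pm}$ and the probabilities $p_\pm$ can be verified in a truncated Fock basis (pure Python; the truncation dimension and the value of $\alpha$ are arbitrary illustrative choices):

```python
import cmath, math

def coherent(beta, dim=60):
    """Fock-space amplitudes of the coherent state |beta>, truncated at dim."""
    amps, c = [], cmath.exp(-abs(beta)**2 / 2)
    for n in range(dim):
        amps.append(c)
        c *= beta / math.sqrt(n + 1)  # next amplitude: beta^n / sqrt(n!)
    return amps

def overlap(a, b):
    return sum(x.conjugate() * y for x, y in zip(a, b))

alpha = 1.2                        # measurement strength N*lambda/(2*omega0)
plus = coherent(1j * alpha)
minus = coherent(-1j * alpha)
ov = overlap(plus, minus)          # should equal e^{-2 alpha^2}
p_plus = 0.5 * (1 + ov.real)
p_minus = 0.5 * (1 - ov.real)
print(abs(ov), math.exp(-2 * alpha**2), p_plus + p_minus)
```

For $\alpha \gtrsim 1$ the overlap is already small, so the two measurement outcomes are nearly equiprobable and the conditional states $\ket{\psi_\pm}$ approach even and odd cat states.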
In principle, by using techniques developed in cavity quantum electrodynamics, this procedure can be used to fully reconstruct the conditionally prepared oscillator state [@Deleglise:2008gt]. Let us now consider an alternative, indirect way to observe backaction by performing many successive measurements. Again beginning with the oscillator near its ground state, the first measurement projects the oscillator into one of the states $|\psi_\pm\rangle$. By averaging over the two possible spin measurement outcomes, the resulting mixed oscillator state is $$\rho_{\rm osc} = p_+ \ket{\psi_+}\bra{\psi_+}+p_- \ket{\psi_-}\bra{\psi_-},$$ and we see that on average the oscillator energy has increased by $|\alpha|^2$. Repeating this measurement many times, without cooling between measurements, the oscillator amplitude undergoes a random walk of stepsize $\pm \alpha$, and on average the phonon number increases approximately linearly in time. This corresponds to backaction heating described by an effective diffusion rate, $$D_{\rm ba} = \frac{N \eta^2\omega_0}{4\pi}.$$ ![ Solid lines show total inferred phonon number given by from combined phonon resolution and backaction heating. Dashed lines show sensitivity and heating contributions. For each value of $\Qeff$ we set $N = \Qeff / 5$. []{data-label="fig:backaction"}](fig6.eps){width="\figwidth"} Combining the measurement backaction with intrinsic mechanical dissipation and external cooling, the average occupation number satisfies $$\avg{\dot n} = - (\gamma + \gamma_{\rm cool}) \avg{n} + \gamma \nth + D_{\rm ba},$$ and for $\gamma_{\rm cool}\gg\gamma$ the steady state phonon number added due to backaction is $$\label{eq:nadd} \nadd = \frac{D_{\rm ba}}{\gamma_{\rm cool}} = \frac{N \Qeff \eta^2}{4\pi}.$$ We see that increasing the coupling strength not only improves the single shot resolution $\nssr$, but also leads to backaction heating of the oscillator. 
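The linear-in-time heating can be illustrated with an idealized Monte Carlo (pure Python): treating each measurement as an unbiased $\pm\alpha$ kick along one quadrature, the mean phonon number after $M$ measurements is $M\alpha^2$. The 50/50 kick is a simplification of the actual outcome probabilities $p_\pm$, adopted here only to exhibit the random-walk scaling:

```python
import random

def heating_mc(M, alpha, trials=20_000, seed=2):
    """Mean phonon number after M random +/-alpha displacement kicks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        beta = 0.0
        for _ in range(M):
            beta += alpha if rng.random() < 0.5 else -alpha
        total += beta * beta  # <n> = |beta|^2 for a coherent state
    return total / trials

M, alpha = 50, 0.3
print(heating_mc(M, alpha), M * alpha**2)  # random walk gives <n> ~ M*alpha^2
```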
For sufficiently strong coupling, the steady state backaction phonon number $\nadd$ exceeds the phonon number resolution, and the inferred phonon number is determined by backaction. We thus take the sum $\nmeas = \nssr + \nadd$ as a measure of the minimum inferred phonon number. Note that for simplicity in this discussion we have assumed the limit $N\ll Q_{\rm eff}$, in which the oscillator is coherent within each measurement sequence. Within this limit we find $$\nmeas = \frac{\pi b}{8 \eta^2N^2} +\frac{N \Qeff \eta^2}{4\pi}.$$ The total inferred phonon number $\nmeas$ is shown in as a function of the coupling parameter $\eta$ and a fixed number of pulses $N= Q_{\rm eff}/5$. In this case $\nmeas$ is minimized for $\eta\sim 1/\sqrt{Q_{\rm eff}}$, where it reaches a value of $\nmeas\sim\mathcal{O}(1)$. Observing this minimum in the phonon number resolution as a function of coupling strength would provide an indirect signature of measurement backaction. This observation may be more feasible in near term experiments than directly observing projective backaction as discussed above. Summary and conclusions ======================= We have presented the sensitivity limits of a novel position sensor consisting of a single spin. For realistic experimental parameters, we predict that a single NV center in diamond can be used to resolve single phonons in a cooled, magnetized mechanical cantilever. The condition to resolve single phonons is that of strong effective cooperativity, the same condition needed to perform a quantum gate between two spins mediated by a mechanical oscillator. For even stronger coupling, the backaction of the spin on the oscillator can be probed directly or indirectly, and used to prepare nonclassical mechanical states. This work is supported by NSF, CUA, DARPA and the Packard Foundation. SDB acknowledges support from NSERC of Canada and ITAMP.
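This trade-off can be sketched numerically (pure Python). Here we take the resolution term as the $N \ll \Qeff$ limit of the single-shot resolution, $\pi b/(8\eta^2 N^2)$ with $b \simeq 1.27$, add the backaction term, and minimize over the coupling $\eta$; the minimum comes out at $\mathcal{O}(1)$ phonons, in line with the text (all parameter values are illustrative):

```python
import math

b = 1.27
def n_meas(eta, N, Qeff):
    """Resolution + backaction-heating contributions, N << Qeff limit."""
    return math.pi * b / (8 * eta**2 * N**2) + N * Qeff * eta**2 / (4 * math.pi)

N, Qeff = 100, 500   # fixed pulse number N = Qeff/5, as in the figure
etas = [10**(-3 + 0.001 * i) for i in range(3000)]       # log grid in eta
eta_best = min(etas, key=lambda e: n_meas(e, N, Qeff))
n_best = n_meas(eta_best, N, Qeff)
print(eta_best, n_best)
# Analytic minimum: n_meas >= sqrt(b*Qeff/(8*N)), at eta^4 = pi^2*b/(2*N^3*Qeff)
print(math.sqrt(b * Qeff / (8 * N)))
```

Below the optimum the resolution term dominates; above it, backaction heating takes over, which is the minimum-versus-coupling signature discussed in the text.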
SK acknowledges support by the DoD through the NDSEG Program, and the NSF through the NSFGRP under Grant No. DGE-1144152. QPU acknowledges support from Deutschen Forschungsgemeinschaft. PR acknowledges support by the Austrian Science Fund (FWF) through SFB FOQUS and the START grant Y 591-N16. Analytic signal for thermal motion in high $Q$ limit {#app:highQ} ==================================================== Here we sketch the derivation of . The impact of the oscillator on the spin coherence is given by , $$\label{eq:chiA} \chi_N(\tau) = 2\omega_0 \gamma \lambda^2 \int \frac{d\omega}{2\pi} \frac{ F(\omega \tau)}{\omega^2} \frac{ \omega \coth(\omega/2T) } {(\omega^2-\omega_0^2)^2 + \gamma^2 \omega^2}.$$ To perform this integral it is useful to decompose the filter function as $$\label{eq:filterRewrite} F(\omega\tau) = 1 - \cos(N\omega\tau) + \sum_{j=0}^{N-1} (-1)^j \bigg[ \left( 1-\cos(\omega s_j)\right) - j \left( 1 - \cos(\omega t_j)\right) \bigg],$$ where $s_j = (j + 1/2)\tau$ and $t_j = (N-j)\tau$. We first consider the high temperature limit, $T \gg \omega_0$, in which we can approximate $\coth(\omega/2T)\approx 2T/\omega$. The result is a sum of integrals of the form $$4 T \omega_0 \gamma \lambda^2 \int \frac{d\omega}{2\pi} \frac{1-\cos(\omega t)} {\omega^2\left[ (\omega^2 - \omega_0^2)^2 + \gamma^2 \omega^2 \right]} = \eta^2 (2\nth) q(t),$$ which can be done exactly. In the limit $Q \gg 1$ we obtain $$q(t) = \gamma t + \left( 1 - e^{-\gamma t/2} \cos(\omega_0 t) \right) - \frac{4 \gamma}{3 \omega_0} e^{-\gamma t/2} \sin(\omega_0 t).$$ To calculate $\Gphi$ we need the spin coherence at the revivals, given by $\chi_N(\tau = 4\pi/\omega_0)$. To first order in $\gamma t$, we have $q(t = 4\pi / \omega_0) \simeq 3 \gamma t / 2$, and to this order the only nonzero term in is due to the $1 - \cos(N \omega\tau)$ terms in . The result is . Next we derive at $\tau = \pi/\omega_0$ in the limit $N \gg 1$. 
Here we use the fact that the filter function near $\tau \simeq \pi/\omega_0$ may be rewritten for $N \gg 1$ as $F(\omega\pi/\omega_0) \simeq 2 N^2 {\rm sinc}^2 \left[ \pi N (\omega-\omega_0)/2\right]$, and in turn this function is well-approximated by its Lorentzian envelope, $$\label{eq:Fapprox} F(\omega\pi/\omega_0) \simeq \frac{ (b\omega_0)^2/2}{ (\omega-\omega_0)^2 + (b\omega_0/N)^2/4},$$ where we obtain the effective bandwidth $b\omega_0/N$ by fitting the extrema of to a Lorentzian, which yields $b \simeq 1.27$. At the collapse time $\tau = \pi/\omega_0$, we can approximate $\bar S_X(\omega)$ by the Lorentzian spectrum given in . Using these approximations, the integrand for $\chi(\pi/\omega_0)$ is simply the product of two Lorentzians, and performing the integration yields . References {#references .unnumbered} ==========
--- abstract: | We propose a simple yet effective instance segmentation framework, termed CondInst (conditional convolutions for instance segmentation). Top-performing instance segmentation methods such as Mask R-CNN rely on ROI operations (typically ROIPool or ROIAlign) to obtain the final instance masks. In contrast, we propose to solve instance segmentation from a new perspective. Instead of using instance-wise ROIs as inputs to a network of fixed weights, we employ dynamic instance-aware networks, conditioned on instances. CondInst enjoys two advantages: 1) Instance segmentation is solved by a fully convolutional network, eliminating the need for ROI cropping and feature alignment. 2) Due to the much improved capacity of dynamically-generated conditional convolutions, the mask head can be very compact (e.g., 3 conv. layers, each having only 8 channels), leading to significantly faster inference. We demonstrate a simpler instance segmentation method that can achieve improved performance in both accuracy and inference speed. On the COCO dataset, we outperform a few recent methods, including well-tuned Mask R-CNN baselines, without requiring longer training schedules. Code is available: [https://git.io/AdelaiDet]{} **Keywords**: [Conditional convolutions, instance segmentation]{} author: - | Zhi Tian           Chunhua Shen[^1]           Hao Chen\ The University of Adelaide, Australia bibliography: - 'CondInst.bib' title: '**Conditional Convolutions for Instance Segmentation**' --- ![ CondInst makes use of instance-aware mask heads to predict the masks for each instance. $K$ is the number of instances to be predicted. The filters in the mask head vary with different instances; they are dynamically generated and conditioned on the target instance. For the conv. layers in the mask head other than the last one,  is used as the activation function and no normalization layer such as batch normalization [@ioffe2015batch] is used here. The last conv.
layer uses  to predict the probability of being mask foreground.[]{data-label="fig:mask_heads"}](figures/mask_heads.pdf){width="\linewidth"} ![image](figures/vis_results.pdf){width=".9\linewidth"} Introduction ============ Instance segmentation is a fundamental yet challenging task in computer vision, which requires an algorithm to predict a per-pixel mask with a category label for each instance of interest in an image. Despite a few works being proposed recently, the dominant framework for instance segmentation is still the two-stage method Mask R-CNN [@he2017mask], which casts instance segmentation into a two-stage detection-and-segmentation task. Mask R-CNN first employs an object detector, Faster R-CNN, to predict a bounding-box for each instance. Then for each instance, regions-of-interest (ROIs) are cropped from the network’s feature maps using the ROIAlign operation. To predict the final masks for each instance, a compact fully convolutional network (FCN) (i.e., the mask head) is applied to these ROIs to perform foreground/background segmentation. However, this ROI-based method may have the following drawbacks. 1) Since ROIs are often axis-aligned bounding-boxes, for objects with irregular shapes, they may contain an excessive amount of irrelevant image content, including background and other instances. This issue may be mitigated by using rotated ROIs, but at the price of a more complex pipeline. 2) In order to distinguish between the foreground instance and the background stuff or instance(s), the mask head requires a relatively large receptive field to encode sufficiently large context information. As a result, a stack of $3 \times 3$ convolutions is needed in the mask head (e.g., four $3 \times 3$ convolutions with $256$ channels in Mask R-CNN). This considerably increases the computational complexity of the mask head, with the result that the inference time varies significantly with the number of instances. 3) ROIs are typically of different sizes.
In order to use effective batched computation in modern deep learning frameworks [@pytorch; @tensorflow], a resizing operation is often required to resize the cropped regions into patches of the same size. For instance, Mask R-CNN resizes all the cropped regions to $14 \times 14$ (upsampled to $28 \times 28$ using a deconvolution), which restricts the output resolution of instance segmentation, as large instances would require higher resolutions to retain details at the boundary. In computer vision, the closest task to instance segmentation is semantic segmentation, for which fully convolutional networks (FCNs) have shown dramatic success [@long2015fully; @chen2017deeplab]. FCNs have also shown excellent performance on many other per-pixel prediction tasks, ranging from low-level image processing such as denoising and super-resolution; to mid-level tasks such as optical flow estimation and contour detection; and high-level tasks including recent single-shot object detection [@tian2019fcos], monocular depth estimation [@Depth2015Liu] and counting [@boominathan2016crowdnet]. However, almost all the instance segmentation methods based on FCNs[^2] lag behind state-of-the-art ROI-based methods. [Why do the versatile FCNs perform unsatisfactorily on instance segmentation?]{} We observe that the major difficulty of applying FCNs to instance segmentation is that similar image appearance may require different predictions, which FCNs struggle to achieve. For example, if two persons A and B with similar appearance are in an input image, when predicting the instance mask of A, the FCN needs to predict B as background w.r.t. A, which can be difficult as they look similar in appearance. Therefore, the ROI operation is used to crop the person of interest, i.e., A, and filter out B.
Essentially, instance segmentation needs two types of information: 1) *appearance* information to categorize objects; and 2) *location* information to distinguish multiple objects belonging to the same category. Almost all methods rely on ROI cropping, which explicitly encodes the location information of instances. In contrast, CondInst exploits the location information by using location/instance-sensitive convolution filters as well as relative coordinates that are appended to the feature map. Thus, we advocate a new solution that uses instance-aware FCNs for instance mask prediction. In other words, instead of using a standard ConvNet with a fixed set of convolutional filters as the mask head for predicting all instances, the network parameters are adapted according to the instance to be predicted. Inspired by dynamic filtering networks [@jia2016dynamic] and CondConv [@yang2019condconv], for each instance, a controller sub-network (see Fig. \[fig:main\_figure\]) dynamically generates the mask FCN network parameters (conditioned on the center area of the instance), which are then used to predict the mask of this instance. It is expected that the network parameters can encode the characteristics of this instance, so that the mask head only fires on the pixels of this instance, which thus bypasses the difficulty mentioned above. These conditional mask heads are applied to the whole feature maps, *eliminating the need for ROI operations*. At first glance, the idea may not work well, as instance-wise mask heads may incur a large number of network parameters given that some images contain as many as dozens of instances. However, we show that a very compact FCN mask head with dynamically-generated filters can already outperform the ROI-based Mask R-CNN, with much lower computational complexity per instance than the mask head in Mask R-CNN. We summarize our main contributions as follows. - We attempt to solve instance segmentation from a new perspective.
To this end, we propose the  instance segmentation framework, which achieves better instance segmentation performance than existing methods such as Mask R-CNN while being faster. To our knowledge, this is the first time that a new instance segmentation framework outperforms the recent state-of-the-art in both accuracy and speed. - is fully convolutional and avoids the aforementioned resizing operation used in many existing methods, as  does not rely on ROI operations. Not having to resize feature maps leads to high-resolution instance masks with more accurate edges. - Unlike previous methods, in which the filters of the mask head are fixed for all instances once trained, the filters in our mask head are dynamically generated and conditioned on instances. As the filters are asked to predict the mask of only one instance, this largely eases the learning requirement and thus reduces the load on the filters. As a result, the mask head can be extremely light-weight, significantly reducing the inference time per instance. Compared with the bounding box detector FCOS,  needs only $\sim$10% more computational time, even when processing the maximum number of instances per image (i.e., $100$ instances). - Even without resorting to longer training schedules as needed in recent works [@chen2019tensormask; @bolya2019yolact],  achieves state-of-the-art performance while being faster in inference. We hope that  can serve as a new strong alternative to popular methods such as Mask R-CNN for the instance segmentation task. Moreover, can be immediately applied to panoptic segmentation due to its flexible design. We believe that with minimal re-design effort, the proposed  can be used to solve all instance-level recognition tasks that were previously solved with an ROI-based pipeline. Related Work ------------ Here we review some work that is most relevant to ours.
**Conditional Convolutions.** Unlike traditional convolutional layers, which have fixed filters once trained, the filters of conditional convolutions are conditioned on the input and are dynamically generated by another network (i.e., a controller). This idea has been explored previously in dynamic filter networks [@jia2016dynamic] and CondConv [@yang2019condconv], mainly for the purpose of increasing the capacity of a classification network. In this work, we extend this idea to solve the significantly more challenging task of instance segmentation. **Instance Segmentation.** To date, the dominant framework for instance segmentation is still Mask R-CNN. Mask R-CNN first employs an object detector to detect the bounding-boxes of instances (i.e., ROIs). With these bounding-boxes, an ROI operation is used to crop the features of the instance from the feature maps. Finally, a compact FCN head is used to obtain the desired instance masks. Many works [@chen2019hybrid; @liu2018path; @huang2019mask] with top performance are built on Mask R-CNN. Moreover, some works have explored applying FCNs to instance segmentation. InstanceFCN [@dai2016instance] may be the first fully convolutional instance segmentation method. InstanceFCN proposes to predict position-sensitive score maps with vanilla FCNs. Afterwards, these score maps are assembled to obtain the desired instance masks. Note that InstanceFCN does not work well with overlapping instances. Others [@neven2019instance; @newell2017associative; @fathi2017semantic] attempt to first perform segmentation and then form the desired instance masks by assembling the pixels of the same instance. To our knowledge, thus far none of these methods can outperform Mask R-CNN in both accuracy and speed on the public COCO benchmark dataset. The recent YOLACT [@bolya2019yolact] and BlendMask [@chen2020blendmask] may be viewed as reformulations of Mask R-CNN that decouple ROI detection from the feature maps used for mask prediction. Wang et al.
developed a simple FCN-based instance segmentation method, showing competitive performance [@wang2019solo]. PolarMask developed a new simple mask representation for instance segmentation [@polarmask], which extends the bounding box detector FCOS [@tian2019fcos]. Recently, AdaptIS [@sofiiuk2019adaptis] proposed to solve panoptic segmentation with FiLM [@perez2018film]. The idea shares some similarity with  in that information about an instance is encoded in the coefficients generated by FiLM. Since only the batch normalization coefficients are dynamically generated, AdaptIS needs a large mask head to achieve good performance. In contrast,  directly encodes the instance information into the conv.  filters of the mask head, thus having much stronger capacity. As a result, even with a very compact mask head, we believe that  can achieve instance segmentation accuracy that would not be possible for AdaptIS to attain. ![image](figures/main-crop){width=".84\linewidth"} Instance Segmentation with =========================== Overall Architecture -------------------- Given an input image $I \in \R^{H \times W \times 3}$, the goal of instance segmentation is to predict the pixel-level mask and the category of each instance of interest in the image. The ground-truth of instance segmentation is defined as $\{(M_i, c_i)\}$, where $M_i \in \{0, 1\}^{H \times W}$ is the mask for the $i$-th instance and $c_i \in \{1, 2, ..., C\}$ is the category. $C$ is $80$ on MS-COCO [@lin2014microsoft]. Unlike semantic segmentation, which only requires predicting one mask for an input image, instance segmentation needs to predict a variable number of masks, depending on the number of instances in the image. This poses a challenge when applying traditional FCNs [@long2015fully] to instance segmentation. In this work, our core idea is that for an image with $K$ instances, $K$ different mask heads will be dynamically generated, and each mask head will contain the characteristics of its target instance in its filters.
As a result, when the mask head is applied to an input, it will only fire on the pixels of the instance, thus producing the mask prediction of the instance. We illustrate the process in Fig. \[fig:mask\_heads\]. Recall that Mask R-CNN employs an object detector to predict the bounding-boxes of the instances in the input image. The bounding-boxes are actually the way that Mask R-CNN represents instances. Similarly,  employs the instance-aware filters to represent the instances. In other words, instead of encoding the instance concept into bounding-boxes,  implicitly encodes it into the parameters of the mask heads, which is a much more flexible way. For example, it can easily represent irregular shapes that are hard to enclose tightly with a bounding-box. This is one of ’s advantages over the previous ROI-based methods. Similar to the way that ROI-based methods obtain bounding-boxes, the instance-aware filters can also be obtained with an object detector. In this work, we build  on the popular object detector FCOS [@tian2019fcos] due to its simplicity and flexibility. In addition, the elimination of anchor-boxes in FCOS also saves parameters and computation in . As shown in Fig. \[fig:main\_figure\], following FCOS [@tian2019fcos], we make use of the feature maps $\{P_3, P_4, P_5, P_6, P_7\}$ of feature pyramid networks (FPNs) [@lin2017feature], whose down-sampling ratios are $8$, $16$, $32$, $64$ and $128$, respectively. As shown in Fig. \[fig:main\_figure\], on each feature level of the FPN, some functional layers (in the dashed box) are applied to make instance-related predictions, for example, the class of the target instance and the dynamically-generated filters for the instance. In this sense,  can be viewed as the same as Mask R-CNN: both first attend to instances in an image and then predict the pixel-level masks of the instances (i.e., instance-first). Besides the detector, as shown in Fig.
\[fig:main\_figure\], there is also a mask branch, which provides the feature maps that our generated mask heads take as input to predict the desired instance mask. The feature maps are denoted by $\mF_{mask} \in \R^{H_{mask} \times W_{mask} \times C_{mask}}$. The mask branch is connected to FPN level $P_3$ and thus its output resolution is $\frac{1}{8}$ of the input image resolution. The mask branch has four $3 \times 3$ convolutions with $128$ channels before the last layer. Afterwards, in order to reduce the number of the generated parameters, the last layer of the mask branch reduces the number of channels from $128$ to $8$ (i.e., $C_{mask} = 8$). Surprisingly, using $C_{mask} = 8$ can already achieve superior performance, and using a larger $C_{mask}$ here (e.g., $16$) cannot improve the performance, as shown in our experiments. Even more aggressively, using $C_{mask} = 2$ only degrades the performance by $\sim0.3\%$ in mask AP. Moreover, as shown in Fig. \[fig:main\_figure\], $\mF_{mask}$ is combined with a map of coordinates, namely the relative coordinates from all the locations on $\mF_{mask}$ to the location $(x, y)$ (i.e., where the filters of the mask head are generated). Then, the combination is sent to the mask head to predict the instance mask. The relative coordinates provide a strong cue for predicting the instance mask, as shown in our experiments. Moreover, a single  is used as the final output of the mask head, and thus the mask prediction is class-agnostic. The class of the instance is predicted by the classification head in parallel with the controller, as shown in Fig. \[fig:main\_figure\]. The resolution of the original mask prediction is the same as the resolution of $\mF_{mask}$, which is $\frac{1}{8}$ of the input image resolution. In order to produce high-resolution instance masks, bilinear upsampling is used to upsample the mask prediction by a factor of $4$, resulting in a $400 \times 512$ mask prediction (if the input image size is $800 \times 1024$).
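The combination of $\mF_{mask}$ with the relative-coordinate map can be sketched as follows. This is an illustrative NumPy sketch, not the released implementation; the function and array names are our own assumptions:

```python
import numpy as np

def build_mask_head_input(f_mask, x, y):
    """Concatenate f_mask (H, W, C_mask) with a 2-channel map of relative
    coordinates from every location to (x, y), giving the
    (H, W, C_mask + 2) input of the dynamically-generated mask head."""
    h, w, _ = f_mask.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # relative offsets (dx, dy) from each location to the generating location
    rel = np.stack([xs - x, ys - y], axis=-1).astype(f_mask.dtype)
    return np.concatenate([f_mask, rel], axis=-1)

f_mask = np.random.rand(100, 128, 8).astype(np.float32)  # C_mask = 8
inp = build_mask_head_input(f_mask, x=40, y=25)  # (100, 128, 10)
```

Note that the two extra channels are exactly zero at the generating location $(x, y)$ itself and grow linearly away from it, which is what gives the mask head a strong positional cue about the instance.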
We will show in experiments that the upsampling is crucial to the final instance segmentation performance of . Note that the mask’s resolution is much higher than that of Mask R-CNN (only $28 \times 28$, as mentioned before). Network Outputs and Training Targets ------------------------------------ Similar to FCOS, each location on the FPN’s feature maps ${P_i}$ is either associated with an instance, thus being a positive sample, or considered a negative sample. The associated instance and label for each location are determined as follows. Let us consider the feature maps $P_i \in \R^{H \times W \times C}$ and let $s$ be its down-sampling ratio. As shown in previous works [@tian2019fcos; @ren2015faster; @he2015spatial], a location $(x, y)$ on the feature maps can be mapped back onto the input image as $(\floor{\frac{s}{2}} + xs, \floor{\frac{s}{2}} + ys)$. If the mapped location falls in the center region of an instance, the location is considered to be responsible for the instance. Any locations outside the center regions are labeled as negative samples. The center region is defined as the box $(c_x - rs, c_y - rs, c_x + rs, c_y + rs)$, where $(c_x, c_y)$ denotes the mass center of the instance, $s$ is the down-sampling ratio of $P_i$ and $r$ is a constant scalar, set to $1.5$ as in FCOS [@tian2019fcos]. As shown in Fig. \[fig:main\_figure\], at a location $(x, y)$ on $P_i$,  has the following output heads. **Classification Head.** The classification head predicts the class of the instance associated with the location. The ground-truth target is the instance’s class $c_i$ or $0$ (i.e., background). As in FCOS, the network predicts a $C$-D vector $\vp_{x, y}$ for the classification, and each element in $\vp_{x, y}$ corresponds to a binary classifier, where $C$ is the number of categories. **Controller Head.** The controller head, which has the same architecture as the above classification head, is used to generate the parameters of the conv.
filters for the instance at the location. As mentioned before, these generated filters are used in the mask head to predict the mask of this particular instance. This is the core contribution of our work. To predict the filters, we concatenate all the parameters of the filters (i.e., weights and biases) together as an $N$-D vector ${\ensuremath{\pmb{\theta}}}_{x, y}$, where $N$ is the total number of the parameters. Accordingly, the controller head has $N$ output channels. As mentioned before, using very few parameters (i.e., $169$ parameters),  can already achieve excellent instance segmentation performance, which not only makes the parameters easy to generate but also results in a mask head with low computational complexity. Thus, we use a very compact FCN as the mask head, which has three $1 \times 1$ convolutions, each having $8$ channels and using  as the activation function, except for the last one. No normalization layer such as batch normalization [@ioffe2015batch] is used here. The last layer has $1$ output channel and uses  to predict the probability of being foreground. The generated $N$-D vector is reinterpreted as the weights and biases of these filters. As mentioned before, the generated filters contain information about the instance at the location, and thus the mask head with these filters will ideally only fire on the pixels of the instance, even when taking the whole feature maps as input. **Center-ness Head.** The center-ness head predicts a scalar depicting the deviation of the location from the center of the target instance. The center-ness score is multiplied with the classification scores and used in NMS to remove duplicated detections. We refer readers to FCOS [@tian2019fcos] for the details. Conceptually,  with the above heads can already solve the instance segmentation task since  needs no ROIs. However, we find that making use of box-based NMS greatly reduces the inference time. Thus, we still predict bounding-boxes in .
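The reinterpretation of the flat parameter vector into per-layer weights and biases can be sketched as below. This is an illustrative NumPy sketch with names of our own choosing, not the released code; with $C_{mask}=8$ plus two coordinate channels and layer widths $8$, $8$, $1$, the head consumes exactly $(10 \cdot 8 + 8) + (8 \cdot 8 + 8) + (8 \cdot 1 + 1) = 169$ parameters, matching the figure quoted in the text:

```python
import numpy as np

def apply_dynamic_mask_head(feat, theta, widths=(8, 8, 1)):
    """Split the flat vector theta into the weights/biases of a stack of
    1x1 convolutions and run them over feat (H, W, C_in).
    A 1x1 convolution is just a per-pixel matrix multiply."""
    relu = lambda z: np.maximum(z, 0.0)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    x, i = feat, 0
    for n, c_out in enumerate(widths):
        c_in = x.shape[-1]
        w = theta[i:i + c_in * c_out].reshape(c_in, c_out)
        i += c_in * c_out
        b = theta[i:i + c_out]
        i += c_out
        act = sigmoid if n == len(widths) - 1 else relu  # sigmoid on last layer
        x = act(x @ w + b)
    assert i == theta.size  # all generated parameters consumed
    return x[..., 0]  # (H, W) foreground probability map

# C_mask = 8 channels plus 2 relative-coordinate channels
feat = np.random.randn(100, 128, 10)
theta = np.random.randn(169)
mask = apply_dynamic_mask_head(feat, theta)
```

For other widths or another $C_{mask}$, the controller head's output dimension $N$ changes accordingly; the internal assertion catches any mismatch between $N$ and the head architecture.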
Following FCOS,  also predicts a $4$-D vector $\vt = (l, t, r, b)$ depicting the distances from the location to the four sides (i.e., left, top, right and bottom) of the instance’s bounding-box. The ground-truth bounding-boxes can be easily computed from the instance’s mask annotation $M_i$, and thus predicting bounding-boxes introduces no extra annotations. We would like to highlight that the predicted bounding-boxes are *only* used in NMS and do not involve any ROI operations. Moreover, as shown in Table \[table:mask\_nms\], the bounding-box prediction can be removed if an NMS that uses no bounding-boxes (e.g., mask NMS or peak NMS [@zhou2019objects]) is used. This is fundamentally different from previous ROI-based methods, in which the bounding-box prediction is mandatory. Loss Function ------------- Formally, the overall loss function of  can be formulated as, $$\label{loss_function_overall} \begin{aligned} L_{overall} = L_{fcos} + \lambda L_{mask}, \end{aligned}$$ where $L_{fcos}$ and $L_{mask}$ denote the original loss of FCOS and the loss for instance masks, respectively. $\lambda$, set to $1$ in this work, is used to balance the two losses. We refer readers to FCOS for the details of $L_{fcos}$. $L_{mask}$ is defined as, $$\label{loss_function_mask} \begin{aligned} & L_{mask}(\{{\ensuremath{\pmb{\theta}}}_{x, y}\}) = \\ & \frac{1}{N_{\pos}}\sum_{{x, y}}{\mathbbm{1}_{\{c^*_{x, y} > 0\}}L_{dice}(MaskHead({\ensuremath{\mathbf{\tilde{F}}}}_{x, y}; {\ensuremath{\pmb{\theta}}}_{x, y}), \mM^*_{x, y})}, \end{aligned}$$ where $c^*_{x, y}$ is the classification label of location $(x, y)$, which is the class of the instance associated with the location or $0$ (i.e., background) if the location is not associated with any instance. $N_{pos}$ is the number of locations where $c^*_{x, y} > 0$. $\mathbbm{1}_{\{c^*_{x, y} > 0\}}$ is the indicator function, being $1$ if $c^*_{x, y} > 0$ and $0$ otherwise.
${\ensuremath{\pmb{\theta}}}_{x, y}$ is the generated filters’ parameters at location $(x, y)$. ${\ensuremath{\mathbf{\tilde{F}}}}_{x, y} \in \R^{H_{mask} \times W_{mask} \times (C_{mask} + 2)}$ is the combination of ${\ensuremath{\mathbf{F}}}_{mask}$ and a map of coordinates ${\ensuremath{\mathbf{O}}}_{x, y} \in \R^{H_{mask} \times W_{mask} \times 2}$. As described before, ${\ensuremath{\mathbf{O}}}_{x, y}$ contains the relative coordinates from all the locations on $\mF_{mask}$ to $(x, y)$ (i.e., the location where the filters are generated). $MaskHead$ denotes the mask head, which consists of a stack of convolutions with dynamic parameters ${\ensuremath{\pmb{\theta}}}_{x, y}$. $\mM^*_{x, y} \in \{0, 1\}^{H \times W}$ is the mask of the instance associated with location $(x, y)$. $L_{dice}$ is the dice loss as in [@milletari2016v], which is used to overcome the foreground-background sample imbalance. We do not employ focal loss here as it requires a special initialization, which cannot be realized if the parameters are dynamically generated. Note that, in order to compute the loss between the predicted mask and the ground-truth mask $\mM^*_{x, y}$, they are required to have the same size. As mentioned before, the prediction is upsampled by a factor of $4$ and thus the final prediction has half the ground-truth mask’s resolution. Thus, we downsample $\mM^*_{x, y}$ by $2$ to make their sizes equal. These operations are omitted in Eq.  for clarity. Moreover, as shown in YOLACT [@bolya2019yolact], the instance segmentation task can benefit from a joint semantic segmentation task. Thus, we also conduct experiments with the joint semantic segmentation task. However, unless explicitly specified, all the experiments in the paper are *without* the semantic segmentation task. If used, the semantic segmentation loss is added to $L_{overall}$.
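For reference, the dice loss of [@milletari2016v] in its common squared-denominator form can be written as below. This is an illustrative NumPy sketch; the exact variant used in the implementation is an assumption:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Dice loss as in V-Net: L = 1 - 2*sum(p*g) / (sum(p^2) + sum(g^2)).
    pred holds predicted foreground probabilities; target is a binary mask."""
    inter = (pred * target).sum()
    denom = (pred ** 2).sum() + (target ** 2).sum()
    return 1.0 - 2.0 * inter / (denom + eps)

gt = np.zeros((64, 64))
gt[16:48, 16:48] = 1.0
perfect = dice_loss(gt, gt)               # ~0: perfect overlap
worst = dice_loss(np.zeros_like(gt), gt)  # ~1: no overlap
```

Unlike a per-pixel cross-entropy, the loss is a single global overlap ratio, so the many background pixels cannot dominate the gradient; this is why it suits the foreground-background imbalance described above.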
Inference --------- Given an input image, we forward it through the network to obtain the outputs, including the classification confidence $\vp_{x, y}$, the center-ness scores, the box prediction $\vt_{x, y}$ and the generated parameters ${\ensuremath{\pmb{\theta}}}_{x, y}$. We first follow the steps in FCOS to obtain the bounding-box detections. Afterwards, box-based NMS with a threshold of $0.6$ is used to remove duplicated detections, and then the top $100$ bounding-boxes (i.e., instances) are used to compute masks. Let us assume that $K$ bounding-boxes remain after this process and thus we have $K$ groups of generated filters. The $K$ groups of filters are in turn used in the mask head. These instance-specific mask heads are applied, in the fashion of FCNs, to ${\ensuremath{\mathbf{\tilde{F}}}}_{x, y}$ (i.e., the combination of $\mF_{mask}$ and ${\ensuremath{\mathbf{O}}}_{x, y}$) to predict the masks of the instances. Since the mask head is a very compact network (three $1 \times 1$ convolutions with $8$ channels and $169$ parameters in total), the overhead of computing masks is extremely small. For example, even with $100$ detections (i.e., the maximum number of detections per image on MS-COCO), less than $5$ milliseconds in total are spent on the mask heads, which only adds $\sim 10\%$ computational time to the base detector FCOS. In contrast, the mask head of Mask R-CNN has four $3 \times 3$ convolutions with $256$ channels, thus having more than 2.3M parameters and taking considerably longer. Experiments =========== We evaluate  on the large-scale benchmark MS-COCO [@lin2014microsoft]. Following the common practice [@he2017mask; @tian2019fcos; @lin2017focal], our models are trained with the split `train2017` (115K images) and all the ablation experiments are evaluated on the split `val2017` (5K images). Our main results are reported on the `test`-`dev` split (20K images).
Implementation Details ---------------------- Unless specified, we make use of the following implementation details. Following FCOS [@tian2019fcos], ResNet-50 [@he2016deep] is used as our backbone network and the weights pre-trained on ImageNet [@deng2009imagenet] are used to initialize it. For the newly added layers, we initialize them as in [@tian2019fcos]. Our models are trained with stochastic gradient descent (SGD) over $8$ V100 GPUs for 90K iterations with the initial learning rate being $0.01$ and a mini-batch of $16$ images. The learning rate is reduced by a factor of $10$ at iteration $60K$ and $80K$, respectively. Weight decay and momentum are set as $0.0001$ and $0.9$, respectively. Following Detectron2 [@wu2019detectron2], the input images are resized to have their shorter sides in $[640, 800]$ and their longer sides less or equal to $1333$ during training. Left-right flipping data augmentation is also used during training. When testing, we do not use any data augmentation and only the scale of the shorter side being $800$ is used. The inference time in this work is measured on a single V100 GPU with $1$ image per batch. ------------ ---------- ----------- ----------- ---------- ---------- ---------- $C_{mask}$ AP AP$_{50}$ AP$_{75}$ AP$_{S}$ AP$_{M}$ AP$_{L}$ 1 34.8 55.9 36.9 16.7 38.0 50.1 2 35.4 56.2 37.6 16.9 38.9 50.4 4 35.5 56.2 **37.9** 17.0 39.0 50.8 8 **35.7** **56.3** 37.8 **17.1** **39.1** 50.2 16 35.5 56.1 37.7 16.4 **39.1** **51.2** ------------ ---------- ----------- ----------- ---------- ---------- ---------- : The instance segmentation results by varying the number of channels of the mask branch output (, $C_{mask}$) on MS-COCO `val2017` split. 
As shown in the table, the performance keeps almost the same if $C_{mask}$ is in a reasonable range, which suggests that  is robust to the design choice.[]{data-label="table:c_mask"} ----------------- ----------------- ------------------- ---------- ----------- ----------- ---------- ---------- ---------- ---------- ----------- ------------ w/ abs.  coord. w/ rel.  coord. w/ ${\mF}_{mask}$ AP AP$_{50}$ AP$_{75}$ AP$_{S}$ AP$_{M}$ AP$_{L}$ AR$_{1}$ AR$_{10}$ AR$_{100}$ 31.4 53.5 32.1 15.6 34.4 44.7 28.4 44.1 46.2 31.3 54.9 31.8 16.0 34.2 43.6 27.1 43.3 45.7 32.0 53.3 32.9 14.7 34.2 46.8 28.7 44.7 46.8 **35.7** **56.3** **37.8** **17.1** **39.1** **50.2** **30.4** **48.8** **51.5** ----------------- ----------------- ------------------- ---------- ----------- ----------- ---------- ---------- ---------- ---------- ----------- ------------ Architectures of the Mask Head ------------------------------ In this section, we discuss the design choices of the mask head in . To our surprise, the performance is insensitive to the architectures of the mask head. Our baseline is the mask head of three $1 \times 1$ convolutions with $8$ channels (, width $= 8$). As shown in Table \[table:design\_choice\_mask\_head\] (3rd row), it achieves $35.7\%$ in mask AP. Next, we first conduct experiments by varying the depth of the mask head. As shown in Table \[varying\_depth\], apart from the mask head with depth being $1$, all other mask heads (, depth $= 2, 3$ and $4$) attain similar performance. The mask head with depth being $1$ achieves inferior performance as in this case the mask head is actually a linear mapping, which has overly weak capacity. Moreover, as shown in Table \[varying\_width\], varying the width (, the number of the channels) does not result in a remarkable performance change either as long as the width is in a reasonable range. We also note that our mask head is extremely light-weight as the filters in our mask head are dynamically generated. 
As shown in Table \[table:design\_choice\_mask\_head\], our baseline mask head only takes $4.5$ ms per $100$ instances (the maximum number of instances on MS-COCO), which suggests that our mask head only adds small computational overhead to the base detector. Moreover, our baseline mask head only has $169$ parameters in total. In sharp contrast, the mask head of Mask R-CNN [@he2017mask] has more than 2.3M parameters and takes $\sim 2.5 \times$ longer ($11.4$ ms per $100$ instances). Design Choices of the Mask Branch --------------------------------- We further investigate the impact of the mask branch. We first change $C_{mask}$, which is the number of channels of the mask branch’s output feature maps (i.e., $\mF_{mask}$). As shown in Table \[table:c\_mask\], as long as $C_{mask}$ is in a reasonable range (i.e., from $2$ to $16$), the performance remains almost the same. $C_{mask} = 8$ is optimal and thus we use $C_{mask} = 8$ in all other experiments by default. As mentioned before, before being taken as input to the mask heads, the mask branch’s output $\mF_{mask}$ is concatenated with a map of relative coordinates, which provides a strong cue for the mask prediction. As shown in Table \[table:w\_or\_wo\_offsets\] (1st row), the performance drops significantly if the relative coordinates are removed ($35.7\%$ vs. $31.4\%$). The significant performance drop implies that the generated filters not only encode the appearance cues but also encode the shape of the target instance. This is also evidenced by the experiment using only the relative coordinates. As shown in Table \[table:w\_or\_wo\_offsets\] (2nd row), using only the relative coordinates can also obtain decent performance ($31.3\%$ in mask AP).
We would like to highlight that unlike Mask R-CNN, which encodes the shape of the target instance by a bounding-box,  implicitly encodes the shape into the generated filters, which can easily represent any shape, including irregular ones, and thus is much more flexible. We also experiment with the absolute coordinates, but they do not largely boost the performance, as shown in Table \[table:w\_or\_wo\_offsets\] ($32.0\%$). This suggests that the generated filters mainly carry local cues such as shapes. It is preferable to rely mainly on the local cues because we hope that  is translation invariant. How Important Is It to Upsample the Mask Predictions? ------------------------------------------- -------- ------------ ---------- ----------- ----------- ---------- ---------- ---------- factor resolution AP AP$_{50}$ AP$_{75}$ AP$_{S}$ AP$_{M}$ AP$_{L}$ $1$ $1 / 8$ 34.4 55.4 36.2 15.1 38.4 50.8 $2$ $1 / 4$ **35.8** **56.4** **38.0** 17.0 **39.3** **51.1** $4$ $1 / 2$ 35.7 56.3 37.8 **17.1** 39.1 50.2 -------- ------------ ---------- ----------- ----------- ---------- ---------- ---------- : The instance segmentation results on MS-COCO `val2017` split by changing the factor used to upsample the mask predictions. “resolution” denotes the resolution ratio of the mask prediction to the input image. As shown in the table, without the upsampling (i.e., factor $= 1$), the performance drops significantly (from $35.8\%$ to $34.4\%$ in mask AP). Almost the same results are obtained with factor $2$ or $4$.[]{data-label="table:upsampling"} As mentioned before, the original mask prediction is upsampled, and the upsampling is of great importance to the final performance. We confirm this in experiments. As shown in Table \[table:upsampling\], without the upsampling (1st row in the table),  can only produce the mask prediction at $\frac{1}{8}$ of the input image resolution, which merely achieves $34.4\%$ in mask AP because most of the details (e.g., the boundary) are lost.
If the mask prediction is upsampled by a factor of $2$, the performance is significantly improved, by $1.4\%$ in mask AP (from $34.4\%$ to $35.8\%$). In particular, the improvement on small objects is large (from $15.1\%$ to $17.0\%$), which suggests that the upsampling can greatly retain the details of objects. Increasing the upsampling factor to $4$ slightly worsens the performance (from $35.8\%$ to $35.7\%$ in mask AP), probably due to the relatively low-quality annotations of MS-COCO. We use factor $= 4$ in all other models as it has the potential to produce high-resolution instance masks.  without Bounding-box Detection ------------------------------- Although we still keep the bounding-box detection branch in , it is conceptually feasible to eliminate it entirely if we make use of an NMS that uses no bounding-boxes. In this case, all the foreground samples (determined by the classification head) are used to compute instance masks, and the duplicated masks are removed by mask-based NMS. As shown in Table \[table:mask\_nms\], with the mask-based NMS, the same overall performance can be obtained as with box-based NMS ($35.7\%$ vs. $35.7\%$ in mask AP). Comparisons with State-of-the-art Methods ----------------------------------------- We compare  against previous state-of-the-art methods on the MS-COCO `test`-`dev` split. As shown in Table \[table:comparisons\_state\_of\_the\_art\_methods\], with the $1\times$ learning rate schedule (i.e., $90K$ iterations),  outperforms the original Mask R-CNN by $0.8\%$ ($35.4\%$ vs. $34.6\%$).  also achieves a much faster speed than the original Mask R-CNN ($49$ms vs. $65$ms per image on a single V100 GPU). To our knowledge, it is the first time that a new and simpler instance segmentation method, without any bells and whistles, outperforms Mask R-CNN in both accuracy and speed.  also obtains better performance ($35.9\%$ vs.
$35.5\%$) and on-par speed ($49$ms vs $49$ms) than the well-engineered Mask R-CNN in `Detectron2` (, Mask R-CNN$^*$ in Table \[table:comparisons\_state\_of\_the\_art\_methods\]). Furthermore, with a longer training schedule (, $3\times$) or a stronger backbone (, ResNet-101), a consistent improvement is achieved as well ($37.8\%$ vs. $37.5\%$ with ResNet-50 $3\times$ and $39.1\%$ vs. $38.8\%$ with ResNet-101 $3\times$), which suggests  is inherently superior to Mask R-CNN. Moreover, as shown in Table \[table:comparisons\_state\_of\_the\_art\_methods\], with the auxiliary semantic segmentation task, the performance can be boosted from $37.8\%$ to $38.8\%$ (ResNet-50) or from $39.1\%$ to $40.1\%$ (ResNet-101), without increasing the inference time. For fair comparisons, all the inference time here is measured by ourselves on the same hardware with the official codes. ------ ---------- ----------- ----------- ---------- ---------- ---------- NMS AP AP$_{50}$ AP$_{75}$ AP$_{S}$ AP$_{M}$ AP$_{L}$ box **35.7** 56.3 **37.8** 17.1 39.1 50.2 mask **35.7** **56.7** 37.7 **17.2** **39.2** **50.5** ------ ---------- ----------- ----------- ---------- ---------- ---------- : Instance segmentation results with different NMS algorithms. As shown in the table, mask-based NMS can obtain the same overall performance as box-based NMS, which suggests that  can totally eliminate the bounding-box detection. Note that it is impossible for ROI-based methods such as Mask R-CNN to remove bounding-box detection.[]{data-label="table:mask_nms"} ---------------------------------- ----------- ------ ------------- ---------- ----------- ----------- ---------- ---------- ---------- method backbone aug. sched. 
AP AP$_{50}$ AP$_{75}$ AP$_{S}$ AP$_{M}$ AP$_{L}$ Mask R-CNN [@he2017mask] R-50-FPN $1\times$ 34.6 **56.5** 36.6 15.4 36.3 **49.7** **** R-50-FPN $1\times$ **35.4** 56.4 **37.6** **18.4** **37.9** 46.9 Mask R-CNN$^*$ R-50-FPN $1\times$ 35.5 57.0 37.8 19.5 37.6 46.0 Mask R-CNN$^*$ R-50-FPN $3\times$ 37.5 59.3 40.2 21.1 39.6 48.3 TensorMask [@chen2019tensormask] R-50-FPN $6\times$ 35.4 57.2 37.3 16.3 36.8 49.3 **** R-50-FPN $1\times$ 35.9 56.9 38.3 19.1 38.6 46.8 **** R-50-FPN $3\times$ 37.8 59.1 40.5 21.0 40.3 48.7 **** w/ sem. R-50-FPN $3\times$ 38.8 60.4 41.5 21.1 41.1 51.0 Mask R-CNN R-101-FPN $6\times$ 38.3 61.2 40.8 18.2 40.6 **54.1** Mask R-CNN$^*$ R-101-FPN $3\times$ 38.8 60.9 41.9 21.8 41.4 50.5 YOLACT-700 [@bolya2019yolact] R-101-FPN $4.5\times$ 31.2 50.6 32.8 12.1 33.3 47.1 TensorMask R-101-FPN $6\times$ 37.1 59.3 39.4 17.4 39.1 51.6 **** R-101-FPN $3\times$ 39.1 60.9 42.0 21.5 41.7 50.9 **** w/ sem. R-101-FPN $3\times$ **40.1** **62.1** **43.1** **21.8** **42.7** 52.6 ---------------------------------- ----------- ------ ------------- ---------- ----------- ----------- ---------- ---------- ---------- We also compare  with the recently-proposed instance segmentation methods. Only with half training iterations,  surpasses TensorMask [@chen2019tensormask] by a large margin ($38.8\%$ vs. $35.4\%$ for ResNet-50 and $39.1\%$ vs. $37.1\%$ for ResNet-101).  is also $\sim 8\times$ faster than TensorMask ($49$ms vs $380$ms per image on the same GPU) with similar performance ($37.8\%$ vs.$37.1\%$). Moreover,  outperforms YOLACT-700 [@bolya2019yolact] by a large margin with the same backbone ResNet-101 ($40.1\%$ vs.$31.2\%$ and both with the auxiliary semantic segmentation task). Moreover, as shown in Fig. \[fig:qualitative\], compared with YOLACT-700 and Mask R-CNN,  can preserve more details and produce higher-quality instance segmentation results. More qualitative results are shown in Fig. \[fig:more\_qualitative\]. 
![image](figures/vis_results_more.pdf){width=".9\linewidth"}

Conclusions
===========

We have proposed a new and simpler instance segmentation framework, named . Unlike previous methods such as Mask R-CNN, which employ a mask head with fixed weights,  conditions the mask head on the instances and dynamically generates the filters of the mask head. This not only reduces the parameters and computational complexity of the mask head, but also eliminates the ROI operations, resulting in a faster and simpler instance segmentation framework. To our knowledge,  is the first framework that can outperform Mask R-CNN in both accuracy and speed, without requiring longer training schedules. We believe that  can serve as a new strong alternative to Mask R-CNN for instance segmentation. [^1]: Corresponding author, e-mail: `chunhua.shen@adelaide.edu.au` [^2]: By FCNs, we mean the vanilla FCNs in [@long2015fully] that only involve convolutions and pooling.
---
abstract: 'Two deep level defects (2.25 and 2.03 eV) associated with oxygen vacancies (V$_o$) were identified in ZnO nanorods (NRs) grown by low cost chemical bath deposition. A transient behaviour in the photoluminescence (PL) intensity of the two V$_o$ states was found to be sensitive to the ambient environment and to NR post-growth treatment. The largest transient was found in samples dried on a hot plate with a PL intensity decay time, in air only, of 23 and 80 s for the 2.25 and 2.03 eV peaks, respectively. Resistance measurements under UV exposure exhibited a transient behaviour in full agreement with the PL transient indicating a clear role of atmospheric O$_2$ on the surface defect states. A model for surface defect transient behaviour due to band bending with respect to the Fermi level is proposed. The results have implications for a variety of sensing and photovoltaic applications of ZnO NRs.'
author:
- 'E. G. Barbagiovanni'
- 'V. Strano'
- 'G. Franzò'
- 'I. Crupi'
- 'S. Mirabella'
title: Photoluminescence transient study of surface defects in ZnO nanorods grown by chemical bath deposition
---

Zinc oxide is a wide gap ($\sim$3.2$-$3.4 eV) n-type semiconductor with a large exciton binding energy (60 meV) that is attractive for a wide range of applications [@Janotti:2009]. However, debate remains over the source of its n-type conductivity, the UV sensing mechanism, and the defect landscape [@Spencer:2013; @Janotti:2007]. In one model, the neutral oxygen vacancy (V$_o$) is an n-type donor state [@Janotti:2007], atmospheric O$_2$ absorbs at this site [@Spencer:2013], and a depletion region forms at the surface [@Liu:2010_1]. Under UV excitation, holes migrate to the depletion region and O$_2$ desorption occurs, thus reducing the depletion region and increasing the conductivity [@Liu:2010_1; @Kushwaha:2012]. In this picture, the kinetics of O$_2$ desorption determine the response time of a UV sensor. In a second model, V$_o$ is reported to be a deep level state (DLS) and so cannot act as the n-type donor [@Janotti:2009].
Instead, H occupying O sites acts as a donor state [@Spencer:2013; @Janotti:2009]; therefore, the O$_2$ desorption model does not drive UV sensing. An alternative suggests that after UV excitation, the DLS relaxes into a metastable state resonant with the conduction band (CB), the doubly ionized V$_o$ (V$_o^{2+}$) state above the CB [@Lany:2005]. This mechanism is reported to explain persistent photoconductivity (PPC) in UV sensors [@Hullavarad:2009; @Spencer:2013]. Nonetheless, serious critiques of the validity of this model have been presented in the literature [@Janotti:2007]. In either model, the energetic positions of the defect states are central. Experimentally, the defect states depend on the carrier concentration and hence the Fermi level (E$_F$) [@Wang:2012_2]. A general consensus finds that V$_o$ lies within the flat band region, while the depletion region is dominated by the singly ionized V$_o$ (V$_o^{+}$) and/or V$_o^{2+}$ for ZnO [@Wang:2012_2; @Cheng:2013; @Bouzid:2009; @Chaudhuri:2010; @Kushwaha:2012]. Theoretically, the defect energetics depend on the chemical potential and lattice relaxations [@Janotti:2007; @Lany:2005], which vary between bulk materials, thin films, and nanostructures (NSs) [@Janotti:2009]. To understand the role of the defect states with respect to O$_2$ desorption, we measured, under both air and vacuum conditions, the defect photoluminescence (PL) peak transient over large time scales. We also compared the change in PL with the change in resistance under UV excitation.

ZnO NRs were grown by chemical bath deposition (CBD). A Si substrate with native oxide was seeded with a ZnO nanoparticle [solution (0.1 wt$\%$ in ethanol)]{} via spin-coating. The seeded substrate was then placed in a solution of 25 mM zinc nitrate hexahydrate \[Zn(NO$_3$)$_2 \cdot$6H$_2$O\] and 25 mM hexamethylenetetramine \[C$_6$H$_{12}$N$_4$\] (HMTA) at 90 $^o$C [@Strano:2014]. The quality, diameter, and length of the NRs were measured by scanning electron microscopy (SEM).
Since CBD introduces water based adsorbates [@Tam:2006; @Wagata:2012; @Bera:2009], we controlled this parameter by comparing samples that were as-prepared and not dried (ND) with samples dried at 100 $^o$C for 20 min on a hot plate. Additionally, we annealed a sample to understand the role of V$_o$. Resistance measurements were performed under exposure to 364 nm UV light. The samples were biased to force a current of 1 nA. This value was used to avoid heating the samples and gave a good overall response to UV exposure. Measurements were taken in air and in a vacuum cryostat ($\sim$10$^{-6}$ mbar) to ascertain the role of O$_2$.

An SEM image of the ZnO NRs is given in Fig. \[SEM\_ZnO\]. Lower resolution images demonstrate uniform coverage of the NRs over the sample area. The ZnO NRs have a diameter and height of $\sim$150 nm and 1 $\mu$m, respectively. The NRs are oriented along the ZnO c-axis [@Strano:2014]. The morphology was essentially unchanged in the annealed sample, though nano-pits formed on the tips of the NRs.

![SEM image of the ZnO NRs showing their hexagonal crystal structure. \[SEM\_ZnO\]](ZnO_fig.eps)

The inset in Fig. \[Defect\_PL\] shows the UV peak centred at 382 nm. The UV intensity of the ND sample is almost 1.5 times that of the dried sample, and 7.8 times that of the annealed sample. This result likely indicates that the UV transitions are not band to band, but involve shallow defect states near the conduction band minimum (CBM) or valence band maximum (VBM), which are reduced during the O$_2$ anneal [@Srikant:1998]. Many authors state that such a reduction indicates a reduction in the optical quality of the sample, and thus a reduction of the UV sensing capability [@Kushwaha:2012; @Liu:2010_1]. We believe that this assessment is premature as it does not segregate the role of the surface defect states.

![PL spectra showing the variation in the peak intensity between air and vacuum for the dry, ND, and annealed samples. The inset shows the UV spectra for the dry, ND, and annealed samples in vacuum.\[Defect\_PL\]](Defect_PL_Age_UV.eps)

, which is consistent with the sample (see Fig. \[Defect\_PL\]).
![DLS PL spectrum for the dried sample in vacuum and air.\[Defect\_air\_vac\]](Defect_PL_Air_Vac_Dry.eps)

First, the ND sample shows essentially no change in the PL intensity whether in air or vacuum, with a slow decay time of $\sim$70 s. Comparing with the dried sample, in vacuum there is no transient, while in air there is a marked decay. On the other hand, the annealed sample shows no defect transient, because of the reduction in the V$_o$ concentration. A similar result was found for UV sensors, whereby O$_2$ annealing reduces the UV sensing performance [@Kushwaha:2013; @Lv:2013].

![555 nm PL transient in air and in vacuum for the dried, ND, and annealed sample.\[PL\_time\]](Defect_Time_Air_Vac.eps)

Finally, there are several factors that can affect the transient measurements, such as the intensity of the excitation source and the sample quality. Nonetheless, we can explain these results with a simple model.

![Comparison of the resistance (2$\mu$/cm$^2$, 364 nm UV light) and 555 nm PL transient for the dried sample. Both transients have a decay time of 23 s, while the 2$^{nd}$ decay time is for the PL transient alone. \[Res\_PL\]](IV_Defect_PL_Time_figure.eps)

The results above hint at a role for surface defects under UV excitation, depicted in Fig. \[band\], in which band bending at the surface forms the depletion region.

![Schematic illustration of the UV sensing mechanism (not drawn to scale). The top and bottom half represent the band configuration in air and vacuum before and after UV excitation, respectively. Band bending ionizes the V$_o$ state at the surface to V$_o^{+}$. In the air configuration, absorbed O$_2$ increases the band bending (represented by $\Delta$) and the number of V$_o^{+}$ sites. After UV excitation, two DLS PL bands are represented. In air, holes (open circles) migrate to the surface allowing O$_2$ to desorb decreasing the depletion region and the number of V$_o^{+}$ sites, simultaneously electron (filled circles) conduction increases. \[band\]](band.eps)

The bottom part of Fig.
\[band\] depicts what happens after UV exposure. The UV excitation creates a population of excited electrons and holes, given by the closed and open circles in Fig. \[band\], respectively. In vacuum, there are PL bands from the V$_o^+$ and V$_o$ states at 2.25 and 2.03 eV, respectively, which are constant over time. In air, by contrast, the holes are free to migrate to the surface and neutralize O$_2^-$, allowing O$_2$ to desorb from the sample surface and the depletion region to shrink. This model of surface defects can help to explain some of the conflicting results in the literature. For example, it has been argued that O$_2$ desorption is not a significant process in PPC studies [@Hullavarad:2009; @Spencer:2013], whereby the metastable state model of Ref.  is favoured. However, these studies consider thin films [@Spencer:2013; @Li:2005], which have a lower concentration of surface defects and thus a modified O$_2$ desorption rate compared with NRs [@Bayan:2012; @Li:2005; @Swanwick:2012; @Bera:2009]. In our model, since conduction electrons accumulate at the surface V$_o^+$ state, the response time is correlated with the V$_o^+$ concentration set by the surface band bending. Therefore, more band bending implies more V$_o^+$ states and promotes the V$_o^+$ $\rightarrow$ V$_o$ transition, which can explain why a better response was observed for gas sensors with reduced NR diameter [@Lupan:2010]. Furthermore, our model implies that O$_2$ desorption is only one route to obtain charge separation through the V$_o^+$ state. Many authors have found enhanced UV sensing by coating ZnO with a metal [@Liu:2010_1] or a conducting polymer [@Hassan:2012], in agreement with our results using a Au interdigitated mask. In conclusion, our low cost CBD ZnO NRs exhibit two main surface DLS peaks at 555 and 610 nm due to V$_o^{+}$ and V$_o$, respectively. The 555 nm PL intensity exhibits a transient in air of $\sim$20 s, which is well correlated with the change in resistance under UV excitation.
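The decay times quoted for these transients (23 and 80 s) are single-exponential time constants; for an ideal transient $I(t) = A e^{-t/\tau} + C$ with known baseline, $\tau$ can be recovered with a log-linear least-squares fit. A minimal sketch on synthetic data (the time grid, amplitude, and baseline below are illustrative values, not measurements):

```python
import numpy as np

def decay_time(t, intensity, baseline):
    """Estimate tau for I(t) = A*exp(-t/tau) + baseline by fitting a line
    to log(I - baseline); the slope of that line is -1/tau."""
    slope, _ = np.polyfit(t, np.log(intensity - baseline), 1)
    return -1.0 / slope

# Synthetic 555 nm-like transient: tau = 23 s, unit amplitude, baseline 0.1.
t = np.linspace(0.0, 120.0, 200)
intensity = np.exp(-t / 23.0) + 0.1
tau = decay_time(t, intensity, baseline=0.1)
print(round(tau, 1))  # -> 23.0
```

On real, noisy PL data one would instead fit the nonlinear model directly and treat the baseline as a free parameter, but the log-linear form suffices to illustrate how the time constants above are defined.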
This correlation arises because O$_2$ desorption decreases the band bending and thus the concentration of V$_o^{+}$ states; simultaneously, charge separation reduces the sample resistance. The PL transient is suppressed in vacuum, because the depletion region is stable when O$_2$ desorption does not occur. We presented a unified model for these results with implications for photovoltaic, gas, pH, and UV sensing applications.

We would like to thank Kingsley Iwu for valuable discussions and insights regarding the chemical synthesis of ZnO NRs. The authors acknowledge MIUR projects, ENERGETIC (PON02$\textunderscore$00355$\textunderscore$3391233), and PLAST$\textunderscore$ICs (PON02$\textunderscore$00355$\textunderscore$3416798).
---
abstract: 'The boundedness and compactness of weighted composition operators from $H^\infty$ to the Bloch space in the unit ball of $\CC^n$ are investigated in this paper. In particular, some new characterizations for the boundedness and the essential norm of weighted composition operators are given. [^1] [*Keywords*]{}: Weighted composition operator, $H^\infty$, Bloch space, essential norm. [^2]'
address:
- |
  Juntao Du\
  Faculty of Information Technology, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau.
- |
  Songxiao Li\
  Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, 610054, Chengdu, Sichuan, P.R. China\
  Institute of Systems Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau.
author:
- 'Juntao Du and Songxiao Li$^\dagger$'
title: ' Weighted Composition Operators from $H^\infty$ to the Bloch Space in the Unit Ball of $\CC^n$'
---

Introduction
============

Let $\BB$ be the open unit ball of $\CC^n$ and $\partial \BB$ the boundary of $\BB$. When $n=1$, $\BB$ is the open unit disk $\D$ in the complex plane. Let $H(\BB)$ denote the space of all holomorphic functions on $\BB$. For $f\in H(\BB)$, the radial derivative and complex gradient of $f$ at $z$ will be denoted by $\R f(z)$ and $\nabla f(z)$, respectively. That is, $$\R f(z)=\sum_{j=1}^n z_j\frac{\partial f}{\partial z_j}(z),\mbox{ and } \,\,\nabla f(z)=\Big(\frac{\partial f}{\partial z_1}(z),\frac{\partial f}{\partial z_2}(z),\cdots, \frac{\partial f}{\partial z_n}(z)\Big).$$ An $f\in H(\BB)$ is said to belong to the Bloch space, denoted by $\B=\B(\BB)$, if $$\|f\|_{\beta}:=\sup\limits_{z\in \BB} (1-|z|^2)|\R f(z)|<\infty.$$ The space $\B$ is a Banach space with the norm $\|f\|_{\B}=|f(0)|+\|f\|_{\beta}$.
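As a quick numerical illustration of the seminorm $\|f\|_\beta$ in the one-dimensional case $\BB=\D$ (where $\R f(z)=zf'(z)$ and the seminorm is comparable to $\sup_z(1-|z|^2)|f'(z)|$), the sketch below estimates $\sup_{z\in\D}(1-|z|^2)|f'(z)|$ on a polar grid for $f(z)=\log\frac{1}{1-z}$; for this $f$ the supremum equals $2$, approached as $z\to 1$ along the real axis. The grid resolution is arbitrary and the example is ours, not from the paper:

```python
import numpy as np

# Estimate sup (1-|z|^2)|f'(z)| over the unit disk for
# f(z) = log(1/(1-z)), so f'(z) = 1/(1-z).  Exact supremum: 2.
r = np.linspace(0.0, 0.999, 400)          # radii, staying inside the disk
theta = np.linspace(0.0, 2 * np.pi, 400)  # angles
R, T = np.meshgrid(r, theta)
z = R * np.exp(1j * T)
vals = (1 - np.abs(z) ** 2) * np.abs(1.0 / (1 - z))
print(round(vals.max(), 3))  # -> 1.999, approaching the exact supremum 2
```

On the grid the maximum occurs at $z=r$ real, where $(1-r^2)/(1-r)=1+r$, so refining the radial grid toward $1$ drives the estimate toward $2$; this also shows $f$ is in $\B$ but not in $H^\infty$.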
From [@Zkh2005], we see that $\|f\|_\beta \approx \sup_{z\in\BB}(1-|z|^2)|\nabla f(z)|.$ In [@Tr1980blms], Timoney proved that $$\label{1222-3} \|f\|_{\beta}\approx \sup\left\{ \frac{|\langle \nabla f(z),\ol{v} \rangle|}{H_z(v,v)^\frac{1}{2}} :z\in\BB,v\in\CC^n\backslash\{0\} \right\}.$$ Here $H_z(v,v)$ is the Bergman metric defined by $$H_z(v,v)=\frac{n+1}{2}\frac{(1-|z|^2)|v|^2+|\langle v,z\rangle|^2}{(1-|z|^2)^2}, ~~z\in\BB,v\in\CC^n\backslash\{0\} .$$ See [@Zkh2005] for more information on the Bloch space $\B$ on the unit ball. We use $H^\infty$ to denote the space of bounded holomorphic functions on $\BB$. That is, $f\in H^\infty$ if and only if $f\in H(\BB)$ and $$\|f\|_{\infty}:=\sup_{z\in\BB} |f(z)|<\infty.$$ It is well known that $H^\infty$ is a Banach space and a subset of $\B$. Moreover (see [@Zkh2005]), $$\label{1222-1} \|f\|_\B \leq \|f\|_{\infty}.$$ Let $u\in H(\BB)$ and $\vp(z)=(\vp_1(z),\vp_2(z),\cdots,\vp_n(z))$ be a holomorphic self-map of $\BB$. The weighted composition operator, denoted by $uC_\vp$, is defined by $$(uC_\vp f)(z)=u(z)f(\vp(z)),\,\,f\in H(\BB).$$ When $u=1$, $uC_\vp$ is the composition operator, denoted by $C_\vp$. It is important to give a function theoretic description of when $u$ and $\vp$ induce a bounded or compact weighted composition operator on various function spaces (see [@CccMbd1995]). In the setting of the unit disk, it is well known that the operator $C_\vp$ is bounded on $\B$ for any analytic self-map $\vp$ of $\D$, by the Schwarz-Pick Lemma. The compactness of the composition operator on $\B$ was characterized in [@MkMa1995tams]. In [@WlZdcZkh2009pams], Wulan, Zheng and Zhu showed that $C_\varphi$ is compact on $\B$ if and only if $ \lim\limits_{k\to\infty} \|\varphi^k \|_\B=0.$ This method has been used to describe the boundedness and compactness of $uC_\varphi$ on some function spaces, see [@Cf2013cejm; @Djn2012jmaa; @HqhLsxWh2017cvee; @LxsLsx2017ieot] for example.
In [@Os2001tjm], Ohno characterized the boundedness and compactness of the operator $uC_\varphi:H^\infty\to\B$. Colonna, motivated by [@WlZdcZkh2009pams], gave another characterization for the boundedness and compactness of the operator $uC_\varphi:H^\infty\to\B$ in [@Cf2013cejm]. Hu, Li and Wulan, based on the work of Ohno and Colonna, gave some estimates for the essential norm of the operator $uC_\varphi:H^\infty\to\B$ in [@HqhLsxWh2017cvee]. Moreover, they gave a new characterization for the boundedness and compactness of $uC_\varphi:H^\infty\to\B$ in [@HqhLsxWh2017cvee]. In the setting of the unit ball, Shi and Luo studied composition operators on the Bloch space in [@SjhLl2000ams]. In [@Djn2012jmaa], Dai gave several new characterizations for the compactness of the composition operator on the Bloch space, which extended the main result in [@WlZdcZkh2009pams] to the unit ball. Li and Stević characterized the boundedness and compactness of $uC_\varphi:H^\infty\to\B$ in [@LsxSs2008tjm] (see [@LsxSs2007aaa] for the setting of polydisk). Zhang and Chen gave two characterizations of the boundedness and compactness of $uC_\varphi:H^\infty\to\B$ in [@ZmzChh2009ams]. For example, they showed that $uC_\vp:H^\infty\to\B$ is bounded if and only if $u\in\B$ and $$\sup_{z\in\BB} (1-|z|^2)|u(z)|\sqrt{H_{\vp(z)}(\vp^\p(z) z,\vp^\p(z) z)}<\infty.$$ In this paper, motivated by [@HqhLsxWh2017cvee] and [@ZmzChh2009ams], we investigate the boundedness, compactness and essential norm of $uC_\vp:H^\infty\to\B$ on the unit ball. That is, we will give some new characterizations for the boundedness, compactness and essential norm of $uC_\vp:H^\infty\to\B$. These extend the results in [@HqhLsxWh2017cvee] to the unit ball. Moreover, the method we used here is completely different from [@HqhLsxWh2017cvee]. 
Recall that the essential norm of a bounded linear operator $T:X\to Y$, denoted by $\|T\|_{e,X\to Y}$, is defined as the distance from $T$ to the space of compact operators from $X$ to $Y$. That is, $$\|T\|_{e,X\to Y}=\inf\{\|T-K\|_{X\to Y}: K \mbox{ is a compact operator from } X \mbox{ to } Y\}.$$ Constants are denoted by $C$; they are positive and may differ from one occurrence to the next. We say that $A\lesssim B$ if there exists a constant $C$ such that $A \leq CB$. The symbol $A\approx B$ means that $A\lesssim B\lesssim A$.

Boundedness of $uC_\vp:H^\infty\to\B $
======================================

Before we state the main result and the proof in this section, we state some notation and preliminary results. Let $\vp^\p(z)$ be the Jacobian matrix of $\vp$, that is $$\vp^\p(z)=\left( \begin{array}{cccc} \frac{\partial \vp_1}{\partial z_1} &\frac{\partial \vp_1}{\partial z_2} &\cdots&\frac{\partial \vp_1}{\partial z_n}\\ \frac{\partial \vp_2}{\partial z_1} &\frac{\partial \vp_2}{\partial z_2} &\cdots&\frac{\partial \vp_2}{\partial z_n}\\ \cdots &\cdots &\cdots &\cdots\\ \frac{\partial \vp_n}{\partial z_1} &\frac{\partial \vp_n}{\partial z_2} &\cdots&\frac{\partial \vp_n}{\partial z_n} \end{array} \right).$$ Therefore $$\nabla (f(\vp(z)))=(\nabla f)(\vp(z))\vp^\p(z),\mbox{ and }\,\, \R (f(\vp(z)))=(\nabla f)(\vp(z))\vp^\p(z)z.$$ Here and henceforth, we do not distinguish row vectors from column vectors; that is, we always assume the vectors take the proper forms in the expressions. For $a\in \BB\backslash\{0\}$, the automorphism of $\BB$ is defined by $$\phi_a(z)=\frac{a-P_a z-s_aQ_a z}{1-\langle z,a\rangle}, z\in\BB,$$ where $s_a=\sqrt{1-|a|^2}$, $$P_a z=\frac{\langle z,a\rangle}{|a|^2}a,\,\, Q_a z=z-\frac{\langle z,a\rangle}{|a|^2}a,\,z\in\BB.$$ When $a=0$, set $\phi_a(z)=-z$. Let $\phi_a(z)=(\phi_{a,1}(z),\phi_{a,2}(z),\cdots,\phi_{a,n}(z))$.
We have $$\{\phi_{a,i}(z)\}_{i=1}^n\subset H^\infty,\mbox{ and } \sum_{i=1}^n |\phi_{a,i}(z)|^2<1.$$

[**Lemma 1.**]{} [@Djn2012jmaa Lemma 2.1] [*Let $a\in \BB\backslash\{0\}$. Then $$|\phi_a(z)-a|=\frac{\sqrt{(1-|a|^2)(|z|^2-|\langle z,a \rangle|^2)}}{|1-\langle z,a \rangle|}$$ and $$|\phi_a^\p(a)z|=\frac{\sqrt{(1-|a|^2)|z|^2+ |\langle z,a \rangle|^2}}{1-|a|^2}.$$* ]{}

[**Lemma 2.**]{} [@li] [*Let $u,f\in H(\BB)$. Then $$\R I_u(f)(z)=u(z)\R f(z),\,\,\R J_u(f)(z)= \R u(z)f(z), ~~~\,~\, z\in \BB.$$ Here $$I_u(f)(z)=\int_0^1 u(tz)(\R f)(tz)\frac{dt}{t},\,\,J_u(f)(z)=\int_0^1 (\R u)(tz)f(tz)\frac{dt}{t}.$$* ]{}

[**Theorem 1.**]{} *Suppose $u\in H(\BB)$ and $\vp$ is a holomorphic self-map of $\BB$. Then the following statements are equivalent.*

(i) $uC_\vp:H^\infty\to\B$ is bounded.

(ii) $u\in\B$ and $$M_1:=\sup_{k\in\N}\sup_{\xi\in\partial \BB} \|u\langle \varphi,\xi\rangle^k\|_{\beta} <\infty.$$

(iii) $u\in\B$ and $$M_2:=\sup_{z\in\BB} (1-|z|^2)|u(z)|\sqrt{H_{\vp(z)}(\vp^\p(z) z,\vp^\p(z) z)}<\infty.$$

(iv) $u\in\B$ and $$M_3:=\sup_{1\leq i\leq n}\sup\limits_{w\in\BB}\|uC_\vp\phi_{\vp(w),i}\|_{\beta}<\infty.$$

(v) $u\in\B$ and $$M_4:=\sup_{k\in\N}\sup_{\xi\in\partial \BB} \|I_u(\langle \varphi,\xi\rangle^k)\|_{\beta}<\infty.$$

Obviously, $uC_\vp:H^\infty\to\B$ is bounded if and only if $$\|uC_\vp f\|_{\beta}\lesssim \|f\|_{\infty}, ~~~\forall f\in H^\infty.$$ [**([*i*]{})$\Rightarrow$ ([*ii*]{})**]{}. This implication is obvious since $\|f_{k,\xi}\|_{\infty}=1$. Here and henceforth, $f_{k,\xi}(z)=\langle z,\xi\rangle^k$, $z\in \BB$, $\xi\in\partial \BB$. [**([*ii*]{})$\Rightarrow$ ([*iii*]{})**]{}.
When $k\geq 1$, since $$\R(f_{k,\xi}\comp\vp)(z)=k \langle \vp(z),\xi\rangle^{k-1} \langle \vp^\p(z) z,\xi \rangle$$ and $$|\R (uC_\vp f_{k,\xi})(z)| \geq |u(z)||\R(f_{k,\xi}\comp\vp)(z)|-|\R u(z)||f_{k,\xi}(\vp(z))| ,$$ we obtain $$\label{1216-1} \sup_{z\in\BB}\sup_{k\in\N}\sup_{\xi\in\partial \BB}k(1-|z|^2)|u(z)|\left| \langle \vp(z),\xi\rangle^{k-1} \langle \vp^\p(z) z,\xi \rangle \right| \leq 2M_1.$$ After a calculation, we have [$$\label{1216-2} \sqrt{H_{\vp(z)}(\vp^\p(z) z,\vp^\p(z) z) } \lesssim \left( \frac{(1-|\vp(z)|^2)^\frac{1}{2}|\vp^\p(z)z|}{1-|\vp(z)|^2}+\frac{|\langle \vp^\p(z)z,\vp(z) \rangle|}{1-|\vp(z)|^2}\right).$$ ]{} Let $\xi_i$ be the vector in which the $i$-th component is 1 and the others are 0, $i=1,2,\cdots,n$. By letting $k=1$, from (\[1216-1\]), we have $$\label{1220-2} \sup_{z\in\BB}(1-|z|^2)|u(z)||\vp^\p(z) z| =\sup_{z\in\BB}(1-|z|^2)|u(z)|\left(\sum_{i=1}^n |\langle \vp^\p(z) z,\xi_i \rangle|^2\right)^\frac{1}{2} \lesssim M_1.$$ When $|\vp(z)|\leq \frac{1}{2}$, from (\[1216-2\]) we have $$(1-|z|^2)|u(z)|\sqrt{H_{\vp(z)}(\vp^\p(z) z,\vp^\p(z) z)} \lesssim (1-|z|^2)|u(z)||\vp^\p(z) z|.$$ Therefore, $$\label{1216-3} \sup_{|\vp(z)|\leq \frac{1}{2}}(1-|z|^2)|u(z)|\sqrt{H_{\vp(z)}(\vp^\p(z) z,\vp^\p(z) z)} \lesssim M_1.$$ Next, we will prove that $$\label{0101-1} \sup_{|\vp(z)|> \frac{1}{2}}(1-|z|^2)|u(z)|\sqrt{H_{\vp(z)}(\vp^\p(z) z,\vp^\p(z) z)}\lesssim M_1.$$ Assume that there exists $k\in\N$ such that $k\geq 3 $ and $1-\frac{1}{k-1}\leq |\vp(z)| < 1-\frac{1}{k}$. 
It is easy to see that $$\label{0102-1} |\vp(z)|^{2(k-1)}\approx 1 , \mbox{ and } k( 1-|\vp(z)|^2)\approx 1.$$ Therefore, by letting $\tau=\vp(z)$, we have $$\label{1219-6} \frac{|\langle \vp^\p(z)z,\vp(z) \rangle|}{1-|\vp(z)|^2} \approx k {|\langle \vp(z),\tau \rangle|^{k-1}|\langle {\vp^\p(z)z,\tau \rangle|}}.$$ By (\[1216-1\]), we get $$\begin{aligned} \sup_{1-\frac{1}{k-1}\leq |\vp(z)| < 1-\frac{1}{k} } \frac{(1-|z|^2)|u(z)||\langle \vp^\p(z)z,\vp(z) \rangle|}{1-|\vp(z)|^2} \lesssim M_1 \nonumber\end{aligned}$$ and hence $$\begin{aligned} \label{1216-4} & & \sup_{|\vp(z)|> \frac{1}{2}} \frac{(1-|z|^2)|u(z)||\langle \vp^\p(z)z,\vp(z) \rangle|}{1-|\vp(z)|^2} \nonumber\\ &\lesssim & \sup_{k \geq 3} \sup_{1-\frac{1}{k-1}\leq |\vp(z)| < 1-\frac{1}{k} } \frac{(1-|z|^2)|u(z)||\langle \vp^\p(z)z,\vp(z) \rangle|}{1-|\vp(z)|^2} \lesssim M_1.\end{aligned}$$ By projection theorem, there exists a $\eta(z)\in \partial \BB$ such that $\langle \vp(z),\eta(z)\rangle =0. $ Then $$\label{1222-2} \vp^\p(z)z=p\vp(z)+q \eta(z),$$ where $$p=\frac{\langle \vp^\p(z) z,\vp(z)\rangle}{|\vp(z)|^2}, q=\langle \vp^\p(z) z,\eta(z)\rangle.$$ Since $|\vp(z)|>\frac{1}{2}$, we get $$\label{1217-1} |\vp^\p(z)z|^2 \leq 4|\langle \vp^\p(z) z,\vp(z)\rangle|^2+|\langle \vp^\p(z) z,\eta(z)\rangle|^2.$$ Let $\zeta(z)=\vp(z)+\sqrt{1-|\vp(z)|^2}\eta(z)$. By (\[1222-2\]), we have $$\label{1222-4} |\zeta(z)|=1,\,\, \langle \vp(z),\zeta(z)\rangle =|\vp(z)|^2,$$ and $$\langle \vp^\p(z)z,\zeta(z)\rangle =\langle \vp^\p(z)z,\vp(z) \rangle +\sqrt{1-|\vp(z)|^2}\langle \vp^\p(z) z,\eta(z)\rangle.$$ Then, $$\begin{aligned} \sqrt{1-|\vp(z)|^2}|\langle \vp^\p(z) z,\eta(z)\rangle| &\leq& |\langle \vp^\p(z) z,\vp(z)\rangle| +|\langle \vp^\p(z) z,\zeta(z)\rangle| . 
\label{1217-2}\end{aligned}$$ From (\[1217-1\]), (\[1217-2\]) and (\[1216-4\]), we obtain $$\begin{aligned} &&(1-|z|^2)|u(z)|\frac{|\vp^\p(z)z|}{\sqrt{1-|\vp(z)|^2}} \nonumber\\ &\lesssim& (1-|z|^2)|u(z)|\frac{|\langle \vp^\p(z) z,\vp(z)\rangle|+|\langle \vp^\p(z) z,\eta(z)\rangle|}{\sqrt{1-|\vp(z)|^2}} \nonumber\\ &\lesssim& (1-|z|^2)|u(z)|\frac{ |\langle \vp^\p(z) z,\vp(z)\rangle| +|\langle \vp^\p(z) z,\zeta(z)\rangle| }{{1-|\vp(z)|^2}}\label{1219-7}\\ &\lesssim& M_1 + (1-|z|^2)|u(z)| \frac{|\langle \vp^\p(z) z,\zeta(z)\rangle|}{1-|\vp(z)|^2}.\nonumber\end{aligned}$$ By (\[0102-1\]), (\[1222-4\]) and (\[1216-1\]), we have $$\begin{aligned} &&(1-|z|^2)|u(z)| \frac{|\langle \vp^\p(z) z,\zeta(z)\rangle|}{1-|\vp(z)|^2} \nonumber\\ &\approx& k (1-|z|^2)|u(z)| |\vp(z)|^{2(k-1)} |\langle \vp^\p(z) z,\zeta(z)\rangle| \nonumber\\ &=& k (1-|z|^2)|u(z)| |\langle \vp(z),\zeta(z)\rangle|^{k-1} |\langle \vp^\p(z) z,\zeta(z)\rangle| \label{1219-8}\\ &\lesssim & M_1.\nonumber\end{aligned}$$ Therefore, we have $$\label{1216-5} \sup_{|\vp(z)|> \frac{1}{2}} \frac{(1-|z|^2)|u(z)|(1-|\vp(z)|^2)^\frac{1}{2}|\vp^\p(z)z|}{1-|\vp(z)|^2} \lesssim M_1.$$ By (\[1216-2\]), (\[1216-4\]) and (\[1216-5\]), we get (\[0101-1\]). From (\[1216-3\]) and (\[0101-1\]), we see that ([*iii*]{}) holds. [**([*iii*]{})$\Rightarrow$ ([*i*]{})**]{}. Let $f\in H^\infty$. From (\[1222-3\]) and (\[1222-1\]), we have $$\begin{aligned} &&\|uC_\vp f\|_{\beta} \nonumber\\ &\leq&(1-|z|^2) |\R u(z)||f(\vp(z))| + (1-|z|^2)|u(z)|\left|(\nabla f)(\vp(z))\vp^\p(z)z\right| \nonumber\\ &\leq& \|u\|_{\beta}\|f\|_{\infty} +(1-|z|^2)|u(z)| \sqrt{H_{\vp(z)}(\vp^\p(z)z,\vp^\p(z)z)}\frac{\left|(\nabla f)(\vp(z))\vp^\p(z)z\right|}{ \sqrt{H_{\vp(z)}(\vp^\p(z)z,\vp^\p(z)z)}} \nonumber\\ &\lesssim& \|u\|_{\beta}\|f\|_{\infty} +M_2 \|f\|_{\beta} \nonumber\\ &\lesssim& \|u\|_{\beta}\|f\|_{\infty} +M_2 \|f\|_{\infty}. \nonumber\end{aligned}$$ So $uC_\vp:H^\infty\to\B$ is bounded. [**([*iv*]{})$\Rightarrow$ ([*i*]{})**]{}.
Suppose $(iv)$ holds. For all $w\in\BB$, by Lemma 1, we have $$\begin{aligned} \sqrt{\frac{2}{n+1}}\sqrt{H_{\vp(w)}(\vp^\p(w) w,\vp^\p(w) w)} &=&\left|\phi_{\vp(w)}^\p (\vp(w))\vp^\p(w)w\right| \\ &=& \left(\sum_{i=1}^n \left|\R(\phi_{\vp(w),i}\comp \vp)(w)\right|^2\right)^\frac{1}{2}\\ &\lesssim &\sum_{i=1}^n \left|\R(\phi_{\vp(w),i}\comp \vp)(w)\right|.\end{aligned}$$ Since $|\phi_{\vp(w)}(z)|<1$, we obtain $$\begin{aligned} &&(1-|w|^2)|u(w)|\sqrt{H_{\vp(w)}(\vp^\p(w) w,\vp^\p(w) w)} \nonumber\\ &\lesssim& \sum_{i=1}^n (1-|w|^2)|u(w)| \left|\R(\phi_{\vp(w),i}\comp \vp)(w)\right| \nonumber\\ &\lesssim& \sum_{i=1}^n\left(\|uC_\vp\phi_{\vp(w),i}\|_{\beta}+(1-|w|^2)|\R u(w)||(\phi_{\vp(w),i}\comp \vp)(w)|\right)\label{1220-1}\\ &\lesssim& M_3+\|u\|_{\beta}.\nonumber\end{aligned}$$ Then ([*iii*]{}) holds. So ([*i*]{}) holds. [**([*i*]{})$\Rightarrow$ ([*iv*]{})**]{}. This implication is also obvious since $\|\phi_{\vp(w),i}\|_{\infty}\leq 1$. [**([*v*]{})$\Leftrightarrow$ ([*ii*]{})**]{}. Since $u\in\B$, by Lemma 2, we have $$(1-|z|^2)|\R (J_u(f_{k,\xi}\comp \vp))(z) |\leq (1-|z|^2)|\R u(z)|\leq \|u\|_{\beta}$$ and $$\label{1223-1} \R (uC_\vp f_{k,\xi})(z) = \R (I_u(f_{k,\xi}\comp \vp))(z) + \R (J_u(f_{k,\xi}\comp \vp))(z).$$ Using the triangle inequality, we can get the desired result. The proof is complete.

[**Remark 1.**]{} In [@ZmzChh2009ams], the equivalence of $\it (i)$ and $\it (iii)$ was proved in a different way.

The essential norm of $uC_\vp:H^\infty\to\B$
============================================

To study the essential norm of $uC_\vp:H^\infty\to\B$, we need the following lemmas.

[**Lemma 3.**]{} [@Tm1996d Lemma 2.10] [*Suppose $T:H^\infty\rightarrow \B$ is linear and bounded. Then $T$ is compact if and only if whenever $\{f_k\}_{k=1}^\infty$ is bounded in $H^\infty$ and $f_k \rightarrow 0$ uniformly on compact subsets of $\BB$, $\lim\limits_{k\to \infty}\|T f_{k }\|_{\B}=0$.*]{}

[**Lemma 4.**]{} [*Suppose $0<r,s<1$ and $f\in H(\BB)$.
For all $|z|\leq s$, $$|\nabla f(z)|\leq \frac{2n}{1-s}\max_{|z|\leq \frac{1+s}{2}}|f(z)|\,\, \mbox{ and }\,\, |f(z)-f(rz)|\leq \frac{2n(1-r)}{1-s}\max_{|z|\leq \frac{1+s}{2}}|f(z)|.$$* ]{} Since $f\in H(\BB)$, $\frac{\partial f}{\partial z_j}\in H(\BB)(j=1,2,\cdots, n)$. For all $|z|\leq s$, let $\Gamma_{z,j,s}=\{\eta\in{{\mathbb D}};|\eta-z_j|=\frac{1-s}{2}\}$, and $$z_{j,s}(\eta)=(z_1,\cdots,z_{j-1},\eta,z_{j+1},\cdots,z_n), \eta\in\Gamma_{z,j,s}.$$ Since $|z_{j,s}(\eta)|\leq \frac{1+s}{2}$, we get $$\begin{aligned} \left|\frac{\partial f}{\partial z_j}\right| =\frac{1}{2\pi}\left|\int_{\Gamma_{z,j,s}}\frac{f(z_{j,s}(\eta))}{(\eta-z_j)^2}d\eta\right| \leq\frac{2}{1-s}\max_{|z|\leq \frac{1+s}{2}}|f(z)|.\end{aligned}$$ Hence $|\nabla f(z)|\leq \frac{2n}{1-s}\max\limits_{|z|\leq \frac{1+s}{2}}|f(z)|$. When $|z|<s$, $$\begin{aligned} |f(z)-f(rz)| &=&\left|\int_r^1 \frac{df(tz)}{dt}dt\right| =\left|\int_r^1 \langle (\nabla f)(tz),\overline{z}\rangle dt\right| \\ &\leq& (1-r) \sup_{|z|\leq s} |\nabla f(z)| \leq \frac{2n(1-r)}{1-s}\max_{|z|\leq \frac{1+s}{2}}|f(z)|.\end{aligned}$$ The proof is complete. [**Theorem 2.**]{} [*Suppose $u\in H(\BB)$ and $\vp$ is a holomorphic self-map of $\BB$. If $uC_\vp:H^\infty\to\B$ is bounded, then $$\|uC_\vp\|_{e,H^\infty\to \B}\approx Q_1+Q_2\approx Q_1+Q_3 \approx Q_1+Q_4 \approx Q_5+Q_6 .$$ Here $$Q_1=\limsup_{|\vp(z)|\to 1} (1-|z|^2)|\R u(z)|,\,\,Q_2= \limsup_{|\vp(z)|\to 1}(1-|z|^2)|u(z)|\sqrt{H_{\vp(z)}(\vp^\p(z) z,\vp^\p(z) z) },$$ $$Q_3=\limsup_{k\to\infty} \sup_{\xi\in\partial\BB} \|u \langle \vp,\xi\rangle^k \|_{\beta}, \,\,\,\,Q_4=\limsup_{|\vp(w)|\to 1} \sum_{i=1}^n \|uC_\vp\phi_{\vp(w),i}\|_{\beta},$$ $$Q_5= \limsup_{k\to\infty}\sup_{\xi\in\partial \BB}\|I_u \langle \vp,\xi\rangle^k \|_{\beta},\,\,\,\, Q_6=\limsup_{k\to\infty}\sup_{\xi\in\partial \BB}\|J_u \langle \vp,\xi\rangle^k \|_{\beta}.$$* ]{} Since $uC_\vp:H^\infty\to\B$ is bounded, by Theorem 1, we have $u\in\B$ and $\max\limits_{1\leq i\leq 6} Q_i<\infty$. 
When $\sup\limits_{z\in\BB}|\vp(z)|<1$, it is easy to see that $uC_\vp:H^\infty\to\B$ is compact by Lemmas 3 and 4. In this case, these asymptotic relations vacuously hold. Hence we only consider the case $\sup_{z\in\BB} |\vp(z)|=1.$ First we prove that $$Q_1+Q_2\gtrsim \|uC_\vp\|_{e,H^\infty\to \B} .$$ Let $f_t(z)=f(tz)$, for $ f\in H(\BB)$ and $ t\in (0,1)$. Suppose $r,s\in (\frac{1}{2},1)$. For any $f\in H^\infty$ with $\|f\|_{\infty}\leq 1$, we have $$\|(uC_\vp -uC_{r\vp})f\|_{\beta} \leq\sup_{|\vp(z)|\leq s} G_1 +\sup_{ s<|\vp(z)|<1} G_1 +\sup_{|\vp(z)|\leq s} G_2 +\sup_{s<|\vp(z)|<1} G_2 ,$$ where $$G_1=(1-|z|^2)|\R u(z)||(f\comp\vp-f_r\comp\vp)(z)|,\,\,\,\,G_2=(1-|z|^2)|u(z)||\R(f\comp \vp-f_{r}\comp\vp)(z)|.$$ By Lemma 4, we have $$\label{1219-1} \sup_{|\vp(z)|\leq s} G_1 \leq \frac{2n(1-r)}{1-s}\|u\|_{\beta},\,\,\sup_{s<|\vp(z)|<1} G_1 \le 2 \sup_{s<|\vp(z)|<1} (1-|z|^2)|\R u(z)|,$$ and $$\begin{aligned} \sup_{|\vp(z)|\leq s}G_2 &\leq& \sup_{|\vp(z)|\leq s}(1-|z|^2)|u(z)||(\nabla (f-f_r))(\vp(z))| |\vp^\p(z)z| \\ &\leq& \frac{2n}{1-s}\sup_{|\vp(z)|\leq \frac{1+s}{2}}|f(\vp(z))-f(r\vp(z)| \sup_{|\vp(z)|\leq s} (1-|z|^2)|u(z)| |\vp^\p(z)z| \\ &\leq& \frac{8n^2(1-r)}{(1-s)^2}\sup_{|\vp(z)|\leq s} (1-|z|^2)|u(z)| |\vp^\p(z)z| .\end{aligned}$$ From (\[1220-2\]), we have $$\label{1219-2} \sup_{|\vp(z)|\leq s}G_2\lesssim \frac{1-r}{(1-s)^2}M_1.$$ By (\[1222-3\]) and (\[1222-1\]), we have $$\begin{aligned} &&\sup_{s<|\vp(z)|<1}G_2\nonumber\\ &\leq& \sup_{s<|\vp(z)|<1}(1-|z|^2)|u(z)| \sqrt{H_{\vp(z)}(\vp^\p(z)z,\vp^\p(z)z) }\frac{\left|(\nabla f)(\vp(z))\vp^\p(z)z\right|}{ \sqrt{H_{\vp(z)}(\vp^\p(z)z,\vp^\p(z)z)} } \nonumber\\ &&+\sup_{s<|\vp(z)|<1}(1-|z|^2)|u(z)| \sqrt{H_{\vp(z)}(\vp^\p(z)z,\vp^\p(z)z)} \frac{\left|(\nabla f_r)(\vp(z))\vp^\p(z)z\right|}{ \sqrt{H_{\vp(z)}(\vp^\p(z)z,\vp^\p(z)z)} } \nonumber\\ &\lesssim& (\|f\|_{\beta}+\|f_r\|_{\beta})\sup_{s<|\vp(z)|<1}(1-|z|^2)|u(z)| \sqrt{H_{\vp(z)}(\vp^\p(z)z,\vp^\p(z)z) }\nonumber\end{aligned}$$ 
$$\begin{aligned} &\lesssim& \sup_{s<|\vp(z)|<1}(1-|z|^2)|u(z)| \sqrt{H_{\vp(z)}(\vp^\p(z)z,\vp^\p(z)z) }.\label{1219-4}\end{aligned}$$ It is obvious that $$\begin{aligned} \label{1219-5} \lim_{r\to 1}|(uC_{\vp}-uC_{r\vp})(0)|=0.\end{aligned}$$ Letting $r\to 1$, by (\[1219-1\])-(\[1219-5\]), we have $$\begin{aligned} &&\|uC_\vp\|_{e,H^\infty\to\B}\\ &\leq& \limsup_{r\to 1} \|uC_\vp-uC_{r\vp}\|_{H^\infty\to\B}\\ &\lesssim& \sup_{s<|\vp(z)|<1}(1-|z|^2)|u(z)| \sqrt{H_{\vp(z)}(\vp^\p(z)z,\vp^\p(z)z) } +\sup_{s<|\vp(z)|<1} (1-|z|^2)|\R u(z)|.\end{aligned}$$ Here we used the fact that $uC_{r\vp}:H^\infty\to\B$ is compact. Letting $s\to 1$, we get the desired result. Next we prove that $$Q_1+Q_3\gtrsim Q_1+Q_2.$$ Similar to the proof of Theorem 1, we assume that there exists $k\in\N$ such that $k\geq 3 $ and $1-\frac{1}{k-1}\leq |\vp(z)| < 1-\frac{1}{k}$. From (\[1219-6\]), we have $$\begin{aligned} \frac{|\langle \vp^\p(z)z,\vp(z) \rangle|}{1-|\vp(z)|^2} \lesssim \sup_{\xi\in\partial \BB}k {|\langle \vp(z),\xi \rangle^{k-1}||\langle {\vp^\p(z)z,\xi \rangle|}}.\end{aligned}$$ From (\[1219-7\]) and (\[1219-8\]), we have $$(1-|z|^2)|u(z)|\frac{|\vp^\p(z)z|}{(1-|\vp(z)|^2)^\frac{1}{2}} \lesssim \sup_{\xi\in\partial\BB} k(1-|z|^2)|u(z)| |\langle \vp(z),\xi \rangle|^{k-1} |\langle \vp^\p(z) z,\xi\rangle|.$$ From (\[1216-2\]), we have $$\begin{aligned} &&(1-|z|^2)|u(z)|\sqrt{H_{\vp(z)}(\vp^\p(z) z,\vp^\p(z) z)}\\ &\lesssim& \sup_{\xi\in\partial\BB} k (1-|z|^2)|u(z)| |\langle \vp(z),\xi\rangle|^{k-1} |\langle \vp^\p(z) z,\xi\rangle| \\ &\leq& \sup_{\xi\in\partial\BB} \|u \langle \varphi,\xi\rangle^k\|_{\beta}+(1-|z|^2)|(\R u)(z)|.\end{aligned}$$ By letting $|\vp(z)|\to 1$, we have $Q_1+Q_3\gtrsim Q_2$, i.e., we get $Q_1+Q_3\gtrsim Q_1+Q_2.$ Now we prove that $$\|uC_\vp\|_{e,H^\infty\to \B} \gtrsim Q_1+Q_3.$$ Suppose $K:H^\infty\to\B$ is compact. 
For any $\varepsilon>0$ and each $k$, there exists $\xi_k\in\partial\BB$ such that $$\|u\langle \varphi, \xi_k\rangle^k\|_{\beta} \geq\sup_{\xi\in\partial\BB} \|u\langle \varphi, \xi\rangle^k\|_{\beta}-\varepsilon.$$ Let $h_{k}(z)=\langle z,\xi_k\rangle^k.$ Then $\|h_{k}\|_{\infty}\leq 1$ and $\{h_k\}$ converges to $0$ uniformly on compact subsets of $\BB$. Then $\lim\limits_{k\to\infty} \|Kh_k\|_{\B}=0$. Since $$\begin{aligned} \|uC_\vp -K\|_{H^\infty\to\B } &\geq& \|(uC_\vp-K)h_{k}\|_{\B } \\ &\geq& \|uC_\vp h_{k}\|_{\beta}+|u(0)h_{k}(\vp(0))|-\|Kh_{k}\|_{\B },\end{aligned}$$ we have $$\|uC_\vp -K\|_{H^\infty\to\B} \geq \limsup_{k\to\infty} \|uC_\vp h_{k}\|_{\beta} \geq \limsup_{k\to\infty}\sup_{\xi\in\partial\BB} \|u\langle \varphi, \xi\rangle^k\|_{\beta}-\varepsilon .$$ Because $K$ and $\varepsilon$ are arbitrary, we obtain $\|uC_\vp\|_{e,H^\infty\to \B }\geq Q_3.$ Let $\{\eta_k\}_{k=1}^\infty\subset\BB$ such that $Q_1=\lim\limits_{k\to \infty} (1-|\eta_k|^2)|\R u(\eta_k)|$ and $\lim\limits_{k\to\infty} |\vp(\eta_k)|=1$. Let $$g_k(z)=\frac{2(1-|\vp(\eta_k)|^2)}{1-\langle z,\vp(\eta_k) \rangle}-\left(\frac{1-|\vp(\eta_k)|^2}{1-\langle z,\vp(\eta_k) \rangle}\right)^2.$$ Then $\{g_k\}$ is bounded in $H^\infty$ and converges to 0 uniformly on compact subsets of $\BB$. Moreover, we have $g_k(\vp(\eta_k))=1$ and $(\R g_k)(\vp(\eta_k))=0.$ If $K:H^\infty\to\B$ is compact, we have $$\begin{aligned} \|uC_\vp-K\|_{H^\infty\to\B} &\gtrsim& (1-|\eta_k|^2)|(\R (uC_\vp g_k))(\eta_k)|-\|Kg_k\|_{\B } \\ &=&(1-|\eta_k|^2)|\R u(\eta_k)|-\|Kg_k\|_{\B }.\end{aligned}$$ Letting $k\to\infty$, we have $\|uC_\vp-K\|_{H^\infty\to \B}\gtrsim Q_1.$ Since $K$ is arbitrary, we get $$\|uC_\vp\|_{e,H^\infty\to \B}\gtrsim Q_1,$$ as desired. Next we prove that $$\|uC_\vp\|_{e,H^\infty\to \B}\approx Q_1+Q_4.$$ Suppose $K:H^\infty\to\B$ is compact and $1\leq i\leq n$.
Let $\{w_{k}\}_{k=1}^\infty\subset\BB$ such that $\lim\limits_{k\to\infty}|\vp(w_k)|=1$ and $$\lim\limits_{k\to\infty}\|uC_\vp \phi_{\vp(w_k),i}\|_{\beta}=\limsup_{|\vp(w)|\to 1}\|uC_\vp \phi_{\vp(w),i}\|_{\beta}.$$ By Lemma 1, $\{\phi_{\vp(w_k),i}-\vp_i(w_k) \}$ is bounded in $H^\infty$ and converges to 0 uniformly on compact subsets of $\BB$. By Lemma 3, $$\begin{aligned} \|uC_\vp-K\|_{H^\infty\to\B} &\gtrsim& \limsup_{k\to\infty} \|(uC_\vp-K)(\phi_{\vp(w_k),i}-\vp_i(w_k))\|_{\beta} \\ &\geq& \limsup_{k\to\infty} \|uC_\vp\phi_{\vp(w_k),i}\|_{\beta} -\limsup_{k\to\infty} \|uC_\vp\vp_i(w_k)\|_{\beta}\\ &&-\limsup_{k\to\infty} \|K(\phi_{\vp(w_k),i}-\vp_i(w_k))\|_{\beta} \\ &\geq& \limsup_{|\vp(w)|\to 1}\|uC_\vp \phi_{\vp(w),i}\|_{\beta}-Q_1.\end{aligned}$$ Since $K$ is arbitrary, we have $\|uC_\vp\|_{e,H^\infty\to \B}\gtrsim Q_4 -Q_1$. Since $\|uC_\vp\|_{e,H^\infty\to \B}\gtrsim Q_1$, we get $$\|uC_\vp\|_{e,H^\infty\to \B}\gtrsim Q_1+Q_4.$$ From (\[1220-1\]), we have $Q_1+Q_4\gtrsim Q_2$. So $$Q_1+Q_4\gtrsim Q_1+Q_2\gtrsim \|uC_\vp\|_{e,H^\infty\to \B},$$ as desired. Finally, we prove that $$Q_5+Q_6 \approx Q_3+Q_1.$$ From (\[1223-1\]), we have $Q_3\leq Q_5+Q_6,\,\,\mbox{ and }\,\,Q_5\leq Q_6+Q_3.$ By Lemma 2, $Q_6\leq Q_1$. So $$Q_5+Q_6\lesssim Q_1+Q_3.$$ Suppose $\{z_k\}_{k=1}^\infty\subset\BB$ such that $\lim\limits_{k\to\infty} (1-|z_k|)|(\R u)(z_k)|=Q_1$ and $\lim\limits_{k\to\infty}|\varphi(z_k)|=1$. From the fact that $$\begin{aligned} (1-|z_k|)|(\R u)(z_k)| &\approx &(1-|z_k|)|(\R u)(z_k)|\, |\vp(z_k)|^{2k}\\ &\leq &\|J_u f_{k,\vp(z_k)} \|_{\beta} \leq \sup_{\xi\in\partial\BB}\|J_u \langle\varphi, \xi\rangle^k\|_{\beta},\end{aligned}$$ we have $Q_1\lesssim Q_6$. So $ Q_1+Q_3\lesssim Q_5+Q_6.$ The proof is complete. From Theorem 2, we immediately get the following corollary. [**Corollary 1.**]{} *Suppose $u\in H(\BB)$ and $\vp$ is a holomorphic self-map of $\BB$.
If $uC_\vp:H^\infty\to\B$ is bounded, then the following statements are equivalent.* (i) $uC_\vp:H^\infty\to\B$ is compact. (ii) $$\limsup_{|\vp(z)|\to 1} (1-|z|^2)|\R u(z)|=0 ~~~\mbox{and}~~~ \limsup_{k\to\infty} \sup_{\xi\in\partial\BB} \|u \langle \vp,\xi\rangle^k \|_{\beta}=0.$$ (iii) $$\limsup_{|\vp(z)|\to 1} (1-|z|^2)|\R u(z)|=0 ~~~\mbox{and}~~~ \limsup_{|\vp(z)|\to 1}(1-|z|^2)|u(z)|\sqrt{H_{\vp(z)}(\vp^\p(z) z,\vp^\p(z) z) }=0.$$ (iv) $$\limsup_{|\vp(z)|\to 1} (1-|z|^2)|\R u(z)|=0 ~~~\mbox{and}~~~ \limsup_{|\vp(w)|\to 1} \sum_{i=1}^n \|uC_\vp\phi_{\vp(w),i}\|_{\beta}=0.$$ (v) $$\limsup_{k\to\infty}\sup_{\xi\in\partial \BB}\|I_u \langle \vp,\xi\rangle^k \|_{\beta}=0 ~~~\mbox{and}~~~ \limsup_{k\to\infty}\sup_{\xi\in\partial \BB}\|J_u \langle \vp,\xi\rangle^k \|_{\beta}=0.$$ [**Remark 2.**]{} Suppose $u,v\in H(\BB)$ and $\vp,\psi$ are holomorphic self-maps of $\BB$. Based on the work of [@SycLsxZxl2007], we conjecture that the following statements hold: [*(a)*]{} $uC_\vp-vC_\psi:H^\infty\to\B$ is bounded if and only if $$\sup_{k\in \N\cup\{0\}} \sup_{\xi\in\partial \BB} \|(uC_\vp-vC_\psi)\langle z,\xi\rangle^k\|_\beta <\infty.$$ [*(b)*]{} Assume that $uC_\vp :H^\infty\to\B$ and $ vC_\psi:H^\infty\to\B$ are bounded. Then $$\begin{aligned} \|uC_\varphi-vC_{\psi}\|_{e,H^\infty\to\B }\thickapprox \limsup_{k\to\infty} \sup_{\xi\in\partial \BB}\|(uC_\vp-vC_\psi)\langle z,\xi\rangle^k\|_\beta .\nonumber\end{aligned}$$ At present we are unable to prove this conjecture, so we leave it as an open problem for readers interested in this area. [aa]{} F. Colonna, New criteria for boundedness and compactness of weighted composition operators mapping into the Bloch space, [*Cent. Eur. J. Math.*]{} **11** (2013), 55–73. C. C. Cowen and B. D. MacCluer, [*Composition Operators on Spaces of Analytic Functions*]{}, CRC Press, Boca Raton, FL, 1995. J. Dai, Compact composition operators on the Bloch space of the unit ball, [*J. Math. Anal.
Appl.*]{} [**386**]{} (2012), 294–299. Q. Hu, S. Li and H. Wulan, New essential norm estimates of weighted composition operators from $H^\infty$ into the Bloch space, [*Complex Var. Elliptic Equ.*]{} [**62**]{} (2017), 600–615. S. Li, Riemann-Stieltjes operators from $F(p,q,s)$ to Bloch space on the unit ball, [*J. Inequal. Appl.*]{} Vol. 2006 (2006), Article ID 27874, 14 pages. S. Li and S. Stević, Weighted composition operators from $H^\infty$ to the Bloch space on the polydisc, [*Abstr. Appl. Anal.*]{} Vol. 2007 (2007), Article ID 48478, 12 pages. S. Li and S. Stević, Weighted composition operators between $H^\infty$ and $\alpha$-Bloch spaces in the unit ball, [*Taiwanese J. Math.*]{} [**12**]{} (2008), 1625–1639. X. Liu and S. Li, Norm and essential norm of a weighted composition operator on the Bloch space, [*Integr. Equ. Oper. Theory*]{} [**87**]{} (2017), 309–325. K. Madigan and A. Matheson, Compact composition operators on the Bloch space, [*Trans. Amer. Math. Soc.*]{} [**347**]{} (1995), 2679–2687. S. Ohno, Weighted composition operators between $H^\infty$ and Bloch space. [*Taiwanese J. Math.*]{} [**5**]{} (2001), 555–563. Y. Shi, S. Li and X. Zhu, Differences of weighted composition operators from $H^\infty $ to the Bloch space, [*arXiv:1712.03402*]{} (2017), 18 pages. J. Shi and L. Luo, Composition operators on the Bloch space, [*Acta Math. Sin.*]{} [**16**]{} (2000), 85–98. R. Timoney, Bloch function in several complex variables, I, [*Bull. London Math. Soc.*]{} [**12**]{} (1980), 241–267. M. Tjani, [*Compact composition operators on some Möbius invariant Banach spaces*]{}, PhD dissertation, Michigan State University, 1996. H. Wulan, D. Zheng and K. Zhu, Compact composition operators on BMOA and the Bloch space, [*Proc. Amer. Math. Soc.*]{} [**137**]{} (2009), 3861–3868. M. Zhang and H. Chen, Weighted composition operators of $H^\infty$ into $\alpha$-Bloch spaces on the unit ball, [*Acta Math. Sin.*]{} [**25**]{} (2009), 265–278. K. 
Zhu, [*Spaces of Holomorphic Functions in the Unit Ball*]{}, Springer, New York, 2005. [^1]: $\dagger$ Corresponding author. [^2]: This project was partially supported by NSF of China (No.11471143 and No. 11720101003).
--- author: - 'C. Benoist' - 'L. da Costa' - 'H.E. J[ø]{}rgensen' - 'L.F. Olsen' - 'S. Bardelli' - 'E. Zucca' - 'M. Scodeggio' - 'D. Neumann' - 'M. Arnaud' - 'S. Arnouts' - 'A. Biviano' - 'M. Ramella' date: 'Received ; accepted' title: 'Optically-selected clusters at $0.8\lsim z \lsim 1.3$ in the EIS Cluster Survey' --- Introduction {#sec:intro} ============ Clusters of galaxies are both ideal sites for studying galaxy evolution and important cosmological probes, especially at redshifts $z\gsim0.5$, where differences between competing evolutionary and cosmological models become important. This has motivated several searches for distant clusters using a variety of techniques at different wavelengths. As a result, over the past few years remarkable progress has been made in detecting an ever increasing number of systems with $z \gsim0.5$ (see Gioia 2000 for a recent review). More recently, a handful of clusters at $z\gsim1$ have also been identified. While the sheer existence of these high redshift clusters is of great importance, the number of confirmed systems is still very small, with most of them identified from serendipitous X-ray searches (Rosati 1999) or from infra-red imaging (Stanford 1997). Therefore, the construction of a large sample of confirmed clusters at $z\gsim0.8$ representative of the entire population of these high-z systems remains an important goal of observational cosmology. However, as these systems are expected to be rare, finding them requires large areas of the sky to be covered, limiting the techniques that can be used in identifying candidates. In particular, surveys at X-ray and mm wavelengths (Carlstrom 2000) are unlikely in the near future to provide the sky coverage necessary for constructing the large samples of very distant clusters of galaxies required for statistical analyses. An alternative is to use multi-band optical/infrared imaging data.
Thanks to the advent of panoramic CCD imagers, wide-angle imaging surveys at optical and near-infrared wavelengths have become viable and can be used for identifying cluster candidates up to $z\sim1$. Examples of wide-angle surveys that have been used to identify intermediate to high redshift clusters include those of Gunn (1986), Postman (1996), the ESO Imaging Survey (EIS) Cluster Survey (Olsen 1999a,b; Scodeggio 1999), the Red-Sequence Survey (Gladders & Yee 2000) and the Las Campanas distant cluster survey (Gonzalez 2001). These surveys, especially those carried out in a single passband, can only provide plausible candidates, and they benefit further from additional multi-wavelength observations to mitigate many of the problems of foreground-background contamination, to assign photometric redshifts for galaxies of different morphological types and to select possible cluster members to improve the yield of spectroscopic follow-ups. In this paper we describe our first attempts to explore the nature of the high redshift cluster candidates identified in the EIS $I$-band survey, combining new imaging and spectroscopic observations. Altogether there are about 82 candidates with matched-filter redshifts $\gsim 0.8$, of which about half have already been complemented by imaging observations in $BVRJK$. Among the various clusters for which we have spectroscopic data, we present here three clusters at higher redshift. In section \[sec:data\] we describe the selection of the candidate clusters and of the galaxy sample used in the observations. In section \[sec:obs\_red\], we briefly describe the reduction procedure, which will be expanded in a separate paper where the accumulated data are presented (J[ø]{}rgensen 2002). In section \[results\], the observed redshift distribution and the technique used to identify groups in redshift space are presented. Finally, in section \[summary\] our main results are summarised.
Cluster and Galaxy Sample {#sec:data} ========================= The results presented here are part of an ongoing comprehensive effort to identify and study clusters at different epochs using as a starting point the EIS cluster candidate compilation. This sample, consisting of over 300 candidates, has been split roughly into three redshift domains: low ($z\lsim0.4$), intermediate ($0.4\lsim z \lsim 0.7$) and high ($z\gsim 0.7$). Several photometric and spectroscopic follow-up programs are underway at different facilities to secure the necessary data for confirmation (Olsen 2001) and more detailed studies. The observations include moderately deep optical/infrared imaging in $R$ and $JK$ (Scodeggio 2002), spectroscopic observations of intermediate redshift clusters at the ESO 3.6m telescope (Ramella 2000, Biviano 2002), deep multi-band imaging (Schirmer 2002) for cosmic shear analysis, spectroscopic observations of high redshift candidates at the VLT, and, in one case, XMM-Newton data (Neumann 2002). In the present paper we focus our attention on three high-redshift ($z\gsim0.8$) candidates, EIS0046-2930, EIS0954-2023 and EIS0533-2412 (Olsen 1999b; Scodeggio 1999), for which photometric, spectroscopic and, in one case, X-ray data are available. The sample of objects selected for the spectroscopic observations in the fields considered was drawn from an area of $10 \times 10$ arcmin centred at the position of the candidate clusters as determined by the matched-filter analysis of the $I$-band data. For EIS0046-2930 and EIS0954-2023 the full area is covered in $BVI$, from publicly available EIS-WIDE and/or EIS-Pilot data (Nonino 1999; Benoist 1999), while the central $5\times5$ arcmin area is also covered in $JK$ from SOFI observations at the NTT. While we now also have $R$-band images, these were not available at the time these observations were being prepared.
For these clusters the targets were selected using one of the following criteria: [*i)*]{} using the photometric redshifts computed in the area covered by the infrared data (Arnouts 1999); [*ii)*]{} searching for the expected $(I-K)$ and $(J-K)$ colours of early-type galaxies in the redshift interval of interest; [*iii)*]{} identifying B- and V-dropouts (expected for early-type galaxies considering the depth of EIS-WIDE) in the outer part of the field; and [*iv)*]{} arbitrarily, to fill the remaining slits (about 50% in the outer parts). When using the photometric redshifts, the targets were chosen within the redshift range $z\sim 0.6 - 1.3$. We used this broad interval due to the lack of $R$-band data, which causes a large uncertainty (degeneracy) in the location of the 4000 Å break for galaxies in the interval $z\sim0.5-0.9$. Furthermore, the errors in redshift estimates are $\sim0.15$ close to the magnitude limit of the sample. In the case of EIS0533-2412 only $IJK$ data were available. In this case, the targets were selected by searching for the expected $(I-K)$ and $(J-K)$ colours of early-type galaxies in the redshift interval of interest. In the case of EIS0046-2930, expected to be at lower redshifts, galaxies were selected in the magnitude range $19\lsim I_{AB} \lsim22.5$, while for EIS0533-2412 and EIS0954-2023 they were drawn in the interval $21\lsim I_{AB} \lsim23$. A full description of the colour selection used to build the list of spectroscopic targets likely to be cluster members will be presented in a forthcoming paper (J[ø]{}rgensen 2002). VLT Spectroscopy {#sec:obs_red} ================

  Candidate      Date         Seeing (arcsec)   Inst.   Nr. of masks   Integration time   Observed objects   Measured redshifts   Stars   No identification
  -------------- ------------ ----------------- ------- -------------- ------------------ ------------------ -------------------- ------- -------------------
  EIS0046-2930   24/09/2000   0.6-0.7           FORS1   4              3600s              85                 63                   5       17
  EIS0533-2412   25/12/2000   0.7-1.2           FORS2   1              14400s             47                 30                   4       13
  EIS0954-2023   26/12/2000   0.5-0.7           FORS2   2              14400s             80                 54                   11      15

The spectroscopic observations presented here were carried out using FORS1 in the MOS mode (September 2000) on the VLT-ANTU telescope and FORS2 in the MXU mode on the VLT-Kueyen telescope (December 2000) (cf. http://www.eso.org/instruments). In the MOS mode FORS1 provides 19 movable slit blade pairs that can be placed in the available field-of-view. For the present observations Grism 150I+17 with the order separation filter OG590 was used, providing a useful field of $4.7 \times 6.8$ square arcmin and covering the spectral range 6000-11000Å. The dispersion of 230Å/mm (5.52Å/pixel) yielded a spectral resolution $R\approx280$, or about 29 Å, for a slit width of 1.4 arcsec. The FORS2 observations were carried out in the multi-object (MXU) spectroscopy mode using GRISM 200I+28 with order separation filter OG550 covering the spectral range 5600-11000Å. The dispersion of 162 Å/mm (3.89 Å/pixel) yielded a resolution $R\approx380$, or about 21 Å, for a slit width of 1.2 arcsec. The slit mask allowed for a much larger multiplex than was possible with FORS1, and typically over 35 objects could be observed simultaneously. The spectroscopic data were reduced using standard IRAF routines. The extracted one-dimensional spectra were then inspected to identify spectral features and obtain a preliminary redshift estimate. The spectra were then cross-correlated to template spectra using the FXCOR task of IRAF. In general, the differences between the three redshift estimates (emission lines, absorption features, and cross correlation) were small and the redshift measured by the cross-correlation was adopted.
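As a quick cross-check on the instrument numbers quoted above (our arithmetic only; the $\sim$24 micron pixel size and the $\sim$8000 Å reference wavelength are inferred from the quoted figures, not stated in the text):

```python
# Arithmetic consistency checks on the quoted grism numbers.
# Implied pixel size (mm/pixel) should agree between the two setups:
pix_fors1 = 5.52 / 230     # Grism 150I+17: (A/pixel) / (A/mm)
pix_fors2 = 3.89 / 162     # Grism 200I+28
assert abs(pix_fors1 - 0.024) < 1e-3   # ~24 micron pixels (inferred)
assert abs(pix_fors2 - 0.024) < 1e-3

# R = lambda / dlambda: the quoted (R, dlambda) pairs both point to a
# reference wavelength near 8000 A, inside the quoted spectral ranges.
assert abs(280 * 29 - 8000) < 300      # FORS1: 280 * 29 A = 8120 A
assert abs(380 * 21 - 8000) < 300      # FORS2: 380 * 21 A = 7980 A
print("dispersions and resolutions are mutually consistent")
```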
A more detailed account of the reduction procedure will be presented elsewhere (J[ø]{}rgensen 2002). A summary of the spectroscopic observations and the results obtained from the data analysis is presented in Table \[tab:obs\]. Individual exposure times ranged from 900 to 1800 sec depending on the total integration time required for each mask. Note that in the case of EIS0533-2412 two masks were prepared but technical problems prevented the use of one of them at the time of the observation. In the case of EIS0954-2023 one of the masks was 30 minutes shorter. Altogether, a total of 212 objects were observed yielding 147 measured redshifts in the redshift interval 0.14-1.32. Results ======= In Figure \[fig:zhist\_0046\] we present the distribution of measured redshifts for each of the fields considered showing in the upper part of each panel the individual redshifts and in the lower part the redshift distribution in bins $\Delta z= 0.01$ wide. The solid histograms indicate groups that have been identified from the analysis of the redshift distribution as discussed below. For each field the measured redshift distribution is compared to that expected for a uniform distribution of galaxies with a given luminosity function (LF) and selected in the same magnitude intervals as the observed galaxies. The $I$-band LF was computed using the LF parameters, split into three spectral classes (early, spiral and late-types), as derived from the $R$-band data of the ESO-Sculptor Survey (de Lapparent 2002) up to $z\sim0.6$. We assumed that the LF remains constant at higher redshifts. The $I$-band LF at different redshifts was then obtained using the appropriate SEDs for the three spectral classes considered (Arnouts 2002). It was confirmed that this approach leads to a redshift distribution which is consistent with that measured by the CFRS survey when the same limiting magnitude is adopted. 
The predicted distribution is normalised by requiring the number of objects with $z>0.5$ to be equal to the number of galaxies observed. This is done to approximately simulate the colour/photometric redshift criteria adopted in selecting the target galaxies. From the figure one can immediately see the presence of several peaks in the observed distribution relative to the uniform background. Note that the redshift range covered by the observations of EIS0046-2930 ($0.2<z<0.9$) is smaller than that for the other two cases, which have measured redshifts up to $z\sim1.3$. This is due to the brighter magnitude interval adopted in selecting the galaxy sample. This also explains the difference in the predicted redshift distribution for this field. It is also interesting to point out that while most of the redshifts lie beyond $z\sim0.5$, as originally intended, some have lower redshifts. The fraction of galaxies with $z\lsim0.5$ is $\lsim$ 10% of the total number of galaxies with redshifts, nearly all corresponding to faint ($I_{AB}\sim 21-23$) objects arbitrarily selected to fill in available slits. Typically about 50% of the observed galaxy sample was arbitrarily selected, reflecting the fact that the area covered with infrared data (limited by the size of the SOFI field) was too small to position all the slits of the spectrograph. Given the incompleteness of the sample, groups have been identified first in redshift space and then by their angular proximity. Groups in redshift-space have been identified using the “gap”-technique of Katgert (1996), which identifies gaps in the redshift distribution larger than a certain size to separate individual groups. In this preliminary analysis we have adopted a redshift gap of $\Delta z=0.005(1+z)$, corresponding to 1500 km s$^{-1}$ in the rest-frame. A total of five, three and eight groups with 3 or more members were found in the fields of EIS0046-2930, EIS0533-2412 and EIS0954-2023, respectively.
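A minimal sketch of this gap technique (our own illustrative implementation; the function name and the toy data are invented): sort the redshifts and start a new group wherever consecutive values differ by more than $\Delta z=0.005(1+z)$.

```python
def gap_groups(redshifts, min_members=3):
    """Split sorted redshifts into groups at gaps larger than
    0.005*(1+z), keeping groups with at least `min_members` members."""
    zs = sorted(redshifts)
    groups, current = [], [zs[0]]
    for z_prev, z in zip(zs, zs[1:]):
        if z - z_prev > 0.005 * (1 + z_prev):   # rest-frame ~1500 km/s gap
            groups.append(current)
            current = []
        current.append(z)
    groups.append(current)
    return [g for g in groups if len(g) >= min_members]

# toy example: a tight clump near z = 0.81 plus scattered field galaxies
zs = [0.805, 0.807, 0.808, 0.810, 0.812, 0.30, 0.55, 0.95, 1.20]
print(gap_groups(zs))   # -> [[0.805, 0.807, 0.808, 0.81, 0.812]]
```

The angular-proximity step is then applied to the surviving groups by hand, as described in the text.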
To assess the significance of these detections we have resorted to simulations, evaluating how frequently peaks similar to those identified can occur by chance when drawing galaxies with $z>0.5$ from a uniform background. We find that 8 groups (2 in EIS0046-2930, 2 in EIS0533-2412 and 4 in EIS0954-2023), corresponding to 50% of the groups identified, are likely to correspond to real enhancements in redshift space ($99\%$ confidence level). In addition, we can also ask how many of these are also spatially concentrated. This can be done by examining Figure \[fig:cones\_00462930\] which shows, for each field, diagrams plotting redshifts as a function of right ascension and declination. From the figure it is easy to identify the most compelling cases of galaxies not only with concordant redshifts but also within a circular region roughly 1 arcmin in radius. These cases are listed in Table \[tab:groups\] which gives: in column (1) the name of the cluster; in column (2) the ID of the group within each field; in columns (3) and (4) the Right Ascension and the Declination; in column (5) the number of members; in columns (6) and (7) the mean redshift and the standard deviation in km s$^{-1}$. We remind the reader that all these cases are firm ($99\%$) detections in redshift space and their location is in excellent agreement ($<$1 arcmin) with the position of the original candidate as identified by the matched-filter analysis.

  Candidate      ID   Ra           Dec           $N_{g}$   $\langle z\rangle$   $\sigma$ (km s$^{-1}$)
  -------------- ---- ------------ ------------- --------- -------------------- ------------------------
  EIS0046-2930   1    00:46:29.6   -29:30:57.4   12        0.808                1171
  EIS0533-2412   1    05:33:40.3   -24:12:43.8   3         1.301                -
  EIS0954-2023   1    09:54:47.5   -20:23:55.2   6         0.948                202
                 2    09:54:37.0   -20:22:54.7   8         1.141                285

In summary, in all three candidate cluster fields considered we identify at least one significant concentration of galaxies in redshift and in position. In the field of EIS0046-2930 we identify a system at $z=0.808$.
This candidate was originally assigned a matched-filter redshift of $z\sim0.6$, but subsequent work using optical/infrared colour-magnitude diagrams (da Costa 1999) suggested it to be at higher redshift. Indeed, the apparently brightest galaxy in the cluster was measured to be at $z=0.81$ (Ramella, private communication). New measurements of another 11 galaxies with concordant redshifts now corroborate this earlier result. In the field of EIS0533-2412 we find a significant concentration at $z=1.3$, which coincides with the location of the matched-filter detection at an estimated redshift of $z=1.1$. While currently only three galaxies have measured redshifts in the $z=1.3$ system, most of the faint galaxies in the field have similar colours, as shown in Figure \[fig:images\]. Furthermore, recent analysis of an XMM-Newton image finds a $>5\sigma$ detection centred near the location of the brightest cluster member, for which we have a secure redshift at $z=1.3$. In fact, the distribution of galaxies with similar $(J-K)$ colour extends some 3 arcmin to the NE, where another X-ray detection has been found. A more detailed discussion of the X-ray data will be presented elsewhere (Neumann 2002). Finally, in the field of EIS0954-2023 we find evidence for two clumps: one at $z=1.141$, at the same location as the matched-filter detection, and the other a foreground concentration at $z=0.95$, some 2 arcmin away from the original detection. High resolution cutouts for the systems discussed here can be found at the URL “http://www.obs-nice.fr/benoist/high-z.clusters.html”. Summary ======= This paper presents new spectroscopic data of EIS cluster candidate fields identified from moderately deep $I$-band images using the matched-filter algorithm. The three fields considered were selected because the cluster candidates had estimated redshifts beyond $z=0.8$.
Analysis of the spectroscopic data strongly suggests the existence of real density enhancements at high redshifts ($0.8<z<1.3$) in all of them. The measured redshifts are, in general, consistent with those estimated from the photometric data. In at least one of the cluster fields, the location of the high-redshift system coincides remarkably well with a robust X-ray detection, lending further support to the reality of the system. Therefore, although it is difficult to decide whether those clumps are filaments or bound systems, for two of them the evidence seems to favour the second possibility: for the $z=0.81$ clump a red sequence is observed (da Costa 1999), whereas the $z=1.3$ one has an X-ray detection associated with it. The present paper strongly suggests that the EIS cluster candidate catalog provides a valuable pool from which to construct a statistical sample of optically-selected clusters at high redshift. The present data alone contribute four systems at $z>0.8$ in the southern hemisphere, two of which are at $z\gsim1$, ideal for VLT studies. The success in identifying significant concentrations from a relatively small sample underscores the importance of collecting multi-band optical/infrared data and estimating photometric redshifts to select potential cluster members. However, establishing the true nature of these systems will require better sampling, which will become possible with the availability of an integral field unit as foreseen for the VIMOS spectrograph. We would like to thank the EIS Team for the effort of producing the publicly available object catalogs for the EIS-Wide and Pilot Surveys. LFO thanks the SARC and Carlsberg Foundations for financial support during the project period.
--- abstract: 'The long term aim is to use modern dynamical systems theory to derive discretisations of noisy, dissipative partial differential equations. As a first step we here consider a small domain and apply stochastic centre manifold techniques to derive a model. The approach automatically parametrises subgrid scale processes induced by spatially distributed stochastic noise. It is important to discretise stochastic partial differential equations carefully, as we do here, because of the sometimes subtle effects of noise processes. In particular we see how stochastic resonance effectively extracts new noise processes for the model which in this example helps stabilise the zero solution.' author: - 'A. J. Roberts[^1]' bibliography: - 'ajr.bib' - 'bib.bib' - 'new.bib' title: A step towards holistic discretisation of stochastic partial differential equations --- Introduction ============ ![numerical solution over time $0<t<3$ of the [<span style="font-variant:small-caps;">spde</span>]{} (\[eq:oburgnm\]) on the domain $0<x<\pi$ with stochastic forcing (\[eq:onoise\]) truncated to the first seven spatial modes. Parameters: $\gamma=0$ so $u\propto\sin x$ is linearly neutral although nonlinearly stable; $\sigma=1$ for large forcing; numerically $\Delta x=\pi/16$ and $\Delta t=0.01$.[]{data-label="fig:umesh"}](umesh){width="\textwidth"} The ultimate aim is to accurately and efficiently model numerically the evolution of stochastic partial differential equations ([<span style="font-variant:small-caps;">spde</span>]{}s). An example solution field $u(x,t)$, see Figure \[fig:umesh\], shows the intricate spatio-temporal dynamics typically generated in a [<span style="font-variant:small-caps;">spde</span>]{}. Numerical methods to integrate stochastic *ordinary* differential equations are known to be delicate and subtle [@Kloeden92 e.g.]. 
We surely need to take considerable care for [<span style="font-variant:small-caps;">spde</span>]{}s as well [@Grecksch96; @Werner97 e.g.]. An issue is that the stochastic forcing generates high wavenumber, steep variations in the structures seen in Figure \[fig:umesh\]. Stable implicit integration in time generally damps such decaying modes far too quickly, yet through stochastic resonance an accurate resolution of the lifetime of these modes may be important for the large scale dynamics. For example, stochastic resonance causes high wavenumber noise to restabilise the trivial solution field $u=0$ in the simulations summarised in Figure \[fig:mmodel2\]. Thus we should resolve subgrid structures reasonably well so that numerical discretisations with large space-time grids achieve efficiency without sacrificing the subtle interactions that take place between the subgrid scale structures. ![numerical solution of the stochastic model (\[eq:oomod\]) with small, $\sigma=0.5$, and large, $\sigma=2$, noise. The amplitude $a$ of the $\sin x$ mode decays for large noise, but not for small. Parameters: $\gamma=-0.03$ to promote linear growth of $a$, and $\Delta t=0.1$.[]{data-label="fig:mmodel2"}](mmodel2) The methods of centre manifold theory are used here to begin to develop good methods for the discretisation of [<span style="font-variant:small-caps;">spde</span>]{}s. There is supporting centre manifold theory by Boxler [@Boxler89; @Boxler91; @Berglund03] for the modelling of SDEs; the centre manifold approach appears a better foundation than heuristic arguments for SDEs [@Majda02 e.g.]. Further, a centre manifold approach seems to improve the discretisation of deterministic partial differential equations [@Roberts98a; @Roberts00a; @Mackenzie00a; @Roberts01a; @Roberts01b; @Mackenzie03]. The first step, taken here, is to demonstrate the effective modelling of subgrid scale stochastic structures.
Directly seek a one element model ================================= The simplest case, and that developed here, is the modelling of a [<span style="font-variant:small-caps;">spde</span>]{} on just one finite size element. Consider the stochastically forced nonlinear partial differential equation $$\D tu=-u\D xu+\DD x u +(1-\gamma)u +\sigma\phi( x ,t) \quad\mbox{such that}\quad u=0\mbox{ at } x =0,\pi\,, \label{eq:oburgnm}$$ which involves advection $uu_x$, diffusion $u_{xx}$, reaction $(1-\gamma)u$, and noise $\phi$. In general, the forcing by $\phi(x ,t)$, of strength $\sigma$, is assumed to be white noise that is delta correlated in both space and time as used in Figure \[fig:umesh\]; however, here we consider only the case $$\phi=\phi_2(t)\sin 2x \,, \label{eq:onoise}$$ where $\phi_2(t)$ is a white noise that is delta correlated in time. Note that the mode $u\propto\sin x$, when $\gamma=0$, is linearly neutral and will form the basis of the model we seek. Thus this example of noise forcing the orthogonal $\sin2x$ mode is expected to be representative of the case of subgrid stochastic forcing and consequent resolution of higher wavenumber modes. Many simple numerical methods, such as Galerkin projection (remembering that the domain here represents just one finite element), would completely obliterate such “high wavenumber” modes and hence miss subtle but important subgrid effects. ![numerical solution of the [<span style="font-variant:small-caps;">spde</span>]{} (\[eq:oburgnm\]) with relatively weak noise limited to just $\phi=\phi_2(t)\sin 2x$ showing convergence to a nonlinearly stabilised $\sin x$ mode that is perturbed by the noise. Parameters: $\sigma=0.5$ is small, $\gamma=-0.03$ to generate linear growth of the $\sin x$ mode, $\Delta t=0.05$ and $\Delta x=\pi/8$.[]{data-label="fig:u1sin2"}](u1sin2) An example numerical solution, Figure \[fig:u1sin2\], shows that relatively weak noise only perturbs the deterministic dynamics.
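For illustration, the dynamics of the [<span style="font-variant:small-caps;">spde</span>]{} (\[eq:oburgnm\]) with the noise (\[eq:onoise\]) may be reproduced in outline by a simple explicit finite difference scheme. The following sketch (Python with numpy; the grid, time step and random seed are illustrative choices, not those of the figures, with $\Delta t$ kept below the diffusive stability limit $\Delta x^2/2$):

```python
import numpy as np

rng = np.random.default_rng(1)
nx = 16                                  # number of grid intervals on [0, pi]
dt, T = 0.002, 3.0                       # dt below the diffusive stability limit
x = np.linspace(0.0, np.pi, nx + 1)
dx = x[1] - x[0]
gamma, sigma = 0.0, 1.0
u = np.sin(x)                            # start in the linearly neutral mode
for _ in range(int(T / dt)):
    phi2 = rng.standard_normal() / np.sqrt(dt)   # discretised white noise
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / dx**2
    ux = np.zeros_like(u)
    ux[1:-1] = (u[2:] - u[:-2]) / (2.0*dx)
    u = u + dt*(-u*ux + lap + (1.0 - gamma)*u + sigma*phi2*np.sin(2.0*x))
    u[0] = u[-1] = 0.0                   # boundary conditions u = 0 at x = 0, pi
```

Such a brute-force simulation needs many small time steps precisely because it must resolve the fast subgrid dynamics that the modelling in the following sections eliminates.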
However, when the noise is large enough, stochastic resonance restabilises the zero solution and the $\sin x$ mode decays as seen in Figure \[fig:u1sin2s\]. The success of our approach is demonstrated by its modelling of this induced restabilisation. ![numerical solution of the [<span style="font-variant:small-caps;">spde</span>]{} (\[eq:oburgnm\]) with strong noise limited to just $\phi=\phi_2(t)\sin 2x$ showing the $\sin x$ mode decays. Parameters: $\sigma=2$, $\gamma=-0.03$ to promote linear growth of the $\sin x$ mode, $\Delta t=0.05$ and $\Delta x=\pi/8$.[]{data-label="fig:u1sin2s"}](u1sin2s) For much of the analysis the requirement of white, delta correlated noise is irrelevant. Where it is relevant, we interpret the stochastic differential equations in the Stratonovich sense so that the rules of traditional calculus apply. The centre manifold approach identifies that the long term dynamics of a [<span style="font-variant:small-caps;">spde</span>]{} such as (\[eq:oburgnm\]) is parametrised by the amplitude $a(t)$ of the neutral mode $\sin x$. Arnold et al. [@Arnold95] investigated stochastic Hopf bifurcations this way, and the approach is equivalent to the slaving principle for [<span style="font-variant:small-caps;">sde</span>]{}s by Schoner and Haken [@Schoner86]. Computer algebra [@Roberts96a] determines the solution field $$\begin{aligned} u&=&a\sin x -\rat16a^2\sin2x \nonumber\\&&{} +\sigma{{\cal H}_{2}}(1-\gamma{{\cal H}_{2}})\phi_2\sin2x -\rat32\sigma a{{\cal H}_{3}}{{\cal H}_{2}}(1-\gamma{{\cal H}_{2}})\phi_2\sin3x \nonumber\\&&{} +\rat13\sigma a^2{{\cal H}_{4}}(1+9{{\cal H}_{3}}){{\cal H}_{2}}\phi_2\sin4x +\Ord{a^3+\gamma^2,\sigma^2}\,,\end{aligned}$$ in which the operator ${{\cal H}_{m}}$ denotes convolution with $\exp[-(m^2-1)t]$. See in this formula the resolution of the subgrid structure arising through the interaction of the noise and the nonlinearity.
The model is the corresponding evolution equation for the amplitude: $$\begin{aligned} \dot a&=& -\gamma a -\rat1{12}a^3 +\sigma a\rat12{{\cal H}_{2}}(1-\gamma{{\cal H}_{2}})\phi_2 \nonumber\\&&{} +\sigma a^3(\rat1{64} +\rat1{12}{{\cal H}_{2}} -\rat34{{\cal H}_{2}}{{\cal H}_{3}} +\rat18{{\cal H}_{3}}){{\cal H}_{2}}\phi_2 +\Ord{a^4+\gamma^2,\sigma^2}\,. \label{eq:onaive}\end{aligned}$$ This is an unduly messy model as it involves many convolutions over the rapid time scales we would like to “step over.” Straightforward analyses of forced systems often terminate at this point because of the tremendously involved form of the repeated convolutions that occur in higher order terms, especially higher order in the noise amplitude $\sigma$. However, some thought leads us to the drastic simplifications discussed next. Use a normal form instead ========================= Here we simplify the model by removing the convolutions from the evolution equation (\[eq:onaive\]). This step was originally developed for [<span style="font-variant:small-caps;">sde</span>]{}s by Coullet et al. [@Coullet85] and Sri Namachchivaya & Lin [@Srinamachchivaya91]. In computer algebra this is done in the equation for the updates to the field and the evolution: $$\D t{u'}-\DD x{u'}-u'+{\dot a}'\sin x =\mbox{residual}.$$ When the residual of the [<span style="font-variant:small-caps;">spde</span>]{} (\[eq:oburgnm\]) contains a component of the form ${{\cal H}_{m}}\Phi\sin x $, where $\Phi$ denotes some noise process, which previously we put into ${\dot a}'$ to form (\[eq:onaive\]), we instead recognise that $$\frac{d}{dt}{{\cal H}_{m}}\Phi =-(m^2-1){{\cal H}_{m}}\Phi+\Phi \quad\mbox{thus}\quad {{\cal H}_{m}}\Phi=\frac{1}{m^2-1}\left[ -\frac{d}{dt}{{\cal H}_{m}}\Phi +\Phi \right]\,, \label{eq:em}$$ and so the contribution in the residual is split into: a part that is integrated into the update $u'$ for the subgrid field; and a part without the convolution for the update ${\dot a}'$ for the evolution.
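As an aside, the action of the convolution operators ${{\cal H}_{m}}$, and hence the identity (\[eq:em\]), is easily checked numerically by integrating the equivalent differential equation. A small sketch (Python with numpy assumed; the constant test input is an arbitrary illustrative choice):

```python
import numpy as np

def conv_Hm(phi, m, dt):
    """Apply H_m, convolution with exp(-(m^2-1)t), by integrating
    dz/dt = -(m^2-1) z + phi(t) with z(0) = 0 (Heun stepping)."""
    beta = m**2 - 1
    z = np.zeros_like(phi)
    for n in range(len(phi) - 1):
        zp = z[n] + dt*(-beta*z[n] + phi[n])              # predictor
        z[n + 1] = z[n] + 0.5*dt*((-beta*z[n] + phi[n])
                                  + (-beta*zp + phi[n + 1]))
    return z

dt = 0.001
t = np.arange(0.0, 2.0, dt)
z = conv_Hm(np.ones_like(t), 2, dt)      # H_2 applied to the constant input 1
exact = (1.0 - np.exp(-3.0*t)) / 3.0     # the exact convolution for phi = 1
```

The computed $z$ agrees with the exact convolution to the second-order accuracy of the Heun scheme.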
Note that if the residual component has many convolutions, then this separation is applied recursively. Computer algebra then deduces the normal form model $$\dot a=-\gamma a -\rat1{12}a^3 +\sigma a (\rat16-\rat1{18}\gamma)\phi_2 -\sigma^2a\rat1{44}({{\cal H}_{2}}\phi_2-3{{\cal H}_{3}}{{\cal H}_{2}}\phi_2)\phi_2 +\Ord{a^4+\gamma^2,\sigma^3}\,, \label{eq:oomod}$$ for the amplitude $a$ of the $\sin x$ mode, and now to quadratic terms in the noise. Observe that $a=0$ is always a fixed point of this model. Numerical solutions of this model (\[eq:oomod\]), see Figure \[fig:mmodel2\], confirm that for the parameter $\gamma=-0.03$, which is linearly unstable in the deterministic dynamics, large amounts of noise restabilise the zero solution. Stochastic resonance affects deterministic terms ================================================ The noise $\phi_2(t)$ so far could have been any distributed forcing at all, random or deterministic. The analysis and the results are generally valid. We proceed to address the specific modelling when we restrict the noise $\phi_2(t)$ to be stochastic white noise in the Stratonovich sense. Previously, the model was a strong model in that (\[eq:oomod\]) could faithfully track given realisations of the original [<span style="font-variant:small-caps;">spde</span>]{}; however, now we derive the weak model (\[eq:oomodl\]) which maintains fidelity to solutions of the original [<span style="font-variant:small-caps;">spde</span>]{}, but we cannot know which realisation. The relevant feature of the large time model (\[eq:oomod\]) is the inescapable and undesirable appearance in the model of fast time convolutions in the quadratic noise term, namely ${{\cal H}_{2}}\phi_2 =e^{-3t}\star \phi_2$ and ${{\cal H}_{3}}{{\cal H}_{2}}\phi_2 = e^{-8t}\star e^{-3t}\star \phi_2$.
These are undesirable because they require resolving the fast time response of the system in order to maintain fidelity with the original [<span style="font-variant:small-caps;">spde</span>]{} (\[eq:oburgnm\]). However, maintaining fidelity with the full details of a white noise source is a pyrrhic victory when all we are interested in is the long term dynamics. Instead we should only be interested in those parts of the quadratic noise factors, $\phi_2{{\cal H}_{2}}\phi_2$ and $\phi_2{{\cal H}_{3}}{{\cal H}_{2}}\phi_2$, that *over long time scales* are firstly correlated with the other processes that appear and secondly independent of the other processes: these not only introduce factors involving *new independent* noises into the model but also introduce a deterministic drift due to stochastic resonance [@Chao95; @Drolet97 e.g.]. The argument by Chao & Roberts [@Chao95 §4.1] asserts that we are interested in the long term statistics of the two quadratic noise processes $y_1$ and $y_2$ evolving according to $$\dot y_1=z_1\phi_2\,,\quad \dot y_2=z_2\phi_2\,,\quad \dot z_1=-\beta_1 z_1 +\phi_2\,,\quad \dot z_2=-\beta_2 z_2 +z_1\,, \label{eq:bin}$$ where here the decay rates $\beta_1=3$ and $\beta_2=8$ so that the convolutions of the noise $\phi_2$ are represented by the variables $z_1={{\cal H}_{2}}\phi_2$ and $z_2={{\cal H}_{3}}{{\cal H}_{2}}\phi_2$.
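The deterministic drift of $y_1$ may be checked by direct simulation of the system (\[eq:bin\]): interpreting the products in the Stratonovich sense, the ensemble mean of $y_1$ grows at the rate $\frac12$, in accord with the Fokker–Planck analysis that follows. A sketch (Python with numpy; the time step and ensemble size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
beta1, dt, T, reps = 3.0, 0.001, 10.0, 400
z1 = np.zeros(reps)                      # z1 = H_2 * phi_2 for each realisation
y1 = np.zeros(reps)
for _ in range(int(T / dt)):
    dW = np.sqrt(dt) * rng.standard_normal(reps)
    z1_new = z1 - beta1*z1*dt + dW       # Euler-Maruyama step for z1
    y1 += 0.5*(z1 + z1_new)*dW           # Stratonovich midpoint for dy1 = z1 dW
    z1 = z1_new
mean_drift = y1.mean() / T               # ensemble estimate of the drift of y1
```

The midpoint rule in the $y_1$ update is essential: an Itô (left-point) discretisation would lose the drift entirely, which is exactly the subtlety of stochastic resonance at issue here.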
From the Fokker-Planck equation for (\[eq:bin\]) we have determined that large time solutions have a probability distribution $$\mbox{\textsc{pdf}} \propto p(y_1,y_2,t) \exp\left[ -(\beta_1+\beta_2)z_1^2 +2\beta_2(\beta_1+\beta_2)z_1z_2 -\beta_2(\beta_1+\beta_2)^2z_2^2 \right]\,,$$ where the relatively slowly varying $p$ evolves according to the approximate equation $$\D tp=-\half\D{y_1}p +D:\grad\grad p +\Ord{\grad^3p} \label{eq:oofpl}$$ where the diffusion matrix $$D=\left[ \begin{array}{cc} \frac{1}{4\beta_1} & \frac{1}{4\beta_1(\beta_1+\beta_2)} \\ \frac{1}{4\beta_1(\beta_1+\beta_2)} & \frac{1}{4\beta_1\beta_2(\beta_1+\beta_2)} \end{array} \right]\,.$$ Interpret (\[eq:oofpl\]) as a Fokker-Planck equation and see that it corresponds to the [<span style="font-variant:small-caps;">sde</span>]{}s $$\dot y_1=\half+\frac{\psi_1(t)}{\sqrt{2\beta_1}} \quad\mbox{and}\quad \dot y_2=\frac{1}{\beta_1+\beta_2}\left( \frac{\psi_1(t)}{\sqrt{2\beta_1}} +\frac{\psi_2(t)}{\sqrt{2\beta_2}} \right)\,, \label{eq:oosnn}$$ where $\psi_i(t)$ are new noises independent of $\phi_2$ *over long time scales*. Thus on long time scales, and substituting for the decay rates $\beta_i$, we should replace the quadratic noises by the following: $$\phi_2{{\cal H}_{2}}\phi_2=\half+\frac{\psi_1(t)}{\sqrt6} \quad\mbox{and}\quad \phi_2{{\cal H}_{3}}{{\cal H}_{2}}\phi_2= \frac{\psi_1(t)}{11\sqrt6} +\frac{\psi_2(t)}{44} \,.$$ Hence the normal form model (\[eq:oomod\]) is transformed to $$\dot a=-\left(\gamma+\rat{\sigma^2}{88}\right) a -\rat1{12}a^3 +\sigma a (\rat16-\rat1{18}\gamma)\phi_2 -\sigma^2a(\rat2{121\sqrt6}\psi_1-\rat1{1936}\psi_2) \,.$$ Combining the new noises into one effective new noise, the model is written a little more simply as $$\dot a =-\left(\gamma+\rat{\sigma^2}{88}\right) a -\rat1{12}a^3 +\sigma a (\rat16-\rat1{18}\gamma)\phi_2 +\sigma^2 a\rat{\sqrt{515}}{1936\sqrt3}\psi \,, \label{eq:oomodl}$$ for some white noise $\psi(t)$ independent of $\phi_2$ over long times.
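The stabilising drift in the $-\left(\gamma+\rat{\sigma^2}{88}\right)a$ term is easily confirmed by integrating the model (\[eq:oomodl\]) directly. The sketch below (Python with numpy; time step, ensemble size and initial amplitude are illustrative) uses a Heun scheme, consistent with the Stratonovich interpretation:

```python
import numpy as np

def simulate(sigma, gamma=-0.03, a0=0.5, T=1000.0, dt=0.1, reps=30, seed=4):
    """Heun (Stratonovich) integration of the long time model; returns the
    median of |a| over the ensemble at time T."""
    rng = np.random.default_rng(seed)
    c1 = sigma*(1/6 - gamma/18)                     # coefficient of phi_2
    c2 = sigma**2*np.sqrt(515)/(1936*np.sqrt(3))    # coefficient of psi
    a = np.full(reps, a0)
    for _ in range(int(T/dt)):
        dW1 = np.sqrt(dt)*rng.standard_normal(reps)
        dW2 = np.sqrt(dt)*rng.standard_normal(reps)
        def incr(a):
            return (-(gamma + sigma**2/88)*a - a**3/12)*dt + c1*a*dW1 + c2*a*dW2
        ap = a + incr(a)                            # predictor
        a = a + 0.5*(incr(a) + incr(ap))            # Heun corrector
    return np.median(np.abs(a))

small, large = simulate(0.5), simulate(2.0)
# zero is only stable for gamma > -sigma^2/88, so the sin x amplitude
# persists for sigma = 0.5 but decays for sigma = 2 at gamma = -0.03
```

Since all fast dynamics have been eliminated from the model, the time step here may be far larger than in a direct simulation of the original system.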
Although the nonlinearity-induced stochastic resonance generates the effectively new multiplicative noise, $\propto\sigma^2a\psi$, its most significant effect is the enhancement of the stability of the equilibrium $a=0$ through the $\sigma^2a/88$ term. The equilibrium is stable for parameters $\gamma>-\sigma^2/88$, which neatly explains the differences in the stability seen in Figure \[fig:mmodel2\] because, compared to $\gamma=-0.030$, the thresholds for stability are $-0.003$ and $-0.045$ for small and large noise respectively. ![simulations of the long time model (\[eq:oomodl\]) for small, $\sigma=0.5$, and large, $\sigma=2$, noise over long times. Parameters: $\Delta t=1$, $\gamma=-0.03$. []{data-label="fig:mmodelll"}](mmodelll) Conclusion ========== A big virtue of the model (\[eq:oomodl\]) is that we may accurately take large time steps as all the fast dynamics have been eliminated. Shown in Figure \[fig:mmodelll\] are simulations over a long time for small and large noise again demonstrating the stochastic resonance induced stabilisation of the equilibrium $a=0$. These simulations are done for an order of magnitude longer times with a time step that is ten times larger than we could use previously. This approach to numerical modelling is viable and effective for stochastic partial differential equations. Much more development and theoretical support are needed. [^1]: Department of Mathematics & Computing, University of Southern Queensland, Toowoomba, Queensland 4352, Australia. [mailto:aroberts@usq.edu.au](mailto:aroberts@usq.edu.au)
--- abstract: 'Spiking neural networks are nature’s versatile solution to fault-tolerant and energy efficient signal processing. To translate these benefits into hardware, a growing number of neuromorphic spiking neural network processors attempt to emulate biological neural networks. These developments have created an imminent need for methods and tools to enable such systems to solve real-world signal processing problems. Like conventional neural networks, spiking neural networks can be trained on real, domain specific data. However, their training requires overcoming a number of challenges linked to their binary and dynamical nature. This article elucidates step-by-step the problems typically encountered when training spiking neural networks, and guides the reader through the key concepts of synaptic plasticity and data-driven learning in the spiking setting. To that end, it gives an overview of existing approaches and provides an introduction to surrogate gradient methods, specifically, as a particularly flexible and efficient method to overcome the aforementioned challenges.' author: - | Emre O. Neftci$^\dagger$,  Hesham Mostafa$^\dagger$,  Friedemann Zenke$^\dagger$\ [$^\dagger$ All authors contributed equally. The order of authors is arbitrary.]{} title: Surrogate Gradient Learning in Spiking Neural Networks --- Introduction ============ Biological spiking neural networks (SNNs) are evolution’s highly efficient solution to the problem of signal processing. Therefore, taking inspiration from the brain is a natural approach to engineering more efficient computing architectures. In the area of machine learning, recurrent neural networks (RNNs), a class of stateful neural networks whose internal state evolves with time [[(Box. \[box:rnn\])]{}]{}, have proven highly effective at solving real-time pattern recognition and noisy time series prediction problems [@Goodfellow_etal16_deeplear].
RNNs and biological neural networks share several properties, such as a similar general architecture, temporal dynamics and learning through weight adjustments. Based on these similarities, a growing body of work is now establishing formal equivalences between RNNs and networks of spiking neurons which are widely used in computational neuroscience [@zenke_superspike:_2018; @bellec_long_2018; @Kaiser_etal18_synaplas; @tavanaei_deep_2018]. RNNs are typically trained using an optimization procedure in which the parameters or weights are adjusted to minimize a given objective function. Efficiently training large-scale RNNs is challenging due to a variety of extrinsic factors, such as noise and non-stationarity of the data, but also due to the inherent difficulties of optimizing functions with long-range temporal and spatial dependencies. In SNNs and binary RNNs, these difficulties are compounded by the non-differentiable dynamics implied by the binary nature of their outputs. While a considerable body of work has successfully demonstrated training of two-layer SNNs [@gutig_spike_2014; @memmesheimer_learning_2014; @Anwani_Rajendran15_normappr] without hidden units, and networks with recurrent synaptic connections [@gilra_predicting_2017; @nicola_supervised_2017], the ability to train deeper SNNs with hidden layers has remained a major obstacle. Because hidden units and depth are crucial to efficiently solve many real-world problems, overcoming this obstacle is vital. As network models grow larger and make their way into embedded and automotive applications, their power efficiency becomes increasingly important. Simplified neural network architectures that can run natively and efficiently on dedicated hardware are now being devised. This includes, for instance, networks of binary neurons or neuromorphic hardware that emulates the dynamics of SNNs [@boahen_neuromorphs_2017].
Both types of networks dispense with energetically costly floating-point multiplications, making them particularly advantageous for low-power applications compared to neural networks executed on conventional hardware. These new hardware developments have created an imminent need for tools and strategies enabling efficient inference and learning in SNNs and binary RNNs. In this article, we discuss and address the inherent difficulties in training SNNs with hidden layers, and introduce various strategies and approximations used to successfully implement them. Understanding SNNs as RNNs {#sec:understanding_ssn_as_rnn} ================= We start by formally mapping SNNs to RNNs. Formulating SNNs as RNNs will allow us to directly transfer and apply existing training methods for RNNs and will serve as the conceptual framework for the rest of this article. Before we proceed, one word on terminology. We use the term RNN in its widest sense to refer to networks whose state evolves in time according to a set of recurrent dynamical equations. Such dynamical recurrence can be due to the explicit presence of recurrent synaptic connections between neurons in the network. This is the common understanding of what an RNN is. But importantly, dynamical recurrence can also arise in the *absence* of recurrent connections. This happens, for instance, when stateful neuron or synapse models are used which have internal dynamics. Because the network’s state at a particular time step recurrently depends on its state in previous time steps, these dynamics are intrinsically recurrent. In this article, we use the term RNN for networks exhibiting either or both types of recurrence. Moreover, we introduce the term RCNN for the subset of networks with explicit recurrent synaptic connections. We will now show that both admit the same mathematical treatment despite the fact that their dynamical properties may be vastly different.
To this end, we will first introduce the leaky integrate-and-fire (LIF) neuron model with current-based synapses, which has wide use in computational neuroscience [@Gerstner_etal14_neurdyna]. Next, we will reformulate this model in discrete time and show its formal equivalence to an RNN with binary activation functions. Readers familiar with the LIF neuron model can skip the following steps and proceed to the discrete-time formulation. RNNs are networks of inter-connected units, or neurons, in which the network state at any point in time ${a}[n]$ is a function of both external input ${x}[n]$ and the network’s state at the previous time point ${a}[n-1]$. One popular structure arranges neurons in multiple layers where every layer is recurrently connected and also receives input from the previous layer. More precisely, the dynamics of a network with $L$ layers is given by: $$\begin{aligned} {\bf y}^{(l)}[n] = & \sigma({\bf a}^{(l)}[n]) \quad \text{for $l=1,\ldots,L$}\\ {\bf a}^{(l)} [n] = & {\bf V}^{(l)} {\bf y}^{(l)} [n-1] + {\bf W}^{(l)} {\bf y}^{(l-1)} [n-1] \quad \text{for $l=1,\ldots,L$} \\ {\bf y}^{(0)}[n] \equiv & {\bf x}[n] \end{aligned}$$ where ${\bf a}^{(l)}[n]$ is the state vector of the neurons at layer $l$, $\sigma$ is an activation function, and $\mathbf{V}^{(l)}$ and $\mathbf{W}^{(l)}$ are the recurrent and feedforward weight matrices of layer $l$, respectively. External inputs ${\bf x}[n]$ typically arrive at the first layer. Non-scalar quantities are typeset in bold face. A LIF neuron in layer $l$ with index $i$ can formally be described in differential form as $$\tau_\mathrm{mem} \frac{\mathrm{d}U_i^{(l)}}{\mathrm{d}t} = -(U_i^{(l)}-U_\mathrm{rest}) + RI_i^{(l)} \label{eq:lif_basic}$$ where $U_i(t)$ is the membrane potential, $U_\mathrm{rest}$ is the resting potential, $\tau_\mathrm{mem}$ is the membrane time constant, $R$ is the input resistance, and $I_i(t)$ is the input current [@Gerstner_etal14_neurdyna]. Equation (\[eq:lif\_basic\]) shows that $U_i$ acts as a leaky integrator of the input current $I_i$.
Neurons emit spikes to communicate their output to other neurons when their membrane voltage reaches the firing threshold $\vartheta$. After each spike, the membrane voltage $U_i$ is reset to the resting potential $U_\mathrm{rest}$ [[(Fig. \[fig:lif\_neuron\_activity\])]{}]{}. Due to this reset, Equation (\[eq:lif\_basic\]) only describes the subthreshold dynamics of a neuron, *i.e.* the dynamics in the absence of spiking output of the neuron. ![image](fig_lif_neuron) In SNNs, the input current is typically generated by synaptic currents triggered by the arrival of presynaptic spikes $S_j^{(l)}(t)$. When working with differential equations, it is convenient to denote a spike train $S_j^{(l)}(t)$ as a sum of Dirac delta functions $S_j^{(l)}(t)=\sum_{s \in C_j^{(l)}} \delta(t-s)$ where $s$ runs over the firing times $C_j^{(l)}$ of neuron $j$ in layer $l$. Synaptic currents follow specific temporal dynamics themselves. A common first-order approximation is to model their time course as an exponentially decaying current following each presynaptic spike. Moreover, we assume that synaptic currents sum linearly. The dynamics of these operations are given by $$\frac{\mathrm{d}I_i^{(l)}}{\mathrm{d}t}= -\underbrace{\frac{I_i^{(l)}(t)}{\tau_\mathrm{syn}}}_\text{exp. decay} +\underbrace{\sum_j W_{ij}^{(l)} S_j^{(l-1)}(t)}_\mathrm{feed-forward} +\underbrace{\sum_j V_{ij}^{(l)} S_j^{(l)}(t)}_\mathrm{recurrent} \label{eq:cuba_syn}$$ where the sum runs over all presynaptic neurons $j$ and $W_{ij}^{(l)}$ are the corresponding afferent weights from the layer below. Further, the $V_{ij}^{(l)}$ correspond to explicit recurrent connections within each layer. Because these dynamics are linear between spikes, we can simulate a single neuron with two linear differential equations whose initial conditions change instantaneously whenever a spike occurs.
Through this property, we can incorporate the reset term in Equation (\[eq:lif\_basic\]) through an extra term that instantaneously decreases the membrane potential by the amount $(\vartheta - U_\mathrm{rest})$ whenever the neuron emits a spike: $$\frac{\mathrm{d}U_i^{(l)}}{\mathrm{d}t} = -\frac{1}{\tau_\mathrm{mem}}\left( (U_i^{(l)}-U_\mathrm{rest}) - RI_i^{(l)} \right) + S_i^{(l)}(t)(U_\mathrm{rest}-\vartheta) \label{eq:lif}$$ It is customary to approximate the solutions of Equations (\[eq:cuba\_syn\]) and (\[eq:lif\]) numerically in discrete time and to express the output spike train $S_i^{(l)}[n]$ of neuron $i$ in layer $l$ at time step $n$ as a nonlinear function of the membrane voltage $S_i^{(l)}[n]\equiv\Theta(U_i^{(l)}[n]-\vartheta)$ where $\Theta$ denotes the Heaviside step function and $\vartheta$ corresponds to the firing threshold. Without loss of generality, we set $U_\mathrm{rest}=0$, $R=1$, and $\vartheta=1$. When using a small simulation time step $\Delta_t>0$, Equation (\[eq:cuba\_syn\]) is well approximated by $$I_i^{(l)}[n+1] = \alpha I_i^{(l)}[n] + \sum_j W_{ij}^{(l)} S_j^{(l)}[n] + \sum_j V_{ij}^{(l)} S_j^{(l)}[n] \label{eq:syn_discrete_time}$$ with the decay strength $\alpha \equiv \exp\left(-\frac{\Delta_t}{\tau_\mathrm{syn}} \right)$. Note that $0<\alpha<1$ for finite and positive $\tau_\mathrm{syn}$. Moreover, $S_j^{(l)}[n] \in \{0,1\}$. We use $n$ to denote the time step to emphasize the discrete dynamics. We can now express Equation (\[eq:lif\]) as $$U_i^{(l)}[n+1] = \beta U_i^{(l)}[n] + I_i^{(l)}[n] -S_i^{(l)}[n] \label{eq:mem_discrete_time}$$ with $\beta \equiv \exp\left(-\frac{\Delta_t}{\tau_\mathrm{mem}}\right)$. Equations (\[eq:syn\_discrete\_time\]) and (\[eq:mem\_discrete\_time\]) characterize the dynamics of a LIF neuron in discrete time. Specifically, the state of neuron $i$ is given by the instantaneous synaptic currents $I_i$ and the membrane voltage $U_i$ [[(Box. \[box:rnn\])]{}]{}. The computations necessary to update the cell state can be unrolled in time as is best illustrated by the computational graph (Figure \[fig:snn\_computational\_graph\]).
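As a concrete illustration, the two discrete-time updates translate directly into a simulation loop. The sketch below (Python with numpy; network sizes, input rates, time constants, and the weight scale are arbitrary illustrative choices) simulates a purely feedforward layer of LIF neurons, i.e. with the recurrent weights $V_{ij}^{(l)}$ set to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_steps = 20, 5, 100
dt, tau_syn, tau_mem = 1e-3, 5e-3, 10e-3
alpha = np.exp(-dt/tau_syn)              # synaptic decay factor per step
beta = np.exp(-dt/tau_mem)               # membrane decay factor per step
W = rng.uniform(0.0, 0.3, (n_hid, n_in)) # feedforward weights (recurrent V = 0)
S_in = (rng.random((n_steps, n_in)) < 0.05).astype(float)  # input spike trains
I = np.zeros(n_hid)                      # synaptic currents I[n]
U = np.zeros(n_hid)                      # membrane potentials U[n]
S_rec = np.zeros((n_steps, n_hid))
for n in range(n_steps):
    S = (U > 1.0).astype(float)          # output spikes, threshold theta = 1
    S_rec[n] = S
    U = beta*U + I - S                   # membrane update with reset term
    I = alpha*I + W @ S_in[n]            # synaptic current update
```

Note that the membrane update uses the current value of $I$ before it is itself updated, matching the index convention of the discrete-time equations.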
![image](snn_graph.pdf) We have now seen that SNNs constitute a special case of RNNs. However, so far we have not explained how their parameters are set to implement a specific computational function. This is the focus of the rest of this article, in which we present a variety of learning algorithms that systematically change the parameters towards implementing specific functionalities. Methods for training RNNs ===================== Powerful machine learning methods are able to train RNNs for a variety of tasks ranging from time series prediction, to language translation, to automatic speech recognition [@Goodfellow_etal16_deeplear]. In the following, we discuss the most common methods before analyzing their applicability to SNNs. There are several stereotypical ingredients that define the training process. The first ingredient is a cost or loss function which is minimized when the network’s response corresponds to the desired behavior. In time series prediction, for example, this loss could be the squared difference between the predicted and the true value. The second ingredient is a mechanism that updates the network’s weights to minimize the loss. One of the simplest and most powerful mechanisms to achieve this is to perform gradient descent on the loss function. In network architectures with *hidden units* (*i.e.* units whose activity affects the loss indirectly through other units) the parameter updates contain terms relating to the activity and weights of the downstream units they project to. Gradient-descent learning solves this *credit assignment problem* by providing explicit expressions for these updates through the chain rule of derivatives. As we will now see, the learning of hidden unit parameters depends on an efficient method to compute these gradients. When discussing these methods, we distinguish between solving the spatial credit assignment problem, which affects feedforward and recurrent networks in the same way, and the temporal credit assignment problem, which only occurs in RNNs.
We now discuss common algorithms which provide both types of credit assignment. Spatial credit assignment {#sec:spatial_CA} ------------------------- To train multilayer networks, credit or blame needs to be assigned spatially across layers and their respective units. This spatial credit assignment problem is solved most commonly by the backpropagation of error algorithm [[(Box. \[box:bp\])]{}]{}. In its simplest form, this algorithm propagates errors “backwards” from the output of the network to upstream neurons. Using backpropagation to adjust hidden layer weights ensures that the weight update will reduce the cost function for the current training example, provided the learning rate is small enough. While this theoretical guarantee is desirable, it comes at the cost of certain communication requirements — namely that gradients have to be communicated back through the network — and increased memory requirements as the neuron states need to be kept in memory until the errors become available. The task of learning is to minimize a cost function $\mathcal{L}$ over the entire dataset. In a neural network, this can be achieved by gradient descent, which modifies the network parameters $\mathbf{W}$ in the direction opposite to the gradient: $$\begin{split} W_{ij} \leftarrow W_{ij} - \eta \Delta W_{ij}, & \text{where } \Delta W_{ij} = {\frac{\partial \mathcal{L}}{\partial W_{ij}}} = {\frac{\partial \mathcal{L}}{\partial y_i}} {\frac{\partial y_i}{\partial a_i }} {\frac{\partial a_i}{\partial W_{ij}}} \end{split}$$ where $a_i = \sum_j W_{ij} x_j$ is the total input to the neuron, $y_i$ is the output of neuron $i$, and $\eta$ is a small learning rate. The first term is the error of neuron $i$ and the second term reflects the sensitivity of the neuron output to changes in the parameter. In multilayer networks, gradient descent is expressed as the backpropagation of the errors starting from the prediction (output) layer to the inputs.
Using superscripts $l=0,...,L$ to denote the layer ($0$ is input, $L$ is output): $$\label{eq:bp_deep} \frac{\mathrm{\partial}}{\mathrm{\partial} W^{(l)}_{ij}} \mathcal{L} = \delta_{i}^{(l)} y^{(l-1)}_j,\text{ where }\delta_{i}^{(l)} = \sigma'\left( a_i^{(l)} \right) \sum_k \delta_{k}^{(l+1)} W_{ik}^{\top,(l)},$$ where $\sigma'$ is the derivative of the activation function, and $\delta_{i}^{(L)}={\frac{\partial \mathcal{L}}{\partial y_i^{(L)}}}$ is the error of output neuron $i$ and $y_{i}^{(0)}=x_i$ and $\top$ indicates the transpose.\ ![image](rnn_unrolled.pdf){width="15em"} This update rule is ubiquitous in deep learning and known as the gradient backpropagation algorithm [@Goodfellow_etal16_deeplear]. Learning is typically carried out in forward passes (evaluation of the neural network activities) and backward passes (evaluation of $\delta$s). The same rule can be applied to RNNs. In this case the recurrence is “unrolled”, meaning that an auxiliary network is created by making copies of the network for each time step. The unrolled network is simply a deep network with shared feedforward weights $\mathbf{W}^{(l)}$ and recurrent weights $\mathbf{V}^{(l)}$, on which the standard backpropagation applies: $$\label{eq:bptt} \begin{split} \Delta {W_{ij}^{(l)}} &\propto \frac{\mathrm{\partial}}{\mathrm{\partial} W^{(l)}_{ij}} \mathcal{L}[n] = \sum_{m=0}^n \delta_{i}^{(l)}[m] y^{(l-1)}_j[m],\text{ and }\Delta {V_{ij}^{(l)}} \propto \frac{\mathrm{\partial}}{\mathrm{\partial} V^{(l)}_{ij}} \mathcal{L}[n] = \sum_{m=1}^n \delta_{i}^{(l)}[m] y^{(l)}_j[m-1]\\ \delta_{i}^{(l)} [n] & = \sigma'\left( a_i^{(l)}[n] \right) \left( \sum_k \delta_{k}^{(l+1)}[n] W_{ik}^{\top,l} + \sum_k \delta_{k}^{(l)}[n+1] V_{ik}^{\top,l} \right). \end{split}$$ Applying backpropagation to an unrolled network is referred to as backpropagation through time (BPTT). Temporal credit assignment {#sec:temporal_credit_assignment} -------------------------- When training RNNs, we also have to consider temporal interdependencies of network activity.
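To make the $\delta$ recursion concrete, the following toy example (not from the article; Python with numpy, a single-layer scalar network with a tanh activation and an error applied only at the final time step, all illustrative choices) computes the BPTT gradients, which can be verified against finite differences:

```python
import numpy as np

def forward(W, V, x):
    """Scalar RNN: a[n] = V*y[n-1] + W*x[n-1], y[n] = tanh(a[n])."""
    N = len(x)
    a, y = np.zeros(N + 1), np.zeros(N + 1)
    for n in range(1, N + 1):
        a[n] = V*y[n - 1] + W*x[n - 1]
        y[n] = np.tanh(a[n])
    return a, y

W, V, tgt = 0.7, 0.3, 0.5
x = np.array([0.2, -0.4, 0.6, 0.1])
N = len(x)
a, y = forward(W, V, x)
loss = 0.5*(y[N] - tgt)**2

# backward pass: the delta recursion with the output error injected at n = N
delta = np.zeros(N + 1)
delta[N] = (y[N] - tgt)*(1 - np.tanh(a[N])**2)
for n in range(N - 1, 0, -1):
    delta[n] = (1 - np.tanh(a[n])**2)*V*delta[n + 1]
dW = sum(delta[n]*x[n - 1] for n in range(1, N + 1))
dV = sum(delta[n]*y[n - 1] for n in range(1, N + 1))
```

Note that the backward pass needs the stored activations $a[n]$ and $y[n]$ of the whole forward pass, which is the memory cost discussed below.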
This requires solving the temporal credit assignment problem (Fig. \[fig:snn\_computational\_graph\]). There are two common methods to achieve this: 1. The “backward” method: This method applies the same strategies as with spatial credit assignment by “unrolling” the network in time [[(Box. \[box:bptt\])]{}]{}. BPTT solves the temporal credit assignment problem by back-propagating errors through the unrolled network. This method works backward through time after completing a forward pass. The use of standard backpropagation on the unrolled network directly enables the use of autodifferentiation tools offered in modern machine learning toolkits [@bellec_long_2018; @shrestha_slayer:_2018]. 2. The forward method: In some situations, it is beneficial to propagate all necessary information for gradient computation forward in time [@Williams_Zipser89_learalgo]. This formulation is achieved by computing the gradient of a cost function $\mathcal{L}[n]$ and maintaining the recursive structure of the RNN. For example, the “forward gradient” of the feed-forward weight $\mathbf{W}$ becomes: $$\begin{split} \label{eq:forward_mode_differentiation} \Delta {W_{ij}^m} &\propto \frac{\partial \mathcal{L}[n]}{\partial W_{ij}^m} = \sum_k \frac{\partial \mathcal{L}[n]}{\partial y_k^{(L)}[n]} P_{ijk}^{(L,m)}[n],\text{ with } P_{ijk}^{(l,m)}[n] = \frac{\partial} {\partial W_{ij}^m} y_k^{(l)}[n]\\ P_{ijk}^{(l,m)}[n] &= \sigma'(a^{(l)}_k[n]) \left( \sum_{j'} V_{ij'}^{(l)} P_{ijj'}^{(l,m)}[n-1] + \sum_{j'} W_{ij'}^{(l)} P_{ijj'}^{(l-1,m)}[n-1] + \delta_{lm} y_i^{(l-1)}[n-1] \right). \\ \end{split}$$ Gradients with respect to recurrent weights $V_{ij}^{(l)}$ can be computed in a similar fashion [@Williams_Zipser89_learalgo]. The backward optimization method is generally more efficient in terms of computation, but requires maintaining all the inputs and activations for each time step.
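For comparison with the backward method, the forward method carries the sensitivities $P$ along with the network state, so no past activations need to be stored. A sketch on a toy scalar network (not from the article; Python with numpy, tanh activation, a loss on the final output, all numbers illustrative), again verifiable against finite differences:

```python
import numpy as np

def forward_grad(W, V, x, tgt):
    """Scalar RNN with a loss on the final output; the sensitivities
    P_W = dy/dW and P_V = dy/dV are propagated forward in time."""
    y, P_W, P_V = 0.0, 0.0, 0.0
    for xn in x:
        a = V*y + W*xn
        sp = 1.0 - np.tanh(a)**2                      # sigma'(a)
        P_W, P_V = sp*(V*P_W + xn), sp*(V*P_V + y)    # forward recursions
        y = np.tanh(a)
    err = y - tgt
    return 0.5*err**2, err*P_W, err*P_V               # loss and its gradients

x = np.array([0.2, -0.4, 0.6, 0.1])
loss, dW, dV = forward_grad(0.7, 0.3, x, 0.5)
```

The single loop computes loss and gradients together, with memory independent of the sequence length, at the price of carrying one sensitivity per parameter.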
Thus, its space complexity for each layer is $O(N T)$, where $N$ is the number of neurons per layer and $T$ is the number of time steps. On the other hand, the forward method requires maintaining variables $P_{ijk}^{(l,m)}$, resulting in an $O(N^3)$ space complexity per layer. While $O(N^3)$ is not a favorable scaling compared to $O(NT)$ for large $N$, simplifications of the computational graph can reduce the memory complexity of the forward method to $O(N^2)$ [@bellec_biologically_2019; @zenke_superspike:_2018], or even $O(N)$ [@Kaiser_etal18_synaplas]. These simplifications also reduce the computational complexity, rendering the scaling of forward algorithms comparable to or better than BPTT. Such simplifications are at the core of several successful approaches which we will describe in [[Sec. \[sec:applications\]]{}]{}. Furthermore, the forward method is more appealing from a biological point of view, since the learning rule can be made consistent with synaptic plasticity in the brain and “three-factor” rules, as discussed in Section \[sec:superspike\]. In summary, efficient algorithms to train RNNs exist. We will now focus on training SNNs. Credit assignment with spiking neurons: Challenges and solutions ================================================================ So far we have discussed common algorithmic solutions to training RNNs. Before these solutions can be applied to SNNs, however, two key challenges need to be overcome. The first challenge concerns the non-differentiability of the spiking nonlinearity. Equations (\[eq:bptt\]) and (\[eq:forward\_mode\_differentiation\]) reveal that the expressions for both the forward and the backward learning methods contain the derivative of the neural activation function $\sigma' \equiv {\frac{\partial y_i^{(l)}}{\partial a_i^{(l)}}}$ as a multiplicative factor. For a spiking neuron, however, we have $S(U(t))=\Theta(U(t)-\vartheta)$, whose derivative is zero everywhere except at $U=\vartheta$, where it is ill defined [[(Fig. \[fig:surr\_partials\])]{}]{}.
This all-or-nothing behavior of the binary spiking nonlinearity stops gradients from “flowing” and makes spiking neurons unsuitable for gradient-based optimization. The same issue occurs in binary neurons, and some of the solutions proposed here are inspired by methods first developed for binary networks [@courbariaux_binarized_2016; @bengio_estimating_2013].

![image](fig_derivatives)

The second challenge concerns the implementation of the optimization algorithm itself. Standard BPTT can be expensive in terms of computation, memory, and communication, and may be poorly suited to the constraints dictated by the hardware that implements it (e.g. a computer, a brain, or a neuromorphic device). Processing in dedicated neuromorphic hardware and, more generally, non-von Neumann computers may have specific locality requirements [[(Box. \[box:nonlocal\])]{}]{} that can complicate matters. On such hardware, the forward approach may therefore be preferable. In practice, however, the scaling of both methods ($O(N^3)$ and $O(NT)$) has proven unsuitable for many models. For example, the convolutional models trained with BPTT for gesture classification [@Shrestha_Orchard18_slayspik] are bounded by available GPU memory. Additional simplifying approximations that reduce the complexity of the forward method will be discussed below. In the following sections, we describe approximate solutions to these challenges that make learning in SNNs more tractable. To overcome the first challenge in training SNNs, which concerns the discontinuous spiking nonlinearity, several approaches have been devised with varying degrees of success.
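To make the first challenge concrete, the following minimal sketch (our own toy setup; the weight, input, and threshold values are assumptions, not from the original text) simulates a non-leaky integrate-and-fire neuron and estimates the derivative of its spike count with respect to an input weight by finite differences. Because the spike count is piecewise constant in the weight, the estimate is exactly zero:

```python
import numpy as np

def spike_count(w, inputs, threshold=1.0):
    """Toy non-leaky integrate-and-fire neuron; returns its total spike count."""
    u, count = 0.0, 0.0
    for x in inputs:
        u += w * x                 # perfect integration of the weighted input
        if u > threshold:          # hard threshold: S = Theta(u - threshold)
            count += 1.0
            u -= threshold         # reset by subtraction
    return count

inputs = np.ones(5)                # constant drive for five time steps
w, eps = 0.55, 1e-4

# Central finite difference of the spike count with respect to the weight.
g = (spike_count(w + eps, inputs) - spike_count(w - eps, inputs)) / (2 * eps)
print(g)  # 0.0 -- the output is piecewise constant in w, so no gradient signal
```

The tiny weight perturbation never flips a spike, so gradient descent receives no signal; only at the isolated weight values where a spike appears or disappears does the output change, and there the derivative is ill defined.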
The most common approaches can be coarsely classified into the following categories: i) resorting to entirely biologically inspired local learning rules for the hidden units, ii) translating conventionally trained “rate-based” neural networks to SNNs, iii) smoothing the network model to be continuously differentiable, or iv) defining a surrogate gradient (SG) as a continuous relaxation of the real gradients. Approaches pertaining to biologically motivated local learning rules (i) and network translation (ii) have been reviewed extensively elsewhere [@abbott_building_2016; @tavanaei_deep_2018]. In this article, we therefore focus on the latter two supervised approaches (iii & iv), which we will refer to as the “smoothed” and the SG approach. First, we review existing literature on common “smoothing” approaches before turning to an in-depth discussion of how to build functional SNNs using SG methods.

Smoothed spiking neural networks
--------------------------------

The defining characteristic of smoothed SNNs is that their formulation ensures well-behaved gradients which are directly suitable for optimization. Smooth models can be further categorized into (1) soft nonlinearity models, (2) probabilistic models, for which gradients are only well defined in expectation, or models which rely entirely on either (3) rate or (4) single-spike temporal codes.

### Gradients in soft nonlinearity models

This approach can in principle be applied directly to all spiking neuron models which explicitly include a smooth spike-generating process. This includes, for instance, the Hodgkin-Huxley, Morris-Lecar, and FitzHugh-Nagumo models [@Gerstner_etal14_neurdyna]. In practice this approach has only been applied successfully by @huh_gradient_2018 using an augmented integrate-and-fire model in which the binary spiking nonlinearity was replaced by a continuous-valued gating function. The resulting network constitutes an RNN which can be optimized using the standard backward or forward methods described above.
Importantly, the soft threshold models compromise on one of the key features of SNNs, namely binary spike propagation.

### Gradients in probabilistic models

Another example of smoothed models are binary probabilistic models. In simple terms, stochasticity effectively smooths out the discontinuous binary nonlinearity, which makes it possible to define a gradient on expectation values. Binary probabilistic models have been objects of extensive study in the machine learning literature, mainly in the context of (restricted) Boltzmann machines [@ackley_learning_1985]. Similarly, the propagation of gradients has been studied for binary stochastic models [@bengio_estimating_2013]. Probabilistic models are practically useful because the log-likelihood of a spike train is a smooth quantity which can be optimized using gradient descent [@pfister_optimal_2006]. Although this insight was first discovered in networks without hidden units, the same ideas were later extended to multi-layer networks [@gardner_learning_2015]. Similarly, @guerguiev_towards_2017 used probabilistic neurons to study biologically plausible ways of propagating error or target signals using segregated dendrites (see Section \[sec:feedback\_alignment\]). In a similar vein, variational learning approaches were shown to be capable of learning useful hidden layer representations in SNNs [@brea_matching_2013; @rezende_stochastic_2014; @Mostafa_Cauwenberghs18]. However, the injected noise necessary to smooth out the effect of binary nonlinearities often poses a challenge for optimization [@rezende_stochastic_2014]. How noise, which is found ubiquitously in neurobiology, influences learning in the brain remains an open question.

### Gradients in rate-coding networks

Another common approach to obtain gradients in SNNs is to assume a rate-based coding scheme. The main idea is that the spike rate is the underlying information-carrying quantity.
For many plausible neuron models, the supra-threshold firing rate depends smoothly on the neuron input. This input-output dependence is captured by the so-called f-I curve of a neuron. In such cases, the derivative of the f-I curve is suitable for gradient-based optimization. There are several examples of this approach. For instance, @Hunsberger_Eliasmith15_spikdeep as well as @Neftci_etal17_evenranda used an effectively rate-coded input scheme to demonstrate competitive performance on standard machine learning benchmarks such as CIFAR10 and MNIST. Similarly, @Lee_etal16_traideep demonstrated deep learning in SNNs by defining partial derivatives on low-pass filtered spike trains. Rate-based approaches can offer good performance, but they may be inefficient. On the one hand, precise estimation of firing rates requires averaging over a number of spikes. Such averaging requires either relatively high firing rates or long averaging times, because several repeats are needed to average out discretization noise. This problem can be partially addressed by spatial averaging over large populations of spiking neurons. However, this may require the use of larger numbers of neurons. Finally, the distinction between rate-coding and probabilistic networks can be blurry, since many probabilistic network implementations use rate-coding at the output level. Both types of models are differentiable, but for different reasons: Probabilistic models are based on firing probability densities [@pfister_optimal_2006]. Importantly, the firing probability of a neuron is a continuous function. Although measuring probability changes requires “trial averaging” over several samples, it is the underlying continuity of the probability density which formally allows one to define differential improvements and thus to derive gradients. By exploiting this feature, probabilistic models have been used to learn precise output spike timing [@pfister_optimal_2006; @gardner_learning_2015].
In contrast, deterministic networks always emit a fixed integer number of spikes for a given input. To nevertheless arrive at a notion of differential improvement, one may consider the number of spikes over a given time interval within single trials. When averaging over sufficiently large intervals, the resulting firing rates behave as a quasi-continuous function of the input current. This smooth input-output relationship is captured by the neuronal f-I curve, which can be used for optimization [@Hunsberger_Eliasmith15_spikdeep; @Neftci_etal17_evenranda]. Operating at the level of rates, however, comes at the expense of temporal precision.

### Gradients in single-spike-timing-coding networks

In an effort to optimize SNNs without potentially harmful noise injection and without reverting to a rate-based coding scheme, several studies have considered the outputs of neurons in SNNs to be a set of firing times. In such a temporal coding setting, individual spikes could carry significantly more information than rate-based schemes that only consider the total number of spikes in an interval. The idea behind training temporal coding networks was pioneered in SpikeProp [@bohte_error-backpropagation_2002]. In this work the analytic expressions of firing times for hidden units were linearized, making it possible to compute approximate hidden layer gradients analytically. More recently, a similar approach without the need for linearization was used in [@Mostafa16_supelear], where the author computed the spike timing gradients explicitly for non-leaky integrate-and-fire neurons. Intriguingly, the work showed performance competitive with conventional networks on standard benchmarks. Although the spike timing formulation does in some cases yield well-defined gradients, it may suffer from certain limitations. For instance, the formulation of SpikeProp [@bohte_error-backpropagation_2002] required each hidden unit to emit exactly one spike per trial, because it is impossible to define a firing time for quiescent units.
Ultimately, such a non-quiescence requirement could conflict with power efficiency, for which it is conceivably beneficial to have, for instance, only a subset of neurons active for any given task.

Surrogate gradients {#sec:surrogate_gradients}
-------------------

Surrogate gradient (SG) methods provide an alternative approach to overcoming the difficulties associated with the discontinuous nonlinearity. Moreover, they hold opportunities to reduce the potentially high algorithmic complexity associated with training SNNs. Their defining characteristic is that instead of changing the model definition as in the smoothed approaches, a surrogate gradient is introduced. In the following we make two distinctions. We first consider surrogate derivatives, which constitute a continuous relaxation of the non-smooth spiking nonlinearity for purposes of numerical optimization [[(Fig. \[fig:surrgrad\_concept\])]{}]{}. Such surrogate derivatives do not explicitly change the optimization algorithm itself and can be used, for instance, in combination with BPTT. Further, we also consider SGs with more profound changes that explicitly affect the locality of the underlying optimization algorithms themselves, to improve the computational and/or memory-access overhead of the learning process. One example of this approach that we will discuss involves replacing the global loss by a number of local loss functions. Finally, the use of SGs allows SNNs to be trained efficiently end-to-end without the need to specify which coding scheme is to be used in the hidden layers.

![**Example of a surrogate gradient for an SNN classifier.** (**a**) Value of the loss function (gray) of an SNN classifier along an interpolation path over the hidden layer parameters $\mathbf{W}^{(1)}$. Specifically, we linearly interpolated between the random initial and final (post-optimization) weight matrices of the hidden layer inputs $\mathbf{W}^{(1)}$ (network details: 2 input, 2 hidden, and 2 output units trained on a binary classification task).
Note that the loss function (gray) displays characteristic plateaus with zero gradient which are detrimental for numerical optimization. (**b**) Norm of hidden layer (surrogate) gradients in arbitrary units along the interpolation path. To perform numerical optimization in this network we constructed a surrogate gradient (violet) which, in contrast to the true gradient (gray), is non-zero. Note that we obtained the “true gradient” via the finite differences method, which is itself an approximation. Importantly, the surrogate gradient approximates the true gradient, but retains favorable properties for optimization, i.e. continuity and finiteness. The surrogate gradient can be thought of as the gradient of a *virtual* surrogate loss function (violet curve in (a); obtained by numerical integration of the surrogate gradient and scaled to match the loss at the initial and final points). This surrogate loss remains virtual because it is generally not computed explicitly. In practice, suitable surrogate gradients are obtained directly from the gradients of the original network through sensible approximations. This is a key difference with respect to some other approaches [@huh_gradient_2018] in which the entire network is replaced explicitly by a surrogate network on which gradient descent can be performed using its true gradients.[]{data-label="fig:surrgrad_concept"}](fig_surrgrad_example){width="3.0in"}

Like standard gradient descent, SG learning can deal with the spatial and temporal credit assignment problems either by BPTT or by forward methods, e.g. through the use of eligibility traces (see Section \[sec:temporal\_credit\_assignment\] for details). Alternatively, additional approximations can be introduced which may offer advantages specifically for hardware implementations. In the following, we briefly review existing work relying on SG methods before turning to a more in-depth treatment of the underlying principles and capabilities.
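The core mechanics can be sketched in a few lines of NumPy (our own illustration; the network size, threshold, and the fast-sigmoid surrogate with its steepness are assumptions, with the fast sigmoid being one of several shapes used in the literature). The forward pass uses the hard threshold that the spiking network actually computes, while the backward pass substitutes a smooth surrogate derivative for the ill-defined derivative of the step function:

```python
import numpy as np

def spike_fn(a, threshold=1.0):
    """Forward pass: hard threshold, i.e. what the network actually computes."""
    return (a > threshold).astype(float)

def surrogate_deriv(a, threshold=1.0, slope=10.0):
    """Backward pass: derivative of a fast sigmoid, 1 / (1 + slope*|a - theta|)^2,
    used in place of the ill-defined derivative of the step function."""
    return 1.0 / (1.0 + slope * np.abs(a - threshold)) ** 2

# One layer, one time step: a = W x, s = Theta(a - theta), L = ||s - s*||^2 / 2.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))          # hypothetical weights
x = rng.random(4)                    # hypothetical input
target = np.array([1.0, 0.0, 1.0])   # hypothetical target spike pattern

a = W @ x
s = spike_fn(a)
err = s - target
# Surrogate gradient of L w.r.t. W: Theta' is replaced by the smooth surrogate.
grad_W = np.outer(err * surrogate_deriv(a), x)
print(grad_W.shape)
```

The true derivative of `spike_fn` would zero out `grad_W` everywhere; the surrogate yields a finite, continuous descent direction while the forward spikes remain binary.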
### Surrogate derivatives for spiking nonlinearity

A number of works have used surrogate derivatives to specifically overcome the challenge of the discontinuous spiking nonlinearity. In these works, typically a standard algorithm such as BPTT is used with one minor modification: within the algorithm, each occurrence of the derivative of the spiking nonlinearity is replaced by the derivative of a smooth function. Implementing these approaches is straightforward in most auto-differentiation-enabled machine learning toolkits. One of the first uses of such a surrogate derivative is described in @bohte_error-backpropagation_2011, where the derivative of a spiking neuron nonlinearity was approximated by the derivative of a truncated quadratic function, thus resulting in a piecewise linear surrogate derivative [[(Fig. \[fig:surr\_partials\])]{}]{}. This is similar in flavor to the solution proposed to optimize binary neural networks [@courbariaux_binarized_2016]. The same idea underlies the training of large-scale convolutional networks with binary activations on classification problems using neuromorphic hardware [@esser_convolutional_2016]. @zenke_superspike:_2018 proposed a three-factor online learning rule using a fast sigmoid to construct a surrogate derivative. @shrestha_slayer:_2018 used an exponential function and reported competitive performance on a range of neuromorphic benchmark problems. Additionally, @Oconnor_etal17 described a spike-based encoding method inspired by Sigma-Delta modulators. They used their method to approximately encode both the activations and the errors in standard feedforward networks, and applied standard backpropagation on these sparse approximate encodings. Surrogate derivatives have also been used to train spiking RNNs, where dynamical recurrence arises both from the statefulness of the spiking neurons and from recurrent synaptic connections. Recently, @bellec_long_2018 successfully trained SNNs with slow temporal neuronal dynamics using a piecewise linear surrogate derivative.
Encouragingly, the authors found that such networks can perform on par with conventional networks. Similarly, @Wozniak_etal18_deepnetw reported competitive performance on a series of temporal benchmark datasets. In summary, a plethora of studies have constructed SGs using different nonlinearities and trained a diversity of SNN architectures. These nonlinearities, however, have a common underlying theme: all are nonlinear and monotonically increasing towards the firing threshold [[(Fig. \[fig:surr\_partials\])]{}]{}. While a more systematic comparison of different surrogate nonlinearities is still pending, overall the diversity found in the present literature suggests that the success of the SG method is not crucially dependent on the details of the surrogate used to approximate the derivative.

### Surrogate gradients affecting locality of the update rules

The majority of studies discussed in the previous section introduced a surrogate nonlinearity to prevent gradients from vanishing (or exploding), but, by relying on methods such as BPTT, they did not explicitly affect the structural properties of the learning rules. There are, however, training approaches for SNNs which introduce more far-reaching modifications that may completely alter the way error signals or target signals are propagated (or generated) within the network. Such approaches are typically used in conjunction with the aforementioned surrogate derivatives. There are two main motivations for such modifications, both typically linked to physical constraints that make it impossible to implement the “correct” gradient descent algorithm. For instance, in neurobiology, biophysical constraints make it impossible to implement BPTT without further approximations. Studies interested in how the brain could solve the credit assignment problem focus on how simplified “local” algorithms could achieve similar performance while adhering to the constraints of the underlying biological wetware [[(Box. \[box:nonlocal\])]{}]{}.
Similarly, neuromorphic hardware may pose certain constraints with regard to memory or communication which impede the use of BPTT and call for simpler and often more local methods for training on such devices. As training SNNs using SGs advances to deeper architectures, it is foreseeable that additional problems, similar to the ones encountered in ANNs, will arise. For instance, several approaches currently rely on SGs derived from sigmoidal activation functions (Fig. \[fig:surr\_partials\]). However, the use of sigmoidal activation functions is associated with vanishing-gradient problems. Another set of challenges which may well need tackling in the future could be linked to the bias which SGs introduce into the learning dynamics. In the following Applications Section, we will review a selection of promising approaches which introduce far larger deviations from the “true gradients” and still allow for learning at a greatly reduced complexity and computational cost.

Applications {#sec:applications}
============

In this section, we present a selection of illustrative applications of smoothed SNNs and SG methods which exploit both the internal continuous-time dynamics of the neurons and their event-driven nature. The latter allows a network to remain quiescent until incoming spikes trigger activity.

Feedback alignment and random error backpropagation {#sec:feedback_alignment}
------------------------------------

One family of algorithms that relaxes some of the requirements of backpropagation are feedback alignment or, more generally, random backpropagation algorithms [@Lillicrap_etal16_randsyna; @Baldi_Sadowski16_theoloca; @nokland_direct_2016]. These are approximations to the gradient backpropagation rule that side-step the non-locality problem by replacing the weights in the backpropagation rule with random ones (Fig. \[fig:spatial\_credit\_assignment\]b): $ \delta_{i}^{(l)} = \sigma'\left( a_i^{(l)} \right) \sum_k \delta_{k}^{(l+1)} G_{ki}^{(l)}, $ where $\mathbf{G}^{(l)}$ is a fixed, random matrix with the same dimensions as ${\bf W}$.
The replacement of $\mathbf{W}^{\top,(l)}$ with a random matrix $\mathbf{G}^{(l)}$ breaks the dependency of the backward phase on $\mathbf{W}^{(l)}$, enabling the rule to be more local. One common variation is to replace the entire backward propagation by a random propagation of the errors to each layer (Fig. \[fig:spatial\_credit\_assignment\]c) [@nokland_direct_2016]: $ \delta_{i}^{(l)} = \sigma'\left( a_i^{(l)} \right) \sum_k \delta^{(L)}_k H_{ki}^{(l)}, $ where $\mathbf{H}^{(l)}$ is a fixed, random matrix with appropriate dimensions. Random backpropagation approaches lead to remarkably little loss in classification performance on some benchmark tasks. Although a general theoretical understanding of random backpropagation is still a subject of intense research, simulation studies have shown that, during learning, the network adjusts its feed-forward weights such that they partially align with the (random) feedback weights, thus permitting them to convey useful error information [@Lillicrap_etal16_randsyna]. Building on these findings, an asynchronous spike-driven adaptation of random backpropagation using local synaptic plasticity rules and the dynamics of spiking neurons was demonstrated in [@Neftci_etal17_evenranda]. To obtain the surrogate gradient, the authors approximated the derivative of the neural activation function using a symmetric function that is zero everywhere except in the vicinity of zero, where it is constant; this surrogate derivative is defined everywhere and is piecewise constant. Networks using this learning rule performed remarkably well, and were shown to operate continuously and asynchronously without the alternation between forward and backward passes that is necessary in BPTT. One important limitation of random backpropagation applied to SNNs was that the temporal dynamics of the neurons and synapses were not taken into account in the gradients. The SuperSpike rule, described below, solves this problem.
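The mechanics of feedback alignment can be sketched in plain NumPy (our own illustration; the teacher task, network sizes, and learning rate are arbitrary assumptions). The hidden-layer error is propagated through the fixed random matrix `G` instead of `W2.T`, yet the loss still decreases as the forward weights gradually align with the random feedback:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2

W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
G = rng.normal(scale=0.5, size=(n_hid, n_out))   # fixed random feedback matrix

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical regression task: match a random linear teacher.
T = rng.normal(size=(n_out, n_in))
X = rng.normal(size=(64, n_in))
Y = X @ T.T

def loss():
    H = sigmoid(X @ W1.T)
    return np.mean((H @ W2.T - Y) ** 2)

loss_before = loss()
lr = 0.05
for _ in range(500):
    H = sigmoid(X @ W1.T)
    E = H @ W2.T - Y                              # output-layer error
    # Feedback alignment: the error reaches the hidden layer through the
    # fixed random G, not through W2.T as exact backprop would require.
    D1 = (E @ G.T) * H * (1.0 - H)
    W2 -= lr * E.T @ H / len(X)
    W1 -= lr * D1.T @ X / len(X)

print(f"loss: {loss_before:.3f} -> {loss():.3f}")
```

Note that only the output-layer update is an exact gradient; the hidden-layer update is "wrong" with respect to the true gradient, which is precisely what makes the rule more local.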
[r]{}[10em]{} ![image](local_comp.pdf){width="10.5em"}

Locality of computations is characterized by the set of variables available to the physical processing elements, and depends on the computational substrate. To illustrate the concept of locality, we assume two neurons, $A$ and $B$, and would like neuron $A$ to implement a function on the domain $D$ defined as: $$\begin{split} D & = D_{loc} \cup D_{nloc}, \text{where } D_{loc}=\{W_{BA},S_A(t), U_A(t)\}\\\text{ and }D_{nloc} &= \{ S_B(t-T), U_{B}\}. \end{split}$$ Here, $S_B(t-T)$ refers to the output of neuron $B$ $T$ seconds ago, $U_A$ and $U_B$ are the respective membrane potentials, and $W_{BA}$ is the synaptic weight from $B$ to $A$. Variables in $D_{loc}$ are directly available to neuron $A$ and are thus local to it. On the other hand, the variable $S_B(t-T)$ is temporally non-local and $U_{B}$ is spatially non-local to neuron $A$. Non-local information can be transmitted through special structures, for example dedicated encoders and decoders for $U_B$ and a form of working memory (WM) for $S_B(t-T)$. Although locality in a model of computation can make its use challenging, it enables massively parallel computations with dynamical inter-process communication.

Supervised learning with local three factor learning rules {#sec:superspike}
----------------------------------------------------------

SuperSpike is a biologically plausible three-factor learning rule. In contrast to many existing three-factor rules which fall into the category of “smoothed approaches” [@pfister_optimal_2006; @gardner_learning_2015; @guerguiev_towards_2017; @brea_matching_2013; @rezende_stochastic_2014; @Mostafa_Cauwenberghs18], SuperSpike is an SG approach which combines several approximations to render it more biologically plausible [@zenke_superspike:_2018].
Although the underlying motivation of the study is geared toward a deeper understanding of learning in biological neural networks, the learning rule may prove interesting for hardware implementations because it does not rely on BPTT. Specifically, the rule uses synaptic eligibility traces to solve the temporal credit assignment problem. We now provide a short account of why SuperSpike can be seen as one of the forward-in-time optimization procedures. SuperSpike was derived for temporal supervised learning tasks in which a given output neuron learns to spike at predefined times. To that end, SuperSpike minimizes the van Rossum distance with kernel $\epsilon$ between a set of output spike trains $S_i(t)$ and their corresponding target spike trains $S_i^*(t)$: $$\mathcal{L} = \frac{1}{2} \int_{-\infty}^t \mathcal{L}(s)~ ds = \frac{1}{2} \int_{-\infty}^t \left( \epsilon\ast(S_i(s)-S_i^*(s)) \right)^2 ds \approx \frac{1}{2} \sum_n \left( \epsilon\ast(S_i[n]-S_i^*[n]) \right)^2$$ where the last approximation corresponds to transitioning to discrete time. To perform online gradient descent, we need to compute the gradients of $\mathcal{L}[n]$. Here we first encounter the derivative $\frac{\partial}{\partial W_{ij}} \epsilon \ast S_i[n]$. Because the (discrete) convolution is a linear operator, this expression simplifies to $\epsilon \ast \frac{\partial S_i[n]}{\partial W_{ij}}$. In SuperSpike, $\epsilon$ is implemented as a dynamical system (see [@zenke_superspike:_2018] for details).
To compute derivatives of the neuron’s output spike train of the form $\frac{\partial S_i[n]}{\partial W_{ij}}$, we differentiate the network dynamics (Equations  and ) and obtain $$\begin{aligned} \frac{\partial S_i[n+1]}{\partial W_{ij}}&=& \Theta^\prime(U_i[n+1]-\vartheta) \left[ \frac{\partial U_i[n+1]}{\partial W_{ij}} \right] \label{eq:hebb_like_update}\\ \frac{\partial U_i[n+1]}{\partial W_{ij}}&=& \beta \frac{\partial U_i[n]}{\partial W_{ij}} + \frac{\partial I_i[n]}{\partial W_{ij}} -\frac{\partial S_i[n]}{\partial W_{ij}} \label{eq:deriv_mem_update}\\ \frac{\partial I_i[n+1]}{\partial W_{ij}}&=& \alpha \frac{\partial I_i[n]}{\partial W_{ij}} + S_j[n] \label{eq:deriv_current_update}\end{aligned}$$ The above equations define a dynamical system which, given the starting conditions $S_i[0]=U_i[0]=I_i[0]=0$, can be simulated online and forward in time to produce all relevant derivatives. Crucially, to arrive at useful surrogate gradients, SuperSpike makes two approximations. First, $\Theta^\prime$ is replaced by a smooth surrogate derivative $\sigma^\prime(U[n]-\vartheta)$ (cf. Fig. \[fig:surr\_partials\]). Second, the reset term with the negative sign in Equation  is dropped, which empirically leads to better results. With these definitions in hand, the final weight updates are given by $$\Delta W_{ij}[n] \propto e_i[n] \, \epsilon \ast \left[ \sigma^\prime(U_i[n]) \frac{\partial U_i[n]}{\partial W_{ij}} \right]$$ where $e_i[n] \equiv \epsilon \ast (S_i-S^*_i)$. These weight updates depend only on local quantities [[(Box. \[box:nonlocal\])]{}]{}. Above, we have considered a simple two-layer network (cf. Fig. \[fig:snn\_computational\_graph\]) without recurrent connections. If we were to apply the same strategy to compute updates in an RNN or a network with an additional hidden layer, the equations would become more complicated and non-local.
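The recursion above can be simulated online. The following NumPy sketch (our own illustration; the decay constants, input statistics, target spike times, and the exponential trace standing in for the kernel $\epsilon$ are all assumptions) accumulates the SuperSpike weight update forward in time for a single output neuron, with the reset term dropped from the membrane derivative as described above:

```python
import numpy as np

def fast_sigmoid_deriv(u, theta=1.0, slope=10.0):
    # Smooth surrogate for Theta'(U - theta)
    return 1.0 / (1.0 + slope * np.abs(u - theta)) ** 2

rng = np.random.default_rng(0)
T, n_in = 200, 10
alpha, beta, theta = 0.9, 0.95, 1.0        # synaptic/membrane decay, threshold

S_in = (rng.random((T, n_in)) < 0.1).astype(float)  # random input spike trains
S_tgt = np.zeros(T)
S_tgt[[50, 120, 180]] = 1.0                # hypothetical target spike times
W = rng.normal(scale=0.05, size=n_in)

lam = 0.9                                  # exponential stand-in for eps
u = i_syn = 0.0
p_i = np.zeros(n_in)                       # dI/dW_j, simulated forward in time
p_u = np.zeros(n_in)                       # dU/dW_j, reset term dropped
e_trace = 0.0                              # eps * (S - S*)
elig = np.zeros(n_in)                      # eps * [sigma'(U) dU/dW_j]
grad = np.zeros(n_in)

for n in range(T):
    i_syn = alpha * i_syn + W @ S_in[n]
    p_i = alpha * p_i + S_in[n]
    u = beta * u + i_syn
    p_u = beta * p_u + p_i
    s = float(u > theta)
    sg = fast_sigmoid_deriv(u)
    u -= s * theta                          # reset by subtraction
    e_trace = lam * e_trace + (s - S_tgt[n])
    elig = lam * elig + sg * p_u
    grad += e_trace * elig                  # accumulate the weight update online

W -= 0.01 * grad                            # one forward-in-time gradient step
```

Every quantity in the loop is either presynaptic activity, the postsynaptic state, or a filtered error, which is what makes the rule compatible with three-factor synaptic plasticity.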
SuperSpike applied to multi-layer networks sidesteps this issue by propagating error signals from the output layer directly to the hidden units, as in random backpropagation (cf. Section \[sec:feedback\_alignment\]; Fig. \[fig:spatial\_credit\_assignment\]c; [@Lillicrap_etal16_randsyna; @Baldi_Sadowski16_theoloca; @nokland_direct_2016]). Thus, SuperSpike achieves temporal credit assignment by propagating all relevant quantities forward in time, while it relies on random backpropagation to perform spatial credit assignment. While the work by @zenke_superspike:_2018 was centered around feed-forward networks, @bellec_biologically_2019 show that similar biologically plausible three-factor rules can also be used to train recurrent SNNs efficiently.

Learning using local errors
---------------------------

In practice, the performance of SuperSpike does not scale favorably to large multilayer networks. The scalability of SuperSpike can be improved by introducing local errors, as described here.

![\[fig:dcll\_gestures\] Deep Continuous Local Learning (DCLL) with spikes [@Kaiser_etal18_synaplas], applied to the event-based DVSGestures dataset. The feed-forward weights (green) of a three-layer convolutional SNN are trained with SGs using local errors generated by fixed random projections to a local classifier. Learning in DCLL scales linearly with the number of neurons thanks to local rate-based cost functions formed by spike-based basis functions. The circular arrows indicate recurrence due to the statefulness of the LIF dynamics (no recurrent synaptic connections were used here); these are not trained. This approach outperforms BPTT methods [@shrestha_slayer:_2018], requiring fewer training iterations [@Kaiser_etal18_synaplas] compared to other approaches.](DCLL_illustration){width="100.00000%"}

Multi-layer neural networks are hierarchical feature extractors. Through successive linear projections and point-wise non-linearities, neurons become tuned (respond most strongly) to particular spatio-temporal features in the input.
While the best features are those that take into account the subsequent processing stages and are learned to minimize the final error (as the features learned using backpropagation do), high-quality features can also be obtained by more local methods. The non-local component of the weight update equation (Eq. ) is the error term $\delta_i^{(l)}[n]$. Instead of obtaining this error term through backpropagation, we require that it be generated using information local to the layer. One way of achieving this is to define a layer-wise loss $\mathcal{L}^{(l)}({ y}^{(l)}[n])$ and use this local loss to obtain the errors. In such a local learning setting, the local errors $\delta^{(l)}$ become: $$\begin{aligned} \label{eq:bp_local} \delta_{i}^{(l)} [n] = \sigma'\left(a_i^{(l)}[n] \right) \frac{\mathrm{d}}{\mathrm{d}y_i^{(l)}[n]}\mathcal{L}^{(l)}(\mathbf{ y}^{(l)}[n])\text{ where } \mathcal{L}^{(l)}(\mathbf{ y}^{(l)}[n]) \equiv \mathcal{L}(\mathbf{ G}^{(l)} \mathbf{ y}^{(l)}[n],\hat{\mathbf{ y}}^{(l)}[n])\end{aligned}$$ with $\hat{{\bf y}}^{(l)}[n]$ a pseudo-target for layer $l$, and ${\bf G}^{(l)}$ a fixed random matrix that projects the activity vector at layer $l$ to a vector having the same dimension as the pseudo-target. In essence, this formulation assumes that an auxiliary random layer is attached to layer $l$ and the goal is to modify $\mathbf{W}^{(l)}$ so as to minimize the discrepancy between the auxiliary random layer’s output and the pseudo-target. The simplest choice for the pseudo-target is to use the top-layer target. This forces each layer to learn a set of features that are able to match the top-layer target after undergoing a fixed random linear projection. Each layer builds on the features learned by the layer below it, and we empirically observe that higher layers are able to learn higher-quality features that allow their random and fixed auxiliary layers to better match the target [@mostafa2018deep].
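A minimal NumPy sketch of this local-error scheme (our own illustration; the sizes, learning rate, and teacher task are arbitrary assumptions) trains a single hidden layer by gradient descent on its own layer-wise loss, where the error reaches the weights only through the fixed random readout `G1` and never through any downstream layer:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 6, 12, 3

W1 = rng.normal(scale=0.3, size=(n_hid, n_in))   # trainable layer weights
G1 = rng.normal(scale=0.3, size=(n_out, n_hid))  # fixed random auxiliary readout

# Hypothetical task; the pseudo-target is simply the top-layer target Y.
X = rng.normal(size=(128, n_in))
T = rng.normal(size=(n_out, n_in))
Y = X @ T.T

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def local_loss():
    H = sigmoid(X @ W1.T)
    return np.mean((H @ G1.T - Y) ** 2)

before = local_loss()
lr = 0.1
for _ in range(300):
    H = sigmoid(X @ W1.T)
    E = H @ G1.T - Y                  # error of the *auxiliary* random readout
    # Local delta: the error reaches W1 only through the fixed projection G1
    D = (E @ G1) * H * (1.0 - H)
    W1 -= lr * D.T @ X / len(X)

print(f"local loss: {before:.3f} -> {local_loss():.3f}")
```

Because `G1` is fixed, each layer's update needs no information from layers above it, so in a deep stack every layer can be trained in parallel from its own local loss.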
\[sec:spatial\_credit\_assignment\] A related approach was explored with spiking neural networks [@Nicola_Clopath17_supelear], where separate networks provided high-dimensional temporal signals to improve learning. Local errors were recently used in SNNs in combination with the SuperSpike forward method (cf. Section \[sec:superspike\]) to overcome the temporal credit assignment problem [@Kaiser_etal18_synaplas]. As in SuperSpike, the model is simplified by using a feedforward structure and omitting the refractory dynamics in the optimization. However, the cost function was defined to operate locally on the instantaneous rates of each layer. This simplification results in a forward method whose space complexity scales as $O(N)$ (instead of $O(N^3)$ for the full forward method, $O(N^2)$ for SuperSpike, or $O(N T)$ for the backward method), while still making use of spiking neural dynamics. Thus the method constitutes a highly efficient synaptic plasticity rule for multi-layer SNNs. Furthermore, the simplifications enable the use of existing automatic differentiation methods in machine learning frameworks to systematically derive synaptic plasticity rules from task-relevant cost functions and neural dynamics (see [@Kaiser_etal18_synaplas] and included tutorials), making DCLL easy to implement. This approach was benchmarked on the DVS Gestures dataset [[(Fig. \[fig:dcll\_gestures\])]{}]{} and performs on par with standard BPTT-based SG rules.

Learning using gradients of spike times
---------------------------------------

Difficulties in training SNNs stem from the discrete nature of the quantities of interest, such as the number of spikes in a particular interval. The derivatives of these discrete quantities are zero almost everywhere, which necessitates the use of SG methods. Alternatively, we can choose to use spike-based quantities that have well-defined, smooth derivatives. One such quantity is spike times.
This capitalizes on the continuous-time nature of SNNs and results in highly sparse network activity, as the emission time of even a single spike can encode significant information. Just as importantly, spike times are continuous quantities that can be made to depend smoothly on the neuron’s input. Working with spike times is thus a complementary approach to SGs which achieves the same goal: obtaining a smooth chain of derivatives between the network’s outputs and inputs. For this example, we use non-leaky integrate-and-fire neurons described by: $$\begin{aligned} \label{eq:model_neuron} \frac{\mathrm{d}U_i}{\mathrm{d}t} = I_i \quad \text{with} \quad I_i = \sum\limits_j W_{ij}\sum\limits_r \Theta(t - t_j^r)\exp\left(-(t-t_j^r)\right)\end{aligned}$$ where $t_j^r$ is the time of the $r^\mathrm{th}$ spike from neuron $j$, and $\Theta$ is the Heaviside step function. Consider the simple *exclusive or* (XOR) problem in the temporal domain: A network receives two spikes, one from each of two different sources. Each spike can either be “early” or “late”. The network has to learn to distinguish between the case in which the spikes are either both early or both late, and the case where one spike is early and the other is late (Fig. \[fig:xor\_net\]). When designing an SNN, there is significant freedom in how the network input and output are encoded. In this case, we use a first-to-spike code in which we have two output neurons, and the binary classification result is represented by the output neuron that spikes first. Figure \[fig:xor\_sim\] shows the network’s response after training (see [@Mostafa16_supelear] for details on the training process). For the first input class (early/late or late/early), one output neuron spikes first, and for the other class (early/early or late/late), the other output neuron spikes first. \[temporal\_xor\]

Conclusion
==========

We have outlined how SNNs can be studied within the framework of RNNs and discussed successful approaches for training them.
We have specifically focused on surrogate gradient approaches for two reasons: first, surrogate gradient approaches are able to train SNNs to unprecedented performance levels on a range of real-world problems. This transition marks the beginning of an exciting time in which SNNs will become increasingly interesting for applications which were previously dominated by ANNs; second, surrogate gradients provide a framework that ties together ideas from machine learning, computational neuroscience, and neuromorphic computing. From the viewpoint of computational neuroscience, the approaches presented in this paper are appealing because several of them are related to “three-factor” plasticity rules, which are an important class of rules believed to underlie synaptic plasticity in the brain. Finally, for the neuromorphic community, surrogate gradient methods provide a way to learn under various constraints on communication and storage, which makes them highly relevant for learning on custom low-power neuromorphic devices. The spectacular successes of modern deep learning were enabled by algorithmic and hardware advances that made it possible to efficiently train large ANNs on vast amounts of data. With temporal coding, SNNs are universal function approximators that are potentially far more powerful than ANNs with sigmoidal nonlinearities. Unlike large-scale ANNs, which had to wait for several decades until the necessary computational resources were available for training them, we currently have the necessary resources, whether in the form of mainstream compute devices such as CPUs or GPUs, or custom neuromorphic devices, to train and deploy large SNNs. The fact that SNNs are less widely used than ANNs is thus primarily due to the algorithmic issue of trainability. In this article, we have provided an overview of various exciting developments that are gradually addressing the issues encountered when training SNNs. Fully addressing these issues would have immediate and wide-ranging implications, both technologically and in relation to learning in biological brains.
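As a concrete companion to the spike-time discussion, the non-leaky integrate-and-fire dynamics of Eq. \[eq:model\_neuron\] can be simulated directly with forward Euler integration. The sketch below is ours, not from the original work: the function name, the unit threshold, and the single-output-neuron setup are all illustrative assumptions.

```python
import math

def first_spike_time(input_times, weights, theta=1.0, t_max=10.0, dt=1e-3):
    """Euler-integrate dU/dt = I(t) for one non-leaky IF neuron, with
    I(t) = sum_j W_j * Theta(t - t_j) * exp(-(t - t_j)) for a single
    input spike per afferent at time t_j; return the first output
    spike time, or None if the threshold theta is never reached."""
    u, t = 0.0, 0.0
    while t < t_max:
        i_t = sum(w * math.exp(-(t - tj))            # synaptic current
                  for w, tj in zip(weights, input_times) if t >= tj)
        u += i_t * dt                                # integrate, no leak
        if u >= theta:
            return t                                 # first-to-spike readout
        t += dt
    return None

# Earlier input spikes yield an earlier output spike.
early = first_spike_time([0.0, 0.0], [1.0, 1.0])
late = first_spike_time([2.0, 2.0], [1.0, 1.0])
```

With both inputs at $t=0$ and unit weights, $U(t)=2(1-e^{-t})$ crosses the threshold at $t=\ln 2\approx 0.69$; delaying both inputs by 2 shifts the output spike by the same amount. This smooth dependence of output spike times on input timing is exactly what the first-to-spike XOR network exploits.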
Acknowledgments {#acknowledgments .unnumbered}
===============

This work was supported by the Intel Corporation (EN); the National Science Foundation under grant 1640081 (EN); the Swiss National Science Foundation Early Postdoc Mobility Grant P2ZHP2\_164960 (HM); the Wellcome Trust \[110124/Z/15/Z\] (FZ).

I. Goodfellow, Y. Bengio, and A. Courville, *Deep learning*. MIT Press, 2016. F. Zenke and S. Ganguli, “[SuperSpike]{}: [Supervised]{} [Learning]{} in [Multilayer]{} [Spiking]{} [Neural]{} [Networks]{},” *Neural Computation*, vol. 30, no. 6, pp. 1514–1541, Apr. 2018. G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass, “Long short-term memory and [Learning]{}-to-learn in networks of spiking neurons,” in *Advances in [Neural]{} [Information]{} [Processing]{} [Systems]{} 31*, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds. Curran Associates, Inc., 2018, pp. 795–805. J. Kaiser, H. Mostafa, and E. Neftci, “Synaptic plasticity for deep continuous local learning,” *arXiv preprint arXiv:1812.10766*, 2018. A. Tavanaei, M. Ghodrati, S. R. Kheradpisheh, T. Masquelier, and A. Maida, “Deep learning in spiking neural networks,” *Neural Networks*, Dec. 2018. R. Gütig, “To spike, or when to spike?” *Current Opinion in Neurobiology*, vol. 25, pp. 134–139, Apr. 2014. R.-M. Memmesheimer, R. Rubin, B. Ölveczky, and H. Sompolinsky, “Learning [Precisely]{} [Timed]{} [Spikes]{},” *Neuron*, vol. 82, no. 4, pp. 925–938, May 2014. N. Anwani and B. Rajendran, “[NormAD]{}-normalized approximate descent based supervised learning rule for spiking neurons,” in *Neural Networks (IJCNN), 2015 International Joint Conference on*. IEEE, 2015, pp. 1–8. A. Gilra and W. Gerstner, “Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network,” *eLife*, vol. 6, p. e28295, Nov. 2017. \[Online\]. Available: <https://elifesciences.org/articles/28295> W.
Nicola and C. Clopath, “Supervised learning in spiking neural networks with FORCE training,” *Nature Communications*, vol. 8, no. 1, p. 2208, Dec. 2017. K. Boahen, “A neuromorph’s prospectus,” *Computing in Science Engineering*, vol. 19, no. 2, pp. 14–28, Mar. 2017. W. Gerstner, W. M. Kistler, R. Naud, and L. Paninski, *Neuronal dynamics: From single neurons to networks and models of cognition*. Cambridge University Press, 2014. S. B. Shrestha and G. Orchard, “[SLAYER]{}: [Spike]{} [Layer]{} [Error]{} [Reassignment]{} in [Time]{},” in *Advances in [Neural]{} [Information]{} [Processing]{} [Systems]{} 31*, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds. Curran Associates, Inc., 2018, pp. 1419–1428. R. J. Williams and D. Zipser, “A learning algorithm for continually running fully recurrent neural networks,” *Neural Computation*, vol. 1, no. 2, pp. 270–280, 1989. G. Bellec, F. Scherr, E. Hajek, D. Salaj, R. Legenstein, and W. Maass, “Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets,” *arXiv:1901.09049 \[cs\]*, Jan. 2019. \[Online\]. Available: <http://arxiv.org/abs/1901.09049> M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio, “Binarized [Neural]{} [Networks]{}: [Training]{} [Deep]{} [Neural]{} [Networks]{} with [Weights]{} and [Activations]{} [Constrained]{} to +1 or -1,” *arXiv:1602.02830 \[cs\]*, Feb. 2016. Y. Bengio, N. Léonard, and A. Courville, “Estimating or [Propagating]{} [Gradients]{} [Through]{} [Stochastic]{} [Neurons]{} for [Conditional]{} [Computation]{},” *arXiv:1308.3432 \[cs\]*, Aug. 2013. S. K. Esser, P. A. Merolla, J. V. Arthur, A. S. Cassidy, R. Appuswamy, A. Andreopoulos, D. J. Berg, J. L. McKinstry, T. Melano, D. R. Barch, C. di Nolfo, P. Datta, A. Amir, B. Taba, M. D. Flickner, and D. S. Modha, “Convolutional networks for fast, energy-efficient neuromorphic computing,” *Proc Natl Acad Sci U S A*, vol. 113, no. 41, pp. 11441–11446, Oct.
2016. S. M. Bohte, “,” in **, ser. Lecture [Notes]{} in [Computer]{} [Science]{}. Springer, Berlin, Heidelberg, Jun. 2011, pp. 60–68. S. B. Shrestha and G. Orchard, “Slayer: Spike layer error reassignment in time,” *arXiv preprint arXiv:1810.08646*, 2018. L. F. Abbott, B. DePasquale, and R.-M. Memmesheimer, “Building functional networks of spiking model neurons,” *Nature Neuroscience*, vol. 19, no. 3, pp. 350–355, Mar. 2016. D. Huh and T. J. Sejnowski, “Gradient [Descent]{} for [Spiking]{} [Neural]{} [Networks]{},” in *Advances in [Neural]{} [Information]{} [Processing]{} [Systems]{} 31*, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds. Curran Associates, Inc., 2018, pp. 1440–1450. D. Ackley, G. Hinton, and T. Sejnowski, “A learning algorithm for [Boltzmann]{} machines,” *Cognitive Science: A Multidisciplinary Journal*, vol. 9, no. 1, pp. 147–169, 1985. J.-P. Pfister, T. Toyoizumi, D. Barber, and W. Gerstner, “Optimal [Spike]{}-[Timing]{}-[Dependent]{} [Plasticity]{} for [Precise]{} [Action]{} [Potential]{} [Firing]{} in [Supervised]{} [Learning]{},” *Neural Computation*, vol. 18, no. 6, pp. 1318–1348, Apr. 2006. B. Gardner, I. Sporea, and A. Grüning, “Learning [Spatiotemporally]{} [Encoded]{} [Pattern]{} [Transformations]{} in [Structured]{} [Spiking]{} [Neural]{} [Networks]{},” *Neural Computation*, vol. 27, no. 12, pp. 2548–2586, Oct. 2015. J. Guerguiev, T. P. Lillicrap, and B. A. Richards, “Towards deep learning with segregated dendrites,” *eLife*, vol. 6, p. e22901, Dec. 2017. J. Brea, W. Senn, and J.-P. Pfister, “,” *Journal of Neuroscience*, vol. 33, no. 23, pp. 9565–9575, Jun. 2013. D. J. Rezende and W. Gerstner, “Stochastic variational learning in recurrent spiking networks,” *Frontiers in Computational Neuroscience*, vol. 8, p. 38, 2014. H. Mostafa and G. Cauwenberghs, “A learning framework for winner-take-all networks with stochastic synapses,” *Neural Computation*, vol. 30, no. 6, pp. 1542–1572, 2018. E. Hunsberger and C. Eliasmith, “Spiking deep networks with LIF neurons,” *arXiv preprint arXiv:1510.08829*, 2015. E. O.
Neftci, C. Augustine, S. Paul, and G. Detorakis, “Event-driven random back-propagation: Enabling neuromorphic deep learning machines,” *Frontiers in Neuroscience*, vol. 11, p. 324, 2017. J. H. Lee, T. Delbruck, and M. Pfeiffer, “Training deep spiking neural networks using backpropagation,” *Frontiers in Neuroscience*, vol. 10, 2016. S. M. Bohte, J. N. Kok, and H. La Poutre, “Error-backpropagation in temporally encoded networks of spiking neurons,” *Neurocomputing*, vol. 48, no. 1, pp. 17–37, 2002. H. Mostafa, “Supervised learning based on temporal coding in spiking neural networks,” *IEEE Transactions on Neural Networks and Learning Systems*, vol. 29, no. 7, pp. 3227–3235, 2018. P. O’Connor, E. Gavves, and M. Welling, “Temporally efficient deep learning with spikes,” *arXiv preprint arXiv:1706.04159*, 2017. S. Wo[ź]{}niak, A. Pantazi, and E. Eleftheriou, “Deep networks incorporating spiking neural dynamics,” *arXiv preprint arXiv:1812.07040*, 2018. T. P. Lillicrap, D. Cownden, D. B. Tweed, and C. J. Akerman, “Random synaptic feedback weights support error backpropagation for deep learning,” *Nature Communications*, vol. 7, 2016. A. N[ø]{}kland, “Direct feedback alignment provides learning in deep neural networks,” in *Advances in Neural Information Processing Systems 29*, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, Eds. Curran Associates, Inc., 2016, pp. 1037–1045. P. Baldi and P. Sadowski, “A theory of local learning, the learning channel, and the optimality of backpropagation,” *Neural Networks*, vol. 83, pp. 51–74, 2016. H. Mostafa, V. Ramesh, and G. Cauwenberghs, “Deep supervised learning using local errors,” *Frontiers in Neuroscience*, vol. 12, p. 608, 2018. W. Nicola and C. Clopath, “Supervised learning in spiking neural networks with FORCE training,” *Nature Communications*, vol. 8, no. 1, p. 2208, 2017.
---
abstract: 'Examining games from a fresh perspective, we present the idea of game-inspired and game-based algorithms, dubbed *gamorithms*.'
author:
- 'Moshe Sipper and Jason H. Moore[^1] [^2] [^3] [^4] [^5]'
title: Gamorithm
---

Index terms: game, algorithm.

> Le véritable voyage de découverte ne consiste pas à chercher de nouveaux paysages, mais à avoir de nouveaux yeux.
>
> —Marcel Proust[^6]

Applaud. Beat. Bet. Challenge. Cheat. Coach. Compete. Defend. Design. Draw. End. Enjoy. Entertain. Exhaust. Fear. Fight. Fix. Gamble. Guess. Hack. Interact. Invent. Jeopardize. Kick. Kill. Like. Lose. Love. Maneuver. Manipulate. Motivate. Navigate. Observe. Optimize. Outplay. Participate. Plan. Play. Program. Qualify. Quit. Race. Risk. Search. Solve. Threaten. Tie. Try. Unravel. Vie. Watch. Win. Xpeke.[^7] Yield. Zoom.

This assortment of seemingly random actions can all be associated with *games*, one of the most ubiquitous of human endeavors. “Attested as early as 2600 BC, games are a universal part of human experience and present in all cultures.” [@wiki:Game] Games have been a subject of intense research for decades, both in academia and in industry. The field of artificial and computational intelligence (AI/CI) in games alone admits (at least) 10 broad areas [@lucas2012artificial; @yannakakis2015panorama]: 1) nonplayer character (NPC) behavior learning; 2) search and planning; 3) player modeling; 4) games as AI benchmarks; 5) procedural content generation; 6) computational narrative; 7) believable agents; 8) AI-assisted game design; 9) general game AI; 10) AI in commercial games. And beyond AI/CI there are many other areas of game research: game theory (which models conflict and cooperation between intelligent, rational decision-makers [@von1945theory]), social and psychological analysis, historical investigations,[^8] and so forth.
“An algorithm is an abstract recipe, prescribing a process that might be carried out by a human, by a computer, or by other means.” [@harel2004algorithmics] Replacing “algorithm” with “game” in this latter definition underscores the similarity of the two concepts. Perchance we might view the area of games under a different light? Specifically, might games not offer us a vast reservoir of potential algorithmic ideas, or *gamorithms*? The wide diversity of game characteristics offers in turn a wide scope for inspiring novel algorithms. Games can be: 1-, 2-, or $n$-player; discrete or continuous; deterministic or stochastic; played by individuals or by teams; defined tersely (e.g., tic-tac-toe) or wordily (e.g., the game of football, with its 93-page rulebook [@nfl-rulebook]). “The game is afoot,” quipped Sherlock Holmes,[^9] who stated even more famously, “Elementary”—the latter qualifier of which captures the simple essence of our proposed idea:

> *To solve a computational problem, find or design anew a game-based algorithm.*

Problem? Gamorithm. In his seminal paper, “Computing machinery and intelligence”, Turing [@turing1950] devoted a section to a “Critique of the New Problem”, writing, “As well as asking, ‘What is the answer to this new form of the question’, one may ask, ‘Is this new question a worthy one to investigate?’” Following this illustrious example we, too, wish to address potential critiques, some due to sagacious comments made by the reviewers of the first draft.

*Isn’t this simply Serious games?* Serious games is an area dealing with games that do not have entertainment as their primary purpose [@michael2006; @djaouti2011classifying]. These games appear in diverse areas such as healthcare, defense, education, and more. To mention but two examples, Foldit is an online puzzle game about protein folding [@khatib2011] and the Google Image Labeler is an image-labeling game [@wiki:Google-image].
Serious games is a broad field, whereas with gamorithms we wish to focus on algorithmic problem solving.

*What about gamification?* The application of game-design elements and game principles in non-game contexts—*gamification*—is another very broad area [@deterding2011game]. This field is probably even broader in scope than serious games, including such disparate cases as Google Local Guides members scoring points and climbing through levels as they upload reviews and photos to Google Maps, and gaming elements in fitness apps (e.g., getting points and badges for various activities). Again, in contrast, gamorithms are far more focused in scope.

*Are gamorithms simulation games?* In a *simulation game*, be it video (e.g., a flight simulator) or real-life (e.g., roleplay), the object is to simulate real-world activities, not solve problems (although, as part of the simulation, problems might be made available for the solving).

*“Why not take an NP-complete problem and transform it into a puzzle?”* This question was posed by [@kendall2008] in a survey of NP-complete puzzles. This is an astute step in the gamorithmic direction, though our net is cast wider, with our interest going beyond NP-complete problems and beyond puzzles.

*Is there any relation to models in game theory?* A gamorithm is not a *model* as in game theory, e.g., the iterated prisoner’s dilemma that models cooperation between completely “rational” individuals [@axelrod1984].

*Alice and Bob come to mind.* A gamorithm is not a form of “Alice and Bob” scenario, which is used in fields such as physics and cryptography for convenience and to aid comprehension [@rivest1978method].

*The examples of gamorithms provided below are not convincing*. This paper resulted from a prolonged period of brainstorming on our part, and we fully acknowledge that we have not yet brought ironclad answers, nor is that our intention.
Rather, our aim herein is to raise questions and point to a wealth of possibilities. To put it metaphorically, we perceive this paper to have cracked open a new egg, and appeal to the effervescent games research community to join us in the making of a tasty omelette. While one might pick at this example or that, we hope that the totality of them all will serve to stir enough interest and thus pass the baton of brainstorming, as it were.

*This is good old-fashioned algorithmics—what is gained by framing problems as games?* Not for naught did we begin this paper with Proust’s adage regarding new eyes. Given that the idea expounded herein is embedded well within the fabric of the field of algorithmics, one can indeed choose to negate its novelty. We feel (and of course this is quite open to debate) that viewing algorithms through gamified glasses offers a beneficent new perspective. New algorithmic vistas open up when one views a phenomenon, a mathematical theory, a scientific field, or, for that matter, any human or natural endeavor, in a novel way. To mention but a few well-worn examples: considering evolution by natural selection through algorithmic spectacles led to evolutionary algorithms; asking whether “wet” neural networks in the brain might be an effective source of inspiration for in silico computing brought forth artificial neural networks; envisioning the direct use of quantum-mechanical phenomena—such as superposition and entanglement—to perform operations on data gave birth to quantum computing; questioning the binary nature of Boolean logic resulted in fuzzy logic [@sipper02mn].

*So casting a problem’s solution as a gamorithm will help me in some way?* Yes, we believe it will, because this casting might provide a possible algorithmic solution (or a path leading to one). The fun factor of games may well motivate research into problems of interest.
Moreover, a gamorithmic approach affords us the opportunity to bring the massive amount of research into game-playing and game-solving algorithms to bear. Superb algorithms for playing many types of games are now available: board games, card games, dice games, role-playing games, strategy games, video games, first-person shooter games, mathematical games, and many more. A connection forged between a problem and a game might just form a useful bridge. This is somewhat similar to the concept of reduction in computational complexity theory, wherein one transforms one problem into another, e.g., graph coloring can be reduced to SAT (the satisfiability problem) [@garey1979]. Can the recently introduced superb machine Go player [@silver2017] serve another purpose and solve a computational problem of interest? And at a completely different end of the game spectrum, can a massively multiplayer online game like “World of Warcraft” solve a computational problem? Having averred what a gamorithm is not and addressed several critiques, and bearing in mind that our interest lies in *computing*, *approximating*, and *solving* problems, we now provide nine gamorithmic examples by way of proof-of-concept. Note that the problems addressed need not in any way be games or game-related.

<span style="font-variant:small-caps;">Problem:</span> Generate two *random paths* through a map (or graph) with a single cross point. <span style="font-variant:small-caps;">Gamorithm:</span> *Hex*, a 2-player strategy board game, where players alternate placing pieces on unoccupied spaces of a board, attempting to link their opposite sides in an unbroken chain (Figure \[fig-hex\]). Graph problems in general may be well suited to connection games [@wiki:Connection-game], a category of which Hex is a prominent member.

<span style="font-variant:small-caps;">Problem:</span> *Graph coloring* is an assignment of labels (“colors”) to elements of a graph subject to certain constraints.
For example, in the vertex-coloring problem each vertex must be assigned a color such that no two adjacent vertices (i.e., with a common edge) share the same color. This problem is usually NP-complete, although some special cases can be solved in polynomial time [@malaguti2010survey]. <span style="font-variant:small-caps;">Gamorithm:</span> In the 1950s, Claude Shannon invented an abstract strategy game for two players, known as the *Shannon switching game*. The game is played on a finite graph between two alternating players, *Cut* and *Short*, the former deleting a non-colored edge in her turn, the latter coloring any edge still left in his turn. There are two special nodes, $A$ and $B$, where Cut wins if she turns the graph into one where $A$ and $B$ are no longer connected, and Short wins if he manages to create a colored path from $A$ to $B$. An explicit solution was found in 1964 [@lehman1964]. We might invent new games on graphs, such as a “relative” of the switching game, dubbed *Ver Teqs*, wherein two players alternately place colored pieces on the graph, respecting the no-adjacent-same-color rule (Figure \[fig-vertex\]). A player who finds herself in a position where she must break the rule loses. If all vertices are colored legally, a tie is reached—as well as a solution to our problem. Note that even a game that does not reach a tie may still provide an acceptable, approximate solution.

<span style="font-variant:small-caps;">Problem:</span> *Imputation of missing data* in a table of data values [@gelman_hill_2006], given various constraints on placement within rows, columns, and specific regions. <span style="font-variant:small-caps;">Gamorithm:</span> Forms of *Latin square* [@wiki:Latin-square] games come to mind, one prime example being *Sudoku* (Figure \[fig-sudoku\]).
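Returning to the *Ver Teqs* game proposed above for vertex coloring, a minimal sketch follows. The greedy strategy (players pick a random legal color, visiting vertices in a fixed order) and the function name are our own illustrative simplifications, not part of the game as stated:

```python
import random

def ver_teqs(adj, colors, rng=random.Random(0)):
    """Play one game of Ver Teqs on the graph given as an adjacency dict.
    Players alternately color the next uncolored vertex with a random
    legal color. Returns (coloring, loser): loser is the player index who
    could not move legally, or None on a tie, i.e. a proper coloring."""
    coloring, player = {}, 0
    for v in sorted(adj):
        legal = [c for c in colors
                 if all(coloring.get(u) != c for u in adj[v])]
        if not legal:
            return coloring, player      # current player must break the rule
        coloring[v] = rng.choice(legal)
        player = 1 - player
    return coloring, None                # tie: every vertex colored legally

# A 4-cycle is 2-colorable, so this game always ends in a tie,
# and the tie is itself a proper 2-coloring of the graph.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
coloring, loser = ver_teqs(adj, ["red", "blue"])
```

As noted above, a game that ends with a loser may still leave a useful partial coloring; restarting with more colors (or a different move order) is one obvious refinement.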
<span style="font-variant:small-caps;">Problem:</span> *Packing problems* are a class of optimization problems that involve attempting to pack objects together into bins or containers [@dyckhoff1990]. Usually, the goal is either to pack a single container as compactly as possible or pack all objects using as few containers as possible. In dynamic problems, objects arrive over time and repacking may or may not be allowed [@coffman1983]. <span style="font-variant:small-caps;">Gamorithm:</span> We propose *Tetris*-like games (Figure \[fig-tetris\]) as gamorithms for solving dynamic packing problems with no repacking [@dyckhoff1990; @berndt2014; @gupta2017].

<span style="font-variant:small-caps;">Problem:</span> With the age of ubiquitous autonomous vehicles well-nigh upon us, one might imagine giant parking lots (perhaps at city edges), where self-driving cars and trucks plunk down at night. This may well engender interesting problems of *compact packing* and *complex routing*. For example, what if a need for a specific car arises, which must thereupon make its way through the mass of parked vehicles all the way to the exit? <span style="font-variant:small-caps;">Gamorithm:</span> *Rush Hour* is a board game that asks precisely that question, with computational intelligence solutions to boot [@hauptman2009gp; @Sipper2011Win] (Figure \[fig-rushhour\]).

<span style="font-variant:small-caps;">Problem:</span> *Polynomial regression*, a common problem in which the relationship between the independent variable $x$ and the dependent variable $y$ is modelled as an $n$th degree polynomial in $x$. <span style="font-variant:small-caps;">Gamorithm:</span> A form of *tennis* match can be adapted to serve as a gamorithm for this problem. Consider the example in Figure \[fig-tennis\], where we are given a table of independent and dependent variables, drawn from the polynomial $y=ax+b$, with $a=0.4$ and $b=0.3$. The goal is to find $a$ and $b$.
We conduct a tennis match in the search space of $a,b\in[0,1]$, where the ball represents a pair of $\{a,b\}$ values. Each of the two players’ sides of the court is a plane representing the search space. The quality of a shot is calculated when the ball lands in a player’s court, by means of a specified cost function (mean absolute error, root mean squared error, etc.). The mechanics of the game are handled by a tennis controller (simulator), whose dynamics can be as simple or as complex as we wish. At the simple end an elementary formula might be used to calculate a player’s response strike in terms of, say, speed and angle, using the shot’s quality alone; at the complex end one might implement full-blown physics, with sophisticated player strategies that use more information and memory to calculate a strike. The tennis gamorithm can be generalized in any number of ways, e.g., by adding court dimensions (thus increasing the polynomial degree), by adding players, and by adding nets (i.e., the court is not divided into two halves but into $n$ partitions, whereupon new playing rules need to be defined). Interestingly, we are not concerned herein with which player wins the game but rather with having both players cooperate through competition to solve a problem. Essentially, the best shot in the game (i.e., lowest cost-function value) is our trophy.

<span style="font-variant:small-caps;">Problem:</span> Given an undirected graph $G(V, E)$, a *global min-cut* is a partition of $V$ into two subsets $(A, B)$ such that the number of edges between $A$ and $B$ is minimized [@karger93]. <span style="font-variant:small-caps;">Gamorithm:</span> In *Jenga*, players take turns removing one block at a time from a tower constructed of 54 blocks. Each block removed is then placed on top of the tower, creating a progressively taller and more unstable structure (Figure \[fig-jenga\]).
The game ends when the tower falls, or if any piece falls from the tower other than the piece being removed to move to the top. The winner is the last person to successfully remove and place a block. Jenga-like games may well fit the gamorithmic bill where min-cut problems are concerned, with the objective being to partition graphs—or, more generally, multi-piece objects—with a minimal number of operations, cuts, or moves. (There are various other kinds of cut problems, e.g., in minimum $k$-cut one seeks a set of edges whose removal would partition the graph into $k$ connected components.)

<span style="font-variant:small-caps;">Problem:</span> *Facility location problems*, studied in operations research and computational geometry, are concerned with the optimal placement of facilities to minimize transportation costs while considering factors like avoiding placing hazardous materials near housing, and the facilities of competitors [@guha1999]. <span style="font-variant:small-caps;">Gamorithm:</span> *Monopoly* is a popular board game where players move at random (based on a dice roll), developing and selling properties on a game board (Figure \[fig-monopoly\]). The game shares some basic commonalities with facility location and might be adapted to form a gamorithm, by customizing the rules and board such that players compete to attain a tenable solution to the original problem. This design need not be as daunting a task as one might think. As is often the case with computational problems, we can begin with a simple scenario (e.g., a small number of facilities and basic rules) and gradually work our way up to a more complex game.

<span style="font-variant:small-caps;">Problem:</span> More of a meta-problem, we propose the use of *virtual-world* games (e.g., video games) as problem solvers.
<span style="font-variant:small-caps;">Gamorithm:</span> With virtual worlds one usually relies on extensive algorithmic creation of game content to build and populate the scene, so-called *procedural content generation* (PCG) [@shaker2016]. What if, instead, the content represented the search space of a problem of interest, e.g., a complex, real-valued optimization problem [@liang2014] (think of playing Minecraft in the world of Figure \[fig-ackley\])? Might there be a beneficial commonality between making one’s way (intelligently) through a virtual world and searching (intelligently) through a search space? Having quoted Alan Turing earlier, it is perhaps fitting to end with Turing-Award winner Judea Pearl, who recently wrote: “One final comment about these ‘games’ ... they are quite obviously not games but serious business. I have referred to them as games because the joy of being able to solve them swiftly and meaningfully is akin to the pleasure a child feels on figuring out that he can crack puzzles that stumped him before. Few moments in a scientific career are as satisfying as taking a problem that has puzzled and confused generations of predecessors and reducing it to a straightforward game or algorithm.” [@pearl2018] We believe that virtually any game has the potential of leading to a gamorithm that will solve some problem or other. The list of games available for examination is quite large, so much so in fact that Wikipedia’s game-list entry is actually a list *of lists* [@wiki:Lists-of-games]. And, fulsome as it is, this list contains but extant games, leaving an infinitude of new games—and gamorithms—yet to be imagined. > “Prepare for unforeseen consequences.” > > —*Half-Life 2: Episode Two* (video game) Acknowledgment {#acknowledgment .unnumbered} ============== We are grateful to the anonymous reviewers for their helpful comments. This work was supported by National Institutes of Health (USA) grants LM010098 and AI116794. 
Wikipedia, “Game,” 2017. \[Online\]. Available: <https://en.wikipedia.org/wiki/Game> S. M. Lucas, M. Mateas, M. Preuss, P. Spronck, and J. Togelius, “Artificial and computational intelligence in games ([D]{}agstuhl seminar 12191),” in *Dagstuhl Reports*. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2012. G. N. Yannakakis and J. Togelius, “A panorama of artificial and computational intelligence in games,” *IEEE Transactions on Computational Intelligence and AI in Games*, vol. 7, no. 4, pp. 317–335, 2015. J. Von Neumann and O. Morgenstern, *Theory of Games and Economic Behavior*. Princeton, NJ: Princeton University Press, 1945. C. Browne, “Back to the past: Ancient games as a new [AI]{} frontier,” in *AAAI Workshops*, 2017. \[Online\]. Available: <https://aaai.org/ocs/index.php/WS/AAAIW17/paper/view/15063> D. Harel and Y. A. Feldman, *Algorithmics: The Spirit of Computing*. Pearson Education, 2004. “[2017 Official Playing Rules of the National Football League]{},” 2017. \[Online\]. Available: <https://operations.nfl.com/media/2646/2017-playing-rules.pdf> A. M. Turing, “Computing machinery and intelligence,” *Mind*, vol. LIX, no. 236, pp. 433–460, 1950. D. Michael and S. Chen, *Serious Games: Games that Educate, Train, and Inform*. Boston, MA: Thompson Course Technology, PTR, 2006. D. Djaouti, J. Alvarez, and J.-P. Jessel, “Classifying serious games: The g/p/s model,” *Handbook of Research on Improving Learning and Motivation through Educational Games: Multidisciplinary Approaches*, vol. 2, pp. 118–136, 2011. F. Khatib, S. Cooper, M. D. Tyka, K. Xu, I. Makedon, Z. Popović, D. Baker, and [Foldit Players]{}, “Algorithm discovery by protein folding game players,” *Proceedings of the National Academy of Sciences*, vol. 108, no. 47, pp. 18949–18953, 2011. Wikipedia, “Google image labeler,” 2017.
\[Online\]. Available: <https://en.wikipedia.org/wiki/Google_Image_Labeler> S. Deterding, D. Dixon, R. Khaled, and L. Nacke, “From game design elements to gamefulness: Defining gamification,” in *Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments*. ACM, 2011, pp. 9–15. G. Kendall, A. J. Parkes, and K. Spoerer, “A survey of [NP]{}-complete puzzles,” *ICGA Journal*, vol. 31, no. 1, pp. 13–34, 2008. R. M. Axelrod, *The Evolution of Cooperation*. New York: Basic Books, 1984. R. L. Rivest, A. Shamir, and L. Adleman, “A method for obtaining digital signatures and public-key cryptosystems,” *Communications of the ACM*, vol. 21, no. 2, pp. 120–126, 1978. M. Sipper, *Machine Nature: The Coming Age of Bio-Inspired Computing*. New York: McGraw-Hill, 2002. M. R. Garey and D. S. Johnson, *Computers and Intractability: A Guide to the Theory of NP-Completeness*. New York, NY, USA: W. H. Freeman & Co., 1979. D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis, “Mastering the game of [Go]{} without human knowledge,” *Nature*, vol. 550, pp. 354–359, 2017. Wikipedia, “Connection game,” 2017. \[Online\]. Available: <https://en.wikipedia.org/wiki/Connection_game> E. Malaguti and P. Toth, “A survey on vertex coloring problems,” *International Transactions in Operational Research*, vol. 17, no. 1, pp. 1–34, 2010. A. Lehman, “A solution of the [S]{}hannon switching game,” *Journal of the Society for Industrial and Applied Mathematics*, vol. 12, no. 4, pp. 687–725, 1964. A. Gelman and J. Hill, *Data Analysis Using Regression and Multilevel/Hierarchical Models*, ser. Analytical Methods for Social Research. Cambridge University Press, 2006.
Wikipedia, “Latin square,” 2017. \[Online\]. Available: <https://en.wikipedia.org/wiki/Latin_square>
H. Dyckhoff, “A typology of cutting and packing problems,” *European Journal of Operational Research*, vol. 44, no. 2, pp. 145–159, 1990.
E. G. Coffman, Jr., M. R. Garey, and D. S. Johnson, “Dynamic bin packing,” *SIAM Journal on Computing*, vol. 12, no. 2, pp. 227–258, 1983.
S. Berndt, K. Jansen, and K. Klein, “Fully dynamic bin packing revisited,” 2014. \[Online\]. Available: <http://arxiv.org/abs/1411.0960>
A. Gupta, G. Guruganesh, A. Kumar, and D. Wajc, “Fully-dynamic bin packing with limited repacking,” 2017. \[Online\]. Available: <https://arxiv.org/abs/1711.02078>
A. Hauptman, A. Elyasaf, M. Sipper, and A. Karmon, “GP-Rush: Using genetic programming to evolve solvers for the Rush Hour puzzle,” in *Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation*. ACM, 2009, pp. 955–962.
M. Sipper, *Evolved to Win*. Lulu, 2011, available at <http://www.moshesipper.com/evolved-to-win.html>.
D. R. Karger, “Global min-cuts in RNC, and other ramifications of a simple min-cut algorithm,” in *Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms*, ser. SODA ’93. Society for Industrial and Applied Mathematics, 1993, pp. 21–30.
S. Guha and S. Khuller, “Greedy strikes back: Improved facility location algorithms,” *Journal of Algorithms*, vol. 31, no. 1, pp. 228–248, 1999.
N. Shaker, J. Togelius, and M. J. Nelson, *Procedural Content Generation in Games: A Textbook and an Overview of Current Research*. Springer, 2016.
J. J. Liang, B. Y. Qu, P. N. Suganthan, and Q. Chen, “Problem definitions and evaluation criteria for the CEC 2015 competition on learning-based real-parameter single objective optimization,” Zhengzhou University, Zhengzhou, China and Nanyang Technological University, Singapore, Tech. Rep.
201411A, 2014.
J. Pearl and D. Mackenzie, *The Book of Why: The New Science of Cause and Effect*. New York, NY: Basic Books, 2018.
Wikipedia, “Lists of games,” 2017. \[Online\]. Available: <https://en.wikipedia.org/wiki/Lists_of_games>

[^1]: IEEE Transactions on Games, DOI (identifier) 10.1109/TG.2018.2867743

[^2]: M. Sipper is with the Institute for Biomedical Informatics (IBI), Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104 and the Department of Computer Science, Ben-Gurion University, Beer-Sheva 8410501, Israel.

[^3]: J. H. Moore is with the Institute for Biomedical Informatics (IBI), Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104.

[^4]: Corresponding author: M. Sipper ([www.moshesipper.com/contact.html](www.moshesipper.com/contact.html)).

[^5]: Manuscript received ; revised .

[^6]: The real voyage of discovery consists not in seeking new landscapes, but in having new eyes.

[^7]: Defined by Urban Dictionary as the *action of solo-taking a “naked Nexus” in League of Legends game*.

[^8]: Of which the recent work by [@browne2017], proposing the study of ancient games as a new frontier for game AI research, is particularly illuminating.

[^9]: And the Earl of Northumberland in Shakespeare’s *Henry IV* before him.
--- abstract: 'We propose models of Dark Matter that account for the 511 keV photon emission from the Galactic Centre, compatibly with experimental constraints and theoretical consistency, and where the relic abundance is achieved via $p$-wave annihilations or, in inelastic models, via co-annihilations. Due to the Dark Matter component that is inevitably upscattered by the Sun, these models generically predict keV electron recoils at detectors on Earth, and could naturally explain the excess recently reported by the XENON1T collaboration. The very small number of free parameters makes these ideas testable by detectors like XENONnT and Panda-X, by accelerators like NA64 and LDMX, and by cosmological surveys like the Simons Observatory and CMB-S4. As a byproduct of our study, we recast NA64 limits on invisibly decaying dark photons to other particles.' author: - Yohei Ema - Filippo Sala - Ryosuke Sato bibliography: - '511keVline\_keVrecoils.bib' title: | Dark matter models for the 511 keV galactic line\ predict keV electron recoils on Earth --- #### **Introduction.** {#introduction. .unnumbered} Data that deviate from standard predictions are the lifeblood of progress in physics. The past few decades have seen a plethora of such observational ‘anomalies’, both in cosmic rays and in underground detectors, that could have been explained by some property of particle Dark Matter (DM). None of them has so far been enough to claim the discovery of a new DM property, because of the possible alternative explanations in terms of new astrophysical sources, of underestimated systematics, etc., often flavored with a healthy dose of skepticism. An awareness has therefore emerged that the confirmation of a DM origin for some anomaly would require, as a necessary condition, that many anomalies are intimately linked together within a single model of DM. It is the purpose of this letter to point out one such link.
Not only do we propose DM models that explain the observed 511 keV line from the Galactic Centre (GC) [@Prantzos:2010wi; @Siegert:2015knp; @Kierans:2019aqz], but we also show that they predict electron recoils with energies of the order of a keV, of the right intensity and spectrum to be observed by XENON1T [@Aprile:2019xxb; @Aprile:2020tmw] and to explain the excess seen in [@Aprile:2020tmw]. Our spirit in writing this paper is not to abandon the skepticism praised above, but rather to add an interesting –in our opinion– piece of information to the debates surrounding both datasets. #### **The 511 keV galactic line.** {#the-511kev-galactic-line. .unnumbered} A 511 keV photon line emission in the galaxy has been observed since the 1970s; recent measurements include those with the SPI spectrometer on the INTEGRAL observatory [@Siegert:2015knp] and with the COSI balloon telescope [@Kierans:2019aqz], see [@Prantzos:2010wi] for an earlier review. The signal displays two components of comparable intensity, one along the galactic disk and one in the bulge, the latter with an extension of $O(10^\circ)$ around the galactic center (GC), strongly peaked, corresponding to a flux of $\simeq 10^{-3}$ photons cm$^{-2}$ sec$^{-1}$ [@Siegert:2015knp]. The line is attributed to the annihilation of $e^+ e^-$ into $\gamma\gamma$ via positronium formation; it thus requires sources injecting positrons in the regions where the emission is seen, with injection energy smaller than about 3 MeV [@Beacom:2005qv]. The emission from the galactic disk has been tentatively explained with positron injection from the decay of isotopes coming from nucleosynthesis in stars (see e.g. [@Prantzos:2010wi; @Bartels:2018eyb]), while the origin of the emission in the bulge is still the object of debate (‘one of the most intriguing problems in high energy astrophysics’ [@Prantzos:2010wi]).
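For orientation, the quoted bulge flux can be translated into a total 511 keV photon emission rate. The following is a rough estimate only, under assumptions not stated in the text: a distance to the GC of 8.5 kpc and isotropic, point-source-like emission.

```python
# Rough 511 keV photon luminosity of the galactic bulge implied by the
# quoted flux of ~1e-3 photons cm^-2 s^-1.
# Assumptions (ours, illustrative): distance to the GC d = 8.5 kpc,
# isotropic point-source-like emission.
import math

flux = 1e-3                     # photons cm^-2 s^-1 (bulge component)
kpc_in_cm = 3.086e21
d = 8.5 * kpc_in_cm             # distance to the Galactic Centre, in cm

luminosity = flux * 4 * math.pi * d**2   # photons s^-1
print(f"{luminosity:.1e} photons/s")     # ~9e42 photons/s
```

This order of magnitude, $\sim 10^{43}$ photons per second, sets the scale of the positron injection rate that any source, DM or astrophysical, must supply.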
Recent proposals to explain the positron injection include, for example, low-mass X-ray binaries [@Bartels:2018eyb] and Neutron Star mergers [@Fuller:2018ttb]. #### **The 511 line and Dark Matter: preliminaries.** {#the-511-line-and-dark-matter-preliminaries. .unnumbered} Given that the origin of the bulge 511 keV line has not yet been clarified, and given that DM exists in our galaxy, it makes sense to entertain the possibility that the latter is responsible for the former. A DM origin for the positron injection in the bulge has indeed been investigated since [@Boehm:2003bt]. The morphology of the signal excludes DM decays in favor of annihilations, see e.g. [@Vincent:2012an]. The 511 keV line emission in the galactic bulge could be accounted for by self-conjugate DM annihilations into an $e^+e^-$ pair with $$\langle\sigma v\rangle_{511} \simeq 5\cdot 10^{-31} \left(\frac{\MDM}{\text{MeV}}\right)^{2} \frac{\text{cm}^3}{\text{sec}}\,, \label{eq:fit511}$$ where we have used the best fit provided in [@Vincent:2012an] for an NFW DM density profile, as an indicative benchmark. Different profile shapes and the use of new data for the line could change the precise value of $\langle\sigma v\rangle_{511}$, which however is not crucial for the purpose of this paper. The need for a positron injection energy smaller than 3 MeV [@Beacom:2005qv] implies that, unless one relies on cascade annihilations [@Jia:2017iyc], $\MDM \lesssim 3$ MeV. Since such small values of $\MDM$ have been found to be in conflict with cosmological observations, a simple DM-annihilation origin of the 511 keV line has been claimed excluded in [@Wilkinson:2016gsy]. Recently, however, the refined analysis of [@Sabti:2019mhn] found that values of $\MDM$ down to $\sim 1$ MeV can be made consistent with CMB and BBN, by means of a small extra neutrino injection in the early universe, simultaneous with the electron one from the DM annihilations. We will rely on this new result in building DM models for the 511 keV line. Eq.
(\[eq:fit511\]) clarifies that $s$-wave DM annihilation cannot explain the 511 keV line, because such small cross sections imply overclosure of the universe. To be compatible with a thermal generation of the DM abundance, one therefore needs annihilation cross sections in the early universe much larger than today in the GC. This is realised for example in two simple pictures, where the DM relic abundance is set by: - $p$-wave annihilations; - coannihilations with a slightly heavier partner. We will build explicit DM models that realise each of them in the next two paragraphs. #### **DM for the 511 keV line: $p$-wave.** {#dm-for-the-511kev-line-p-wave. .unnumbered} Using $\langle\sigma v\rangle_\text{relic}^{(p)}(\MDM=2~\text{MeV}) \simeq 2.2\cdot 10^{-25} v_\text{rel}^2 \text{cm}^3/\text{sec}$ [@Saikawa:2020swg], we find $$\MDM^{(p)} \simeq 2~\text{MeV} \left(\frac{\langle v_\text{rel}^2\rangle^{1/2}_\text{bulge}}{240~\text{km/s}}\right), \label{eq:MDMpwave_FO511}$$ where we have normalised $\langle v_\text{rel}^2\rangle^{1/2}_\text{bulge}$ to the value obtained from the velocity dispersion in the bulge $\sigma \simeq 140$ km/s [@Valenti_2018], and where we have assumed that the dominant annihilation channel at freeze-out is $e^+ e^-$. Note that the preferred DM mass would be the same for non-self-conjugate annihilating DM, for which both $\langle\sigma v\rangle_{511}$ and $\langle\sigma v\rangle_\text{relic}$ are larger by a factor of 2. An explicit model realising this picture consists of a Majorana fermion $\chi$ as DM candidate, whose interactions with electrons are mediated by a real scalar $S$ via the low-energy Lagrangian (we use 2 component spinor notation throughout this work) $$\mathcal{L} = \frac{y_\D}{2}\, \chi^2 S + g_e\, e_\L e^\dagger_\R S + \text{h.c.} \label{eq:L_pwave_simple}$$ This results in the annihilation cross section $$\sigma v_{e^+e^-} = \frac{y_\D^2\, g_e^2}{8\pi}\, \frac{\MDM^2\, v_\text{rel}^2}{(4\MDM^2 - m_\S^2)^2 + m_\S^2 \Gamma_\S^2}\,,$$ and in the cross section for DM-$e$ elastic scattering $$\sigma_e = \frac{y_\D^2\, g_e^2\, \mu_{e\DM}^2}{\pi\, m_\S^4}\,,$$ where $m_\S$ is the scalar mass, $\Gamma_\S$ its width, and $\mu_{e\DM}=m_e\MDM/(m_e+\MDM)$. Once $\sigma v_{e^+e^-}$ and $\MDM$ are fixed by the requirements to fit the 511 keV line eq.
(\[eq:fit511\]) and to reproduce the correct relic abundance eq. (\[eq:MDMpwave\_FO511\]), then only two free parameters are left, which we choose as $g_e$ and $m_\S$ in Fig. \[fig:pwave\_Majorana\]. We find that a region capable of explaining the 511 keV line exists, delimited by perturbativity, direct detection (derived later) and collider limits (see App. \[app:NA64\]).[^1] ![\[fig:pwave\_Majorana\] Once the conditions to reproduce the DM abundance and the 511 keV line are imposed, the phenomenology of the model is entirely determined by the scalar mediator mass $m_\S$ and its coupling to electrons $g_e$. Shaded: non-perturbative dark coupling (gray), our recast of NA64 dark photon limit [@NA64:2019imj] (blue), indicative limit from XENON1T data [@Aprile:2019xxb] (orange). Lines: contours of constant $\sigma_e$ (orange) and of constant dark coupling $y_\D$ (gray). The thick orange line corresponds to $\se=4\cdot10^{-38}$ cm$^2$, which induces the electron recoil spectrum at XENON1T shown in Fig. \[fig:recoil\_spectra\]. ](Summary_pwave.pdf){width="48.00000%"} The existence of 3 degrees of freedom with masses $\MDM$ and $m_\S$ of a few MeV is not in conflict with cosmological data, provided one posits a small injection of neutrinos in the early universe in a proportion $\sim 1:10^4$ to the electron injection, see [@Sabti:2019mhn]. This can for example be achieved with a coupling to neutrinos, $g_\nu \nu^2 S$, of size $g_\nu \sim 10^{-2} g_e$, and where $g_e \sim 10^{-6}$ in the region allowed by the various limits. Coupling of neutrinos and electrons of these sizes can be easily obtained in electroweak-invariant completions of the Lagrangian of eq. (\[eq:L\_pwave\_simple\]). Since they do not present any particular model-building challenge, we defer their presentation to App. \[app:UVcompletions\]. #### **DM for the 511 keV line: coannihilations.** {#dm-for-the-511kev-line-coannihilations. 
.unnumbered} As a model that concretely realises this idea, we add to the SM a gauge group $U(1)'$, two fermions $\xi$ and $\eta$ with charges 1 and -1 respectively, and a scalar $\phi$ with charge 2 that spontaneously breaks the symmetry. The most general low-energy Lagrangian that preserves charge conjugation ($\eta \leftrightarrow \xi$, $\phi \leftrightarrow \phi^*$, $V_\mu \leftrightarrow -V_\mu$) reads $$\begin{aligned} \mathcal{L} & = & V(|\phi|) + \frac{\epsilon}{2} V_{\mu\nu} F^{\mu\nu} + (ig_\D \chi_2^\dagger \bar{\sigma}_\mu \chi_1 V^\mu +\text{h.c.}) \nonumber \\ &-& \frac{\bar{m}}{2} (\chi_1^2 + \chi_2^2) - \frac{y_\phi}{2} (\phi+\phi^*) \big(\chi_2^2 - \chi_1^2 \big) +\text{h.c.} \label{eq:L_inelastic}\,,\end{aligned}$$ where $\chi_1 = i (\eta - \xi)/\sqrt{2}$ and $\chi_2 = (\eta + \xi)/\sqrt{2}$ are the Majorana mass eigenstates, $F_{\mu\nu}$ is the electromagnetic field strength and we have left all kinetic terms implicit. The scalar mass and triple-coupling read $$V(|\phi|) = \lambda_\phi \left( |\phi|^2 - \frac{v_\phi^2}{2} \right)^{2} \;\Rightarrow\; m_\varphi^2 = 2 \lambda_\phi v_\phi^2, \qquad \lambda_{\varphi^3} = 6 \lambda_\phi v_\phi\,,$$ where $\phi =( \varphi + v_\phi)/\sqrt{2}$ and $\lambda_{\varphi^3}$ is defined by $\mathcal{L}\supset \lambda_{\varphi^3} \varphi^3/6$. The physical vector and fermion masses read $$m_V = 2 g_\D v_\phi\,, \qquad m_{1,2} = \bar{m} \mp \frac{\delta}{2}\,, \qquad \delta = 2\sqrt{2}\, y_\phi v_\phi\,.$$ $\chi_1$ coannihilates with $\chi_2$ via dark photon exchange. In the limit $\delta \ll m_{1,2} = \MDM$, one finds $$\sigma v_{\chi_1\chi_2 \to e^+e^-} = 4 \alpha_e\, \epsilon^2 g^2_\D\, \frac{\MDM^2}{(m_V^2 - 4\MDM^2)^2}\,, \label{eq:coannihilation}$$ where $\alpha_e$ is the fine-structure constant. For definiteness, we then assume that $\chi_2$ decays on cosmological scales, such that coannihilations cannot be responsible for a positron injection in the GC today. We will come back to this point at the end of the paragraph. One can then explain the 511 keV line, if $m_\varphi < \MDM$ and $\varphi$ decays to $e\bar{e}$, via pair annihilations $\chi_i\chi_i \to \varphi\varphi$.
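The relations $m_\varphi^2 = 2\lambda_\phi v_\phi^2$ and $\lambda_{\varphi^3} = 6\lambda_\phi v_\phi$ quoted above follow from expanding $V(|\phi|)$ around the vacuum along the real direction, $\phi = (\varphi + v_\phi)/\sqrt{2}$. A quick numerical cross-check of that expansion, with illustrative values of $\lambda_\phi$ and $v_\phi$:

```python
# Check m_varphi^2 = 2*lam*v^2 and lam_phi3 = 6*lam*v, obtained by
# expanding V = lam*(|phi|^2 - v^2/2)^2 with phi = (varphi + v)/sqrt(2)
# along the real direction. lam and v below are illustrative values.
lam, v = 0.7, 1.3

def V(x):
    # potential as a function of the fluctuation varphi = x
    return lam * ((x + v)**2 / 2 - v**2 / 2)**2

h = 1e-3
# second derivative at the minimum -> mass squared
m2 = (V(h) - 2*V(0) + V(-h)) / h**2
# third derivative -> cubic coupling, L ⊃ lam_phi3 * varphi^3 / 6
lam3 = (V(2*h) - 2*V(h) + 2*V(-h) - V(-2*h)) / (2*h**3)

assert abs(m2 - 2 * lam * v**2) < 1e-4    # m_varphi^2 = 2 lam v^2
assert abs(lam3 - 6 * lam * v) < 1e-4     # lam_phi3 = 6 lam v
```

Since $V$ is a quartic polynomial, the finite-difference derivatives are essentially exact, so the check holds for any choice of $\lambda_\phi$ and $v_\phi$.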
The associated cross section, at first order in $y_\phi v_\phi/\lambda_{\varphi^3} \ll 1$, reads ($i=1,2$) $$\sigma v_{\chi_i\chi_i \to \varphi\varphi} = \frac{y_\phi^2\, \lambda_{\varphi^3}^2}{8\pi}\, \frac{\sqrt{1 - m_\varphi^2/\MDM^2}}{(4\MDM^2 - m_\varphi^2)^2}\; v_\text{rel}^2\,. \label{eq:self_annihilation}$$ An operator $|\phi|^2 (e_\L e^\dagger_\R +\text{h.c.})/\Lambda_{\phi e}$ with $\Lambda_{\phi e} \sim 10^{9-10} v_\phi$ guarantees that $\varphi$ decays to $e\bar{e}$ instantaneously on astrophysical scales, while being allowed by collider, supernovae and BBN limits [@Krnjaic:2015mbs]. It could originate –at the price of some tuning– from a $|\phi|^2 |H|^2$ term, or from the models discussed in App. \[app:UVcompletions\]. Since a $\chi_1\chi_1$ annihilation injects two $e\bar{e}$ pairs, the cross section that best fits the 511 keV line is reduced by a factor of 2 with respect to eq. (\[eq:fit511\]). Therefore we impose $$\sigma v_{\chi_i\chi_i \to \varphi\varphi} = \frac{1}{2}\, \langle\sigma v\rangle_{511}\,. \label{eq:impose_511_inelastic}$$ If $\chi_i\chi_i \to \varphi\varphi$ were the only processes responsible for the DM abundance, then we would have found another realisation of the $p$-wave annihilating idea, just with $\MDM \simeq 4$ MeV.[^2] It follows that, for $\MDM \lesssim 4$ MeV, the DM relic density is set dominantly by coannihilations. We then fix $\epsilon$ by the simple requirement $$\sigma v_{e^+e^-} + 3\, \frac{\sigma v_{\chi_i\chi_i \to \varphi\varphi}}{v_\text{rel}^2\; x_\FO} = \sigma v^{(s)}_\FO\,, \label{eq:inelastic_FO}$$ where the left-hand side sums the $s$- and $p$-wave contributions (see e.g. [@Kolb:1990vq] for the origin of the relative factors) and where we use for simplicity the $s$-wave values at $\MDM = 3$ MeV, $\sigma v^{(s)}_\FO \simeq 8\times 10^{-26} \text{cm}^3/\text{sec}$ [@Saikawa:2020swg] and $x_\FO \simeq 15$ (their dependence on $\MDM$ is very mild). ![\[fig:coannihilations\_chi2decays\] The conditions to reproduce the DM abundance and the 511 keV line impose $\MDM \lesssim 4$ MeV and leave 4 free parameters, chosen here as $\MDM$, $\delta$, $m_\varphi$ and $m_V$. Shaded: non-perturbative dark coupling (gray), NA64 limit [@NA64:2019imj] (blue), indicative limit from XENON1T data [@Aprile:2020tmw] (orange).
Lines: $\bar{\sigma}_e$ (orange), $g_\D$ (gray), $\epsilon$ (cyan). The dashed gray line roughly delimits the region where $\chi_2$ decays into neutrinos are not enough to deplete the primordial $\chi_2$ population, and further constraints could arise. The blue triangle corresponds to the electron recoil spectrum at XENON1T shown in Fig. \[fig:recoil\_spectra\], and it explains the excess events presented in [@Aprile:2020tmw]. ](Delta_vs_MDM_fullFO_mphi2_mV15.pdf){width="49.00000%"} The model is then left with 4 free parameters; we visualise its parameter space in Fig. \[fig:coannihilations\_chi2decays\] for the benchmark values $m_\varphi = 2$ MeV and $m_V = 15$ MeV.[^3] The allowed region is again delimited by perturbativity, direct detection and collider limits. Analogously to the previous model, these low values of $\MDM$ can be brought into agreement with BBN and CMB data by a coupling $g_\nu V_\mu \nu^\dagger\bar{\sigma}^\mu\nu$, with $g_\nu \sim 10^{-2} e \epsilon$. We refer the reader to App. \[app:UVcompletions\] for a possible origin of $g_\nu$. Here we just point out that it induces $\Gamma_{\chi_2 \to \chi_1\bar{\nu}\nu} \simeq g^2_\nu g^2_\D\delta^5/(40 \pi^3 m_V^4)$, which for $m_V = 15$ MeV and $\delta \gtrsim 1$ keV implies $\tau_2 < 10^9$ years, so that all $\chi_2$’s left after freeze-out have decayed by today. Larger values of $\tau_2$ can be avoided by adding another operator to mediate $\chi_2$ decays (e.g. a dipole), otherwise values of $\delta \lesssim 1$ keV could potentially be in conflict with searches for the primordial population of $\chi_2$ [@Baryakhtar:2020rwy]. The allowed values of $\delta$ are restricted to around a few keV, which is particularly interesting because they could explain [@Baryakhtar:2020rwy] the excess events at XENON1T [@Aprile:2020tmw], as we explicitly derive in the next paragraph.
The event rate at XENON1T is proportional to the cross section for $\chi_2 e \to \chi_1 e$ in the limit $\delta \to 0$, $$\bar{\sigma}_e = 4 \alpha_e\, g^2_\D\, \epsilon^2\, \frac{\mu_{e\DM}^2}{m_V^4}\,,$$ which we also display in Fig. \[fig:coannihilations\_chi2decays\]. Finally, we left out of this study the case where there is a residual population of $\chi_2$ today, which has also been shown to possibly explain the excess events at XENON1T [@Harigaya:2020ckz; @Lee:2020wmh; @Bramante:2020zos; @Baryakhtar:2020rwy; @Bloch:2020uzh; @An:2020tcg; @Baek:2020owl; @He:2020wjs]. While this goes beyond the purpose of this work, it would be interesting to investigate it in combination with the 511 keV line and we plan to come back to it in future work. #### **keV electron recoils from Sun-upscattered DM.** {#sec:recoils .unnumbered} The models we proposed to explain the 511 keV line require DM with a mass of a few MeV, interacting with electrons. Such DM is efficiently heated inside the sun, resulting in a flux of solar-reflected DM with kinetic energy ($\sim \mathrm{keV}$) significantly larger than the one of halo DM, thus offering new detection avenues to direct detection experiments [@An:2017ojc]. We now show that, via this higher-energy component, both ‘$p$-wave’ and ‘coannihilations’ models for the 511 keV line automatically induce electron-recoil signals that are probed by XENON1T S2-only [@Aprile:2019xxb] and S1$+$S2 [@Aprile:2020tmw] data. We outline the procedure to obtain the event rate caused by the solar-reflected DM flux and refer to App. \[app:solar\] for more details. In the case of interest here, with relatively small $\sigma_e$, the solar-reflected DM flux $\Phi_\mathrm{refl}$ is estimated as $$\begin{aligned} \frac{d \Phi_\mathrm{refl}}{dE} \simeq \frac{n_\DM}{\left(1 \mathrm{AU}\right)^2} \int_0^{r_\mathrm{sun}} \!\!
dr\,r^2 \frac{v_\mathrm{esc}(r)}{v_\DM}\, n_e(r) \left\langle \frac{d\sigma_e}{dE} v_e (r)\right\rangle, \label{eq:fluxDM}\end{aligned}$$ where $E$ is the DM kinetic energy, $n_\DM$ is the DM number density, $r_\mathrm{sun}$ is the solar radius, $v_\mathrm{esc}$ is the escape velocity, $v_\DM$ is the halo DM velocity, $n_e$($v_e$) is the electron number density (velocity), and $\langle ... \rangle$ denotes the thermal average. In this formula, we have improved the analysis of [@Baryakhtar:2020rwy] by including the radial dependence of the solar parameters, taken from [@Bahcall:2004pz]. The recoil spectrum of the electron initially in the $(n, l)$ state of a XENON atom is given by $$\begin{aligned} \frac{dR_{nl}}{dE_R} &= \frac{N_T {\sigma}_e}{8\mu_{e\DM}^2 E_R}\int dq\,q \left\lvert f_{nl}\right\rvert^2 \xi\left(E_\mathrm{min}\right), \label{eq:devent1} \\ \xi\left(E_\mathrm{min}\right) &= \int_{E_\mathrm{min}} \!\!dE\, \left(\frac{\MDM}{2E}\right) \frac{d\Phi_\mathrm{refl}}{dE}, \label{eq:devent2} \\ E_\mathrm{min} &= \frac{\MDM}{2}\left(\frac{E_{nl} + E_R - \delta}{q} + \frac{q}{2 \MDM}\right)^2, \label{eq:devent3}\end{aligned}$$ where $N_T$ is the number of target particles and $E_{nl}$ is the electron binding energy, see e.g. [@Essig:2015cda] for a detailed derivation of the above expressions. We compute the atomic form factor $f_{nl}$ following [@Essig:2011nj; @Bloch:2020uzh], and leave a refined treatment including relativistic effects [@Roberts:2016xfw; @Roberts:2019chv] to future work. ![\[fig:recoil\_spectra\] Electron recoil spectra induced by solar-upscattered DM, for two benchmark values of the parameters of models that explain the 511 keV line. We overlay them with data and expected backgrounds from the Xenon1T S2 [@Aprile:2019xxb] (left) and S1+S2 [@Aprile:2020tmw] (right) analyses. ](pwave2MeV4e-38cmsq_coann3MeV1p9e-28cmsq3keV.pdf){width="49.00000%"} In Fig. 
\[fig:recoil\_spectra\], we show the electron recoil spectra for two benchmark points $\MDM = 2\,\mathrm{MeV}$ and $\sigma_e = 4\times 10^{-38}\,\mathrm{cm}^2$ in the $p$-wave case and $\MDM = 3\,\mathrm{MeV}$, $\sigma_e = 1.9\times 10^{-38}\,\mathrm{cm}^2$ and $\delta = 3\,\mathrm{keV}$ in the coannihilation case. The induced electron recoils peak at energies below 2 keV in the $p$-wave case, and in the coannihilation one if $\delta \lesssim$ keV. In the latter case with larger $\delta$ the events instead peak at $E_R \sim \delta$, because the downscattering $\chi_2 \rightarrow \chi_1$ releases more energy than the initial kinetic energy of $\chi_2$. In particular, the events are peaked at $E_R = 2$–$3\,\mathrm{keV}$ in our benchmark point, which can explain the recent XENON1T anomaly. We emphasize that this result is non-trivial, because the allowed parameter region is defined by requirements and experimental limits that are completely independent of XENON1T. It is then a fortunate accident that this region is in the right ballpark for the explanation of the XENON1T anomaly. The results of this paragraph are of course interesting beyond these anomalies, as they quantify how XENON1T tests models of light electrophilic DM. The limits shown in Figs. \[fig:pwave\_Majorana\] and \[fig:coannihilations\_chi2decays\] are derived by the conservative requirement that signal plus background should not overshoot the data in [@Aprile:2020tmw] by more than 3$\sigma$; a more precise limit derivation is left to future work. #### **Conclusions and Outlook.** {#conclusions-and-outlook. .unnumbered} We have pointed out that Dark Matter models, where the relic abundance is set by either $p$-wave annihilations or coannihilations with a slightly heavier partner, can explain the origin of the 511 keV line in the galactic bulge compatibly with all other experimental constraints.
We have found that these models induce electron recoils on Earth that are being tested by XENON1T, and that ‘coannihilation’ models could, non-trivially, simultaneously explain the 511 keV line and the excess events recently presented by XENON1T [@Aprile:2020tmw]. Independently of the XENON1T anomaly, our proposed DM explanations of the 511 keV line constitute a new physics case for experiments sensitive to keV electron recoils, like XENONnT and Panda-X [@Fu:2017lfc], for accelerators like NA64 and LDMX [@Akesson:2018vlm], and for cosmological surveys like CMB-S4 [@Abazajian:2019eic] and the Simons Observatory [@Ade:2018sbj]. The origin of a long-standing astrophysical mystery could be awaiting discovery in their data. Acknowledgements {#acknowledgements .unnumbered} ---------------- We thank Marco Cirelli, Simon Knapen, Yuichiro Nakai, Diego Redigolo and Joe Silk for useful discussions. Funding and research infrastructure acknowledgements: - Y.E. and R.S. are partially supported by the Deutsche Forschungsgemeinschaft under Germany’s Excellence Strategy – EXC 2121 “Quantum Universe” - 390833306; - F.S. is supported in part by a grant “Tremplin nouveaux entrants et nouvelles entrantes de la FSI”. Recast of NA64 limits. {#app:NA64} ====================== NA64 sets the strongest existing constraints on invisibly decaying dark photons in [@NA64:2019imj]: the kinetic mixing $\epsilon$, defined as in eq. (\[eq:L\_inelastic\]), should be smaller than an $m_V$-dependent function that we denote $\epsilon_\text{limit} (m_V)$.
As we are not aware of any recast of those limits to other invisibly decaying light particles, we perform that recast ourselves, for completeness, for scalars $S$, pseudoscalars $A$, and axial vectors $V_\A$, with couplings $$\begin{aligned} \mathcal{L}_\S &=& g_e S \, e^\dagger_\L e_\R+\text{h.c.},\\ \mathcal{L}_\A &=& i g_e A \, e^\dagger_\L e_\R+\text{h.c.},\\ \mathcal{L}_\VA &=& i g_e V^\mu_\A \, (e^\dagger_\R \bar{\sigma}_\mu e_\R - e^\dagger_\L \bar{\sigma}_\mu e_\L),\end{aligned}$$ which in 4-component spinor notation read, respectively, $g_e \bar{e} e S$, $i g_e \bar{e} e A$ and $i g_e \bar{e} \gamma_\mu \gamma_5 e V^\mu_\A$. We recast NA64 limits by imposing $$g_e (m_{\S,\A,\VA}) < C_{\S,\A,\VA}\; e\; \epsilon_\text{limit} (m_{\S,\A,\VA})\,, \label{eq:recast_NA64}$$ where $e$ is the electric charge and $$C_X = \left(\frac{N_V}{N_X}\right)^{1/2}. \label{eq:CX}$$ We have defined $$N_X = \int_{0.5}^{x_\text{max}} dx\, \text{Eff}(x)\, \frac{d\sigma}{dx}(eZ\to eZX)\,, \label{eq:NX}$$ where $x = E_X/E_\text{beam}$ ($E_\text{beam} =100$ GeV for NA64) and the lower limit of integration in $x$ comes from the cut $E_\text{miss} > 50$ GeV [@NA64:2019imj]. The upper limit of integration $x_\text{max}$ satisfies $x_\text{max} < 0.997$, because of the trigger $E_\text{cal} > 0.3$ GeV [@Banerjee:2017hhz]. For the cross sections $d\sigma(eZ\to eZX)/dx$ we use the “improved Weizsaecker-Williams” approximations given in eq. (33) of [@Liu:2016mqv] for $X=S$ and in eq. (30) of [@Liu:2017htz] for $X = V,A,V_\A$. In Fig. \[fig:IWWcrosssections\] we display the ratios of the $X=S,A,V_\A$ cross sections to the $X=V$ cross section, the latter being the relevant one for the model on which NA64 has cast its limit. ![\[fig:IWWcrosssections\] Ratios of cross sections $d\sigma(eZ\to eZX)/dx$, with $x = E_X/E_\text{beam}$ ($E_\text{beam} =100$ GeV for NA64). Numerator: $X=S$ (blue), $A$ (dotted-red), $V_A$ (orange); denominator: $X = V$. The range $x \geq 0.5$ is the one relevant for the NA64 searches [@NA64:2019imj] that we are recasting here.
We use the cross sections in the “improved Weizsaecker-Williams” approximations as given in [@Liu:2016mqv; @Liu:2017htz]. All curves assume $m_X = 10$ MeV, the dependence on $m_X$ is within the thickness of each line for $m_X > 8$ MeV, and within $\sim 20\%$ of each line for $m_X > 3$ MeV. ](Ratiosdsigmadx_NA64.pdf){width="49.00000%"} Finally, the efficiency $\text{Eff}(x)$ depends only weakly on $x$ [@NA64:2019imj], and does so mostly for $x$ close to one; see the discussion in [@Banerjee:2017hhz] and e.g. Fig. 11 in that paper. Since we have not found a detailed study of the efficiency of NA64 in the region $x$ close to 1, we assume it is independent of $x$, so that it simplifies in the ratio $N_V/N_X$ that defines our rescaling eq. (\[eq:CX\]). As visible in Fig. \[fig:IWWcrosssections\], this procedure does not introduce any significant error for the axial vector case. For the scalar and pseudoscalar cases, since the ratios of their cross sections to the vector one are a monotonically increasing function of $x$, and since the efficiency worsens when $x$ approaches one, the value of $C_{\S,\A}$ that we obtain for $x_\text{max} = 0.997$ represents an aggressive estimate of the NA64 exclusion of such particles. A conservative one can instead be obtained by choosing a value of $x_\text{max}$ below which the efficiency is roughly constant in $x$, which we take for definiteness as $x_\text{max} = 0.9$. Our resulting coefficients $C_{\S,\A,\VA}$, for these two extreme limits of integration and for various values of $m_X$, are given in Table \[tab:CX\]. In the $p$-wave model studied in the main text, in order to be conservative on the allowed parameter space, we have used the aggressive rescaling of the NA64 limits, i.e. $C_\S = 1.6$.
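Operationally, the rescaling defined by eqs. (\[eq:recast\_NA64\])–(\[eq:NX\]) reduces to a ratio of two signal-yield integrals. A minimal sketch of that recipe follows; the spectra below are placeholders of our own invention, not the improved Weizsaecker-Williams cross sections of [@Liu:2016mqv; @Liu:2017htz], which must be substituted for any real recast.

```python
# Sketch of the C_X rescaling: C_X = sqrt(N_V / N_X), with
# N_X = integral over x in [0.5, x_max] of Eff(x) * dsigma/dx(eZ -> eZX).
# The spectra used below are PLACEHOLDERS for illustration only.

def yield_N(dsigma_dx, x_max, eff=lambda x: 1.0, steps=10_000):
    """Trapezoidal integral of Eff(x) * dsigma/dx over [0.5, x_max]."""
    a = 0.5
    h = (x_max - a) / steps
    xs = [a + i * h for i in range(steps + 1)]
    ys = [eff(x) * dsigma_dx(x) for x in xs]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

def C_X(dsigma_V, dsigma_X, x_max=0.997):
    # limits scale as the square root of the yield ratio (rate ~ g^2)
    return (yield_N(dsigma_V, x_max) / yield_N(dsigma_X, x_max)) ** 0.5

# Consistency check: if dsigma_X = 4 * dsigma_V everywhere, C_X = 1/2.
toy_V = lambda x: 1.0 / (1.0 - x + 1e-3)   # placeholder vector spectrum
toy_X = lambda x: 4.0 * toy_V(x)
print(round(C_X(toy_V, toy_X), 3))
```

The square root reflects that the signal rate scales as the coupling squared, so a yield larger by a factor $r$ tightens the coupling limit by $\sqrt{r}$.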
  $m_X$ \[MeV\]    $C_\S$           $C_\A$           $C_\VA$
  ---------------- -------- ------- -------- ------- -------- -------
  $x_\text{max}$   0.997    0.9     0.997    0.9     0.997    0.9
  1                1.7      1.8     1.8      2.0     0.8      0.8
  2                1.7      2.0     1.7      2.0     0.9      0.9
  3                1.6      2.0     1.7      2.1     1.0      1.0
  4                1.6      2.0     1.7      2.1     1.0      1.0
  5                1.6      2.0     1.6      2.1     1.0      1.0
  $\geq 6$         1.6      2.1     1.6      2.1     1.0      1.0

  : \[tab:CX\] Coefficients entering eq. (\[eq:recast\_NA64\]) to recast NA64 limits, on invisibly decaying dark photons [@NA64:2019imj], to invisibly decaying scalars $S$, pseudoscalars $A$ and axial vectors $V_\A$ coupled to electrons as in eq. (\[eq:recast\_NA64\]). We display our results for two cases of the upper limit of integration $x_\text{max}$ in eq. (\[eq:NX\]).

Another source of uncertainty of our rescaling comes from the fact that we used cross sections in the “improved Weizsaecker-Williams” approximation. The comparisons of these cross sections with the full results, in refs. [@Liu:2016mqv; @Liu:2017htz], show that the impact of the approximation over the full $x$ range is analogous for the four cases $X = S,A,V,V_\A$, as one could roughly expect by observing that this approximation consists in a different treatment of the phase-space edges. Therefore the error in our rescaling, induced by the approximations in the cross section, is qualitatively expected to be smaller than the error in the cross sections themselves, because it relies on ratios. Since this recast is not the main purpose of this paper, we content ourselves with this procedure, and we encourage the NA64 collaboration to present their very interesting results for particles other than dark photons. UV completions. {#app:UVcompletions} =============== We here propose explicit ultraviolet (UV) completions of all the low-energy couplings that are not manifestly electroweak (EW) invariant. We start with scalar couplings to electrons. A coupling $g_e$ defined as in eq. (\[eq:L\_pwave\_simple\]), $g_e e_\L e^\dagger_\R S + \text{h.c.}$, of the needed size $g_e \sim 10^{-6}$ (see Fig.
\[fig:pwave\_Majorana\]), can be obtained by adding to the SM two fermions $E_\L$ and $E^\dagger_\R$, with charge assignments of $e_\R$ and $e^\dagger_\R$ respectively, and Lagrangian $$\mathcal{L}_\E = y_\E\, \ell H^\dagger E^\dagger_\R + M_\E\, E_\L E^\dagger_\R + g_\E\, S E_\L e^\dagger_\R + \text{h.c.}$$ This induces a coupling to electrons ($v_\EW \simeq 246$ GeV) $$g_e \simeq g_\E\, \frac{y_\E\, v_\EW}{\sqrt{2}\, M_\E} \simeq 2\cdot 10^{-6}\; y_\E\, g_\E\, \frac{10^{8}~\text{GeV}}{M_\E}\,,$$ which is of the desired size for $M_\E$ out of experimental reach and perturbative values of the couplings $y_\E$ and $g_\E$. In the coannihilation model the higher dimensional operator $|\phi|^2 (e_\L e^\dagger_\R +\text{h.c.})/\Lambda_{\phi e}$, that induces the coupling of $\varphi$ to electrons, can be obtained by adding to the SM the fermions $E_\L$ and $E^\dagger_\R$, with SM charge assignments of $e_\R$ and $e^\dagger_\R$ respectively, and $L_\L$ and $L^\dagger_\R$, with SM charge assignments of $\ell$ and $\ell^\dagger$ respectively. Furthermore, we assign to $E_\L$ and $L_\L$ ($E^\dagger_\R$ and $L^\dagger_\R$) charge $+2$ ($-2$) under the $U(1)'$ gauge group. The Lagrangian $$\begin{aligned} \mathcal{L} &=& M_\L L_\L L^\dagger_\R + M_\E E_\L E^\dagger_\R + y_\L L_\L H^\dagger E^\dagger_\R \nonumber\\ &+& g_\E \phi E_\L e^\dagger_\R + g_\L \phi L_\L \ell^\dagger +\text{h.c.}\,,\end{aligned}$$ then induces $$\frac{\Lambda_{\phi e}}{v_\phi} = \frac{M_\L M_\E}{y_\L\, g_\E\, g_\L\, v_\EW\, v_\phi} \sim 10^{9}\,,$$ where we remind the reader that we needed $\Lambda_{\phi e}/v_\phi$ in the ballpark of $10^{9-10}$, in order for $\varphi$ to decay to $e\bar{e}$ instantaneously on astrophysical scales and compatibly with collider, supernovae and BBN limits [@Krnjaic:2015mbs]. We have just seen how this value can be achieved by adding new vector-like leptons with masses out of collider reach. Otherwise the coupling to electrons of both $S$ and $\varphi$ can easily be obtained via operators that mix the new scalars with the Higgs, respectively $S |H|^2$ and $|\phi|^2 |H|^2$. In the latter case, however, one would need to tune the parameters of $V(|\phi|)$ with this quartic coupling, in order to keep $v_\phi \ll v_\EW$.
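As a numerical illustration of the seesaw-like suppression, assuming (as we do here) that the induced coupling scales as $g_e \simeq g_\E\, y_\E\, v_\EW/(\sqrt{2}\, M_\E)$:

```python
# Size of the induced electron coupling g_e for order-one couplings and
# a heavy vector-like lepton. The scaling g_e ~ g_E*y_E*v_EW/(sqrt(2)*M_E)
# and the value M_E = 1e8 GeV are assumptions used for illustration.
import math

v_EW = 246.0        # GeV, electroweak vev
M_E = 1e8           # GeV, vector-like lepton mass (illustrative)
y_E = g_E = 1.0     # order-one couplings (illustrative)

g_e = g_E * y_E * v_EW / (math.sqrt(2) * M_E)
print(f"g_e ~ {g_e:.1e}")   # ~2e-6, the size needed in the p-wave model
```

The suppression is linear in $v_\EW/M_\E$, so the needed $g_e \sim 10^{-6}$ corresponds to masses far beyond collider reach without requiring tiny couplings.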
A coupling of $S$ to neutrinos $g_\nu \nu^2 S$, of size $g_\nu \sim 10^{-2} g_e$ as needed to make the model compatible with cosmological data [@Sabti:2019mhn], can be achieved by extending the SM with three singlet fermions $\nu_\R$, $N_\L$ and $N_\R$. The EW-invariant Lagrangian $$\mathcal{L} = y_\N\, H \ell\, N^\dagger_\R + m_\N\, N_\L N^\dagger_\R + g_\N\, S\, N_\L \nu^\dagger_\R + \text{h.c.}$$ then induces $$g_\nu \simeq g_\N\, y_\N\, \frac{v_\EW}{\sqrt{2}\, m_\N} \simeq 2\cdot 10^{-8}\; y_\N\, g_\N \left(\frac{10^{10}\ \text{GeV}}{m_\N}\right),$$ which is of the desired size $g_\nu \sim 10^{-2} g_e \sim 10^{-8} $(see Fig. \[fig:pwave\_Majorana\] for the interesting values of $g_e$) for $N$ out of experimental reach. We finally provide an example of an EW-invariant completion for the small coupling to neutrinos of a $U(1)'$ gauge boson. We add to the model of eq. (\[eq:L\_inelastic\]) one total singlet fermion $\nu^\dagger_\R$ and two left-handed fermions $N_\L$ and $N_\R$, with charges respectively $+2$ and $-2$ under $U(1)'$, and singlets under the SM gauge group. The Lagrangian $$\mathcal{L}_\N = y_\N\, H \ell\, \nu^\dagger_\R + m_\N\, N_\L N_\R + y'_\N\, \phi\, N_\L \nu^\dagger_\R + \text{h.c.}$$ then induces a coupling of size $$g_\nu \simeq 2\, g_\DM \left(\frac{y_\N\, v_\EW}{m_\N}\right)^{2} \simeq 10^{-7} \left(\frac{y_\N}{10^{-4}}\right)^{2} \left(\frac{30\ \text{GeV}}{m_\N}\right)^{2}.$$ One can then obtain the needed value $g_\nu \sim 10^{-2} e \epsilon \sim 10^{-7}$ (see Fig. \[fig:coannihilations\_chi2decays\] for the interesting values of $\epsilon$) for $m_\N \sim 30$ GeV, which is out of experimental reach because $N$ is a total SM singlet.

Solar-reflected DM events at XENON1T. {#app:solar}
=====================================

Here we give the procedure to compute the electron recoil spectra at XENON1T in detail. The solar-reflected DM flux is given by eq. , and we explain each term in the following. We take the DM number density as $n_\mathrm{DM} = (0.42\,\mathrm{GeV}/\MDM)\,\mathrm{cm}^{-3}$ [@Pato:2015dua; @Buch:2018qdr]. The astronomical unit is given by $1\,\mathrm{AU} \simeq 1.5\times 10^{13}\,\mathrm{cm}$. The escape velocity is given by $$\begin{aligned} v_\mathrm{esc}(r) = \sqrt{\frac{2GM(r)}{r}},\end{aligned}$$ where $G$ is the Newton constant and $M(r)$ is the solar mass inside the radius $r$.
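As a quick sanity check of this expression, evaluating it at the solar surface, where $M(r)$ is the full solar mass, gives the familiar escape velocity of about $618\,\mathrm{km/s}$. A minimal sketch (the values of $G$, $M_\odot$ and $R_\odot$ are standard inputs, not taken from the text):

```python
import math

G = 6.674e-11      # Newton constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30   # solar mass [kg]
R_sun = 6.957e8    # solar radius [m]

def v_esc(r, M_enclosed):
    """Escape velocity sqrt(2 G M(r) / r), in m/s."""
    return math.sqrt(2.0 * G * M_enclosed / r)

v_surface = v_esc(R_sun, M_sun)   # ~6.18e5 m/s
```

Inside the Sun one would instead feed in the enclosed mass profile $M(r)$ from a solar model.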
The factor $v_\mathrm{esc}/v_\mathrm{DM}$ originates from the combination of the enhanced classical cross section by the attractive gravitational potential and the spreading of the flux by the increased DM velocity [@Baryakhtar:2020rwy].[^4] The halo DM velocity is taken as $v_\mathrm{DM} = 220\,\mathrm{km}/\mathrm{sec}$. Assuming the Maxwell-Boltzmann distribution, the thermally averaged differential cross section is given by $$\begin{aligned} \left\langle \frac{d\sigma_e}{dE} v_e\right\rangle &= \frac{{\sigma}_e \MDM}{\mu_{e\DM}^2} \sqrt{\frac{m_e}{2\pi T}} \exp\left[- \frac{m_e v_\mathrm{min}^2}{2T}\right], \nonumber \\ v_\mathrm{min} &= \frac{1}{\sqrt{2\MDM E}}\left[\frac{\MDM E}{\mu_{e\DM}} + \delta\right].\end{aligned}$$ Finally, to take into account the gravitational redshift effect, we shift the DM kinetic energy after scattering by the gravitational potential at the scattering point, $E \to E - \MDM v^2_\mathrm{esc}(r)/2$. In Fig. \[fig:fluxDM\], we show the solar-reflected DM flux for the benchmark points used in the main text: $\MDM = 2\,\mathrm{MeV}$ and $\sigma_e = 4\times 10^{-38}\,\mathrm{cm}^2$ in the $p$-wave case and $\MDM = 3\,\mathrm{MeV}$, $\sigma_e = 1.9\times 10^{-38}\,\mathrm{cm}^2$ and $\delta = 3\,\mathrm{keV}$ in the coannihilation case. ![\[fig:fluxDM\] Solar-reflected DM flux for our benchmark points: $\MDM = 2\,\mathrm{MeV}$ and $\sigma_e =4\times 10^{-38}\,\mathrm{cm}^2$ in the $p$-wave case and $\MDM = 3\,\mathrm{MeV}$, $\sigma_e = 1.9\times 10^{-38}\,\mathrm{cm}^2$ and $\delta = 3\,\mathrm{keV}$ in the coannihilation case. ](fluxDM.pdf){width="49.00000%"} Once the reflected DM flux is computed, the electron recoil spectra are given by eqs. –,[^5] with the number of target particles taken as $N_T = 4.2\times 10^{27}$ per tonne in our computation. As mentioned in the main text, we compute the atomic form factor following [@Essig:2011nj; @Bloch:2020uzh].
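For illustration, the thermally averaged cross section above is straightforward to evaluate in natural units; the sketch below uses the $p$-wave benchmark mass and cross section, while the values of $E$, $T$ and $\delta$ are illustrative inputs, and it only checks the qualitative behaviour (the mass splitting $\delta$ suppresses the rate) rather than reproducing the full flux pipeline:

```python
import math

m_e = 511.0e3     # electron mass [eV]
M_DM = 2.0e6      # DM mass [eV], p-wave benchmark
sigma_e = 4e-38   # cm^2, kept as an overall prefactor
mu = m_e * M_DM / (m_e + M_DM)   # DM-electron reduced mass [eV]

def dsigma_v(E, T, delta=0.0):
    """<dsigma_e/dE v_e> for Maxwell-Boltzmann electrons at temperature T [eV]."""
    v_min = (M_DM * E / mu + delta) / math.sqrt(2.0 * M_DM * E)
    return (sigma_e * M_DM / mu**2) * math.sqrt(m_e / (2.0 * math.pi * T)) \
        * math.exp(-m_e * v_min**2 / (2.0 * T))

# a 3 keV mass splitting raises v_min and exponentially suppresses the rate:
r0 = dsigma_v(E=100.0, T=1000.0)            # elastic (delta = 0)
r1 = dsigma_v(E=100.0, T=1000.0, delta=3e3) # inelastic, delta = 3 keV
```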
Assuming a plane wave function for the outgoing electron, the atomic form factor is given by $$\begin{aligned} \left\lvert f_{nl}(q, E_R)\right\rvert^2 &= F_\mathrm{Fermi}\frac{2l+1}{2\pi^3}\frac{m_e E_R}{q} \left[\int_{k_-}^{k_+} dk\,k \left\lvert \chi_{nl}\left(k\right)\right\rvert^2\right], \nonumber \\ k_{\pm} &= \left\lvert \sqrt{2 m_e E_R} \pm q\right\rvert.\end{aligned}$$ The radial part of the wave function in momentum space, $\chi_{nl}$, is given by $$\begin{aligned} \chi_{nl}(k) = 4\pi \int_0^{\infty} dr\,r^2 j_l\left(kr\right) R_{nl}\left(r\right),\end{aligned}$$ where $j_l$ is the spherical Bessel function and $R_{nl}$ is the radial part of the real space wave function. We take $R_{nl}$ as $$\begin{aligned} R_{nl} &= \sum_{j} C_{jln} N_{jl} r^{n_{jl}-1} \exp\left(-Z_{jl}r\right), \nonumber \\ N_{jl} &= \frac{\left(2Z_{jl}\right)^{n_{jl} + 1/2}}{\sqrt{\left(2n_{jl}\right)!}}\,,\end{aligned}$$ where $C_{jln}, Z_{jl}$ and $n_{jl}$ are taken from [@Bunge:1993jsz]. If we define $$\begin{aligned} f_{l}\left(n; x\right) \equiv \frac{2^{n+1/2}}{\sqrt{\left(2n\right)!}} \int_0^\infty dy\,y^{n+1} j_{l}\left(xy\right) \exp\left(-y\right), \label{eq:integral_basis_function}\end{aligned}$$ the momentum-space wave function is given by $$\begin{aligned} \chi_{nl}\left(k \right) = 4\pi \sum_{j} \frac{C_{jln}}{Z_{jl}^{3/2}} f_l\left(n_{jl}; k/Z_{jl}\right).\end{aligned}$$ The integral (\[eq:integral\_basis\_function\]) can be performed analytically, which simplifies the numerical computation. The wave functions are normalized as $$\begin{aligned} \int dk\,k^2 \left\lvert \chi_{nl} \right\rvert^2 = \left(2\pi\right)^3, \quad \int_0^\infty dr\,r^2 \left\lvert R_{nl}\right\rvert^2 = 1,\end{aligned}$$ which agrees with the normalization of [@Bunge:1993jsz]. Finally the Fermi factor is given by $$\begin{aligned} F_\mathrm{Fermi}(q) = \frac{2\pi d}{1-e^{-2\pi d}}, \quad d = Z_\mathrm{eff} \frac{\alpha_e m_e}{q},\end{aligned}$$ where we take the effective charge as $Z_\mathrm{eff} = 1$.
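As a concrete cross-check of these formulas: for a single-term hydrogenic $1s$ orbital $R_{10}(r)=2e^{-r}$ (Bohr units, i.e. $C=1$, $n_{jl}=1$, $Z_{jl}=1$ — an illustration, not one of the [@Bunge:1993jsz] entries), eq. (\[eq:integral\_basis\_function\]) has the closed form $f_0(1;x)=4/(1+x^2)^2$, so $\chi_{10}(k)=16\pi/(1+k^2)^2$, and both the closed form and the normalization $\int dk\,k^2|\chi_{nl}|^2=(2\pi)^3$ can be verified numerically:

```python
import math
import numpy as np

def trapz(f, x):
    """Trapezoidal rule on a tabulated function."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

def f_l0(n, x, y_max=80.0, steps=40001):
    """f_{l=0}(n; x) of eq. (integral_basis_function), by direct quadrature."""
    y = np.linspace(1e-9, y_max, steps)
    j0 = np.sinc(x * y / np.pi)                 # spherical Bessel j_0(z) = sin(z)/z
    integrand = y ** (n + 1) * j0 * np.exp(-y)
    pref = 2.0 ** (n + 0.5) / math.sqrt(math.factorial(2 * n))
    return pref * trapz(integrand, y)

# hydrogenic 1s: closed form f_0(1; x) = 4/(1+x^2)^2
x = 0.7
closed = 4.0 / (1.0 + x**2) ** 2
numeric = f_l0(1, x)

# normalization check: int dk k^2 |chi_10(k)|^2 = (2 pi)^3
k = np.linspace(1e-6, 200.0, 400001)
chi = 16.0 * math.pi / (1.0 + k**2) ** 2
norm = trapz(k**2 * chi**2, k)
```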
We show the form factors without the Fermi factor in Fig. \[fig:atomicFF\]. They agree well with [@Bloch:2020uzh] except in the region $q \lesssim 10\,\mathrm{keV}$ for the $4d$-state electron with $E_R = 1\,\mathrm{keV}$, whose effect on the final result is in any case minor. In our computation we neglect the contribution from $1s$, $2s$ and $2p$ electrons, because their binding energies are larger than $\simeq 4.8$ keV (see e.g. [@Bunge:1993jsz]) and thus can be neglected in this specific study. We included 8 orbitals, from $3s$ up to $5p$. ![\[fig:atomicFF\] Atomic form factors of the $3d$-, $4d$- and $5p$-state electrons without the Fermi factor, with two different values of the recoil energy $E_R$. ](atomicFF.pdf){width="49.00000%"} After computing the electron recoil spectra, we convolute them with the detector response to obtain the signals. For the S2-only analysis, we use the mean values in [@Aprile:2019xxb] to translate the recoil energy into photoelectrons (PE). Although the efficiency depends on the position of the event, we simply multiply by the overall efficiency shown in [@Aprile:2019xxb] to obtain the signals in this work. A more detailed analysis of the detector response is left for future work. For the recent S1$+$S2 analysis, we follow the procedure outlined in the original paper [@Aprile:2020tmw]. We smear the events with a Gaussian distribution whose width is given by $$\begin{aligned} \sigma\left(E\right) = a \sqrt{E} + b E, \end{aligned}$$ where we take $a = 0.31 \sqrt{\mathrm{keV}}$ and $b = 0.0037$ in our numerical computation. We then multiply by the efficiency, again given in [@Aprile:2020tmw]. [^1]: Limits from CMB [@Slatyer:2015jla], CR electrons [@Boudaud:2018oya] and CR-electron-upscattered DM [@Ema:2018bih; @Cappiello:2019qsw] do not constrain the explanation of the 511 keV line in the models presented in this paper.
[^2]: This is larger than the 2 MeV of the previous section because of the factor of 2 in $\langle\sigma v\rangle_{511}$ that we just explained, and because the relic cross-section is twice that of self-conjugate particles, since $\chi_1\chi_2$ cannot annihilate via $\sigma v_{\chi_i\chi_i \to \varphi\varphi}$. Note that, for $\MDM < 6$ MeV, the positron injection energy is always smaller than the needed 3 MeV thanks to the extra step in the annihilation. [^3]: The phenomenology we discuss next is not affected by their precise values, as long as $1.5 \lesssim m_\varphi/\text{MeV} \lesssim~3$, and $10 \lesssim m_V/\text{MeV} \lesssim 100$, where the lower limits are potentially in conflict with BBN and the upper ones close the available parameter space. Since $\bar{\sigma}_e$ is independent of $m_\varphi$, $m_\varphi < 2$ MeV would not open any new allowed parameter space. [^4]: Precisely speaking, the enhancement of the cross section by the factor $v_\mathrm{esc}^2/v_\mathrm{DM}^2$ applies only when the potential is proportional to $1/r$. It is however enough for our purpose, given the uncertainties in the other factors such as the atomic form factor. [^5]: We think that there is a typo in the formula for $\eta$ in [@An:2017ojc] (which is our $\xi$ divided by the total halo DM flux).
--- abstract: | We construct a model of unconventional superconductors. The model is based on a hypothesis which assumes a short-lived bound state of two electrons with a finite size which exists, moreover, in free space. The hypothesis is a far-fetched one which is stated only qualitatively and in a minimal way. It still leads us to a condition under which the electron pairs may accumulate in one mobile state. The state turns out to be [*apparently*]{} the highest of the occupied electron states. Therefore we call this condensation of electron pairs an [*apparent Fermi surface*]{}. Since a charged boson gas is theoretically known to be a Type 2 superconductor, our model is expected to be such as well. In addition the transition temperature of our model is expected to be closely related to the Bose-Einstein condensation, similarly to the real high Tc superconductors. In particular, in our model both a superconductor with a Fermi surface and one without are natural. There are also other theoretical works which have shown, without exploiting any specific binding mechanism, that tightly bound electrons may explain certain aspects of high Tc superconductors. To test our model we propose two types of experiments: a low energy electron-electron scattering and a photoemission on high Tc superconductors. [*Keywords*]{}: tightly bound electrons; apparent Fermi surface; photoemission on HTS’s; low energy electron-electron scattering [*PACS*]{}: 74.20.-z, 74.20.Mn author: - | Yanghyun Byun[^1]\ [*Department of Mathematics, Hanyang University, Seoul 133-791, Korea*]{} title: 'On the charged boson gas model as a theory of high Tc superconductivity[^2]' --- Introduction ============ The idea of superconductivity by tightly bound electrons has a long history of about 60 years, beginning with Schafroth [@1], even if his idea was overwhelmed by the emergence of BCS theory [@2] which appeared shortly after.
Schafroth concluded that the charged boson gas should be a superconductor of Type 1. However a correction was made by Friedberg et al. [@3] to conclude that the model should exhibit superconductivity of Type 2. The works [@4; @5] by Micnas [*et al.*]{} also asserted Type 2 superconductivity for a system of tightly bound electron pairs, even if their works dealt with the system from more diverse perspectives than just the Type 2 superconductivity. In particular, since high Tc superconductors (HTS’s) are of Type 2, these works show that the real space electron pair is more relevant to HTS’s than to conventional ones. On the other hand the Bose-Einstein condensation (BEC) temperature was the natural candidate for the transition temperature (Tc) in the charged boson gas (CBG) model of superconductivity. However it was many orders of magnitude higher than the Tc’s of conventional superconductors when calculated assuming that a sizable fraction of the carrier electrons were paired (cf. p. xii, [@6] or [@7]). This drawback of the CBG model is much less serious in the case of HTS’s since they have rather small densities of paired electrons. In fact there is the Uemura relation ([@8]) which asserts that for underdoped cuprates the Tc’s are proportional to $n_s/m^*(T\rightarrow 0)$, where $n_s$ is the superfluid density, $m^*$ the effective electron mass and $T$ the temperature. This relation has been regarded by some as implying that the Tc of an HTS is closely related to the BEC of real-space pairs. In fact Uemura himself, based on the observation that the 3-dimensional BEC temperatures are only 4-5 times greater than the Tc’s in the case of underdoped cuprates, predicted that the Tc’s can be properly understood in terms of BEC when the two-dimensional aspect is taken into account together with some other effects ([@9]).
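The ideal-gas BEC temperature invoked in this comparison, $T_\mathrm{BEC} = \frac{2\pi\hbar^2}{m k_B}\bigl(\frac{n}{\zeta(3/2)}\bigr)^{2/3}$, is easy to evaluate for electron pairs of mass $2m_e$. The densities in the sketch below are illustrative round numbers, not values from the references, but they reproduce the two regimes discussed: a condensation temperature far above conventional Tc's for a sizable pair density, and one a few times above cuprate Tc's for a dilute one.

```python
import math

hbar = 1.0546e-34   # J s
m_e = 9.109e-31     # kg
k_B = 1.381e-23     # J/K
zeta_32 = 2.612     # Riemann zeta(3/2)

def T_BEC(n, m):
    """Ideal-gas BEC temperature [K] for number density n [m^-3] and mass m [kg]."""
    return 2.0 * math.pi * hbar**2 / (m * k_B) * (n / zeta_32) ** (2.0 / 3.0)

m_pair = 2.0 * m_e
T_dense = T_BEC(1e27, m_pair)   # ~10^21 pairs/cm^3: ~1500 K, far above conventional Tc's
T_dilute = T_BEC(1e26, m_pair)  # ~10^20 pairs/cm^3: ~300 K, a few times cuprate Tc's
```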
One may suspect that the partial successes represented by [@3; @4; @5] and [@8] might indicate that the CBG model itself is the right framework for high Tc superconductivity rather than a mere approximation to some other future successful theory. In this paper we will construct a model for superconductivity based on tightly bound electrons. The bound electrons are provided by Hypothesis B in §2.1 below. At this point the author would like to warn the reader that the hypothesis seemingly does not allow a binding mechanism within the known first principles. The only excuse for this audacity, for the time being, is that it allows a model for superconductivity as in §2.2 below. He also would like to mention that the hypothesis is stated in a minimal way and only qualitatively. Therefore it is impossible for §2.2, even if it is the core of the paper, to be a theory with the power to explain and predict properties of HTS’s in detail. Such a feat is possible only if the hypothesis can be stated quantitatively, which in turn is possible only after the hypothesis has been shown to be real by some experiments. One may say that §2 as a whole is the central part of this paper. In §3 we estimate the excess energy of the bound electron pair compared to two free electrons and conclude that it is much less than $32\, \rm eV $. In §4.1 we list some theoretical works which are based on tightly bound electrons. They are independent of a specific binding mechanism while appearing closely related to the experimental facts. We also discuss in §4.2, 3 the most conspicuous aspect of our model, namely that it allows both a superconductor with a Fermi surface and one without. The most direct experimental support of our model will come from a resonance in a low energy electron-electron scattering as in §5.1 below. In §5.2 we propose a photoemission on HTS’s which may support §2.1, 2 below. A summary and outlook is given in §6.
A model of superconductivity ============================ In this section we construct a model for an unconventional superconductor based on a hypothesis, which states that there is a bound state of electrons as given below. The hypothesis is seemingly unrealistic and is stated only qualitatively and in a minimal way. Still in §2.2 we derive a condition for a solid to have the so-called [*apparent Fermi surface*]{}, which gives rise to the superconductivity in our model. The model allows both superconductivity without a (usual) Fermi surface and superconductivity with one. We may say that §2.2 is the core which gives meaning to the rest of the paper. In §2.3 we discuss exclusively the case when there is a Fermi surface, since the binding of electrons is not stable in that case. A few extra issues arising from the model are discussed in §2.4. The hypothesis -------------- We state the hypothesis of a bound state of electrons as follows: > [**Hypothesis B.**]{} There is a bound state of two electrons which is short-lived in the free space and has a size comparable to that of the electron pairs in an HTS. To be short-lived in free space, the bound state should have a larger energy than the two free electrons, which we omitted from the statement to avoid redundancy. That the bound system has a finite size implies that it has an intrinsic structure which can be taken into account when one considers its interaction with the lattice or with any other system at short distance. Note that Hypothesis B is truly a far-fetched one. We will not attempt to provide the microscopic binding mechanism. It will probably demand an extraordinary idea to provide the mechanism within the known first principles. Such mechanisms as polaron, exciton and spin fluctuation etc., which depend on the existence of the lattice and/or the itinerant electrons, are irrelevant to a binding in free space. The magnetic field of the electrons accompanying the spin may never overcome the Coulomb repulsion.
If one still considers exploiting the hypothesis to discuss superconductivity, he or she is disregarding, even if only temporarily, the [*principle*]{} that the known first principles are sufficient for the discipline of condensed matter physics. On the other hand we note that the bound state described by Hypothesis B has a property which constitutes a necessary condition, even if not a sufficient one, for it not to have been easily noticed. That is, if the lifetime is short enough the process of its formation and decay cannot be easily distinguished from the usual scattering of two electrons. Also we argue in §3 below that the excess energy of the bound state should be less than $32\, {\rm eV}$. Assuming this upper bound is valid, the bound state could not have been noticed, by means of a resonance, in the myriad of high energy electron-electron scattering experiments. The superconductor ------------------ ### The stability condition To claim any relevance of Hypothesis B to superconductivity, we need to see first of all how the bound state may exist stably in a solid. We begin by noting that there is a fundamental constant implied by our hypothesis: > $E_e >0$ denotes the excess energy of the bound electron pair of Hypothesis B in free space relative to two free electrons. In fact there might be more than one bound state of two electrons if one ever exists (see for instance §5.1.2 below). However $E_e$ in the above refers to the smallest one. The smallest value of $E_e$, not the larger ones, is most likely to be the one relevant to superconductivity. Here and from now on the term ‘bound electrons’, ‘bound electron pair’ or ‘bound 2-electron system’ will mean the one given by Hypothesis B which has the smallest excess energy, denoted by $E_e$. Furthermore we define $E_i$ as follows: > $E_i $ is the increase of the intrinsic energy of the bound 2-electron system originating from the distortion of its structure by putting it in a specific lattice.
We expect that $E_i > 0$. Ultimately the energy values $E_0<0$ and $E_t<0$ which we define as follows will play the most important roles: > $E_0$ denotes 2 times the energy of the lowest unoccupied electron state in the solid. > > $E_t$ is the total energy of the lowest state in the solid of the bound electron pair. To make the situation simpler we assume the absolute zero temperature in the definition of $E_0$ in the above. Also to make the meaning of $E_t$ clearer we introduce the energy $E_s <0$ as follows: > $E_s$ is the energy of the lowest state of the bound 2-electron system in a specific lattice, which is the sum of its electric potential in the lattice and its center-of-mass kinetic energy. Now we may write $E_t=E_s+E_i+E_e$. One may expect that $E_s$ and $E_i$ may vary greatly from one solid to another. Note that $E_t$ depends only on $E_s$ and $E_i$ since $E_e$ is a constant. Being a boson, the bound 2-electron system is not limited by the Pauli exclusion principle. Therefore it is possible in some solids that $E_s$ is significantly lower than $E_0$ and $E_i$ is kept at some small enough value while $E_e$ is a small enough constant. Then indeed it may happen that $E_t <E_0$. If this inequality holds and the temperature $T$ is low enough ($kT \ll E_0-E_t$) then the bound 2-electron system should be stable in the solid: If the bound electron pair which is in the $E_t$ energy state disintegrates, the two electrons should occupy states whose energy is greater than or equal to $\frac{1}{2}E_0$. This may not happen, since otherwise the energy of the two-electron system would have increased by at least $E_0 -E_t$. This mechanism is similar to the one by which a neutron is stable in a nucleus while it is unstable in free space. Thus we conclude that > If the inequality $E_t<E_0$ holds and the temperature is low enough then the bound 2-electron system may exist stably in the solid.
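The bookkeeping of this subsection can be condensed into a few lines; all numbers below are illustrative placeholders (the text fixes none of them at this point), with "$kT \ll E_0 - E_t$" rendered as a factor-of-ten margin:

```python
def is_stable(E_s, E_i, E_e, E_0, kT):
    """Stability of the bound pair in a solid: E_t = E_s + E_i + E_e must lie
    below E_0, with the thermal energy well below the gap E_0 - E_t."""
    E_t = E_s + E_i + E_e
    return E_t < E_0 and kT < 0.1 * (E_0 - E_t)

# illustrative values in eV: deep E_s, small lattice distortion E_i, modest excess E_e
stable = is_stable(E_s=-40.0, E_i=2.0, E_e=10.0, E_0=-8.0, kT=0.025)   # True
shallow = is_stable(E_s=-15.0, E_i=2.0, E_e=10.0, E_0=-8.0, kT=0.025)  # False: E_t = -3 > E_0
```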
### The location of $\frac{1}{2}E_t$ in the band structure Consider a solid at absolute zero temperature and assume the inequality $E_t \leq E_0$ holds. If there are electrons in states with energies above $\frac{1}{2} E_t$, then they should bind pairwise to be in the apparently lower $\frac{1}{2} E_t$ energy state. That is, the following holds. > If the inequality $E_t \leq E_0$ holds, then there cannot be any electron in states higher than $\frac{1}{2}E_t$. Therefore the bound electron pairs [*appear*]{} as if they are electrons concentrated in one of the highest occupied states with the energy $\frac{1}{2} E_t$. In what follows a band means a continuum of electron states, regardless of whether occupied or not and regardless of its origin. This usage of the term appears widely applicable. For instance our terminology is not affected by the breakdown of conventional band theory in such systems as Mott insulators ([@10]). Now assume that $\frac{1}{2} E_t$ is the same as the energy of an electron state in a partially filled band, still keeping the assumption of zero temperature. Since the band is partially filled there are electrons with energies infinitesimally close to $\frac{1}{2} E_0$. If the inequality $E_t <E_0$ held, all of those electrons with energy $E$, $\frac{1}{2}E_t<E\leq \frac{1}{2}E_0$, would have bound pairwise and fallen into a state with apparent energy $\frac{1}{2} E_t$. Thus the strict inequality is impossible and we must have $E_t \geq E_0$. However the inequality $E_t > E_0$ implies that the bound electrons cannot exist in the solid. Therefore we conclude that the bound electrons exist in the solid if and only if $E_t =E_0$. In this case the bound electrons are not stable but in an equilibrium with the itinerant electrons with energy near $\frac{1}{2} E_0$. The inequality $E_t <E_0$ may hold only if the following two conditions are satisfied: (1) All the bands which contain states with energies lower than $\frac{1}{2} E_t$ are filled.
(2) All the bands which contain states with energies higher than $\frac{1}{2} E_t$ are unoccupied. Note that the inequality $E_t <E_0$ may hold even if there is no bound electron pair. For the bound electrons to exist, there should have been some electrons in states above $\frac{1}{2} E_t$ if it were not for the bound state of electrons. The states above $\frac{1}{2} E_t$ have become empty because the electrons in those states have bound pairwise to be in the apparently lower $\frac{1}{2} E_t$ energy state. Only in this case may the bound electrons exist and be stable in the solid. We may say that the inequality $E_t <E_0$ may hold only when $\frac{1}{2} E_t$ lies in the energy gap below which all bands are filled and above which no band is occupied. The discussion so far has led us to the following conclusion, in which we assume the absolute zero temperature: > [**Condition S.**]{} If $E_t \leq E_0$, the bound 2-electron systems may exist in the solid. If they exist, they appear as electrons concentrated in the highest occupied state with the energy $\frac{1}{2} E_t$. Condition S above can be divided further into two conditions as follows. > [**Condition S1.**]{} $E_t = E_0$ if and only if the bound electrons are in an equilibrium with the itinerant electrons. These two conditions are equivalent to the condition that $\frac{1}{2} E_t$ lies in a partially filled band and the bound electron pairs exist in the solid.\ > [**Condition S2.**]{} $E_t <E_0$ if and only if the bound two-electron systems are stable in the solid. These two conditions are equivalent to the condition that $\frac{1}{2} E_t$ lies in the energy gap of the band structure below which all electron states are occupied and above which no state is occupied. Note that Condition S2 above does not assert the existence of bound electrons in the solid in concern. For them to exist under the strict inequality, the upper bands should provide the electrons to form the bound pairs. The upper bands should become empty by doing so.
### The apparent Fermi surface: a model for superconductivity Note that the assumption of finite size for the bound 2-electron system in Hypothesis B has not played any role in reaching Condition S in 2.2.2 above. Assume the size is zero or can be regarded as zero on the atomic scale. Then the lowest state for the bound 2-electron system is the lowest state, in the most massive atom in the solid, of a point particle with twice the charge and with more than twice the mass of an electron. In this case there is no chance that the bound electrons can be mobile and responsible for the superconductivity. In fact bound electrons with zero size would make the known atomic phenomena impossible, assuming $E_e$ is small enough. It is the assumption of finite size in Hypothesis B that allows the bound electrons any chance to be mobile. If the inequality $E_t \leq E_0$ holds the bound electrons may exist, as observed in the above. If they exist and are mobile, it seems appropriate for us to say that the bound electrons have formed an [*apparent Fermi surface*]{}. In particular, as observed in the beginning of 2.2.2 above, the bound electrons appear to have accumulated in one of the highest occupied states. If the apparent Fermi surface exists, the solid is expected to be a Type 2 superconductor in view of the works [@3] and [@4; @5]. Thus we have a model for an unconventional superconductor of Type 2. Superconductivity under the equality $E_t =E_0$ ----------------------------------------------- Under the equality $E_t=E_0$ the bound pairs are not stable but in an equilibrium with the itinerant electrons, as stated in Condition S1 above. Thus the system in fact cannot comfortably be approximated by a CBG in this case. Note that Condition S1 implies that under the equality $E_t=E_0$ there is a Fermi surface in addition to the apparent one and that the Fermi level is $\frac{1}{2}E_t= \frac{1}{2}E_0$.
On the other hand the inequality $E_t < E_0$ in Condition S2 implies that there can be no Fermi surface. Therefore the existence of many HTS’s with Fermi surfaces, together with the assumption that our model describes real HTS’s, implies that superconductivity is possible even when $E_t=E_0$. The argument in the above relies on the assumption that our model describes real HTS’s. In fact we may proceed without this assumption. Note that under the equality $E_t =E_0$ the bound electron pairs, which form the apparent Fermi surface, are in an equilibrium with the itinerant electrons of the real Fermi surface. This means that the bound pairs last only for random finite time intervals. Both works [@12; @13] deal with the superconductivity which arises when the bound state of two electrons is tight and has a random finite lifetime. In particular the work [@13] shows that the $T_c$ of such a system can be much higher than that of the BCS theory. A few remarks concerning the model ---------------------------------- ### Dependence of $E_t$ on the density of bound electrons The electrons in states above $\frac{1}{2} E_t$ should pairwise bind and fall into a state with apparent energy $\frac{1}{2} E_t$. Therefore the density of bound electrons could be unrealistically high if $\frac{1}{2} E_t$ happened to be a low value. However it is reasonable to assume that $E_t$ depends not only on the lattice structure but also on the density of the bound electrons themselves. That is, as the density rises, $E_t$ also rises. This makes even better sense when we consider the fact that in real HTS’s the size and the density of the pairs together imply that there are overlaps among the pairs that cannot be ignored (cf. §2, [@9]). By assuming that $E_t$ rises as the density increases, an unrealistically high density of bound electrons can be prevented and the model can be made compatible with the known small values of densities of electron pairs in real HTS’s.
### Bound electron pairs above $T_c$ It is clear that the density of bound electrons should be large enough at the $T_c$ for the superconductivity to be possible. Since it is reasonable to assume that the density of bound electrons depends on temperature continuously in our model, bound electrons are expected to exist at least in some small temperature range above $T_c$. In fact it is widely believed that the electron pairs are preformed above $T_c$. Some think that the electron pairs may exist up to $T^*$, the temperature at which the pseudogap begins to appear ([@14]), or up to some other temperature $T_{\rm pair}$ such that $T_c < T_{\rm pair} < T^*$ ([@15]). Since the measured $T^*$’s are below $300\, {\rm K}$, this may set the upper limit. However we note that the origin of the pseudogap is not a settled issue ([@16]). In our model the formation of bound electrons is not directly related to lowering the temperature. It may be the case that once the condition $E_t \leq E_0$ is met, say, at zero temperature, then the condition may persist in the solid at any temperature as long as the lattice structure is intact. ### A candidate for a unifying theory? The physical properties of known HTS’s are quite diverse and often in stark contrast. For instance the overdoped cuprates have a fully developed Fermi surface while the underdoped ones have no Fermi surface, or at best one whose existence is prone to debate. The parent compound of cuprates is a Mott insulator while it is a metal for iron pnictides. However Condition S in the above is not specifically tied to any of these properties. Therefore the possibility is open to our model that it may explain all the diverse HTS’s. Of course it is more likely that the model may not explain even a single HTS, considering the radical nature of Hypothesis B.
In §3.3 below, we propose that a necessary condition for a solid to be an HTS is that its chemical composition is such that a nontrivial portion of it consists of ions whose cores are relatively well-exposed. This is the only proposal, even if a vague one, which our model currently provides for a solid to be an unconventional superconductor. In fact this is to make $E_s$ low enough. We do not have at the moment any clue whatsoever as to what makes $E_i$ small. A rough estimation of the excess energy ======================================= Note that our model of superconductivity can be real only if Hypothesis B is so. Furthermore it can be realistic only if the excess energy $E_e>0$ is small enough as to allow the inequality $E_t \leq E_0$ in some solids. Recall that $E_e$ is a universal constant in our model, by which the energy of the bound electron pair in free space is greater than that of two free electrons (§2.2.1 above). In this section we propose an upper bound for $E_e$. Recall also $E_s$, $E_i$ and the relation $E_t=E_s+E_i +E_e$ from §2.2.1. In particular we are assuming that $E_i >0$. Also recall Condition S in §2.2.2 which demands the inequality $E_t \leq E_0$ for superconductivity. Therefore we have that $E_e \leq E_0-E_s-E_i$. Thus $E_0 -E_s$ is an upper bound for $E_e$. Note that the upper bound $E_0 -E_s$ is better if the value $E_i>0$ is smaller. However we do not know how to estimate $E_i$. Thus it is difficult to tell how good the upper bound $E_0-E_s$ is for $E_e$. We will put $E_0=2\times(-4\, {\rm eV})=-8\, {\rm eV}$, where $4\, {\rm eV}$ is chosen as the typical work function of a metal. Then the upper bound depends only on the estimation of $E_s$. Basic facts and assumptions --------------------------- First of all we assume the interaction of the bound electrons with the lattice is electric. Then we observe the following facts: > \(1) The bound electron pairs are apparently mobile in HTS’s.
> > \(2) The electrons in the bound state are not subject to the same constraints on their allowed states as the itinerant electrons; for instance, the Pauli exclusion principle does not apply. > > \(3) The inner space of an atom is positively charged and apparently provides potential energy to a point particle with charge $-e$ (by $e$ we mean the charge of a proton). Let $Z$ denote the atomic number and $r$ the distance from the nucleus. Then the potential energy is approximately $- \kappa{e\over r}$ near the outermost region of the atom and approximately $- \kappa{eZ\over r}$ near the nucleus, where $\kappa$ is an appropriate constant. Recall that $E_s$ is defined in §2.2.1 above as the sum of the potential energy of the bound 2-electron system and its center-of-mass kinetic energy. We take the mobility condition in (1) above to mean that the kinetic energy is zero, since a bound electron pair is a boson. Then only the potential energy contributes to $E_s$. In fact we will look for a lower bound of $E_s$ in order to obtain an upper bound of $E_e$, since we exploit the inequality $E_e<E_0 -E_s$. An estimation ------------- For a calculation of the lower bound for $E_s$, let us interpret the mobility condition as follows: > The wave function of the bound electron system is such that the electrons are more or less evenly distributed throughout the space occupied by the solid, regardless of whether it is the inner space of the atoms or the outer space. We also consider a solid specified as follows: > \(1) The lattice structure is simple cubic with edge $\rm 0.3\, nm$. > > \(2) There is an ion with charge $e$ at each vertex and the radius of the ion is $\rm 0.15\, nm$. > > \(3) In the inner space of the ion the potential energy of a particle with charge $-e$ at distance $r$ from the nucleus can be approximated by $-\frac{\epsilon}{r^2}$ with $\epsilon=0.22\, \rm eV\cdot nm^2$. > > \(4) In the outer space the potential energy of a particle of charge $-e$ is homogeneously $-9.6\, \rm eV$.
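Given specifications (1)–(4) and the even-distribution assumption, the lower bound for $E_s$ can be sketched numerically. The short script below (a sketch, not part of the original derivation) averages $-\frac{\epsilon}{r^2}$ over the ion volume, weights by the ion volume fraction of the simple-cubic cell, and doubles for the two bound electrons:

```python
import math

# Parameters from specifications (1)-(4) above.
eps = 0.22          # eV nm^2, coefficient of the -eps/r^2 inner potential
a = 0.15            # nm, ion radius
edge = 0.3          # nm, simple-cubic lattice constant
V_outer = -9.6      # eV, homogeneous outer-space potential energy

# Fraction of the cell occupied by the ion (one ion per cubic cell).
f_ion = (4.0 / 3.0) * math.pi * a**3 / edge**3          # ~0.52

# Volume average of -eps/r^2 over a sphere of radius a is -3*eps/a^2.
inner_avg = -3.0 * eps / a**2                           # ~ -29 eV per electron

# Two electrons, each evenly distributed over inner and outer space.
E_inner = 2.0 * f_ion * inner_avg
E_outer = 2.0 * (1.0 - f_ion) * V_outer
E_s = E_inner + E_outer
print(round(E_inner), round(E_s))   # -31 -40, the values quoted in the text
```

The result reproduces the $\rm -31\, eV$ inner-space contribution and the lower bound $E_s = \rm -40\, eV$ stated in the text.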
We have $-\frac{\delta}{r}$ for the potential energy of an electron at distance $r$ from a proton, where $\delta = \rm 1.44\, eV\cdot nm$. The constant $\epsilon$ in (3) above is in fact chosen so that $-\frac{\epsilon}{r^2}=-\frac{\delta}{r}$ when $r=\rm 0.15\, nm$. If $Z$ is the atomic number of the ion, the inequality $-\frac{Z\delta}{r} \leq -\frac{\epsilon}{r^2}\leq -\frac{\delta}{r}$ holds when $\frac{a}{Z} \leq r \leq a$, where $a=\rm 0.15\, nm$. Thus $-\frac{\epsilon}{r^2}$ is a reasonable choice at least in the interval $\frac{a}{Z} \leq r \leq a$, and since our goal is only a rough lower bound for $E_s$, the choice in (3) can be justified on the whole interval $0< r \leq a$. The homogeneous potential energy $\rm -9.6\, eV$ for the outer space in (4) also makes good sense: (i) $-\frac{\epsilon}{a^2}\approx -9.6\, \rm eV$; (ii) the itinerant electrons present in the outer space will make the potential nearly homogeneous; (iii) $9.6\, \rm eV$ is close to the sum of the typical work function $4\, \rm eV$ and the typical Fermi energy $\rm 4\, eV$ of a metal. Averaging $-\frac{\epsilon}{r^2}$ over the volume of an ion gives $-\frac{3\epsilon}{a^2} \approx -29\, \rm eV$ per electron; weighting by the ion volume fraction $\frac{4\pi a^3/3}{(0.3\, {\rm nm})^3} \approx 0.52$ and doubling for the two electrons, we obtain $\rm -31\, eV$ as the contribution of the inner space. Adding the contribution of the outer space, $2 \times 0.48 \times (-9.6\, {\rm eV}) \approx -9\, \rm eV$, we obtain $E_s =\rm -40\, eV$ as a lower bound. Note that this calculation gives a larger negative value for the lower bound of $E_s$ if the ions are more densely packed, i.e. if the ratio of the inner space of the ions to the total volume of the solid is greater. The values $E_0=-8\, {\rm eV}$ and $E_s= -40\, {\rm eV}$ give $E_0-E_s=32\, {\rm eV}$, and we conclude that $E_e< 32\, {\rm eV}$. The screening effect -------------------- In fact the potential $-\frac{\epsilon}{r^2}$ with $\epsilon=0.22\, \rm eV\cdot nm^2$ in (3), §3.2 above cannot be a good approximation.
For instance, the deep inner space of an atom with a large atomic number may not provide a point particle with charge $-e$ as large an energy gain as $-\frac{\epsilon}{r^2}$ implies. This is because the screening of the positive charge of the nucleus will raise the energy levels of all the outer electrons, so that some portion of the energy gained by approaching the nucleus is compensated. Taking the screening effect into account, it is unclear, and appears not to be known, to what extent a point particle with charge $-e$, which need not be an electron, will feel an attractive force toward the nucleus inside an atom. In any case, $32\, {\rm eV}$ based on the potential $-\frac{\epsilon}{r^2}$ appears to be an overly generous upper bound for $E_e$, even in the hypothetical solid of §3.2 above. The screening effect seems to imply that, for $E_s$ to be low enough, the chemical composition of the solid should be such that a nontrivial portion of it consists of ions whose inner cores are relatively well-exposed. Note that the calculation of $E_s$ in §3.2 above depends on the ratio of the inner space of the ions to the total volume of the solid, which is closely related to the atomic number density, and this does not vary greatly from one solid to another. Furthermore, the lattice structure of the solid in §3.2 above is realistic enough. Thus the estimate $E_e< 32\, \rm eV$ above appears to represent a quite generous upper bound for $E_e$. On the plausibility of the model ================================ We do not know the intrinsic structure of the bound electron pair given by Hypothesis B. Moreover, we know neither the interaction between bound electron pairs (see §2.4.1 above) nor that between a pair and an itinerant electron. It is therefore impossible at present to construct a sufficiently sophisticated theory based on our hypothesis.
Although there are other theories based on tightly bound electrons, in some of them the binding mechanism is provided by polarons, excitons, spin fluctuations, etc., which are clearly irrelevant to the binding of our model. In addition, an argument that assumes the pairing is strictly a Fermi liquid phenomenon near the Fermi surface is not compatible with our model either: note that $E_s$, being the sum of the electric potential energy and the center-of-mass kinetic energy of the bound pair, must be lower than the Fermi level (in case there is a Fermi surface) by $E_e + E_i$. Moreover, a superconductor without a Fermi surface is allowed in our model. We begin this section with an overview of some works which considered superconductivity by tightly bound electrons, restricting attention to those that resort neither to a specific binding mechanism nor to the assumption that the pairing is a Fermi liquid phenomenon. In §4.2, 3 below we focus on the fact that in our model superconductivity originates from the existence of the apparent Fermi surface, while the real Fermi surface is optional. Both options are considered in relation to real HTS’s. Tightly bound electrons in literature ------------------------------------- As noted in the introduction, type II superconductivity of the CBG has been shown by Friedberg et al. [@3; @13] and also mentioned by Micnas et al. [@4; @5]. The Meissner effect of the CBG system has been discussed in §VI of [@12]. The density of pairs appears closely related to $T_c$; this is discussed in §I.[**C**]{} of [@13]. Even if the discussion of [@13] is not backed by a sufficiently rigorous and general argument, it at the very least makes the CBG model appear compatible with the $T_c$’s of real HTS’s. We note that both [@12; @13] assume the presence of itinerant electrons together with the bosons which are electron pairs. This is very similar to the case $E_0=E_t$ in our model (in particular see §1.[**A**]{} of [@12] and §1.[**A**]{} of [@13]).
Moreover, the Hall coefficient of an HTS is generally known to be positive in the normal state, and the sign changes abruptly from negative to positive at the critical doping (cf. §3.5 of [@17] and the references therein). A remarkable calculation [@18] shows that for a hard-core boson system at half filling, assuming a planar rectangular lattice structure, the Hall conductivity changes sign abruptly. On the other hand, it is well known that the resistivity of cuprate superconductors near optimal doping depends linearly on temperature in the normal state. The work [@19] illustrates this linearity, again by the hard-core boson model near half filling. There are certainly many more works than those mentioned above which studied the consequences of assuming tightly bound electron pairs while exploiting neither any specific binding mechanism nor the Fermi liquid constraint. In particular there are theoretical works ([18–27]) which studied the properties of cuprate superconductors based on lattice bosons of charge $-2e$ (see §VIII.B of [@19]). Many arguments in these works invoke no specific binding mechanism and no Fermi liquid constraint. The question of Fermi liquid in underdoped cuprates --------------------------------------------------- Apparently the existence of a Fermi surface, and therefore that of a Fermi liquid, in the underdoped cuprates has been established by quantum oscillations [@28; @29; @30]. One should note, however, that this holds only under magnetic fields $H >H_{irr}$, where $H_{irr}$ denotes the irreversibility field. At zero magnetic field, Fermi arcs are known from ARPES to exist in the underdoped regime in the temperature range $T_c < T < T^*$. Since the Fermi surface of a two-dimensional Fermi liquid should form a closed loop in momentum space, there has been a debate regarding their origin. Moreover, there is a study [@31] which concludes that the Fermi arcs are in fact not related to true Fermi liquids.
Apparently there is no decisive evidence for the existence of a Fermi surface at zero magnetic field in the underdoped regime (cf. [@32]). Even if the Fermi arc forms a closed loop in some Bi-based cuprate superconductors at some specific doping levels in the underdoped regime ([@33]), there is a study ([@34]) which shows that the loop does not necessarily imply a Fermi liquid. That is, according to [@34], the loop may be only ‘apparently’ a Fermi surface. Thus, considering the works [@31; @32; @34], there is a good chance that the case described by the inequality $E_t<E_0$ (Condition S2 in §2.2.2 above) has been realized in underdoped cuprates. However, this requires supporting arguments within our model which explain the Fermi arc and, most of all, the Fermi-surface behavior that emerges in quantum oscillations. Unfortunately such arguments are not available at the moment, and this will remain the case, even if the model turns out to be realistic through experiments such as those proposed in §5 below, until the model is mature enough. HTS’s with Fermi surfaces ------------------------- If a Fermi surface exists, the Fermi level should be equal to the apparent level $\frac{1}{2}E_t$ of the apparent Fermi surface. That is, Condition S1 in §2.2.2 above should apply. Note that the apparent Fermi surface is the one responsible for the superconductivity. If our model represents HTS’s correctly, this also explains the observation of a Fermi surface below $T_c$ (cf. [@35; @36]) in some HTS’s; note that in conventional superconductors the Fermi surface is destroyed by the emergence of superconductivity. We would like to mention also that the apparent Fermi surface might affect ARPES. In particular, the bound electrons may constitute a source of the most energetic electrons in photoemission. This means that, if our model represents HTS’s correctly, then some aspects of ARPES data on the Fermi surfaces of HTS’s cannot be properly understood without taking the bound electrons into account.
We note that there are studies such as [@35; @37; @38] which report some anomalies in the Fermi surfaces. Experiments to test the model ============================= The first experiment directly concerns Hypothesis B, on which our model is based. It looks for a resonance in the formation of the bound state of two electrons. However, the resonance might not be detectable by electron-electron scattering if the cross section of bound-pair formation is too small. The second experiment may depend less on the cross section; it may support the arguments of §2.2 above as well as Hypothesis B. A positive result in either of the two experiments would strongly support the arguments in §2.1, 2 above. It would also support the theoretical works based on tightly bound electrons discussed in §4.1 above. Low energy electron-electron scattering --------------------------------------- ### Under the background noise Let us consider an electron-electron beam scattering arrangement. If Hypothesis B in §2.1 above is real, one may expect a resonance in the formation of the bound state when the kinetic energy of each beam is $\frac{1}{2} E_e$. Recall that we proposed an upper bound for $E_e$ by the inequality $E_e < 32\, {\rm eV}$ in §3.3 above. The bound state will shortly decay into two free electrons. We do not know whether or not the decay is accompanied by the emission of photons. In any case the event cannot easily be distinguished from ordinary electron-electron scattering, which means that there is a strong background noise. The resonance may not be detected above this noise if the cross section for the formation of bound pairs is too small. One may reduce the noise of ordinary scattering to some extent by concentrating on events in which the two electrons are scattered off each other in directions perpendicular to the beams. This is because electrons are fermions.
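The gain from the perpendicular geometry can be made quantitative with the standard identical-particle cross sections. A minimal numeric sketch (assuming equal direct and exchange amplitudes, which holds at a scattering angle of 90 degrees in the center-of-mass frame, and using the textbook unpolarized-fermion formula):

```python
# At 90 degrees the direct and exchange amplitudes coincide: f(theta) = f(pi - theta) = f.
# Unpolarized identical fermions: |f|^2 + |f'|^2 - Re(f f'*);  spinless bosons: |f + f'|^2.
f = 1.0 + 0.5j   # an arbitrary complex scattering amplitude (illustrative value)

fermion = abs(f)**2 + abs(f)**2 - (f * f.conjugate()).real
boson = abs(f + f)**2
print(fermion / boson)  # 0.25: the fermionic noise is a quarter of the bosonic value
```

The ratio 1/4 is independent of the amplitude chosen, which is the reduction factor quoted below.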
In fact some graduate texts on quantum mechanics deal explicitly with fermion-fermion scattering; they show that with this arrangement the noise is reduced to a quarter of the value it would have if electrons were bosons instead of fermions. The resonance can be made more conspicuous by concentrating on events in which the two electrons are scattered in opposite directions with the same kinetic energies. However, this will work only if a nontrivial portion of the bound pairs disintegrate without significant electromagnetic radiation. ### Eliminating the noise In principle the noise in §5.1.1 above can be made to vanish by taking the spin states into account in addition to the momenta. However, this method is useful only under the following assumptions: > \(1) A large fraction of bound electron pairs decay without any significant accompanying emission of photons. > > \(2) The possibility that the two electrons from a decay are in the same spin state is not significantly suppressed. The second assumption might appear the more suspicious, since the two electrons in a bound state, as fermions of the same species, must have spin states opposite to each other. Therefore, if both (1) and (2) above are satisfied, this will be a surprise in itself and important information regarding the structure of the bound 2-electron system. The resonance energy for same-spin pairs is expected to be higher than when the spins are opposite, but on the same scale, because the change in the binding energy due to parallel spins should not be too large: the size of the bound pair exceeds the atomic scale by almost an order of magnitude. The vanishing of the noise can be achieved as follows. Assume we have arranged the two beams so that they are polarized respectively upward and downward, with the $z$-axis chosen perpendicular to the beams.
Then we concentrate on events in which the electrons are scattered elastically and perpendicularly to the beams. Furthermore, let us choose the beam line as the $x$-axis, and consider in addition only the case when both scattered electrons are in the spin-up (or spin-down) state with respect to the $x$-axis. Then the contribution of ordinary scattering to this event should vanish. In fact this vanishing is achieved even when both of the beams are polarized upward (or downward) with respect to the $x$-axis, which illustrates somewhat dramatically the fact that spin is not conserved in the scattering of identical fermions. The proof of this vanishing is as follows. Let $R$ denote the reflection of space with respect to the $yz$-plane and let ${\rm\bf R}$ be the corresponding quantum transformation. Let $|p, \pm_z\rangle$ denote free electron states, where $p$ is the $4$-momentum. Then it is straightforward to see that $${\rm\bf R}|p, \pm_z\rangle =i |Rp, \mp_z\rangle.$$ Also it is not difficult to see that $${\rm\bf R}|p,\pm_x\rangle=\pm i|Rp, \pm_x\rangle.$$ Let $p_1, p_2$ denote the initial electron 4-momenta, which are related by $p_2 =Rp_1$, and let $p_1', p_2'$ be the final electron 4-momenta, which satisfy $Rp_i'=p_i'$, $i=1,2$. Consider a Feynman diagram FD and let $S_{\rm FD}$ denote the scattering operator represented by FD. Note that $S_{\rm FD}$ is invariant under $R$ (cf. p. 76, [@39]). Now we have: $$\begin{aligned} \lefteqn{\langle p_1', +_x; p_2',+_x| S_{\rm FD} |p_1, +_z; p_2, -_z \rangle}\\ && =\langle p_1', +_x; p_2',+_x|{\rm\bf R}^* S_{\rm FD}{\rm\bf R}|p_1, +_z; p_2, -_z \rangle\\ && =\langle p_1', +_x; p_2',+_x|S_{\rm FD}|p_2, -_z; p_1, +_z \rangle.\end{aligned}$$ The last expression is the contribution of the Feynman diagram obtained by exchanging the initial electrons. Since electrons are fermions, the contributions of the two Feynman diagrams cancel each other completely.
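The two spin-transformation identities used in this proof can be verified with elementary linear algebra. A minimal sketch, representing the spin part of ${\rm\bf R}$ as $i\sigma_x$ (one phase convention consistent with the identities quoted in the text):

```python
import numpy as np

# Spin-1/2 part of the reflection R through the yz-plane, taken (up to a
# global phase, an assumed convention) to act as i * sigma_x.
R_spin = 1j * np.array([[0, 1], [1, 0]], dtype=complex)

up_z = np.array([1, 0], dtype=complex)
down_z = np.array([0, 1], dtype=complex)
up_x = (up_z + down_z) / np.sqrt(2)
down_x = (up_z - down_z) / np.sqrt(2)

# R |p, +_z> = i |Rp, -_z>   and   R |p, -_z> = i |Rp, +_z>
assert np.allclose(R_spin @ up_z, 1j * down_z)
assert np.allclose(R_spin @ down_z, 1j * up_z)

# R |p, +_x> = +i |Rp, +_x>   and   R |p, -_x> = -i |Rp, -_x>
assert np.allclose(R_spin @ up_x, 1j * up_x)
assert np.allclose(R_spin @ down_x, -1j * down_x)
```

The momentum part ($p \mapsto Rp$) factors out, so these four checks confirm the phases on which the cancellation argument relies.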
Note that this cancellation should also work when both of the electrons are initially in $|+_x\rangle$ (or $|-_x\rangle$) spin states. Photoemission on HTS’s ----------------------- Consider the work function, which is $-{{1}\over{2}} E_t=-{{1}\over{2}} E_0 $ when there is a Fermi surface. In general, when only one of the bound electrons is emitted and the other enters the ${{1}\over{2}}E_0$ state, the work function is $-{{1}\over{2}} E_t + {{1}\over{2}} (E_0 -E_t)={{1}\over{2}} E_0 -E_t$. To be precise, if $E_t<E_0$ and $e_1$ denotes the energy of the highest filled state below ${{1}\over{2}} E_t$, then the smaller of $-e_1$ and ${{1}\over{2}} E_0 -E_t$ is the work function. In the rest of this subsection the work function will mean $-{{1}\over{2}} E_t$, which is none other than the usual one when the HTS in question has a Fermi surface. If the photon energy reaches $E_e +2\times$(work function), an extra channel of photoemission may open: the bound electron pair itself may be emitted and shortly disintegrate into two free electrons with additional momenta in opposite directions corresponding to kinetic energies ${\frac{1}{2}}E_e$. For instance, one may consider an arrangement in which a homogeneous light beam is directed perpendicularly onto a flat surface of an HTS, and a pair of electron detectors are located in the plane of the HTS surface, in positions opposite to each other with respect to the spot where the light beam is directed. One then counts the events in which two electrons arrive simultaneously, one at each of the two detectors, with the same energy. A peak (or sudden increase) in such events will signal that the photon energy has reached $E_e +2\times$(work function). Detection of this channel of photoemission can be difficult if the lifetime of the bound electron pair is too short or too long.
On the other hand, the electron pair is expected to have approximately zero momentum near the surface when the photon energy is close to the escape energy. Thus one may expect the electron pair to stay near the spot relatively long, so a somewhat long lifetime of the pair may not be a serious obstacle to the experiment. The kinetic energy of each electron in the above arrangement should be ${{1}\over{2}}E_e$ regardless of the type of HTS. Therefore, if this channel of photoemission is observable at all, it should be unmistakable. Note that there are many arrangements similar to the above by which one may look for electron pairs emitted into free space by photons. Summary and outlook =================== The assumption of short-lived bound electron pairs of finite size in free space (Hypothesis B) leads us to a model of unconventional superconductors. Under an inequality ($E_t \leq E_0$ in §2.2.2) and at zero temperature, the bound electrons accumulate in a single energy state which we call the ‘apparent Fermi surface’. We observed that the apparent Fermi surface appears to be among the highest occupied electron states. In fact §2.2 constitutes the core argument which gives meaning to the rest of the paper. For our model to be realistic, Hypothesis B should of course be real. In addition, the excess energy ($E_e$ in §2.2.1) should be small enough that $E_t \leq E_0$ may hold in some solid. We estimated that $E_e$ should be much less than $32\, \rm eV$ (§3). The inequality $E_t < E_0$ implies that the unconventional superconductor does not have a (usual) Fermi surface, while the equality $E_t = E_0$ implies that it has one (§2.2, 3). The former may correspond to underdoped cuprates, while the latter corresponds to any HTS with a Fermi surface (§4.2, 3). Low energy electron-electron scattering seems to be the most direct method to test Hypothesis B.
However, the cross section for the formation of bound electrons could be too small for the scattering experiment to work (§5.1). Photoemission on HTS’s may be another way to prove our model and, in particular, to verify Hypothesis B itself; this method may work regardless of the size of the cross section of pair formation (§5.2). The fate of this paper is subject to the results of the experiments proposed in §5, or possibly others yet to appear. Nevertheless, the arguments of §2 by themselves appear interesting to the author himself. If any of the experiments is actually performed and yields a positive result, it will mean that the main claims of the paper are correct. However, the theoretical understanding of high-$T_c$ superconductivity will still be only at a beginning stage. A portion of the vast experimental data on HTS’s can be exploited to determine the intrinsic structure of the bound state. We also need to understand the interaction between bound pairs, and that between a pair and an itinerant electron. Ideally it should be possible to estimate $E_i$ and $E_s$ by calculation when a specific lattice structure is given. Then one may attempt to build a detailed theory of high-$T_c$ superconductivity by introducing an appropriate quantum mechanical many-body theory. Hypothesis B, if it turns out to be real, most likely implies a new first principle. This new principle will be studied at first as the cause of the binding of electrons, but its meaning for physics in general will be virtually unknown territory awaiting inquisitive minds. [30]{} M.R. Schafroth, Phys. Rev. 100 (1955) 463–475. J. Bardeen, L.N. Cooper, J.R. Schrieffer, Phys. Rev. 106 (1957) 162–164. R. Friedberg, T.D. Lee, H.C. Ren, Ann. Phys. 208 (1991) 149–215. R. Micnas, J. Ranninger, S. Robaszkiewicz, Rev. Mod. Phys. 62 No. 1 (1990) 113–171. R. Micnas, S. Robaszkiewicz, T. Kostyrko, Phys. Rev. B 52 No. 9 (1995) 6863–6879. A.S.
Alexandrov, IOP Publishing, Philadelphia, 2003. M. Rabinowitz, T. McMullen, Appl. Phys. Lett. 63 (1993) 985–986. Y.J. Uemura et al., Phys. Rev. Lett. 62 (1989) 2317. Y.J. Uemura, J. Phys.: Condens. Matter 16 (2004) S4515–S4540. V.I. Anisimov, J. Zaanen, O.K. Andersen, Phys. Rev. B 44 (3) (1991) 943–954. J.R. Clow, J.D. Reppy, Phys. Rev. Lett. 16 (1969) 887–888. R. Friedberg, T.D. Lee, Phys. Rev. B 40 (1989) 6745–6762. R. Friedberg, T.D. Lee, H.C. Ren, Phys. Rev. B No. 7 (1990) 4122–4134. H.B. Yang, [*et al.*]{}, Nature 456 (2008) 77–80. T. Kondo, [*et al.*]{}, Nature Phys. 7 (2011) 21–25. V. Hinkov, [*et al.*]{}, Nature Phys. 3 (2007) 780–784. J.E. Hirsch, Phys. Scr. 80 (2009) 035702. N. Lindner, A. Auerbach, D. Arovas, Phys. Rev. B 82 (2010) 134510. N. Lindner, A. Auerbach, Phys. Rev. B 81 (2010) 054512. A. Mihlin and A. Auerbach, Phys. Rev. B 80 (2009) 134521. T. Kostyrko and J. Ranninger, Phys. Rev. B 54 (1996) 13105. A. Paramekanti, M. Randeria, T.V. Ramakrishnan, S.S. Mandal, Phys. Rev. B 62 (2000) 6786. H.J. Kwon, A.T. Dorsey, P.J. Hirschfeld, Phys. Rev. Lett. 86 (2001) 3875. E. Altman, A. Auerbach, Phys. Rev. B 65 (2002) 104508. M. Franz, A.P. Iyengar, Phys. Rev. Lett. 96 (2006) 047007. I.F. Herbut, M.J. Case, Phys. Rev. B 70 (2004) 094516. C.C. Homes, S.V. Dordevic, T. Valla, M. Strongin, Phys. Rev. B 72 (2005) 134517. N. Doiron-Leyraud, [*et al.*]{}, Nature 447 (2007) 565–568. N. Barišić, [*et al.*]{}, Nature Phys. 9 (2013) 761–764. S.E. Sebastian, [*et al.*]{}, Nature 454 (2008) 200–203. T.J. Reber, [*et al.*]{}, Nature Phys. 8 (2012) 606–610. A.D. LaForge, [*et al.*]{}, Phys. Rev. B 81 (2010) 064510. J. Meng [*et al.*]{}, Nature (London) 462 (2009) 335–338. P.D.C. King [*et al.*]{}, Phys. Rev. Lett. 106 (2011) 127005. J. Chang, [*et al.*]{}, Nat. Commun. 4 (2013) 3559. M. Platé, [*et al.*]{}, Phys. Rev. Lett. 95 (2005) 077001. V.P.S. Awana, [*et al.*]{}, J. Appl. Phys. 106 (2009) 096102. H. Castro, G. Deutscher, Phys. Rev. B 70 (2004) 174511. S.
Weinberg, Cambridge University Press, 1995. [^1]: Electronic mail: yhbyun@hanyang.ac.kr [^2]: Alternative title: A charged boson gas model of high Tc superconductivity
--- abstract: 'The Parkes multibeam pulsar survey is a sensitive survey of a strip along the Galactic plane with $|b|<5\degr$ and $l=260\degr$ to $l=50\degr$. It uses a 13-beam receiver on the 64-m Parkes radio telescope, receiving two polarisations per beam over a 288 MHz bandwidth centred on 1374 MHz. Receiver and data acquisition systems are described in some detail. For pulsar periods in the range 0.1 – 2 s and dispersion measures of less than 300 cm$^{-3}$ pc, the nominal limiting flux density of the survey is about 0.2 mJy. At shorter or longer periods or higher dispersions, the sensitivity is reduced. Timing observations are carried out for pulsars discovered in the survey for 12 – 18 months after confirmation to obtain accurate positions, spin parameters, dispersion measures, pulse shapes and mean flux densities. The survey is proving to be extremely successful, with more than 600 pulsars discovered so far. We expect that, when complete, this one survey will come close to finding as many pulsars as all previous pulsar surveys put together. The newly discovered pulsars tend to be young, distant and of high radio luminosity. They will form a valuable sample for studies of pulsar emission properties, the Galactic distribution and evolution of pulsars, and as probes of interstellar medium properties. This paper reports the timing and pulse shape parameters for the first 100 pulsars timed at Parkes, including three pulsars with periods of less than 100 ms which are members of binary systems. These results are briefly compared with the parameters of the previously known population.' author: - | R. N. Manchester,$^1$[^1] A. G. Lyne,$^2$ F. Camilo,$^{2,3}$ J. F. Bell,$^1$ V. M. Kaspi,$^{4,5}$ N. D’Amico,$^{6,7}$ N. P. F. McKay,$^2$ F. Crawford,$^5$ I. H. Stairs,$^{2,8}$ A. Possenti,$^6$ M. Kramer,$^2$ and D. C. Sheppard$^2$\ $^1$ Australia Telescope National Facility, CSIRO, P.O. 
Box 76, Epping NSW 1710, Australia\ $^2$ University of Manchester, Jodrell Bank Observatory, Macclesfield, Cheshire, SK11 9DL, UK\ $^3$ Columbia Astrophysics Laboratory, Columbia University, 550 W. 120th Street, New York, NY 10027, USA\ $^4$ McGill University, Ernest Rutherford Physics Building, 3600 University Street, Montreal, QC, Canada H3A 2T8\ $^5$ Massachusetts Institute of Technology, Center for Space Research, 70 Vassar Street, Cambridge, MA 02139, USA\ $^6$ Osservatorio Astronomico di Bologna, via Ranzani 1, 40127 Bologna, Italy\ $^7$ Istituto di Radioastronomia del CNR, via Gobetti 101, 40129 Bologna, Italy\ $^8$ National Radio Astronomy Observatory, Green Bank, WV 24944, USA date: 'Received by MNRAS on December 11, 2000. Revised version accepted on June 14, 2001' nocite: - '[@lmt85; @lbdh93; @hbwv97a; @cc98; @lml+98]' - '[@kas00]' - '[@wpm+77; @tbb+99]' - '[@ric77]' - '[@clj+92; @jlm+92]' - '[@clm+00]' - '[@mlc+00]' - '[@lcm+00]' - '[@dlm+00]' - '[@ckl+00]' - '[@sml+01]' - '[@clm+01]' - '[@dkm+01]' - '[@mld+96]' - '[@cra00]' - '[@vm66]' - '[@hssw82]' - '[@clm+01]' - '[@tc93]' - '[@jlm+92]' - '[@clm+01]' title: 'The Parkes Multibeam Pulsar Survey: I. Observing and Data Analysis Systems, Discovery and Timing of 100 Pulsars' --- methods: observational — pulsars: general — pulsars: searches — pulsars: timing INTRODUCTION {#sec:intro} ============ Since the discovery of pulsars more than 30 years ago [@hbp+68], many different searches for these objects have contributed to the 730 or so pulsars known prior to mid-1997 when the survey described here commenced. Some efforts with a relatively narrow focus have resulted in the discovery of extremely important objects, for example, the Crab pulsar [@sr68] or the first millisecond pulsar [@bkh+82]. However, the vast majority of known pulsars have been found in larger-scale searches.
These searches generally have well-defined selection criteria and hence provide samples of the Galactic population which can be modeled to determine the properties of the parent population. Most of our knowledge about the Galactic distribution and the evolution of pulsars has come from such studies (e.g. Lyne, Manchester & Taylor 1985, Lorimer et al. 1993, Hartman et al. 1997, Cordes & Chernoff 1998, Lyne et al. 1998). Of particular significance are young pulsars. These are often associated with supernova remnants (e.g. Kaspi 2000), show significant period irregularities such as glitches [@lsg00] and have pulsed emission at optical, X-ray and $\gamma$-ray wavelengths (e.g. Wallace et al. 1977, Thompson et al. 1999). Of comparable importance, though, is the serendipitous discovery of unusual and often unique objects by larger-scale surveys. Examples of this abound: the first binary pulsar, PSR B1913+16 [@ht74], the first star with planetary-mass companions [@wf92], the first pulsar with a massive stellar companion [@jml+92], and the first eclipsing pulsar [@fst88]. Pulsars show an amazingly diverse range of properties and most major surveys turn up at least one object with new and unexpected characteristics. Some of these are of great significance. The prime example is of course PSR B1913+16, which has provided the first observational evidence for gravitational waves and the best evidence so far that general relativity is an accurate description of gravity in the strong-field regime [@tw89]. Pulsars are relatively weak radio sources. Successful pulsar surveys therefore require a large radio telescope, low-noise receivers, a relatively wide bandwidth and long observation times. Pulsar signals suffer dispersion due to the presence of charged particles in the interstellar medium.
The dispersion delay across a bandwidth of $\Delta\nu$ centred at a frequency $\nu$ is $$\label{eq:dm} \tau_{\rm DM} = 8.30 \times 10^3\,{\rm DM}\,\Delta\nu\,\nu^{-3}\;\;{\rm s},$$ where the dispersion measure, DM, is in units of cm$^{-3}$ pc and the frequencies are in MHz. To retain sensitivity, especially for short-period, high-dispersion pulsars, the observing bandwidth must be sub-divided into many channels. In most pulsar searches to date, this has been achieved using a filterbank system. The sensitivity of pulsar searches is also limited by the Galactic radio continuum background and by interstellar scattering, especially for low radio frequencies and at low Galactic latitudes. Interstellar scattering results in a one-sided broadening of the observed pulse profile with a frequency dependence $\sim \nu^{-4.4}$ (e.g. Rickett 1977) which cannot be removed by using narrow bandwidths. Most pulsar searches along the Galactic plane have therefore been at higher radio frequencies, often around 1400 MHz (e.g. Clifton et al. 1992, Johnston et al. 1992). The Clifton et al. 1400 MHz survey was carried out using the 76-m Lovell Telescope at Jodrell Bank Observatory, and covered a strip along the Galactic plane with $|b| < 1.1\degr$ between longitudes of $355\degr$ and $95\degr$, with a narrower extension to $105\degr$. The limiting sensitivity to long-period pulsars away from the Galactic plane was about 1 mJy. A total of 61 pulsars was detected, of which 40 were not previously known. Johnston et al. carried out a complementary survey of the southern Galactic plane in the region $|b| < 4\degr$ and between $l=270\degr$ and $l=20\degr$, with a central frequency of 1500 MHz. The limiting sensitivity was very similar to that for the Clifton et al. survey. A total of 100 pulsars was detected of which 46 were previously unknown. These surveys found a sample of young and generally distant pulsars which are strongly concentrated at low Galactic longitudes, $|l|\la40\degr$.
They include a number of interesting objects, among them the eclipsing high-mass binary system PSR B1259$-$63 [@jml+92] and many glitching pulsars [@sl96; @wmp+00]. The Parkes multibeam receiver was conceived with the aim of undertaking large-scale and sensitive searches for relatively nearby galaxies ($z \la 0.04$) by detection of their emission in the 21-cm line of neutral hydrogen. The receiver has 13 feeds with a central feed surrounded by two rings, each of six feeds, arranged in a hexagonal pattern [@swb+96]. This arrangement permits the simultaneous observation of 13 regions of sky, increasing the speed of surveys by approximately the same factor. It was quickly realised that this system would make a powerful instrument for pulsar surveys, provided the bandwidth was increased above the original specification and the necessary large filterbank system could be constructed. A new data acquisition system capable of handling multibeam data sets was also a fundamental component of the system. These requirements were met, and the Parkes multibeam pulsar survey commenced in August 1997. This survey aims to cover a strip with $|b|<5\degr$ along the Galactic plane between Galactic longitudes of $260\degr$ and $50\degr$. The filterbank system gives $96 \times 3$ MHz channels of polarisation-summed data for each beam which are sampled every 250 $\mu$s. Observation times per pointing are 35 min, giving a very high sensitivity, about seven times better than those of the Clifton et al. (1992) and Johnston et al. (1992) surveys, at least for pulsars not in short-period binary systems. Although not yet complete, the survey has been outstandingly successful, with over 600 pulsars discovered so far. Preliminary reports on the multibeam survey and its results have been given by Camilo et al. (2000a), Manchester et al. (2000), Lyne et al. (2000) and D’Amico et al. (2000). Also, papers on the discovery of several pulsars of particular interest have been published. Lyne et al.
(2000) announced the discovery of PSR J1811$-$1736, a pulsar with a period of 104 ms in a highly eccentric orbit of period 18.8 d with a companion of minimum mass 0.7 M$_{\odot}$, most probably a neutron star, making this the fourth or fifth double-neutron-star system known in the Galactic disk. Camilo et al. (2000b) report the discovery of two young pulsars, J1119$-$6127 and J1814$-$1744, which have the highest surface dipole magnetic field strengths among known radio pulsars. PSR J1119$-$6127 has a characteristic age, $\tau_c$, of only 1600 years, a measured braking index, $n = 2.91 \pm 0.05$ and is associated with a previously unknown supernova remnant [@cgk+01; @pkc+01]. PSR J1814$-$1744 has a much longer period, 3.975 s, and the highest inferred surface dipole field strength of any known radio pulsar, $5.5 \times 10^{13}$ G, in the region of so-called “magnetars” [@pkc00]. PSR J1141$-$6545 is a relatively young pulsar ($\tau_c \sim 1.4$ Myr) in an eccentric 5-hour orbit for which the relativistic precession of periastron has been measured [@klm+00a]. This implies that the total mass of the system is 2.30 M$_{\odot}$, indicating that the companion is probably a massive white dwarf formed before the neutron star we observe as the pulsar. Stairs et al. (2001) discuss the high-mass binary system PSR J1740$-$3052 which is in a highly eccentric 230-day orbit with a companion star of minimum mass 11 M$_{\odot}$. A possible companion is a late-type star identified on infrared images, but the absence of the expected eclipses and precession of periastron due to tidal interactions suggest that the actual companion may be a main-sequence B-star or a black hole hidden by the late-type star. Camilo et al. (2001) report the discovery of five circular-orbit binary systems with orbital periods in the range 1.3 – 15 days. 
Three of these pulsars, PSRs J1232$-$6501, J1435$-$6100 and J1454$-$5846, as well as PSR J1119$-$6127, were discovered early in the survey and hence are included in the pulsars described in this paper. Finally, D’Amico et al. (2001) report the discovery of two young pulsars, PSRs J1420$-$6048 and J1837$-$0604, which may be associated with EGRET $\gamma$-ray sources. In the following section we describe the observing and analysis systems and the search strategy. Timing observations undertaken after the confirmation of a pulsar and our data release policy are described in Section 3. In Section 4, we give parameters for the first 100 pulsars discovered by the survey. Implications of these results are discussed in Section 5. Detailed information about the survey, observing instructions, data release policy, and results may be found under the pulsar multibeam web page.[^2] OBSERVING AND SEARCH ANALYSIS SYSTEMS ===================================== In this section, we describe in detail the receiver system, data acquisition system, analysis procedures and search strategy being used for the Parkes multibeam pulsar survey. The Receiver System {#sec:rcvr} ------------------- The Parkes multibeam receiver consists of a 13-feed system operating at a central frequency of 1374 MHz with a bandwidth of 288 MHz at the prime focus of the Parkes 64-m radio telescope. Orthogonal linear polarisations are received from each feed and fed to cryogenically cooled HEMT amplifiers, constructed under contract at Jodrell Bank Observatory. The horns are arranged in a double hexagon around a central horn with a spacing between horns of 1.2 wavelengths; the corresponding beam spacing on the sky is close to twice the nominal half-power beamwidth of 14.2 arcmin [@swb+96]. Measured system parameters[^3] are listed in Table \[tb:rcvr\]. System temperatures vary by a degree or so over the 26 receivers; the value of 21 K quoted in the table is an average value. 
For the central beam, this corresponds to an equivalent system flux density of 28.6 Jy. Outer feeds have a somewhat lower efficiency, reduced by about 0.27 dB for the inner ring and 1.0 dB for the outer ring. The outer beams are also somewhat elliptical, with the major axis in the radial direction, and have a significant coma lobe. Predicted beam patterns for the central and outer beams are given by Staveley-Smith et al. (1996); at least to the half-power point, the beam patterns are well represented by a two-dimensional Gaussian function.

  --------------------------------- ------------------ ------------ ------------
  Number of beams                   13
  Polarisations/beam                2
  Frequency channels/polarisation   $96\times 3$ MHz
  System temperature (K)            21
  Beam                              Centre             Inner Ring   Outer Ring
  Telescope gain (K/Jy)             0.735              0.690        0.581
  Half-power beamwidth (arcmin)     14.0               14.1         14.5
  Beam ellipticity                  0.0                0.03         0.06
  Coma lobe (dB)                    none               $-17$        $-14$
  --------------------------------- ------------------ ------------ ------------

  : Measured system parameters

\[tb:rcvr\]

After further amplification, all 26 signals are down-converted in the focus cabin to intermediate frequency using a local oscillator frequency of 1582 MHz. These signals are transferred to the tower receiver room via low-loss coaxial cables and pass through cable-equalising amplifiers and level-setting attenuators to a down-conversion system. This splits the 288-MHz bandwidth of each signal into three equal parts with output between 64 and 160 MHz using an up-down conversion system with band-limiting filters centred at 1060 MHz. These signals are then fed to a very large filterbank system, designed and constructed at Jodrell Bank Observatory and Osservatorio Astronomico di Bologna, which gives 96 3-MHz channels for each polarisation of each feed. The output of each filter is detected and summed with its corresponding polarisation pair. These summed outputs are high-pass filtered with an effective time constant of approximately 0.9 s, integrated for the sampling interval of 250 $\mu$s and then one-bit digitised.
Data Acquisition and Analysis {#sec:data}
-----------------------------

Data acquisition is controlled by a multi-threaded C++ program, [pmdaq]{}, running on a Digital Alpha [picmg]{} processor. A custom-designed board with a programmable Xilinx device is installed on the computer’s PCI bus, and interfaces between the digitiser and an Ikon-10116 16-bit direct memory access card. Integration of the first sample of an observation is triggered by the Observatory 1-s pulse, allowing measurement of pulse arrival times. The first 16-bit word of every input sample is a counter which is checked by the data acquisition program and then discarded. Time synchronisation is further checked by using a 5-s pulse from the Observatory clock. Data can be output to disk, double-density Exabytes or digital linear tapes (DLTs). Each output block contains a 640-byte header giving telescope, receiver, source and observation parameters and 48 kbyte of one-bit data, all from a single beam. Successive blocks have data from successive beams. Survey-mode data are normally output to DLTs and timing data to Exabytes. For survey observations, the data rate is 640 kbyte s$^{-1}$, which fills a DLT in approximately 15 hours of continuous observation. Observations are controlled using a Tcl-Tk interface to a control program, [pmctrl]{}, operating on a Sun Sparc workstation. The interface allows setting of observation parameters such as the receiver, filterbank system, sampling interval, observation time, output device, pointing centre and feed position angle and the logging of operator messages. [pmctrl]{} has socket interfaces to the Observatory clock, the telescope drive system and the receiver translator system and an RPC interface to [pmdaq]{}. The program maintains a record of tape operations and handles status returns and error conditions from the telescope or data acquisition system. It also writes a summary observation file and a complete log file giving details of all observations.
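The quoted data rate and tape-fill time follow from simple arithmetic. A minimal sketch; the one-bit payload rate follows directly from the text, while the header/counter overheads and the roughly 35-GB DLT capacity are assumptions for illustration:

```python
# Back-of-envelope check of the survey data rate.  The one-bit payload
# rate follows directly from the text; the header/counter overheads and
# the ~35-GB DLT capacity are assumptions for illustration.
beams, channels = 13, 96
samples_per_second = 4000            # one one-bit sample every 250 us

payload_rate = beams * channels * samples_per_second / 8   # bytes per second
# -> 624,000 B/s of one-bit data; the per-sample counter word and the
# 640-byte header per 48-kbyte block bring this close to the quoted
# 640 kbyte/s.

dlt_hours = 35e9 / 640e3 / 3600      # hours to fill an assumed 35-GB DLT
```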
Details of the observing strategy for the multibeam survey are given in §\[sec:strategy\]. Observations can be monitored in real time using a program, [pmmon]{}, which runs on a networked workstation with user input via a Tcl-Tk interface. [pmmon]{} communicates with [pmdaq]{} via an RPC interface, obtaining either complete tape blocks or data streams summed across all filter channels for each beam. Several forms of output are provided, including mean digitiser levels for each beam, modulation spectra and time sequences for each beam, and modulation spectra for each filterbank channel of a given beam. The latter form of output is especially valuable for tracing narrow-band interference. For observations of known pulsars (normally with the centre beam), integrated pulse profiles for each frequency channel and a dedispersed mean pulse profile can be displayed and may be recorded to disk for later examination. Offline processing runs on networked workstations at each of the collaborating institutions under the control of a Java program, [pmproc]{}. The processing consists of four main stages. Data are first examined for the presence of narrow-band radio-frequency interference by computing the modulation spectrum for each frequency channel, normally using a subset of each data file of length $2^{19}$ samples. Samples in channels containing strong interference are set to zero or one in alternate channels (to give a mean of 0.5) as the data are transferred to disk in subsequent stages. The second stage of processing concerns identification of interfering signals in the modulation spectrum. Since most interference is undispersed, this analysis is performed on the ‘zero-DM’ spectrum. Data for each observation are summed across all frequency channels on reading from the tape to produce a zero-DM data stream of $2^{23}$ samples per beam. This is Fourier-transformed to give the modulation spectrum. 
Known signals which are present all or most of the time, such as the power line frequency (50 Hz) and its harmonics, are first identified and their bandwidth determined. The remaining spectrum is then searched for significant spectral features. This search is performed on the fundamental spectrum and on spectra obtained by summing 2, 4, 8 and 16 harmonics. Signals are identified and their bandwidth and harmonic content recorded. Any signal which appears in four or more beams of a given pointing is flagged as interference; that signal and its harmonics are deleted in subsequent processing steps for that pointing. Similarly, any signal which appears in a given beam in more than three pointings is marked for deletion in subsequent processing for that beam in all pointings on that tape, and any signal which appears more than seven times in any beam of a given tape is marked for deletion in all pointings on that tape. A summary output is produced for each tape (normally containing 20 – 25 pointings) which gives grey-scale images of the modulation spectra as a function of beam and pointing, and lists the frequency ranges identified as interference. In the third and major stage of processing, the data are searched for periodic signals over a range of dispersion delays. The basic analysis procedure is very similar to that employed in the Parkes Southern pulsar survey and described in detail by Manchester et al. (1996). A ‘tree’ dedispersion algorithm [@tay74] is used. Dispersion delays are proportional to $\nu^{-2}$, but the tree algorithm assumes that they are linear with frequency. This is approximately true for small fractional bandwidths, but the multibeam survey has a fractional bandwidth of about 20 per cent, and straightforward application of tree dedispersion would lead to excessive pulse smearing for short-period pulsars. Also, the tree algorithm requires a number of frequency channels which is a power of two. 
To overcome these problems, the delays are ‘linearised’ on reading from tape. The number of frequency channels is increased from 96 to 128, and channel data streams are reassigned in channel number to remove the second-order dispersion-delay term. These channel reassignments are independent of dispersion measure. The linearised data are split into 8 sub-bands, each of 16 channels. A tree dedispersion is performed on each of these sub-bands to give dedispersed data streams for 16 dispersions between zero and the ‘diagonal DM’ (at which the dispersion smearing across one channel equals the sampling interval), approximately 35 cm$^{-3}$ pc. These are subsequently added with varying delays to give a range of DMs about the central value. Another application of the tree algorithm to delayed data gives a further 16 data streams for dispersions from 35 to 70 cm$^{-3}$ pc. Data samples are then summed in pairs to give an effective sampling interval of 0.5 ms and the tree algorithm is applied again to give 16 data streams for dispersions from 70 to 139 cm$^{-3}$ pc. This process is repeated up to four more times, to an effective sampling interval of 8 ms, until a maximum DM of 2177 cm$^{-3}$ pc or 42/$\sin |b|$ cm$^{-3}$ pc, where $b$ is the Galactic latitude, whichever is less, is reached. The dedispersed data streams for each sub-band are then summed with a range of delays to give up to 325 dedispersed data streams with DM in the range 0 to 2203 cm$^{-3}$ pc. The DM steps are 0.54 cm$^{-3}$ pc for the first tree data set, 0.81 cm$^{-3}$ pc for the second, and 26 cm$^{-3}$ pc for the last, increasing by roughly a factor of two for each successive tree data set after the second. For each DM, the summed data stream is high-pass filtered by subtracting a running mean of length 2.048 s and then Fourier-transformed using a fast Fourier transform (FFT) routine. 
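The channel 'linearisation' described at the start of this stage can be illustrated with a short sketch. The channel centre frequencies below are assumptions derived from the 1374-MHz centre frequency and 3-MHz channel width, and the reassignment rule is an illustrative reconstruction, not the survey code:

```python
# Illustrative sketch of the delay 'linearisation': the 96 channel data
# streams are reassigned among 128 virtual channels so that the residual
# second-order part of the nu**-2 dispersion delay is absorbed into the
# channel numbering.  Channel centre frequencies are assumptions based
# on the 1374-MHz centre frequency and 3-MHz channel width.
N_IN, N_OUT = 96, 128
freqs = [1231.5 + 3.0 * i for i in range(N_IN)]   # MHz, ascending

# Delay of each channel relative to the top of the band for unit DM.
# DM enters only as an overall scale factor, so the reassignment is the
# same for every DM, as stated in the text.
top = freqs[-1]
delay = [f ** -2 - top ** -2 for f in freqs]

d_max = delay[0]                  # lowest channel has the largest delay
new_index = [round(d / d_max * (N_OUT - 1)) for d in delay]
# new_index runs from 127 (lowest frequency) down to 0 (highest), with
# spacing such that delay is linear in the virtual channel number.
```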
After deletion of spectral channels affected by interference and interpolation to recover spectral features lying midway between Fourier bins, the resulting spectra are searched for significant peaks. This process is repeated for spectra in which 2, 4, 8 and 16 harmonics have been summed to give a set of 50 candidate periods (10 from the fundamental and from each harmonic sum) for each DM. A pulse profile is then formed for each candidate period by inverse transformation of the complex Fourier components for the fundamental and its harmonics, and the signal-to-noise ratio of this profile computed. All such profiles from the full analysis over all DMs for a given beam are then ordered by signal-to-noise ratio. For the top 66 candidates, the appropriate tree data streams are summed into 4 sub-bands and folded into 16 sub-integrations, each of duration a little over 2 min, using the nominal period and DM. These are then summed with a range of delays in frequency and time, up to one sample per sub-band and per sub-integration respectively, to search for the highest signal-to-noise ratio over a range of period and DM about the nominal values. The candidate parameters, including the maximum signal-to-noise ratios obtained from the harmonic summing, the reconstructed profile and results from the $P$–DM search are then recorded for later examination. In the next stage of processing, candidates from all pointings on a given tape are collated and searched for common periods. Candidate periods seen in more than 6 beams are rejected as interference. Remaining candidates with a $P$–DM signal-to-noise ratio above a threshold (normally 8.0, corresponding to a random occurrence every few beams) are then examined using an interactive display and classified as Class 1 or Class 2 candidates or rejected as probable interference. Fig. \[fg:cand\] shows the display plot for a typical Class 1 candidate, later confirmed as a pulsar. 
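The incoherent harmonic-summing step can be sketched as follows. This is an illustrative reconstruction; the real code works on much longer spectra and keeps the top ten candidates from the fundamental spectrum and from each summed spectrum:

```python
import math

def harmonic_sums(spectrum, folds=(1, 2, 4, 8, 16)):
    """Incoherent harmonic summing of an amplitude spectrum (sketch).

    For each number of harmonics n, bin k receives the sum of bins
    k, 2k, ..., nk, normalised by sqrt(n) so that the noise level is the
    same in every summed spectrum and signal-to-noise ratios compare
    directly.
    """
    out = {}
    for n in folds:
        out[n] = [sum(spectrum[k * h] for h in range(1, n + 1)
                      if k * h < len(spectrum)) / math.sqrt(n)
                  for k in range(len(spectrum))]
    return out

# A narrow pulse spreads its power over many harmonics: with amplitude 3
# in bins 10, 20, 30 and 40, the 4-harmonic sum at bin 10 is 12/2 = 6,
# twice the single-harmonic response.
spec = [0.0] * 64
for b in (10, 20, 30, 40):
    spec[b] = 3.0
sums = harmonic_sums(spec)
```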
The classification is necessarily somewhat subjective and is based on the similarity of the subplots to those for known pulsars. The most important criteria are final signal-to-noise ratio, continuity across sub-integrations and sub-bands of the pulse signal, and a well-defined peak in signal-to-noise ratio versus DM. The signal should also be linear or parabolic (indicating a constant acceleration) in the phase-time plot and linear in the phase-frequency plot. Most Class 1 candidates have a signal-to-noise ratio of 10 or more. For the early low-latitude phases of the survey, a Class 1 candidate was selected every one or two pointings. Each candidate is identified by a unique code based on the processing centre and a sequential number. Candidates are then re-observed using the centre beam of the multibeam receiver in order to confirm their reality as pulsars. Observations are made at five grid positions, the nominal position and four positions offset in latitude and longitude by 9 arcmin, normally with 6 min integration per point. These observations are searched in period and DM about the nominal values and, if two or three detections are obtained, an improved position is computed from the relative signal-to-noise ratio. If there is no detection in the grid observations, a 35-min observation is made at the nominal position and searched for a significant signal. This search is usually made using Fourier techniques to detect pulsars whose period may have changed significantly from the nominal value, due to binary motion for example. Candidates which are not redetected in one or two such observations are down-graded or rejected. To date, all Class 1 candidates have been re-observed with about 80 per cent of them being confirmed as pulsars. 
  ----------------------------------------- -------------------------
  Galactic longitude range                  $260\degr$ to $50\degr$
  Galactic latitude range                   $-5\degr$ to $+5\degr$
  Hexagonal grid spacing                    $0\fdg2333$
  Number of survey pointings                2670
  Sampling interval, $\tau_{samp}$          250 $\mu$s
  Observation time/pointing, $\tau_{obs}$   2100 s
  Limiting sensitivity for centre beam      0.14 mJy
  ----------------------------------------- -------------------------

  : Pulsar multibeam survey parameters

\[tb:survey\]

Survey Sensitivity {#sec:sens}
------------------

Survey parameters are summarised in Table \[tb:survey\]. The system sensitivity for the centre beam has been modeled by Crawford (2000), assuming the parameters given in Tables \[tb:rcvr\] and \[tb:survey\]. The raw limiting flux density is given by the radiometer equation $$S_{lim} = \frac{\sigma \beta T_{sys}}{G \sqrt{B N_p \tau_{obs}}}$$ where $\sigma$ is a loss factor, taken to be 1.5,[^4] $\beta$ is the detection signal-to-noise ratio threshold, taken to be 8.0, $T_{sys}$ is the system temperature, $G$ is the telescope gain, $B$ is the receiver bandwidth in Hz, $N_p$ is the number of polarisations and $\tau_{obs}$ is the time per observation in seconds. An idealised pulse train of frequency $f_1=P^{-1}$, where $P$ is the pulse period, is represented in the Fourier domain by its fundamental and 15 harmonics $F(f_i)$, where each of the harmonics has an amplitude $y_0(f_i)= 1/S_{lim}$. These harmonics are then multiplied by a series of functions, representing the responses of the various filters in the system, to give a final set of Fourier amplitudes $y(f_i)$.
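Inserting the tabulated parameters into the radiometer equation gives the raw limit before harmonic summing and the filter responses are applied (a quick arithmetic check, not part of the published model):

```python
import math

# Raw limiting flux density for the centre beam from the radiometer
# equation, using the tabulated parameters.
sigma, beta = 1.5, 8.0          # loss factor and S/N threshold
t_sys, gain = 21.0, 0.735       # K and K/Jy
bw, n_pol, t_obs = 288e6, 2, 2100.0

s_lim = sigma * beta * t_sys / (gain * math.sqrt(bw * n_pol * t_obs))
# -> roughly 3.1e-4 Jy (0.31 mJy); harmonic summing of a narrow pulse
#    improves on this by up to the square root of the number of
#    harmonics summed.
```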
The first filter function is the Fourier transform of the intrinsic pulse profile, assumed to be Gaussian with a half-power width of $W_{50}=0.05 P$, $$|g_1(f)| = \exp\left(-\frac{\pi^2 f^2 W_{50}^2}{4 \ln 2}\right)$$ and by a similar function $g_2(f)$ representing the Fourier transform of the smearing due to dispersion in each filter channel, also assumed to have a Gaussian response, with $W_{50}$ replaced by $\tau_{\rm DM}$ (Equation \[eq:dm\]). Since the analysis is based on the amplitude spectrum and each of the filters is real, we only have to consider the amplitude response of each filter. The harmonics are then multiplied by the Fourier response of each of the filters in the hardware and software system. These result from the finite sampling interval, $$|g_3(f)| = \left|\frac{\sin(\pi f \tau_{samp})}{\pi f \tau_{samp}}\right|,$$ the digitiser high-pass filtering, a two-pole filter with amplitude response $$|g_4(f)| = \frac{(2\pi f \tau_{\rm HP})^2}{[1+(2\pi f \tau_{\rm HP})^4]^{1/2}},$$ where $\tau_{\rm HP} = 0.9$ s (see §\[sec:100psrs\]), and a software high-pass filter, implemented by subtracting a box-car average of length $\tau_S = 2.048$ s from the dedispersed data stream, giving $$|g_5(f)| = 1 - \frac{\sin(\pi f \tau_S)}{\pi f \tau_S}.$$ The harmonic range is then limited to $f > f_{min}$, where $f_{min} = 0.2$ Hz, a limit set mainly by the need to reject low-level interference and other red noise, and $f < f_N = 1/(2 \tau_{samp})$, the Nyquist frequency. Harmonics of the lowest valid signal frequency are then summed to give a final amplitude $$Y(f_n) = \frac{\sum_{i=1}^{n} y(f_i)}{\sqrt{n}}.$$ The final limiting sensitivity $S_{min}$ is then given by $$S_{min} = \frac{1}{Y_{max}(f_n)},$$ where $Y_{max}(f_n)$ is the largest $Y(f_n)$ for $n$ = 1, 2, 4, 8 or 16. The resultant sensitivity curves for four representative values of DM are shown in Fig. \[fg:sens\]. 
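A minimal sketch of this sensitivity model, assembling the filter responses $g_1$–$g_5$ and the harmonic sums. Parameter values are taken from the text and Tables \[tb:rcvr\] and \[tb:survey\]; the implementation details are an illustrative reconstruction, not the original code of Crawford (2000):

```python
import math

# Sketch of the sensitivity model described above (after Crawford 2000).
SIGMA, BETA = 1.5, 8.0
T_SYS, GAIN = 21.0, 0.735
BW, N_POL, T_OBS = 288e6, 2, 2100.0
T_SAMP, TAU_HP, TAU_S = 250e-6, 0.9, 2.048
F_MIN, F_NYQ = 0.2, 1.0 / (2.0 * 250e-6)

S_LIM = SIGMA * BETA * T_SYS / (GAIN * math.sqrt(BW * N_POL * T_OBS))

def gauss_ft(f, width):
    """|g(f)| for a Gaussian of half-power width `width` (g1 and g2)."""
    return math.exp(-math.pi ** 2 * f ** 2 * width ** 2 / (4.0 * math.log(2.0)))

def s_min(period, dm, chan_mhz=3.0, freq_mhz=1374.0):
    """Limiting flux density (Jy) for the centre beam."""
    tau_dm = 8.30e3 * dm * chan_mhz * freq_mhz ** -3   # per-channel smearing
    w50 = 0.05 * period                                # assumed intrinsic width
    best = 0.0
    for n in (1, 2, 4, 8, 16):
        y = 0.0
        for i in range(1, n + 1):
            f = i / period
            if not F_MIN < f < F_NYQ:
                continue
            g = gauss_ft(f, w50) * gauss_ft(f, tau_dm)
            x = math.pi * f * T_SAMP
            g *= abs(math.sin(x) / x)                  # g3: finite sampling
            h = (2.0 * math.pi * f * TAU_HP) ** 2
            g *= h / math.sqrt(1.0 + h * h)            # g4: digitiser high-pass
            xs = math.pi * f * TAU_S
            g *= 1.0 - math.sin(xs) / xs               # g5: software high-pass
            y += g / S_LIM
        best = max(best, y / math.sqrt(n))
    return 1.0 / best if best > 0.0 else float("inf")

# s_min(1.0, 0.0) comes out near the quoted 0.14 mJy for a 1-s,
# low-DM pulsar.
```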
These curves show that for low-DM pulsars with periods greater than about 10 ms, the limiting sensitivity is about 0.14 mJy. Steps in the zero-DM curve at short periods result from changes in the number of harmonics below the Nyquist frequency; at higher DMs, the higher harmonics are attenuated and the steps are not as evident. Steps between 100 ms and 1 s result from the software high-pass filtering. The Fourier cutoff at $f_{min}$ and the hardware and software high-pass filtering result in reduced sensitivity at longer periods. Especially for distant pulsars near the Galactic plane, the sensitivity is degraded by two effects not included in the modeling: sky background temperature ($T_{sky}$) and pulse smearing due to scattering ($\tau_{scatt}$). Limiting sensitivities should be scaled by factors $(T_{sys} + T_{sky})/T_{sys}$ and $[w/(P-w)]^{1/2}/[w_0/(P-w_0)]^{1/2}$, where $w = (W_{50}^2 + \tau_{samp}^2 + \tau_{\rm DM}^2 + \tau_{scatt}^2)^{1/2}$ is the effective pulse width, $W_{50}$ is the intrinsic pulse width, and $w_0 = [(0.05P)^2 + \tau_{samp}^2 + \tau_{\rm DM}^2]^{1/2}$. Sky background temperatures are highest close to the Galactic plane and towards the Galactic Centre; for example at $(l,b) = (300\degr, 0\degr)$, $T_{sky} \sim 5$ K and for $(l,b) = (350\degr, 0\degr)$, $T_{sky} \sim 18$ K. Scattering parameters have not yet been measured for the multibeam pulsars, but a cursory examination of the mean pulse profiles shows that at least 15 per cent have scattering broadening of a few milliseconds or more. It should also be emphasised that these sensitivity figures refer to the centre of the central beam. As Table \[tb:rcvr\] shows, the outer beams are less sensitive. Averaged over the 13 beams, the limiting sensitivity is about 0.16 mJy. Also, of course, pulsars do not usually lie at the beam centre in the discovery observation. The limiting sensitivity is further degraded by the beam response at the position of the pulsar relative to that at the beam centre.
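The two degradation factors can be wrapped into a small helper. A sketch of the scaling quoted above; the example values (a 0.5-s pulsar towards a $T_{sky} = 18$ K direction) are illustrative:

```python
import math

# Sketch of the two sensitivity-degradation factors quoted above.
def degradation(period, w50, tau_dm, tau_scatt, t_sky,
                t_sys=21.0, t_samp=250e-6):
    """Factor by which the limiting flux density should be scaled up.

    All times are in seconds and temperatures in K.
    """
    w = math.sqrt(w50 ** 2 + t_samp ** 2 + tau_dm ** 2 + tau_scatt ** 2)
    w0 = math.sqrt((0.05 * period) ** 2 + t_samp ** 2 + tau_dm ** 2)
    temp_factor = (t_sys + t_sky) / t_sys
    width_factor = math.sqrt(w / (period - w)) / math.sqrt(w0 / (period - w0))
    return temp_factor * width_factor

# With no scattering and the nominal 5 per cent duty cycle, only the sky
# temperature matters: towards (l,b) = (350, 0), T_sky ~ 18 K raises the
# limiting flux density by (21 + 18)/21, nearly a factor of two.
factor = degradation(0.5, w50=0.025, tau_dm=0.001, tau_scatt=0.0, t_sky=18.0)
```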
The average beam gain over the hexagonal area covered by one beam (see Section \[sec:strategy\] below), assuming a Gaussian beamshape, is 0.70, giving an average limiting flux density for the survey as a whole of 0.22 mJy. The sensitivity is also degraded by radio frequency interference, but this is much more difficult to quantify. There are many forms of interference, including both natural and man-made signals. Natural interference such as lightning is not a major problem as it is not periodic and some protection is afforded by the one-bit digitisation. Some of the man-made interference originates from within the Observatory and even from within the receiving system itself, but most sources are narrow-band transmissions such as radar beacons and communication links. Much of the interference is transient, which makes it difficult to trace. Typically 6 – 8 frequency channels are routinely rejected because they contain persistent modulated narrow-band signals. The sensitivity of the system to modulation at the power-line frequency (50 Hz) was minimised by choosing a sampling interval such that the Nyquist frequency is a harmonic of 50 Hz. Although not strictly interference, beam 8A has been disconnected since the start of the survey because of a quasi-periodic gain modulation occurring in the cryogenically cooled part of the receiver. Also, coupling within the one-bit digitiser results in periodic signals at frequencies of $f_N/2^n$, where $n$ is an integer, and their harmonics. These are rejected in the Fourier domain. After rejection of the known sources of interference, typically there are 20 – 30 narrow-band signals (‘birdies’) detected in the zero-DM modulation spectra for a full tape. These are flagged and deleted from the pointings in which they were detected. Typically, much less than one per cent of the modulation spectrum is rejected.
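The beam-multiplicity rule used during processing to flag such birdies can be sketched as follows (an illustrative reconstruction of the four-beam rule described earlier; the real pipeline also applies the corresponding per-beam and per-tape rules):

```python
from collections import defaultdict

# Illustrative reconstruction of the beam-multiplicity rule: a spectral
# feature seen in four or more beams of a single pointing cannot be a
# point source on the sky and is treated as interference.
def flag_interference(detections, min_beams=4):
    """detections maps beam number -> set of feature frequencies (Hz)."""
    counts = defaultdict(int)
    for features in detections.values():
        for f in features:
            counts[f] += 1
    return {f for f, n in counts.items() if n >= min_beams}

# A 50-Hz feature seen in five beams is flagged; a feature seen in a
# single beam is kept as a candidate.
det = {1: {50.0, 1.24}, 2: {50.0}, 5: {50.0}, 8: {50.0}, 11: {50.0}}
flagged = flag_interference(det)
```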
Search Strategy {#sec:strategy}
---------------

The 13 beams of the multibeam receiver are spaced by approximately two beamwidths on the sky. Therefore interleaved pointings are required to cover a given region. As shown in Fig. \[fg:beams\], a cluster of four pointings covers a region about $1.5\degr$ across with adjacent beams touching at the half-power points. Clusters tessellate to fully cover a region. For this configuration, the multibeam receiver must be oriented at a Galactic position angle of $30\degr$. Since the time per pointing is relatively long (35 min), the variation of parallactic angle is tracked during the observation. The range of parallactic angle is $\pm 180\degr$ but the multibeam receiver has a feed-angle range limited to $\pm 75\degr$, and so $\pm 60\degr$ or $\pm 120\degr$ may be added to the feed angle to keep it within the legal range throughout the observation. This changes the labels on the beams in Fig. \[fg:beams\] but not the pattern. The survey region, $-100\degr < l < 50\degr$ and $|b| < 5\degr$, is covered by a grid of survey pointings, defined by $$\begin{aligned} l & = & (i_l-5000+0.5\,i_{b2})\,d_l~{\rm and} \\ b & = & (i_b-500)\,d_b,\end{aligned}$$ where $$\begin{aligned} i_l & = & 4400+7n+2m+c_l, \\ i_b & = & 500-2n-8m+c_b,\end{aligned}$$ $d_l = 0.5\,\Delta$, $d_b = 0.5\,\Delta\,\sin 60\degr$, $\Delta = 0\fdg46667$ is the beam separation, and $i_{b2}$ is 1 if $i_b$ is odd and 0 if $i_b$ is even. The pointings within a cluster are defined by $(c_l,c_b) =$ (0,0), (1,0), (0,1) and ($-1$,1), and $n$ and $m$ are integers, the range of which is determined by the area to be covered. For example, the pointing closest to the Galactic Centre is at $l=359\fdg767$, $b=0\fdg0$, with $i_l = 4999$ and $i_b = 500$, corresponding to $n=92$, $m=-23$ and $(c_l,c_b) = (1,0)$. A record of the observational and processing status is maintained in a file, where each pointing is identified by a 7-digit number, $1000\,i_l + i_b$, known as the pointing ID.
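A minimal sketch of these grid definitions, checked against the worked example for the pointing nearest the Galactic Centre:

```python
import math

# Sketch of the survey pointing grid defined above.
DELTA = 0.46667                                   # beam separation (deg)
D_L = 0.5 * DELTA
D_B = 0.5 * DELTA * math.sin(math.radians(60.0))

def grid_indices(n, m, c_l, c_b):
    """Pointing indices (i_l, i_b) for integers n, m and cluster offsets."""
    i_l = 4400 + 7 * n + 2 * m + c_l
    i_b = 500 - 2 * n - 8 * m + c_b
    return i_l, i_b

def pointing_centre(i_l, i_b):
    """Galactic (l, b) in degrees of the pointing with indices (i_l, i_b)."""
    i_b2 = i_b % 2
    l = (i_l - 5000 + 0.5 * i_b2) * D_L
    b = (i_b - 500) * D_B
    return l % 360.0, b

def pointing_id(i_l, i_b):
    """The 7-digit pointing ID, 1000*i_l + i_b."""
    return 1000 * i_l + i_b

# n = 92, m = -23, (c_l, c_b) = (1, 0) gives the pointing closest to the
# Galactic Centre: (i_l, i_b) = (4999, 500), l ~ 359.767, b = 0.
i_l, i_b = grid_indices(92, -23, 1, 0)
l, b = pointing_centre(i_l, i_b)
```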
The inverse transformation, from $(l,b)$ to the nearest pointing ID is given by $$\begin{aligned} i_b & = & 500 + b/d_b +0.5~{\rm and} \\ i_l & = & 5000 + l/d_l - 0.5 i_{b2} + 0.5,\end{aligned}$$ where $-180\degr < l \le 180\degr$. Each of the 13 beam positions has a unique ‘grid ID’ which, for a feed Galactic position angle of $30\degr$, is offset from the pointing ID by $\Delta i_l$ = 0, $-$1, 1, 2, 1, $-$1, $-$2, $-$3, 0, 3, 3, 0 and $-$3, and $\Delta i_b$ = 0, 2, 2, 0, $-$2, $-$2, 0, 2, 4, 2, $-$2, $-$4 and $-$2 respectively. An interactive program, [hexview]{}, is used to display the status of each pointing and to select pointings for observation. Consecutive pointings observed in one session are separated by about $5\degr$ to avoid the possibility of a strong pulsar appearing in more than one pointing and hence possibly being flagged as interference. As a system check, the strong pulsar PSR J1359$-$6038 is observed on most observing days for about 1 min, centred on each beam in turn. Initially the survey region extended from $l=220\degr$. However, a decision was made to limit it at $l=260\degr$ after a few months because of the low pulsar density between these two longitudes. Observations began at low latitudes where the pulsar concentration is high. During the first year of observation, pulsars were discovered at the unprecedented rate of more than one per hour of observing time.

TIMING OBSERVATIONS AND ANALYSIS {#sec:timing}
================================

Almost all follow-up investigations require a more precise pulsar position, pulsar period $P$, and/or period derivative $\dot P$ than those obtained from the discovery observation. Improved estimates of the DM, the mean pulsed flux density $S_{1400}$ and the pulse widths at the 50 per cent and 10 per cent levels, $W_{50}$ and $W_{10}$, are also valuable. All of these parameters are determined from a series of timing observations made over a span of at least one year.
These observations also reveal binary motion if present, and enable the binary parameters to be determined. Timing observations are made using either the Parkes 64-m telescope or the Lovell 76-m telescope at Jodrell Bank Observatory, with most of the detected pulsars north of declination $-35\degr$ being timed at Jodrell Bank. In this paper, we give results only from Parkes timing observations. The centre beam of the multibeam receiver is used, with the same filterbank and data acquisition system as is used for the survey. Typically, observations are of duration between 2 and 30 min, dependent upon the pulsar flux density, and are made at intervals of 2 – 6 weeks, with some more closely spaced observations to resolve pulse counting ambiguities. The data for each observation are dedispersed and synchronously folded at the predicted topocentric pulsar period in off-line processing to form an ‘archive’ file. These files normally have 8 sub-bands across the observed bandwidth and a series of sub-integrations, typically of 1-min duration. These are summed over both frequency and time to form a mean pulse profile. This is then convolved with a ‘standard profile’ for the corresponding pulsar, producing a topocentric time-of-arrival (TOA). These are then processed using the [tempo]{} program[^5] which converts them to barycentric TOAs at infinite frequency and performs a multi-parameter fit for the pulsar parameters. Barycentric corrections are obtained using the Jet Propulsion Laboratory DE200 solar-system ephemeris [@sta90]. Initially, standard profiles are formed from a high signal-to-noise ratio observation. Once a valid timing solution is obtained, all or most of the observations are summed to form a ‘grand average’ profile. A new standard profile is then made from this average profile and the TOAs recomputed. This often reduces the final residuals for the timing solution by a factor of two or more.
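The template-matching step that produces a TOA can be sketched in miniature. Illustrative only: the real analysis cross-correlates the mean profile with the standard profile and interpolates to sub-bin precision before adding the shift to the integration start time:

```python
# Minimal sketch of TOA estimation by template matching: find the
# circular shift of the standard profile that best matches the observed
# mean profile.  (Illustrative; real codes work to sub-bin precision.)
def best_shift(profile, template):
    n = len(profile)

    def corr(s):
        # circular cross-correlation at lag s
        return sum(profile[(i + s) % n] * template[i] for i in range(n))

    return max(range(n), key=corr)

template = [0.0] * 32
template[4] = 1.0                      # simple single-peaked template
profile = [0.0] * 32
profile[9] = 1.0                       # same pulse arriving 5 bins later
shift = best_shift(profile, template)  # -> 5, i.e. 5/32 of a period
```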
As evidenced by the discovery that PSR J2144$-$3933 has an 8.5-s period [@ymj99], standard search software can sometimes mis-identify the pulse period by a factor of two or three. As mentioned above (§\[sec:sens\]) there is a software limit at a period of 5 s. Furthermore, interference can sometimes mask low-frequency spectral components. In such cases a pulsar may be detected by its 2nd or 3rd harmonic, leading to the assumption of an incorrect period. Such errors can be identified by folding the data at twice and three times the nominal period and examining the resulting mean pulse profiles. This check is routinely done for all pulsars discovered in this survey and has resulted in period corrections for several pulsars. In a few pulsars, at the confirmation stage or soon after, significant variations in solar-system barycentric period are observed. These may be due to an especially large period derivative, or to binary motion. In either case, an improved estimate of the barycentric period is obtained by summing the archive sub-integrations over a range of periods about the nominal value. Where the rate of period change is not too great, improved periods can be obtained by fitting TOAs for several observations over one or a few adjacent days. A series of these barycentric periods can then be fitted with either a period derivative term or a binary model. The parameters from this fit then form the basis for a coherent timing solution using [tempo]{}. Improved estimates of the dispersion measure can also be obtained from individual archive files by summing the sub-bands with a range of delays corresponding to different DM values about the nominal value and searching for the highest signal-to-noise ratio. After a timing solution is available, a final DM value for each pulsar is obtained by summing each archive in time and forming four sub-bands across the 288 MHz observed bandwidth. TOAs are then obtained for all archives for each of the four sub-bands.
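The brute-force sub-band DM search described above can be sketched directly (Python; the function name is ours, and the $4.149\times10^3\,{\rm MHz^2\,cm^3\,pc^{-1}\,s}$ dispersion constant is the standard cold-plasma value, not quoted in the text): each folded sub-band profile is rotated by its dispersive delay, modulo the pulse period, and the trial DM giving the highest summed peak wins.

```python
def best_trial_dm(subband_profiles, freqs_mhz, period_s, trial_dms):
    """Brute-force DM refinement over folded sub-band profiles.

    Rotate each sub-band by its dispersion delay (in phase bins,
    modulo one period), sum the rotated profiles, and keep the
    trial DM that maximises a crude peak-above-mean S/N proxy.
    """
    nbin = len(subband_profiles[0])
    best_dm, best_snr = None, None
    for dm in trial_dms:
        summed = [0.0] * nbin
        for prof, f in zip(subband_profiles, freqs_mhz):
            delay_s = 4.149e3 * dm / f**2        # dispersion delay
            shift = int(round(delay_s / period_s * nbin)) % nbin
            for i in range(nbin):
                summed[i] += prof[(i + shift) % nbin]
        snr = max(summed) - sum(summed) / nbin   # crude S/N proxy
        if best_snr is None or snr > best_snr:
            best_dm, best_snr = dm, snr
    return best_dm
```
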
Improved estimates of the DM and its error are then obtained using [tempo]{}, holding all parameters except DM fixed at the values from the final timing solution. The grand average profile for each pulsar is also used as a basis for estimating the mean flux density and pulse width parameters. Flux densities were calibrated by observing a sample of 13 pulsars with previously catalogued 1400 MHz flux densities of moderate value (to give reasonable signal-to-noise ratio while avoiding digitiser saturation) and high DMs (to minimise variations due to scintillation). Table \[tbl:flux\_cal\] lists the pulsars used, their DM and their assumed flux density [@tml93]. This calibration is based on the accumulated digitiser counts with the multibeam system, and hence is relative to the system equivalent flux density. The effect of the varying sky background temperature was allowed for in the calculation by scaling values of sky background temperature at 408 MHz from the Haslam et al. (1982) all-sky survey to 1374 MHz assuming a spectral index of $-2.5$. Based on the rms fluctuation of computed flux densities among the calibration pulsars and independently calibrated observations of these and other pulsars using the Australia Telescope Compact Array and the Caltech correlator [@nms+97], we estimate that the flux scale is accurate at the 10 – 15 per cent level.

  ------------- -------------- ------------
  PSR J         DM             $S_{1400}$
                (cm$^{-3}$ pc) (mJy)
  1157$-$6224   325.2          10
  1224$-$6407   97.8           5
  1243$-$6423   297.2          13
  1306$-$6617   436.9          3.9
  1326$-$5859   288.1          10
  1327$-$6222   318.4          12
  1327$-$6301   294.9          3.4
  1338$-$6204   638.0          5.1
  1359$-$6038   294.1          7
  1430$-$6623   65.3           6
  1512$-$5759   628.7          4.0
  1522$-$5829   199.9          4.8
  1539$-$5626   176.5          4.2
  ------------- -------------- ------------

  : Flux density calibration pulsars \[tbl:flux\_cal\]

Except for a few especially interesting cases, timing observations cease 12 – 18 months after confirmation.
By this time a coherent timing solution has normally been obtained, giving an accurate pulsar position, pulse period, period derivative, dispersion measure and, if applicable, binary parameters. Pulsars are renamed at this stage, based on the accurate J2000 position. The parameters are then entered into the pulsar catalogue, allowing accurate predictions for future observations, and listed on the Parkes multibeam pulsar survey [New Pulsars]{} web page. The multibeam pulsar survey web pages also specify policy for release of raw data tapes. On request, these are made available for copying two years after the date of recording. The [Data Release]{} web page lists all available observations sorted by date, Parkes project identification, observed position and tape label. We will provide documentation specifying the data format and software to read and copy data tapes on request. DISCOVERY AND TIMING OF THE FIRST 100 PULSARS {#sec:100psrs} ============================================= In this paper we report the discovery of 100 pulsars by the Parkes multibeam pulsar survey. These pulsars were selected as the first 100 from the list of pulsars being timed at Parkes, ordered by the date at which regular Parkes timing observations commenced. All are south of declination $-35\degr$. Table \[tb:posn\] lists the pulsar name, the J2000 right ascension and declination from the timing solution, the corresponding Galactic coordinates, the beam in which the pulsar was detected, the radial distance of the pulsar from the beam centre in units of the beam radius (cf. Table \[tb:rcvr\]), the signal-to-noise ratio of the discovery observation from the final time-domain folding in the search process, the mean flux density averaged over all observations included in the timing solution, and pulse widths at 50 per cent and 10 per cent of the peak of the mean pulse profile. Flux densities have been corrected for off-centre pointing during the timing observations. 
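The sky-background correction used in the flux-density calibration above is a one-line power-law scaling. A minimal sketch (Python; the function name is ours, with the $-2.5$ spectral index from the text):

```python
def tsky_1374(t_sky_408, spectral_index=-2.5):
    """Scale a 408 MHz sky background temperature to 1374 MHz,
    assuming a power-law brightness spectrum T(f) ~ f**spectral_index."""
    return t_sky_408 * (1374.0 / 408.0) ** spectral_index
```

For example, a 408 MHz background of 100 K corresponds to only about 5 K at 1374 MHz, so the correction matters mainly for pointings near the inner Galactic plane where the 408 MHz background is very bright.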
Many of these pulsars were detected more than once by the survey. Beam and signal-to-noise details refer to the detection having the highest signal-to-noise ratio. The 10 per cent width is not measurable for pulsars with mean profiles having poor signal-to-noise ratio. Estimated uncertainties are given in parentheses where relevant and refer to the last quoted digit. Flux densities may be somewhat overestimated for very weak pulsars or those which have extended null periods, since non-detections are not included in the timing solution. Table \[tb:prd\] gives solar-system barycentric pulse periods, period derivatives, epoch of the period, the number of TOAs in the timing solution, the MJD range covered by the timing observations, the final rms timing residual and the dispersion measure. Three of the pulsars in Tables \[tb:posn\] and \[tb:prd\] are members of binary systems. As mentioned in §\[sec:intro\], all three of these pulsars have been previously published by Camilo et al. (2001); details are repeated here for completeness. Table \[tb:binpar\] gives the binary parameters for these pulsars obtained from the timing solutions. Two of these pulsars are in low-eccentricity orbits, for which the longitude and time of periastron are not well determined. For these pulsars the reference epoch is the time of passage through the ascending node. PSR J1454$-$5846 has a larger (although still small) eccentricity and the longitude and epoch of periastron could be determined with precision. Mean pulse profiles at 1374 MHz for the 100 pulsars are given in Fig. \[fg:prf\]. As mentioned in §\[sec:timing\], these profiles were formed by adding all data used for the timing solution. They typically have several hours of effective integration time. For display purposes, these profiles have been corrected for the effects of the high-pass filter in the digitiser. To apply this correction, the profile is first given zero mean.
The corrected profile $b_n$, where $n$ is the bin number and $N$ is the number of bins in the profile, is then given by $$\begin{aligned} b_n & = & a_n, \;\;\; (n=0) \nonumber\\ b_n & = & a_n + (t_{bin}/\tau_{\rm HP}) \sum_{m=0}^{n-1} a_{m}, \;\;\;(0<n<N)\end{aligned}$$ where $a_n$ is the uncorrected zero-mean profile, $t_{bin}$ is the length of each profile bin in seconds and $\tau_{\rm HP}$ is the high-pass filter time constant in seconds. The value of $\tau_{\rm HP} = 0.9$ s was empirically determined by requiring a flat corrected baseline on several long-period pulsars. Prior to the commencement of the Parkes multibeam survey, there were 731 known radio pulsars, of which 693 are in the Galactic disk. (Five are in the Magellanic Clouds and 33 are in globular clusters.) Of the 693 disk pulsars, 247 lie within the nominal search area of the multibeam survey. Since the current survey is much more sensitive than any previous survey of this region, we would expect to redetect essentially all of these pulsars. Because of the current incompleteness of the survey, a definitive list of detected previously known pulsars is deferred to a later paper. DISCUSSION AND CONCLUSIONS ========================== In this paper we have described in some detail the Parkes multibeam pulsar survey, currently being conducted using a 13-beam receiver operating at a central frequency of 1374 MHz on the Parkes 64-m radio telescope. Data acquisition and analysis techniques are described and a detailed discussion of the survey sensitivity and observing strategy is given. After confirmation of a candidate, timing data are obtained, typically over a 12 – 18 month period, giving an accurate position, pulse period, period derivative and DM. The pulse width and mean flux density are estimated from the mean pulse profile. We give the principal observed properties of the first 100 pulsars discovered in the survey. Table \[tb:deriv\] gives derived parameters for these 100 pulsars. 
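The high-pass correction defined above amounts to adding back a scaled running sum of the zero-mean profile. A direct transcription (Python; the function name is ours, with $\tau_{\rm HP}=0.9$ s as determined empirically in the text):

```python
def correct_highpass(profile, t_bin, tau_hp=0.9):
    """Undo the digitiser high-pass filter on a folded profile.

    Implements b_0 = a_0 and
    b_n = a_n + (t_bin / tau_hp) * sum(a_0 .. a_{n-1}),
    where a is the zero-mean input profile.
    """
    mean = sum(profile) / len(profile)
    a = [x - mean for x in profile]          # zero-mean profile
    b, running = [a[0]], 0.0
    for n in range(1, len(a)):
        running += a[n - 1]
        b.append(a[n] + (t_bin / tau_hp) * running)
    return b
```
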
After the name, the first three columns give the log$_{10}$ of the characteristic age, $\tau_c = P/(2\dot P)$, in years, the surface dipole magnetic field, $B_s = 3.2 \times 10^{19} (P \dot P)^{1/2}$, in Gauss, and the rate of loss of rotational energy, $\dot E = 4\pi^2 I \dot P P^{-3}$, in erg s$^{-1}$, where a neutron-star moment of inertia $I = 10^{45}$ g cm$^2$ is assumed. The next two columns give the pulsar distance, $d$, computed from the DM assuming the Taylor & Cordes (1993) model for the Galactic distribution of free electrons, and the implied Galactic $z$-distance. Although distances are quoted to 0.1 kpc, in fact they are generally more uncertain than that owing to uncertainties in the electron density model. This is especially so for pulsars with very large DMs, indicating large distances from the Sun. The final column gives the radio luminosity $L_{1400} = S_{1400} d^2$. Pulsars discovered at relatively high radio frequencies, for example, at 1400 MHz, tend to have a flatter spectrum than those discovered at lower frequencies. For example, the sample of pulsars discovered by Johnston et al. (1992) has a mean spectral index of $-1.0$ compared to the value of $-1.7$ found for pulsars detected in the Parkes 70-cm survey[@tbms98]. However, the Johnston et al. and Clifton et al. surveys were the first extensive surveys at these higher frequencies. Most of the previously discovered pulsars had been found in lower-frequency searches, which selected the steeper-spectrum pulsars. The present survey is much more sensitive than any previous survey of this region, and hence the discovered pulsars are a largely unbiased sample. Adopting a compromise mean spectral index of $-1.3$ for the multibeam discoveries, the $L_{1400}$ values may be converted to the more commonly quoted 400 MHz luminosity by multiplying by 5.0. Fig. 
\[fg:phist\] gives histograms of the distributions in pulse period for the 100 multibeam pulsars and previously known disk pulsars, i.e., excluding those in globular clusters and the Magellanic Clouds. For the so-called ‘normal’ or non-millisecond pulsars, the distribution of the multibeam pulsars is similar to that of previously known pulsars, except for a larger number of pulsars with periods of just less than 100 ms. As shown by Table \[tb:deriv\], three of these, PSRs J0940$-$5428, J1112$-$6103 and J1718$-$3825, are relatively young pulsars with ages between 30,000 and 100,000 years and spin-down luminosities in excess of $10^{36}$ erg s$^{-1}$. The other two, PSRs J1232$-$6501 and J1454$-$5846, have very small period derivatives and are members of binary systems (Table \[tb:binpar\]). As discussed by Camilo et al. (2001), both of these systems have unusual properties. The first is atypical of low-mass binary pulsars, having a relatively long spin period, while the second is unusual in that it has a larger companion mass and higher eccentricity than most pulsar – white-dwarf binaries. Eleven of these first 100 pulsars have characteristic ages of less than 100 kyr; this is a much higher proportion than that for the previously known population. Only one millisecond pulsar, PSR J1435$-$6100, which has a period of 9.3 ms and is a member of a binary system (Table \[tb:binpar\]), is included in the first 100 pulsars discovered by the Parkes multibeam survey (although several more have subsequently been discovered). As Fig. \[fg:phist\] shows, this is a much smaller proportion than that for previously known pulsars, although it is worth noting that there are no previously known disk millisecond pulsars within the area currently searched ($|b| \la 1\fdg5$). There are several factors which contribute to this low detection rate for millisecond pulsars.
This paper reports the earliest multibeam survey observations which were made along and adjacent to the Galactic equator — the vast majority of the discovered pulsars have Galactic latitudes of $\la 1\degr$ (Table \[tb:posn\]). At these latitudes, the volume searched for millisecond pulsars is greatly reduced by dispersion broadening. Fig. \[fg:sens\] shows that the sensitivity is halved for a 10-ms pulsar with DM of 100 cm$^{-3}$ pc, corresponding to a distance of 3 kpc or less in the Galactic plane. The generally lower luminosity of millisecond pulsars results in a flux-density-limited distribution which extends to high Galactic latitudes [@lml+98], so the expected number in our search volume is small. Furthermore, most radio-frequency interference produces spurious signals at millisecond periods. At the early stage at which most of these data were processed, techniques for eliminating the effects of interference were not optimised. Consequently, real pulsars tended to be lost in a forest of spurious candidates. Finally, many millisecond pulsars are members of binary systems. The long observation time of this survey tends to discriminate against detection of short-period binary systems. All of these factors have been or will be largely overcome in subsequent observations and analyses. At the other end of the period range, PSR J1307$-$6318 has a pulse period of 4.96 s, the third longest known. Unlike PSR J2144$-$3933, the 8.5-s pulsar [@ymj99], PSR J1307$-$6318 has a relatively wide double pulse (Fig. \[fg:prf\]) with a 50 per cent width of 505 ms, more than 10 per cent of the period. Fig. \[fg:dmhist\] shows that the DM distribution of the multibeam pulsars is very different from that of previously known pulsars, peaking at a DM of 300 cm$^{-3}$ pc or so. This is readily explained by the low Galactic latitude and very high sensitivity of the multibeam survey. Most of the pulsars are distant and of relatively high luminosity (Table \[tb:deriv\]). 
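The derived quantities in Table \[tb:deriv\] follow from $P$, $\dot P$, the DM distance and $S_{1400}$ via the expressions quoted earlier ($\tau_c = P/2\dot P$, $B_s = 3.2\times10^{19}(P\dot P)^{1/2}$, $\dot E = 4\pi^2 I \dot P P^{-3}$). A bookkeeping sketch (Python; the function name is ours, with $I = 10^{45}$ g cm$^2$ and the factor 5.0 conversion to $L_{400}$ as in the text):

```python
import math

def derived_parameters(p_s, pdot, d_kpc=None, s1400_mjy=None):
    """Characteristic age (yr), surface dipole field (G) and
    spin-down luminosity (erg/s) from P and Pdot, plus radio
    luminosities if a DM distance and flux density are given."""
    year_s = 3.156e7                            # seconds per year
    out = {
        "tau_c_yr": p_s / (2.0 * pdot) / year_s,
        "B_s_G": 3.2e19 * math.sqrt(p_s * pdot),
        "Edot": 4.0 * math.pi**2 * 1e45 * pdot / p_s**3,
    }
    if d_kpc is not None and s1400_mjy is not None:
        out["L1400"] = s1400_mjy * d_kpc**2     # mJy kpc^2
        out["L400"] = 5.0 * out["L1400"]        # assumes spectral index -1.3
    return out
```
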
The Taylor & Cordes (1993) distance model puts many of them at distances greater than that of the Galactic Centre, and several are beyond the limit of the model (those with a distance of 30 kpc in Table \[tb:deriv\]), so their distances are certainly over-estimated. Fig. \[fg:prf\] shows that a significant number of these distant pulsars have highly scattered profiles. However, there is not a close relationship between DM and the width of the scattering tail, with several pulsars of similar period and dispersion measure (e.g. PSRs J1609$-$5158 and J1616$-$5109) having quite different scattering times [@man00]. We expect that the pulsars discovered in this survey will make a major contribution to improving our knowledge of the Galactic electron density model and the distribution of the fluctuations responsible for interstellar scattering, especially in the central regions of the Galaxy. Finally, in Fig. \[fg:s14hist\] we show the distribution of mean 1400 MHz flux densities for the multibeam pulsars. Of the two-thirds of known pulsars with a published 1400 MHz flux density, only about 10 per cent have a value of less than 1 mJy. Values above 1 mJy are generally only quoted to the nearest mJy, so they are not well suited to display in Fig. \[fg:s14hist\]. Ten or so newly discovered pulsars have $S_{1400} \la 0.2$ mJy, lower than the nominal survey limiting flux density. Interstellar scintillation is not normally observed for the pulsars discovered in this survey, as diffractive scintillation bandwidths are much less than the observed bandwidth of 288 MHz and refractive scintillations are weak for high-DM pulsars [@ric77; @ks92]. The principal reason for the low observed flux densities is the dependence of effective survey sensitivity on pulse width (§\[sec:sens\]). With only a few exceptions, observed flux densities are greater than the nominal limiting flux density scaled by $[(W_{50}/(P-W_{50}))/0.05]^{1/2}$.
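The width scaling invoked above can be applied directly to the nominal survey limit. A sketch (Python; the function name is ours) of the limiting flux density for a pulsar of period $P$ and width $W_{50}$, relative to the 5 per cent reference duty cycle:

```python
def width_scaled_limit(s_min_mjy, w50, period):
    """Scale a nominal limiting flux density (quoted for a 5 per
    cent duty cycle) to a pulsar with observed 50 per cent width w50."""
    return s_min_mjy * ((w50 / (period - w50)) / 0.05) ** 0.5
```

For instance, a broad, scattered profile with $W_{50}/P \sim 0.3$ raises the effective limit by a factor of about three.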
Another factor is that many pulsars show intrinsic intensity variations such as nulling, and it is likely that some of these pulsars were detected when they had a greater than average flux density. As expected, most of the detected pulsars are relatively weak, with mean flux densities in the range 0.2 to 0.5 mJy. However, because of the large distances of most of these pulsars, their luminosities are typically large (Table \[tb:deriv\]). All have $L_{1400} > 1$ mJy kpc$^2$, corresponding to $L_{400} \ga 5$ mJy kpc$^2$ and most are above the low-luminosity cutoff in the luminosity distribution which, at 400 MHz, begins at about 10 mJy kpc$^2$ [@lml+98]. The newly discovered pulsars reported in this paper represent only a small fraction of the total sample which will be discovered by the Parkes multibeam pulsar survey when it is complete. We therefore defer a more detailed analysis of the properties of the multibeam sample, its relation to previously known pulsars and its implications for the Galactic distribution and evolution of pulsars to later publications. Acknowledgements {#acknowledgements .unnumbered} ================ We gratefully acknowledge the technical assistance provided by George Loone, Tim Ikin, Mike Kesteven, Mark Leach and all of the staff at the Parkes Observatory toward the development of the Parkes multibeam pulsar system. We also thank Russell Edwards for providing the program for detecting narrow-band radio-frequency interference and the Swinburne University of Technology group led by Matthew Bailes for assistance with development of the timing analysis software. At various times many people have assisted with the observing — we especially thank Paulo Freire, Dominic Morris and Russell Edwards. FC gratefully acknowledges support from NASA grant NAG 5-9095 and the European Commission through a Marie Curie fellowship under contract no. ERB FMBI CT961700. VMK is an Alfred P. 
Sloan Research Fellow and was supported in part by a US National Science Foundation Career Award (AST-9875897) and by a Natural Sciences and Engineering Research Council of Canada grant (RGPIN 228738-00). IHS received support from NSERC and Jansky postdoctoral Fellowships. The Parkes radio telescope is part of the Australia Telescope which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. Backer D. C., Kulkarni S. R., Heiles C., Davis M. M., Goss W. M., 1982, Nature, 300, 615 Camilo F. [et al.]{}, 2000a, in Kramer M., Wex N., Wielebinski R., eds, Pulsar Astronomy - 2000 and Beyond, [IAU]{} Colloquium 177. Astronomical Society of the Pacific, San Francisco, p. 3, astro-ph/9911185 Camilo F. M., Kaspi V. M., Lyne A. G., Manchester R. N., Bell J. F., D’Amico N., McKay N. P. F., Crawford F., 2000b, ApJ, 541, 367 Camilo F. [et al.]{}, 2001, ApJ, 548, L187 Clifton T. R., Lyne A. G., Jones A. W., McKenna J., Ashworth M., 1992, MNRAS, 254, 177 Cordes J. M., Chernoff D. F., 1998, ApJ, 505, 315 Crawford F., 2000, [PhD thesis]{}, Massachusetts Institute of Technology Crawford F., Gaensler B. M., Kaspi V. M., Manchester R. N., Camilo F., Lyne A. G., Pivovaroff M. J., 2001, ApJ, 554, 152 D’Amico N. [et al.]{}, 2000, in Palumbo G., White N., eds, Proceedings of X-ray Astronomy 1999: Stellar Endpoints, AGN and the X-ray Background. Gordon & Breach, Singapore, In press. D’Amico N. [et al.]{}, 2001, ApJ, 552, L45 Fruchter A. S., Stinebring D. R., Taylor J. H., 1988, Nature, 333, 237 Hartman J. W., Bhattacharya D., Wijers R., Verbunt F., 1997, A&A, 322, 477 Haslam C. G. T., Salter C. J., Stoffel H., Wilson W. E., 1982, A&AS, 47, 1 Hewish A., Bell S. J., Pilkington J. D. H., Scott P. F., Collins R. A., 1968, Nature, 217, 709 Hulse R. A., Taylor J. H., 1974, ApJ, 191, L59 Johnston S., Lyne A. G., Manchester R. N., Kniffen D.
A., D’Amico N., Lim J., Ashworth M., 1992a, MNRAS, 255, 401 Johnston S., Manchester R. N., Lyne A. G., Bailes M., Kaspi V. M., Qiao G., D’Amico N., 1992b, ApJ, 387, L37 Kaspi V. M., Stinebring D. R., 1992, ApJ, 392, 530 Kaspi V. M., 2000, in Kramer M., Wex N., Wielebinski R., eds, Pulsar Astronomy - 2000 and Beyond, [IAU]{} Colloquium 177. Astronomical Society of the Pacific, San Francisco, p. 485 Kaspi V. M. [et al.]{}, 2000, ApJ, 543, 321 Kramer M., Wex N., Wielebinski R., eds, Pulsar Astronomy - 2000 and Beyond, [IAU]{} Colloquium 177, Astronomical Society of the Pacific, San Francisco, 2000 Lorimer D. R., Bailes M., Dewey R. J., Harrison P. A., 1993, MNRAS, 263, 403 Lyne A. G., Manchester R. N., Taylor J. H., 1985, MNRAS, 213, 613 Lyne A. G. [et al.]{}, 1998, MNRAS, 295, 743 Lyne A. G. [et al.]{}, 2000, MNRAS, 312, 698 Lyne A. G., Shemar S. L., Graham-Smith F., 2000, MNRAS, 315, 534 Manchester R. N., 2000, in Strom R., ed, Sources and Scintillations: refraction and scattering in radio astronomy, [IAU]{} Colloquium 182. Kluwer Academic Publishers, Netherlands, In press. Manchester R. N. [et al.]{}, 1996, MNRAS, 279, 1235 Manchester R. N. [et al.]{}, 2000, in Kramer M., Wex N., Wielebinski R., eds, Pulsar Astronomy - 2000 and Beyond, [IAU]{} Colloquium 177. Astronomical Society of the Pacific, San Francisco, p. 49 Navarro J., Manchester R. N., Sandhu J. S., Kulkarni S. R., Bailes M., 1997, ApJ, 486, 1019 Pivovaroff M., Kaspi V. M., Camilo F., 2000, ApJ, 535, 379 Pivovaroff M. J., Kaspi V. M., Camilo F., Gaensler B. M., Crawford F., 2001, ApJ, 554, 161 Rickett B. J., 1977, Ann. Rev. Astr. Ap., 15, 479 Shemar S. L., Lyne A. G., 1996, MNRAS, 282, 677 Staelin D. H., Reifenstein [III]{} E. C., 1968, Science, 162, 1481 Stairs I. H. [et al.]{}, 2001, MNRAS, In press (astro-ph/0012414) Standish E. M., 1990, A&A, 233, 252 Staveley-Smith L. [et al.]{}, 1996, Proc. Astr. Soc. Aust., 13, 243 Taylor J. H., Cordes J. M., 1993, ApJ, 411, 674 Taylor J. H., Weisberg J.
M., 1989, ApJ, 345, 434 Taylor J. H., 1974, A&AS, 15, 367 Taylor J. H., Manchester R. N., Lyne A. G., 1993, ApJS, 88, 529 Thompson D. J. [et al.]{}, 1999, ApJ, 516, 297 Toscano M., Bailes M., Manchester R., Sandhu J., 1998, ApJ, 506, 863 Van Vleck J. H., Middleton D., 1966, Proc. IEEE, 54, 2 Wallace P. T. [et al.]{}, 1977, Nature, 266, 692 Wang N., Manchester R. N., Pace R., Bailes M., Kaspi V. M., Stappers B. W., Lyne A. G., 2000, MNRAS, 317, 843 Wolszczan A., Frail D. A., 1992, Nature, 355, 145 Young M. D., Manchester R. N., Johnston S., 1999, Nature, 400, 848 [^1]: Email: rmanches@atnf.csiro.au [^2]: http://www.atnf.csiro.au/research/pulsar/pmsurv/. [^3]: From http://www.atnf.csiro.au/research/multibeam/lstavele/description.html. [^4]: One-bit sampling at the Nyquist rate introduces a loss of $\sqrt{2/\pi}$ relative to a fully sampled signal (cf. Van Vleck & Middleton 1966). The principal remaining loss results from the non-rectangular bandpass of the channel filters. [^5]: See http://pulsar.princeton.edu/tempo or http://www.atnf.csiro.au/research/pulsar/timing/tempo.
--- author: - | Dr. Chris. J. Oates[^1], Dr. Daniel Simpson[^2] and Prof. Mark Girolami[^3]\ Department of Statistics, Zeeman Building, University of Warwick,\ Gibbet Hill Road, Coventry, CV4 7AL, UK title: 'Discussion of “Sequential Quasi-Monte Carlo” by Mathieu Gerber and Nicolas Chopin' --- This paper is timely for highlighting the benefits of Quasi-Monte Carlo (QMC) in contemporary computational statistical methodology. Below we address the question of whether there is scope to further reduce the error of QMC estimators. The analysis of QMC used by Gerber and Chopin is rooted in the Koksma-Hlawka inequality $$\begin{aligned} \left| \frac{1}{N} \sum_{n=1}^N \varphi(\bm{u}^n) - \int_{[0,1]^d} \varphi(\bm{u}) d\bm{u} \right| \leq V(\varphi) D^*(\bm{u}^{1:N})\end{aligned}$$ where $\varphi : [0,1]^d \rightarrow \mathbb{R}$ is a test function of interest, $\bm{u}^{1:N}$ is a point set (or sequence), $V(\varphi)$ is the (Hardy-Krause) total variation and $D^*(\bm{u}^{1:N})$ is the (star) discrepancy term that is the target of the QMC innovation. Our discussion explores the potential to simultaneously tackle the rate constant $V(\varphi)$ in conjunction with the use of QMC methods to tackle $D^*(\bm{u}^{1:N})$. This direction has received considerably less attention due to typical analytic intractability of the rate constant. [@Hickernell] showed that classical control variate strategies from Monte Carlo (MC) are typically not well-suited to QMC, since the total variation is only weakly related to the MC variance that is the target of classical variance reduction techniques. Below we hint toward a general strategy to reduce QMC error that targets the rate constant directly. Following recent work on “control functionals” by [@Oates], we consider evaluation of $\varphi$ on two sets $\bm{u}^{1:N}$ and $\bm{v}^{1:N}$ at a computational cost (asymptotically) equivalent to evaluating $\varphi$ on one such set. 
The first set $\bm{u}^{1:N}$ is used to compute an arithmetic mean $$\begin{aligned} I_{\text{CF}} = \frac{1}{N} \sum_{n=1}^N \hat{\varphi}_N(\bm{u}^n), \label{est}\end{aligned}$$ based on a surrogate function $\hat{\varphi}_N : [0,1]^d \rightarrow \mathbb{R}$. This surrogate function is itself estimated from the second set $\bm{v}^{1:N}$, in a preliminary step. In situations where $\hat{\varphi}_N$ can be made to satisfy (i) $\int\hat{\varphi}_N(\bm{u})d\bm{u} = \int\varphi(\bm{u})d\bm{u}$ for all $N \in \mathbb{N}$ and (ii) $V(\hat{\varphi}_N) \rightarrow 0$ as $N \rightarrow \infty$, then the control functional estimator $I_{\text{CF}}$ is unbiased (in an appropriate sense) and has asymptotically zero error relative to the standard QMC estimator. [@Oates2] provides an explicit implementation of this strategy in the more general reproducing kernel Hilbert space formulation of QMC methodology [@Dick]. As a simple example, we note that for differentiable $\varphi$ with sufficiently regular partial derivatives, a basic implementation produces a total variation $V(\hat{\varphi}_N)$ that vanishes at a rate $O(N^{-1/d})$. Thus control functional QMC estimators are asymptotically superior to standard (R)QMC estimators under appropriate regularity conditions. Preliminary empirical results strongly support our theoretical analysis; an example is given in Fig. \[example\]. Given the gains in accuracy that are provided by QMC, it is surely a priority to establish complementary methodology that targets the rate constant governing the practical performance of these algorithms. Control functionals provide one (explicit) route to achieve this goal. The combination of control functionals with the Sequential QMC approach of Gerber and Chopin should provide a highly effective approach to estimation. [7]{} Dick, J., Kuo, F. Y., Sloan, I. H. (2013) High-Dimensional Integration: The Quasi-Monte Carlo Way. [*Acta Numerica*]{}, 22:133-288. Hickernell, F. J., Lemieux, C., Owen, A. B.
(2005) Control Variates for Quasi-Monte Carlo. [*Statistical Science*]{}, 20(1):1-31. Oates, C. J., Girolami, M., Chopin, N. (2014) Control Functionals for Monte Carlo Integration. [*CRiSM Working Paper Series, The University of Warwick*]{}, 14:22. Oates, C. J., Girolami, M. (2015) Variance Reduction for Quasi-Monte Carlo. [*Forthcoming*]{}. Owen, A. B. (1997) Scrambled net variance for integrals of smooth functions. [*Annals of Statistics*]{} 25 (4):1541-1562. [^1]: Email: c.oates@warwick.ac.uk [^2]: Email: d.p.simpson@warwick.ac.uk [^3]: Email: m.girolami@warwick.ac.uk
--- address: | $^a$ Departamento de Física Teórica, Universidad Autónoma de Madrid,\ Cantoblanco, 28049 Madrid, Spain\ E-mail: [claudia@mail.desy.de]{} author: - | C Glasman$^{a*}$\ representing the H1 and ZEUS Collaborations title: 'QCD Tests at HERA$^\dag$' --- Introduction ============ Measurements of cross sections for processes involving a large momentum transfer (e.g. jets in $\gp$ interactions) are compared to next-to-leading order (NLO) calculations to test QCD. These measurements also provide a test of the parametrisations of the photon parton densities and may be used in global analyses to constrain the parton distributions. The investigation of the internal structure of jets gives insight into the transition between a parton produced in a hard process and the experimentally observable spray of hadrons. Measurements of jet substructure allow the study of the characteristics of quark and gluon jets. Jet cross sections in $\gp$ interactions ======================================== At HERA, positrons of energy $E_e=27.5$ GeV collide with protons of energy $E_p=820$ GeV. The main source of jets at HERA is hard scattering in $\gp$ interactions in which a quasi-real photon ($\q2\approx 0$, where $\q2$ is the virtuality of the photon) emitted by the positron beam interacts with a parton from the proton to produce two jets in the final state. At leading order (LO) QCD, there are two processes which contribute to the jet production cross section: the resolved process (figure \[fig1\]a) in which the photon interacts through its partonic content, and the direct process (figure \[fig1\]b) in which the photon interacts as a point-like particle. 
The cross section for jet production at LO QCD in $\gp$ interactions is given, for the direct process, by $$\sigma(ep\rightarrow {\rm jet}\ {\rm jet}\ X) = \sum_{j} \int d\Omega\ f_{\gamma /e}(y)\, f_{j/p}(x_p,\mu^2_F)\, d\sigma(\gamma j\rightarrow {\rm jet}\ {\rm jet}),$$ where $f_{\gamma /e}(y)$ is the flux of photons in the positron, usually estimated by the Weizsäcker-Williams approximation [@wwa] ($y$ is the fraction of the positron energy taken by the photon); $f_{j/p}(x_p,\mu^2_F)$ are the parton densities in the proton, determined from e.g. global fits [@mrs] ($x_p$ is the fraction of the proton momentum taken by parton $j$ and $\mu_F$ is the factorisation scale); and $d\sigma(\gamma(i)j\rightarrow {\rm jet}\ {\rm jet})$ is the subprocess cross section, calculable in perturbative QCD. In the case of resolved processes, there is an additional ingredient: $f_{i/\gamma}(x_{\gamma},\mu^2_F)$ are the parton densities in the photon, for which there is only partial information ($x_{\gamma}$ is the fraction of the photon momentum taken by parton $i$); the resolved cross section carries this extra factor and a sum over $i$. The integrals are performed over the phase space represented by “$d\Omega$”.

The iterative cone algorithm
----------------------------

In hadronic-type interactions, jets are usually reconstructed by a cone algorithm [@cone]. Experimentally, jets are found in the pseudorapidity ($\eta$) $-$ azimuth ($\varphi$) plane using the transverse energy flow of the event. The jet variables are defined according to the Snowmass Convention [@snow]: $\etjet = \sum_i E^i_T$, $\etajet = \frac{\sum_i E^i_T\cdot\eta_i}{\etjet}$ and $\phijet = \frac{\sum_i E^i_T \cdot\varphi_i}{\etjet}$. In the iterative cone algorithm, jets are found by maximising the summed transverse energy within a cone of radius $R$.

Inclusive jet cross sections
============================

Inclusive jet cross sections have been measured [@incjet] using the $1995-1997$ ZEUS [@status] data (which amounts to an integrated luminosity of ${\cal L}\sim 43$ pb$^{-1}$) as a function of the jet transverse energy. The jets have been searched for with an iterative cone algorithm with $\rr1$.
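The Snowmass jet variables defined above are just an $E_T$ sum and $E_T$-weighted means over the towers (or particles) assigned to the cone. A minimal sketch (Python; the function name is ours, and the $2\pi$ wrap-around in $\varphi$ is ignored for simplicity):

```python
def snowmass_jet(towers):
    """Combine (E_T, eta, phi) towers into jet variables using the
    Snowmass convention: scalar E_T sum and E_T-weighted means
    (ignoring the 2*pi wrap-around in phi)."""
    et_jet = sum(et for et, _, _ in towers)
    eta_jet = sum(et * eta for et, eta, _ in towers) / et_jet
    phi_jet = sum(et * phi for et, _, phi in towers) / et_jet
    return et_jet, eta_jet, phi_jet
```

In the iterative cone step one would recompute these variables after each reassignment of towers to the cone axis, stopping when the jet is stable.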
The measurements have been performed for jets of hadrons with $\etjet$ between 17 and 74 GeV and $\etajet$ between $-0.75$ and $2.5$, and are given for the kinematic region defined by $0.2<y<0.85$ and $\q2\leq 4$ GeV$^2$. Figure \[fig2\]a shows the measured $\set$ (black dots). The systematic uncertainties not associated with the absolute energy scale of the jets have been added in quadrature to the statistical errors (thick error bars) and are shown as thin error bars. The shaded band represents the uncertainty on the energy scale of the jets. The data show a steep fall-off over four orders of magnitude in the measured range. NLO QCD calculations -------------------- There are several complete calculations of jet cross sections at NLO for $\gp$ interactions [@klasen1; @harris; @frixione; @aurenche]. Two types of corrections contribute at NLO: the virtual corrections, which include internal particle loops, and the real corrections, which include a third parton in the final state. The existing calculations differ mainly in the treatment of the real corrections. A detailed comparison of the calculations [@khf] shows reasonable agreement among the NLO calculations of dijet cross sections and among the LO calculations of three-jet cross sections, with differences of up to $\sim 5\%$. The curves in figure \[fig2\]a are NLO QCD calculations [@klasen1; @harris] using different parametrisations of the photon structure function: GS96 [@gs] (solid line), GRV-HO [@grv] (dashed line) and AFG [@afg] (dot-dashed line). The CTEQ4M [@cteq4] proton parton densities have been used in all cases. In the calculations shown here, the renormalisation and factorisation scales have been chosen equal to $\etjet$ and $\alpha_s$ was calculated at two loops with $\Lambda^{(4)}_{\overline{MS}}=296$ MeV. The NLO calculations give a reasonable description of the data.
Figure \[fig2\]b shows the fractional differences between the measured $\set$ and the NLO calculations based on GRV-HO. Comparing theory and experiment ------------------------------- To perform tests of QCD and to extract information on the photon parton densities, the experimental and theoretical uncertainties must be reduced as much as possible. Among the main experimental uncertainties is the presence of a possible underlying event, which results from soft interactions between the partons in the photon and proton remnants (for resolved events) and is not included in the calculations. The uncertainty of the measurements due to the underlying event is reduced by decreasing the cone radius or by increasing the transverse energy of the jets [@zeus2]. On the theoretical side, since calculations are made only at NLO, the implementation of the iterative cone jet algorithm in the theory does not match the experimental procedure exactly. The theoretical uncertainty coming from this effect is reduced by using the longitudinally invariant $\kt$ cluster algorithm [@kt]. The $\kt$ cluster algorithm --------------------------- In the inclusive $\kt$ cluster algorithm [@kt], jets are identified by successively combining nearby pairs of particles until a jet is complete. The $\kt$ algorithm allows a transparent translation of the theoretical jet algorithm to the experimental set-up by avoiding the ambiguities related to the merging and overlapping of jets, and it is infrared safe to all orders. Figure \[fig3\] shows $\set$ for jets found using the $\kt$ cluster algorithm. The NLO QCD calculations, using the current knowledge of the photon structure, are able to describe the data within the present experimental and theoretical uncertainties. A comparison between $\set$ measured using the cone and the $\kt$ algorithms has been made (see figure \[fig4\]).
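The inclusive $\kt$ clustering just described can be sketched as follows. This is a simplified illustration, not the experiments' implementation: it uses the distances $d_{iB}=E_{T,i}^2$ and $d_{ij}=\min(E_{T,i},E_{T,j})^2\,\Delta R_{ij}^2/R^2$ and a simple $E_T$-weighted recombination scheme:

```python
# Minimal inclusive kt clustering sketch (ET-weighted recombination;
# real analyses typically use four-momentum recombination).

def kt_cluster(particles, R=1.0):
    """particles: list of (ET, eta, phi); returns list of jets (ET, eta, phi)."""
    parts = list(particles)
    jets = []
    while parts:
        # beam distances d_iB and pairwise distances d_ij
        dib = [(et * et, i) for i, (et, _, _) in enumerate(parts)]
        dij = []
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                (eti, etai, phii), (etj, etaj, phij) = parts[i], parts[j]
                dr2 = (etai - etaj) ** 2 + (phii - phij) ** 2
                dij.append((min(eti, etj) ** 2 * dr2 / R ** 2, i, j))
        dmin_b = min(dib)
        dmin_ij = min(dij) if dij else (float("inf"), -1, -1)
        if dmin_b[0] <= dmin_ij[0]:
            jets.append(parts.pop(dmin_b[1]))          # object becomes a jet
        else:
            _, i, j = dmin_ij                          # merge the closest pair
            (eti, etai, phii), (etj, etaj, phij) = parts[i], parts[j]
            et = eti + etj
            merged = (et, (eti * etai + etj * etaj) / et,
                      (eti * phii + etj * phij) / et)
            for k in sorted((i, j), reverse=True):
                parts.pop(k)
            parts.append(merged)
    return jets
```

Two nearby particles ($\Delta R \ll R$) are merged into one jet, while well-separated ones come out as separate jets, which is the behaviour the infrared-safety argument relies on.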
The differences between the measured cross sections are typically smaller than 10%. This comparison shows that the cone algorithm with $\rr1$ and the $\kt$ cluster algorithm probe the underlying parton dynamics in a comparable way. Therefore, in the experiment, the choice of jet algorithm is not crucial. The use of the $\kt$ cluster algorithm is dictated by the need to reduce the theoretical uncertainties. Jet substructure ================ The use of the $\kt$ cluster algorithm allows the study of the internal structure of the jets in terms of subjets. Subjets are jet-like objects within a jet and are resolved by reapplying the $\kt$ cluster algorithm until, for every pair of particles $i$ and $j$, $$d_{ij}=\min(E^i_T,E^j_T)^2\left[(\eta_i-\eta_j)^2+(\varphi_i-\varphi_j)^2\right] > y_{\rm cut}\,(\etjet)^2.$$ Measurements of the mean subjet multiplicity ($<n_{\rm subjet}>$) have been performed by ZEUS [@subjet] using an inclusive sample of jets with $\etjet>15$ GeV and $-1<\etajet<2$ in the kinematic range defined by $0.2<y<0.85$ and $\q2\leq 1$ GeV$^2$. Figure \[fig5\]a shows $<n_{\rm subjet}>$ as a function of $y_{\rm cut}$. The data (black dots; the statistical and systematic uncertainties are included but are smaller than the dots) show that $<n_{\rm subjet}>$ grows as $y_{\rm cut}$ decreases within the measured range. The lines are calculations from the leading-logarithm parton-shower Monte Carlo programs PYTHIA [@pythia] and HERWIG [@herwig]. The calculations based on PYTHIA give a good description of the data. Figure \[fig5\]b shows $<n_{\rm subjet}>$ as a function of $\etajet$ for $y_{\rm cut}=0.01$: $<n_{\rm subjet}>$ increases as $\etajet$ increases. The comparison with the predictions for quark and gluon jets shows that the increase in $<n_{\rm subjet}>$ as $\etajet$ increases is consistent with the predicted increase in the fraction of gluon jets.
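The subjet-resolution procedure (recombine the closest pair in the $\kt$ distance inside a jet until every remaining pair exceeds the $y_{\rm cut}$ threshold, then count the survivors) can be sketched as follows; the recombination scheme and the numerical inputs are simplified and illustrative:

```python
def count_subjets(constituents, y_cut):
    """Resolve subjets within one jet: merge the closest pair in the kt
    distance until every pair satisfies d_ij > y_cut * (ET_jet)^2.
    constituents: list of (ET, eta, phi).  (Illustrative sketch.)"""
    objs = list(constituents)
    et_jet = sum(et for et, _, _ in objs)
    while len(objs) > 1:
        best = None
        for i in range(len(objs)):
            for j in range(i + 1, len(objs)):
                (eti, ei, pi), (etj, ej, pj) = objs[i], objs[j]
                d = min(eti, etj) ** 2 * ((ei - ej) ** 2 + (pi - pj) ** 2)
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > y_cut * et_jet ** 2:
            break                        # all pairs resolved as subjets
        (eti, ei, pi), (etj, ej, pj) = objs[i], objs[j]
        et = eti + etj
        objs[i] = (et, (eti * ei + etj * ej) / et, (eti * pi + etj * pj) / et)
        objs.pop(j)
    return len(objs)
```

Lowering $y_{\rm cut}$ resolves more subjets for the same constituents, which is the trend seen in the measured $<n_{\rm subjet}>$ versus $y_{\rm cut}$.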
Dijet cross sections ==================== Dijet cross sections have been measured [@dijeth1] using the $1994-1997$ H1 [@statush1] data (which amount to an integrated luminosity of ${\cal L}\sim 36$ pb$^{-1}$) as a function of the transverse energy of the leading jet and the average transverse energy of the two leading jets. The jets have been found using the $\kt$ cluster algorithm. The measurements have been performed for jets of hadrons with $E_T^{jet1}>25$ GeV, $E_T^{jet2}>15$ GeV and $-0.5<\etajet<2.5$, and are given for the kinematic region defined by $y<0.9$ and $\q2<4$ GeV$^2$. Figure \[fig6\] shows the measured cross sections (black dots). The systematic uncertainties (including that associated with the absolute energy scale of the jets) have been added in quadrature to the statistical errors (inner error bars) and are shown as the outer error bars. The data show a steep fall-off of three orders of magnitude in the measured range. The histograms are the calculations using PYTHIA, which provide a good description of the shape of the measured distributions. High-mass dijet cross sections ============================== The dijet mass distribution $\mj$ is sensitive to the presence of new particles or resonances that decay into two jets. The distribution of the angle between the jet-jet axis and the beam direction in the dijet centre-of-mass system ($\cos\theta^*$) reflects the underlying parton dynamics and is sensitive to the spin of the exchanged particle. New particles or resonances decaying into two jets may also be identified by deviations in the measured $\cos\theta^*$ distribution with respect to the QCD predictions. The cross section as a function of the dijet invariant mass has been measured by H1 [@dijeth1] using the $\kt$ cluster algorithm between 48 and 148 GeV (figure \[fig7\]).
The data show a steep fall-off of more than two orders of magnitude within the measured range. The histogram is the calculation using PYTHIA, which gives a reasonable description of the shape of the measured distribution. There are large uncertainties in the normalisation of the LO QCD calculations, which indicate the need for NLO corrections. High-mass dijet cross sections have been measured [@dijetze] using the $1995-1997$ ZEUS data as a function of $\mj$ and $\cost$. The measurements have been performed for $\mj>47$ GeV and $\cost<0.8$ using the $\kt$ cluster algorithm. The data show a steep fall-off in $\mj$ of three orders of magnitude in the measured range (figure \[fig8\]a). The measured $\scost$ rises as $\cost$ increases (figure \[fig8\]b). The NLO QCD calculations [@klasen1] give a reasonable description of the measured distributions. The calculations based on GRV-HO are closer in magnitude to the measured cross sections. No significant deviation between data and NLO calculations is observed in the measured range of $\mj$ and $\cost$. Three-jet cross sections ======================== Measurements of three-jet cross sections provide a test of QCD beyond LO and allow a search for new phenomena. NLO calculations for three-jet cross sections in $\gp$ interactions are not yet available. The calculations shown here are LO for these processes and are therefore subject to large renormalisation and factorisation scale uncertainties. The cross section for three-jet production at LO is given by a formula analogous to the dijet case, with the $2\rightarrow 3$ subprocess cross section $d\sigma(\gamma(i)j\rightarrow {\rm jet\ jet\ jet})$ replacing the $2\rightarrow 2$ one. Five parameters are necessary to uniquely determine the three-body phase space.
These are the three-jet invariant mass ($M_{3j}$); the energy-sharing quantities $X_3$ and $X_4$ (the jets are numbered 3, 4 and 5 in order of decreasing energy), $X_i\equiv\frac{2E_i}{M_{3j}}$; the cosine of the scattering angle of the highest energy jet with respect to the beam, $\costh3\equiv\frac{\vec{p}_{B}\cdot\vec{p}_3}{|\vec{p}_{B}| |\vec{p}_3|}$; and $\psi_3$, the angle between the plane containing the highest energy jet and the beam and the plane containing the three jets. The latter is defined by $\cos{\psi_3}\equiv\frac{(\vec{p}_3\times\vec{p}_{B})\cdot(\vec{p}_4 \times \vec{p}_5)}{|\vec{p}_3 \times \vec{p}_{B}| |\vec{p}_4 \times \vec{p}_5|}$. The definition of the angles $\theta_3$ and $\psi_3$ is illustrated in figure \[fig9\]. Since $\theta_3$ involves only the highest energy jet, the distribution of $\costh3$ in three-jet events is expected to follow closely the distribution of $\cos\theta^*$ in dijet events. The $\psi_3$ angle, on the other hand, reflects the orientation of the lowest energy jet. Figure \[fig10\] shows the three-jet cross section as a function of the transverse energy of the lowest energy jet and the three-jet invariant mass. The comparison to the calculations using the PYTHIA Monte Carlo shows that the parton shower models describe the shape of the measured cross sections. The three-jet invariant mass cross section was measured by ZEUS [@zeus3j] in the kinematic region defined by $|\costh3|<0.8$ and $X_3<0.95$ (see figure \[fig11\]). The curves in figure \[fig11\] are the $\oaa$ QCD calculations [@klasenk; @harrisk] using the GRV-LO [@grv] parametrisations of the photon structure function and the CTEQ4 LO [@cteq4] proton parton densities. The renormalisation and factorisation scales have been chosen equal to $E_T^{\max}$, the largest of the $\etjet$ values of the three jets. $\alpha_s$ was calculated at one loop with $\Lambda^{(5)}_{\overline{MS}}=181$ MeV.
The calculations give a good description of the data, even though they are LO for this process. Monte Carlo calculations are also compared to the data: they provide a good description of the data in shape, but the magnitude is $30-40\%$ too low. The search for new particles or resonances decaying into two jets can be extended by looking for deviations in the distributions of the dijet invariant masses in three-jet events with respect to the predictions of QCD. Figure \[fig12\] shows the dijet invariant mass distributions in three-jet events for all possible pairs of jets. The histograms are the predictions from the QCD-based Monte Carlo models. No significant deviation between data and calculations is observed up to the highest invariant mass value studied. Figures \[fig13\]a and \[fig13\]b show the $X_3$ and $X_4$ distributions measured by ZEUS [@zeus3j] for $M_{3j}>50$ GeV, $|\cos\theta_3|<0.8$ and $X_3<0.95$. Calculations from different models are compared to the data: a pure phase-space calculation does not describe the data. The $\oaa$ QCD calculations are in good agreement with the data. The angular distribution of the lowest energy jet in three-jet events is a distinct probe of the dynamics beyond LO. The measured $\psi_3$ distribution (figure \[fig13\]d) is drastically different from pure phase space and is in agreement with the $\oaa$ QCD calculations. The comparison of the data to parton shower models shows that the data favour colour coherence. The measured $\costh3$ distribution [@zeus3jn] (figure \[fig13\]c) indicates that the highest energy jet tends to go either forward (proton direction) or towards the rear (photon direction). The $\oaa$ QCD calculations and the Monte Carlo models are in good agreement with the data.
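The definitions of $\cos\theta_3$ and $\cos\psi_3$ given earlier translate directly into elementary vector algebra. A short sketch, with purely illustrative jet momenta and the beam taken along the $z$-axis:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(a):
    return math.sqrt(dot(a, a))

def three_jet_angles(p3, p4, p5, p_beam=(0.0, 0.0, 1.0)):
    """cos(theta_3) and cos(psi_3) for jets ordered by decreasing energy."""
    cos_th3 = dot(p_beam, p3) / (norm(p_beam) * norm(p3))
    n1 = cross(p3, p_beam)       # normal of the (jet 3, beam) plane
    n2 = cross(p4, p5)           # normal of the three-jet plane
    cos_psi3 = dot(n1, n2) / (norm(n1) * norm(n2))
    return cos_th3, cos_psi3

cos_th3, cos_psi3 = three_jet_angles((1.0, 0.0, 1.0),
                                     (0.0, 1.0, 0.0),
                                     (-1.0, -1.0, -1.0))
# here cos_th3 = 1/sqrt(2) and cos_psi3 = 0
```

In a real analysis the momenta would come from the reconstructed jets in the three-jet centre-of-mass system.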
The variable $x_{\gamma}^{OBS}$, $\xo\equiv{\sum_{jets}\etjet e^{-\etajet}\over 2yE_e},$ gives the fraction of the photon energy invested in the production of the three-jet system and can be used to define resolved and direct processes in a meaningful way to all orders. Figure \[fig14\] shows the $\costh3$ distribution for different regions of $\xo$. The data indicate that the highest energy jet tends to go in the forward direction for $\xo<0.5$ (where LO resolved processes dominate) and more backward as $\xo$ increases. The $\oaa$ QCD calculations are in good agreement with the data except for $0.9<\xo<1$. Conclusions =========== Significant progress in comparing measurements and QCD calculations of jet cross sections in $\gp$ interactions has been achieved; experimental and theoretical uncertainties have been reduced. NLO QCD calculations of inclusive jet and dijet cross sections and $\oaa$ QCD calculations of three-jet cross sections describe the measurements reasonably well. No significant deviation with respect to QCD predictions has been observed within the measured range of the variables studied. Acknowledgements {#acknowledgements .unnumbered} ================ I would like to thank my colleagues from H1 and ZEUS for their help in preparing this report. [99]{} C.F.v. Weizsäcker, ; E.J. Williams, . A.D. Martin, R.G. Roberts, W.J. Stirling and R.S. Thorne, hep-ph/9805205; M. Glück, E. Reya and A. Vogt, ; CTEQ Collab., H.L. Lai [*et al*]{}, hep-ph/9903282. CDF Collab., F. Abe [*et al*]{}, . J. Huth [*et al*]{}, Proc. of the 1990 DPF Summer Study on High Energy Physics, Snowmass, Colorado, edited by E.L. Berger (World Scientific, Singapore, 1992) p. 134. ZEUS Collab., Contributed paper to ICHEP’98, Vancouver, Canada, July 1998, N-812. The ZEUS Detector, Status Report (1993), DESY 1993. M. Klasen, T. Kleinwort and G. Kramer, . B.W. Harris and J.F. Owens, . S. Frixione and G. Ridolfi, . P. Aurenche, L. Bourhis, M.
Fontannaz and J. Guillet, Proc. of “Future Physics at HERA” (1996) 570. B.W. Harris, M. Klasen and J. Vossebeld, hep-ph/9905348. L.E. Gordon and J.K. Storrow, . M. Glück, E. Reya and A. Vogt . P. Aurenche, J.P. Guillet, M. Fontannaz, . H.L. Lai [*et al*]{}, . ZEUS Collab., J. Breitweg [*et al*]{}, . S. Catani, Yu.L. Dokshitzer, M.H. Seymour and B.R. Webber, : S.D. Ellis and D.E. Soper, ; M.H. Seymour, . ZEUS Collab., Contributed paper to IECHEP’99, Tampere, Finland, July 1999, N-530. H.-U. Bengtsson and T. Sjöstrand, ; T. Sjöstrand, . G. Marchesini [*et al*]{}, . H1 Collab., Contributed paper to IECHEP’99, Tampere, Finland, July 1999, N-157u. H1 Collab., I. Abt [*et al*]{}, and 348. ZEUS Collab., Contributed paper to ICHEP’98, Vancouver, Canada, July 1998, N-805. ZEUS Collab., J. Breitweg [*et al*]{}, . M. Klasen, . B.W. Harris and J.F. Owens, . ZEUS Collab., Contributed paper to IECHEP’99, Tampere, Finland, July 1999, N-544.
--- abstract: 'Let $r=r(n)$ be a sequence of integers such that $r\leq n$ and let $X_1,\ldots,X_{r+1}$ be independent random points distributed according to the Gaussian, the Beta or the spherical distribution on $\mathbb{R}^n$. Limit theorems for the log-volume and the volume of the random convex hull of $X_1,\ldots,X_{r+1}$ are established in high dimensions, that is, as $r$ and $n$ tend to infinity simultaneously. This includes Berry-Esseen-type central limit theorems, log-normal limit theorems, moderate and large deviations. Also, different types of mod-$\phi$ convergence are derived. The results depend heavily on the asymptotic growth of $r$ relative to $n$. For example, we prove that the fluctuations of the volume of the simplex are normal (respectively, log-normal) if $r=o(n)$ (respectively, $r\sim \alpha n$ for some $0 < \alpha < 1$).' address: - 'Julian Grote: Fakultät für Mathematik, Ruhr-Universität Bochum, 44780 Bochum, Germany' - 'Zakhar Kabluchko: Institut für Mathematische Stochastik, Universität Münster, 48149 Münster, Germany' - 'Christoph Thäle: Fakultät für Mathematik, Ruhr-Universität Bochum, 44780 Bochum, Germany' author: - Julian Grote - Zakhar Kabluchko - Christoph Thäle title: Limit theorems for random simplices in high dimensions --- Introduction ============ In the last decades, random polytopes have become one of the central models studied in stochastic geometry. In particular, they have seen numerous applications in other branches of mathematics such as asymptotic geometric analysis, compressed sensing, computational geometry, optimization and multivariate statistics; see, for example, the surveys of Bárány [@BaranySurvey], Hug [@HugSurvey] and Reitzner [@ReitznerSurvey] for further details and references. The focus in most works has been on models of the following type. First, we fix a space dimension $n\in{\mathbb{N}}$ and a probability measure $\mu$ on ${\mathbb{R}}^n$.
Then, we let $X_1,\ldots,X_r$, where $r\geq n+1$, be independent random points in ${\mathbb{R}}^n$ that are distributed according to $\mu$. A random polytope $P_r$ now arises by taking the convex hull of the points $X_1,\ldots,X_r$. Starting with the seminal paper of Rényi and Sulanke [@RenyiSulanke], the asymptotic behaviour of the expectation and the variance of the volume or the number of faces of $P_r$ has been studied intensively, as $r\to\infty$, while keeping $n$ fixed. Moreover, it has been investigated whether these quantities exhibit ’typical’ or ’atypical’ behaviour, i.e., whether they fulfil a central limit theorem, large or moderate deviation principles or concentration inequalities, to name just a few topics of current research. However, up to a few exceptions, it has not been investigated what happens if the space dimension $n$ is not fixed, but tends to infinity. The only such exceptions we were able to locate in the literature are the papers of Ruben [@RubenCLT], Mathai [@MathaiCLT], Anderson [@Anderson] and Maehara [@Maehara]. It is shown in the first two of these works that for any [*fixed*]{} $r\in{\mathbb{N}}$ the $r$-volume of the convex hull of $r+1\le n+1$ independent and uniform random points, of which some are in the interior of the $n$-dimensional unit ball and the others on its boundary, is asymptotically normal, as $n\to\infty$. The third work establishes the same result in the situation where the $r$ points are distributed according to the so-called Beta-type distribution in the $n$-dimensional unit ball. The fourth paper generalizes the set-up to an arbitrary underlying $n$-fold product distribution on $\mathbb{R}^n$. On the other hand, the regime in which $r$ and $n$ tend to infinity *simultaneously* is not treated in these papers.
The purpose of the present text is to close this gap and to prove a collection of probabilistic limit theorems for the $r$-volume of the convex hull of $r+1\le n+1$ random points that are distributed according to certain classes of probability distributions that allow for explicit computations, focusing especially on different regimes of growth of the parameter $r$ relative to $n$. More precisely, we distinguish between the following three regimes. The first one is the case where $r$ grows like $o(n)$ with the dimension $n$, which means that $r/n$ converges to zero, as $n \rightarrow \infty$. This of course includes the situations where $r$ is fixed – covering thereby the case considered in the four papers mentioned above – or behaves like $n^\alpha$ with $\alpha\in (0,1)$, to give just two examples (let us emphasize at this point that we interpret expressions like $\sqrt{n}$ or $n/2$ as $\lfloor \sqrt{n} \rfloor$ and $\lfloor n/2 \rfloor$, respectively, in what follows). Secondly, the underlying situation might be the one where $r$ is asymptotically equivalent to $\alpha n$ for some $\alpha \in (0,1)$. Lastly, we analyse the setting where $n-r = o(n)$, as $n\rightarrow \infty$. In particular, for $r=n$ we arrive at the situation where we choose $n+1$ random points and thus their convex hull is nothing but a full-dimensional simplex in $\mathbb{R}^n$.\ Our paper and the results we are going to present (and which represent a ’complete’ description of the high-dimensional probabilistic behaviour of the underlying random simplices) are organized as follows. In Section $2$ we introduce the different random point models we consider and state formulas for the moments of the volume of the random simplices induced by the convex hulls of these point sets. By using these moments, we are then able to derive the precise distributions of the previously mentioned volumes. In Section $3$ we start with the first limit theorems.
By using the method of cumulants, we give ’optimal’ Berry-Esseen bounds and moderate deviation principles for the logarithmic volumes of our random simplices. Then, we transfer the limit theorems from the log-volume to the volume itself and thereby obtain a phase transition in the limiting behaviour depending on the choice of the parameter $r$. Section $4$ establishes results concerning so-called mod-$\phi$ convergence and is also the starting point for the results presented in Section $5$, where we add large deviation principles to the moderate ones obtained earlier in Section $3$. Models, volumes and probabilistic representations ================================================= The four models {#sec:SectionModels} --------------- In this paper we consider convex hulls of random points $X_1,X_2,\ldots$ We restrict ourselves to the following four models, which allow for explicit computations. These models were identified by Miles [@Miles71] and by Ruben and Miles [@RubenMiles80]. - The *Gaussian model*: $X_1,X_2,\ldots$ are i.i.d. with standard normal density $$f(|x|) = (2\pi)^{-n/2} \cdot {{\rm e}}^{-\frac 12 |x|^2}, \quad x\in{\mathbb{R}}^n.$$ - The *Beta model* with parameter $\nu>0$: $X_1,X_2,\ldots$ are i.i.d. points in the ball of radius $1$ with density $$f(|x|) = \frac 1 {\pi^{n/2}} \frac{\Gamma\left(\frac{n+\nu}{2}\right)}{\Gamma\left(\frac \nu 2\right)} \cdot \left(1- |x|^2\right)^{(\nu-2)/2}, \quad x\in{\mathbb{R}}^n, \;\; |x| < 1.$$ - The *Beta prime model* with parameter $\nu>0$: $X_1,X_2,\ldots$ are i.i.d. points with density $$f(|x|) = \frac 1 {\pi^{n/2}} \frac{\Gamma\left(\frac{n+\nu}{2}\right)}{\Gamma\left(\frac \nu 2\right)} \cdot \left(1 + |x|^2\right)^{-(n+ \nu)/2}, \quad x\in{\mathbb{R}}^n.$$ - The *spherical model*: $X_1,X_2,\ldots$ are uniformly distributed on the sphere of radius $1$ centered at the origin of ${\mathbb{R}}^n$.
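For simulations, the Beta model admits a simple radial representation: by the change of variables $t=|x|^2$ applied to the radial density $r^{n-1}(1-r^2)^{(\nu-2)/2}$, the squared norm $|X|^2$ follows a Beta$(n/2,\nu/2)$ law, while the direction is uniform on the sphere. A stdlib-only sampler sketch (this representation is a standard consequence of the density above, not a statement taken from the paper):

```python
import math
import random

def sample_beta_model(n, nu):
    """Draw one point from the Beta model in R^n: uniform direction on the
    sphere, squared radius ~ Beta(n/2, nu/2)."""
    g = [random.gauss(0.0, 1.0) for _ in range(n)]
    s = math.sqrt(sum(x * x for x in g))
    u = [x / s for x in g]                          # uniform direction
    r = math.sqrt(random.betavariate(n / 2.0, nu / 2.0))
    return [r * x for x in u]
```

A quick consistency check: the mean of Beta$(n/2,\nu/2)$ gives $\mathbb{E}|X|^2 = n/(n+\nu)$, which a Monte Carlo average reproduces.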
Observe that in the Beta prime model the power is $(n+\nu)/2$ (which depends on $n$) rather than just $\nu/2$. Moments for the volumes of random simplices and parallelotopes -------------------------------------------------------------- Let $1\leq r\leq n$ be an integer and $X_1,\ldots,X_{r+1}$ be independent random points in ${\mathbb{R}}^n$ that are distributed according to one of the distributions introduced in Section \[sec:SectionModels\]. By $\mathcal V_{n,r}$ we denote the $r$-dimensional volume of the simplex with vertices $X_1,\ldots,X_{r+1}$. Moreover, we use the symbol $\mathcal W_{n,r}$ to indicate the $r$-dimensional volume of the parallelotope spanned by the vectors $X_1,\ldots,X_r$. Note that up to a factor $r!$, ${\mathcal{W}}_{n,r}$ is the same as the $r$-dimensional volume of the simplex with vertices $0,X_1,\ldots,X_{r}$. We start by recalling formulas for the moments of ${\mathcal{W}}_{n,r}$. Moments of integer orders can directly be computed using the well-known linear Blaschke-Petkantschin formula from integral geometry together with an induction argument. \[theo:vol\_Parallelotopes\_Moments\] Let $\mathcal W_{n,r}$ be the volume of the $r$-dimensional parallelotope spanned by the vectors $X_1,\ldots,X_{r}$ chosen according to one of the above four models. 
- In the Gaussian model we have, for all $k\geq 0$, $${\mathbb E}[{\mathcal{W}}_{n,r}^{2k}] = \prod_{j=1}^r\Bigg[2^k{\Gamma\Big({n-r+j\over 2}+k\Big)\over\Gamma\Big({n-r+j\over 2}\Big)}\Bigg].$$ - In the Beta model with parameter $\nu>0$ we have, for all $k\geq 0$, $${\mathbb E}[{\mathcal{W}}_{n,r}^{2k}] =\prod_{j=1}^r\Bigg[{\Gamma\Big({n-r+j\over 2}+k\Big)\Gamma\Big({n+\nu\over 2}\Big)\over\Gamma\Big({n-r+j\over 2}\Big)\Gamma\Big({n+\nu\over 2}+k\Big)}\Bigg].$$ - In the Beta prime model with parameter $\nu>0$ we have, for all $k\in (0,\frac \nu 2]$, $${\mathbb E}[{\mathcal{W}}_{n,r}^{2k}] =\prod_{j=1}^r\Bigg[{\Gamma\Big({n-r+j\over 2}+k\Big)\Gamma\Big({\nu\over 2}-k\Big)\over\Gamma\Big({n-r+j\over 2}\Big)\Gamma\Big({\nu\over 2}\Big)}\Bigg].$$ - In the spherical model we have, for all $k\geq 0$, $${\mathbb E}[{\mathcal{W}}_{n,r}^{2k}] =\prod_{j=1}^r\Bigg[{\Gamma\Big({n-r+j\over 2}+k\Big)\Gamma\Big({n\over 2}\Big)\over\Gamma\Big({n-r+j\over 2}\Big)\Gamma\Big({n\over 2}+k\Big)}\Bigg].$$ The formula in (a) can be concluded from [@Mathai_Random_Parallelotopes] or [@Ruben79]. Formula (b) is Theorem 19.2.5 from [@mathai_charalambides], Formula (c) is Theorem 19.2.6 from [@mathai_charalambides]. Formula (d) is the limiting case of (c) for $\nu \downarrow 0$ but is actually also contained both in Theorems 19.2.5 and 19.2.6 from [@mathai_charalambides] because these deal with a slightly more general model which allows for some points to be chosen uniformly on the unit sphere. For simplices, the moments are very similar. The products appearing in the formulas for simplices are the same as for parallelotopes, but certain additional factors involving the $\Gamma$-function appear. Again, for moments of integer order, a direct proof for these formulas can be carried out using the affine Blaschke-Petkantschin formula and an induction argument (compare, for example, with the proof of [@SW Theorem 8.2.3] for the special case of the Beta model with $\nu=2$ and the spherical model.) 
\[theo:vol\_Simplices\_Moments\] Let $\mathcal V_{n,r}$ be the volume of the $r$-dimensional simplex with vertices $X_1,\ldots,X_{r+1}$ chosen according to one of the above four models. - In the Gaussian model we have, for all real $k\geq 0$, $${\mathbb E}[(r!{\mathcal{V}}_{n,r})^{2k}] = (r+1)^{k}\, \prod_{j=1}^r\bigg[2^{k}{\Gamma\big({n-r+j\over 2}+k\big)\over\Gamma\big({n-r+j\over 2}\big)}\bigg].$$ - In the Beta model with parameter $\nu>0$ we have, for all real $k\geq 0$, $$\begin{aligned} {\mathbb E}[(r!{\mathcal{V}}_{n,r})^{2k}] &= \prod_{j=1}^r\Bigg[{\Gamma\Big({n-r+j\over 2}+k\Big)\over\Gamma\Big({n-r+j\over 2}\Big)}{\Gamma\Big({n+\nu\over 2}\Big)\over\Gamma\Big({n+\nu\over 2}+k\Big)}\Bigg]\\ &\qquad\qquad\qquad\times{\Gamma\Big({n+\nu\over 2}\Big)\over\Gamma\Big({n+\nu\over 2}+k\Big)}{\Gamma\Big({r(n+\nu-2)+(n+\nu)\over 2}+(r+1)k\Big)\over\Gamma\Big({r(n+\nu-2)+(n+\nu)\over 2}+rk\Big)}.\end{aligned}$$ - In the Beta prime model with parameter $\nu>0$ we have, for all real $0\leq k<{\nu\over 2}$, $$\begin{aligned} {\mathbb E}[(r!{\mathcal{V}}_{n,r})^{2k}] &= \prod_{j=1}^r \Bigg[ \frac{\Gamma\Big(\frac{n-r+j}{2}+k\Big)}{\Gamma\Big(\frac{n-r+j}{2}\Big)} \frac{\Gamma\Big(\frac \nu 2 -k \Big)}{\Gamma\Big(\frac \nu2\Big)} \Bigg] \frac{\Gamma\Big(\frac \nu 2 -k \Big)}{\Gamma\Big(\frac \nu2\Big)} \frac{\Gamma\Big( \frac{(r+1)\nu}{2} -rk \Big)}{\Gamma\Big( \frac{(r+1)\nu}{2} - (r+1)k \Big)}.\end{aligned}$$ - In the spherical model we have, for all real $k\geq 0$, $$\begin{aligned} {\mathbb E}[(r!{\mathcal{V}}_{n,r})^{2k}] &= \prod_{j=1}^r\Bigg[{\Gamma\Big({n-r+j\over 2}+k\Big)\over\Gamma\Big({n-r+j\over 2}\Big)}{\Gamma\Big({n\over 2}\Big)\over\Gamma\Big({n\over 2}+k\Big)}\Bigg]{\Gamma\Big({n\over 2}\Big)\over\Gamma\Big({n\over 2}+k\Big)}{\Gamma\Big({r(n-2)+n\over 2}+(r+1)k\Big)\over\Gamma\Big({r(n-2)+n\over 2}+rk\Big)}.\end{aligned}$$ Formula (a) is Equation (70) in [@Miles71]. Formula (b) is Equation (74) in [@Miles71]. Formula (c) is Equation (72) in [@Miles71]. 
Finally, Formula (d) is obtained from (b) by letting $\nu \to 0$. Note that the formula in [@Miles71] contains a typo, which is corrected, for example, in [@Chu]. Also Miles [@Miles71] considers only integer moments. Extension to non-integer moments can be found in [@KabluchkoTemesvariThaele]. Observe that the moments in the spherical case can be obtained from the moments in the Beta model by taking $\nu=0$ there. In fact, the uniform distribution on the sphere is the weak limit of the Beta distribution as $\nu \downarrow 0$; see the proof of Theorem \[theo:distance\_distr\], below. Since the proofs of our limit theorems are based on the above formulas for the moments, we may and will consider the spherical and the Beta models together, the former being the special case of the latter with $\nu=0$. We refrain from stating the limit theorems in the Beta prime case because they seem similar to the Beta case. Distributions for the volumes of random simplices and parallelotopes -------------------------------------------------------------------- The purpose of this section is to derive probabilistic representations for the random variables ${\mathcal{W}}_{n,r}^2$ and ${\mathcal{V}}_{n,r}^2$ for the four models introduced in Section \[sec:SectionModels\]. For this, we need to recall certain standard distributions. A random variable has a Gamma distribution with shape $\alpha>0$ and scale $\lambda>0$ if its density is given by $$g(t) = \frac{\lambda^{\alpha}}{\Gamma(\alpha)} t^{\alpha-1} {{\rm e}}^{-\lambda t}, \quad t\geq 0.$$ Especially if $\alpha={d/2}$ for some $d\in{\mathbb{N}}$ and $\lambda={1/ 2}$, we speak about a $\chi^2$ distribution with $d$ degrees of freedom. 
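The identification of the $\chi^2$ distribution with $d$ degrees of freedom as a Gamma distribution with shape $d/2$ and rate $1/2$ (i.e., scale $2$) can be checked numerically. The helper below encodes the standard moment formula $\mathbb{E}[\chi_d^{2k}]=2^k\,\Gamma(d/2+k)/\Gamma(d/2)$ via log-Gamma; parameters are illustrative, and `random.gammavariate` takes the scale $2$ as its second argument:

```python
import math
import random

def chi2_moment(d, k):
    """E[chi_d^{2k}] = 2^k Gamma(d/2 + k) / Gamma(d/2), via lgamma."""
    return math.exp(k * math.log(2.0)
                    + math.lgamma(d / 2.0 + k) - math.lgamma(d / 2.0))

random.seed(1)
d, k = 4, 1
mc = sum(random.gammavariate(d / 2.0, 2.0) ** k for _ in range(50000)) / 50000.0
exact = chi2_moment(d, k)   # equals d = 4 for k = 1
```

For $k=2$ the formula gives $d(d+2)$, consistent with the variance $2d$ of the $\chi^2_d$ law.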
A random variable has a Beta distribution with parameters $\alpha_1>0, \alpha_2>0$ if its density is $$g(t) = \frac{\Gamma(\alpha_1+\alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)} t^{\alpha_1-1} (1-t)^{\alpha_2-1}, \quad t\in (0,1).$$ Finally, a random variable has a Beta prime distribution with parameters $\alpha_1>0$, $\alpha_2>0$ if its density is $$g(t) = \frac{\Gamma(\alpha_1+\alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)} t^{\alpha_1-1} (1+t)^{-\alpha_1-\alpha_2}, \quad t > 0.$$ Note that the Beta prime distribution coincides, up to rescaling, with the Fisher F distribution. We agree to denote by $\chi^2_d$, respectively $\Gamma_{\alpha, \lambda}, \beta_{\alpha_1,\alpha_2}, \beta'_{\alpha_1,\alpha_2}$, a random variable with $\chi^2$-distribution with $d\in{\mathbb{N}}$ degrees of freedom and the Gamma, Beta or Beta prime distribution with corresponding parameters, respectively. We shall also use the notation $X\sim{\text{\rm Beta}}(\alpha_1,\alpha_2)$ or $X\sim{\text{\rm Beta}}'(\alpha_1,\alpha_2)$ to indicate that a random variable $X$ has a Beta or a Beta prime distribution with parameters $\alpha_1$ and $\alpha_2$, respectively. Also, we agree that all such variables are assumed to be independent. We recall that the moments (of real order $k\geq 0$, as long as they exist) of these distributions are given by: $${\mathbb E}[\chi_d^{2k}] = 2^k \frac{\Gamma\Big(\frac d2 + k\Big)}{\Gamma\Big(\frac d2\Big)}, \quad {\mathbb E}[\beta_{\alpha_1,\alpha_2}^k] = \frac{\Gamma(\alpha_1+\alpha_2)\Gamma(\alpha_1+k)}{\Gamma(\alpha_1) \Gamma(\alpha_1+\alpha_2+k)}, \quad {\mathbb E}[(\beta_{\alpha_1,\alpha_2}')^{k}] = \frac{\Gamma(\alpha_1+k)\Gamma(\alpha_2-k)}{\Gamma(\alpha_1) \Gamma(\alpha_2)}.$$ Using Theorem \[theo:vol\_Parallelotopes\_Moments\] we first obtain probabilistic representations for the volume of random parallelotopes spanned by vectors whose distributions belong to one of the classes introduced in Section \[sec:SectionModels\]. 
\[theo:vol\_distr\_linear\] Let $\mathcal W_{n,r}$ be the volume of the $r$-dimensional parallelotope spanned by the vectors $X_1,\ldots,X_{r}$ chosen according to one of the above four models. - In the Gaussian model we have $\mathcal W_{n,r}^2 {\stackrel{d}{=}}\prod\limits_{j=1}^{r} \chi^2_{n-r+j}$. - In the Beta model we have $\mathcal W_{n,r}^2 {\stackrel{d}{=}}\prod\limits_{j=1}^r \beta_{{n-r+j\over 2}, {\nu + r -j\over 2}}$. - In the Beta prime model we have $\mathcal W_{n,r}^2 {\stackrel{d}{=}}\prod\limits_{j=1}^r \beta'_{{n-r+j\over 2}, {\nu\over 2}}$. - In the spherical model we have $\mathcal W_{n,r}^2 {\stackrel{d}{=}}\prod\limits_{j=1}^r \beta_{{n-r+j\over 2}, {r -j\over 2}}$. The random variables involved in the products are assumed to be independent. The distribution of the volume of a random simplex generated by one of the four models is more involved and can be derived from Theorem \[theo:vol\_Simplices\_Moments\]. \[theo:vol\_distr\_affine\] Let $\mathcal V_{n,r}$ be the volume of the $r$-dimensional simplex with vertices $X_1,\ldots,X_{r+1}$ chosen according to one of the above four models. - In the Gaussian model we have $(r!\mathcal V_{n,r})^2 {\stackrel{d}{=}}(r+1) \prod\limits_{j=1}^{r} \chi^2_{n-r+j}$. - In the Beta model we have $\xi(1-\xi)^r (r!\mathcal V_{n,r})^2 {\stackrel{d}{=}}(1-\eta)^r \prod\limits_{j=1}^r \beta_{{n-r+j\over 2}, {\nu + r -j\over 2}}$, where $\xi,\eta\sim \text{Beta}(\frac{n+\nu}{2}, \frac{r(n+\nu-2)}{2})$ are random variables such that $\xi$ is independent of $\mathcal V_{n,r}$, while $\eta$ is independent of $\beta_{{n-r+j\over 2}, {\nu + r -j\over 2}}$, $j=1,\ldots,r$.
- In the Beta prime model we have $(1+\eta)^r (r!\mathcal V_{n,r})^2 {\stackrel{d}{=}}\xi^{-1}(1+\xi)^{r+1} \prod\limits_{j=1}^r \beta'_{{n-r+j\over 2}, {\nu\over 2}}$, where $\xi,\eta\sim \text{Beta}'(\frac{\nu}{2}, \frac{r\nu}{2})$ are random variables such that $\eta$ is independent of $\mathcal V_{n,r}$, while $\xi$ is independent of $\beta'_{{n - r +j\over 2}, {\nu\over 2}}$, $j=1,\ldots,r$. - In the spherical model we have $\xi(1-\xi)^r (r!\mathcal V_{n,r})^2 {\stackrel{d}{=}}(1-\eta)^r \prod\limits_{j=1}^r \beta_{{n-r+j\over 2}, {r -j\over 2}}$, where $\xi,\eta\sim \text{Beta}(\frac{n}{2}, \frac{r(n-2)}{2})$ are random variables such that $\xi$ is independent of $\mathcal V_{n,r}$, while $\eta$ is independent of $\beta_{{n-r+j\over 2}, {r -j\over 2}}$, $j=1,\ldots,r$. The assertion in (a) follows directly from Theorem \[theo:vol\_Simplices\_Moments\] (a) combined with the fact that the $k$th moment of a $\chi_{n-r+j}^2$ random variable is given by $$2^k{\Gamma({n-r+j\over 2}+k)\over \Gamma({n-r+j\over 2})}.$$ To prove (b) we define $\alpha_1:={n+\nu\over 2}$ and $\alpha_2:={r(n+\nu-2)\over 2}$.
Denoting by $B(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$, $x,y>0$, the Beta function, we observe that, since $\xi,\eta\sim{\text{\rm Beta}}(\alpha_1,\alpha_2)$, $${\mathbb E}[(1-\eta)^{rk}] = {1\over B(\alpha_1,\alpha_2)}\int_0^1 x^{\alpha_1-1}(1-x)^{\alpha_2+rk-1}\,{{\rm d}}x = {B(\alpha_1,\alpha_2+rk)\over B(\alpha_1,\alpha_2)}$$ and $${\mathbb E}[\xi^k(1-\xi)^{rk}] = {1\over B(\alpha_1,\alpha_2)}\int_0^1 x^{\alpha_1+k-1}(1-x)^{\alpha_2+rk-1}\,{{\rm d}}x = {B(\alpha_1+k,\alpha_2+rk)\over B(\alpha_1,\alpha_2)}\,.$$ This implies that $$\begin{aligned} \frac {{\mathbb E}[(1-\eta)^{rk}]}{{\mathbb E}[\xi^k(1-\xi)^{rk}]} &= \frac {B(\alpha_1,\alpha_2+rk)}{B(\alpha_1+k,\alpha_2+rk)} = {\Gamma(\alpha_1+\alpha_2+(r+1)k)\Gamma(\alpha_1)\over\Gamma(\alpha_1+k)\Gamma(\alpha_1+\alpha_2+rk)}\\ &={\Gamma\Big({r(n+\nu-2)+(n+\nu)\over 2}+(r+1)k\Big)\Gamma\Big({n+\nu\over 2}\Big)\over\Gamma\Big({n+\nu\over 2}+k\Big)\Gamma\Big({r(n+\nu-2)+(n+\nu)\over 2}+rk\Big)}\end{aligned}$$ and this is precisely the last factor in the formula for the moments, see Theorem \[theo:vol\_Simplices\_Moments\] (b). Next, we consider (c). 
Since $\xi,\eta\sim \text{Beta}'(\alpha_1, \alpha_2)$ with $\alpha_1 = \frac{\nu}{2}$ and $\alpha_2 = \frac{r\nu}{2}$, we apply the formula $\int_0^{\infty} x^{\alpha_1-1} (1+x)^{-\alpha_1-\alpha_2} {{\rm d}}x = B(\alpha_1,\alpha_2)$ to obtain $${\mathbb E}[(1+\eta)^{rk}] = \frac 1 {B(\alpha_1,\alpha_2)} \int_0^\infty x^{\alpha_1-1}(1+x)^{-\alpha_1-(\alpha_2-rk)}{{\rm d}}x = \frac{B(\alpha_1, \alpha_2 - rk)}{B(\alpha_1,\alpha_2)}$$ and $${\mathbb E}\Big[\xi^{-k}(1+\xi)^{(r+1)k}\Big] = \frac 1 {B(\alpha_1,\alpha_2)}\int_0^\infty x^{(\alpha_1-k)-1}(1+x)^{-(\alpha_1-k)-(\alpha_2 - rk)} {{\rm d}}x = \frac{B(\alpha_1 - k, \alpha_2 - rk)}{B(\alpha_1, \alpha_2)}.$$ It follows that $$\begin{aligned} \frac{{\mathbb E}\Big[\xi^{-k}(1+\xi)^{(r+1)k}\Big]}{{\mathbb E}[(1+\eta)^{rk}]} &= \frac{B(\alpha_1 - k, \alpha_2 - rk)}{B(\alpha_1, \alpha_2 - rk)}= \frac{\Gamma(\alpha_1 - k)\Gamma(\alpha_1+\alpha_2 -rk)}{\Gamma(\alpha_1+\alpha_2 - (r+1) k) \Gamma(\alpha_1)}\\ &= \frac{\Gamma\Big(\frac \nu 2 -k \Big)}{\Gamma\Big(\frac \nu2\Big)} \frac{\Gamma\Big( \frac{(r+1)\nu}{2} -rk \Big)}{\Gamma\Big( \frac{(r+1)\nu}{2} - (r+1)k \Big)},\end{aligned}$$ which is exactly the last factor in the formula for the moments given by Theorem \[theo:vol\_Simplices\_Moments\] (c). The assertion in (d) follows as a limit case from that in (b), as $\nu\downarrow 0$. The distributional equality in Theorem \[theo:vol\_distr\_affine\] (a) has already been noted by Miles, see [@Miles71 Section 13]. The other probabilistic representations in (b)–(d) seem to be new. Distance distributions ---------------------- As in the previous sections, let $X_1,\ldots,X_{r+1}$ be independent random points that are distributed according to one of the four models from Section \[sec:SectionModels\]. Our interest now lies in the distance from the origin to the $r$-dimensional affine subspace spanned by $X_1,\ldots,X_{r+1}$.
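As a brief numerical aside, the Gaussian representation in Theorem \[theo:vol\_distr\_affine\] (a) predicts ${\mathbb E}[(r!{\mathcal{V}}_{n,r})^2]=(r+1)\prod_{j=1}^{r}(n-r+j)$, which can be checked by Monte Carlo simulation using the Gram determinant of the edge vectors. The values of $n$, $r$, the sample size and the tolerance in the following Python sketch are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, N = 5, 2, 40_000  # ambient dimension, simplex dimension, Monte Carlo size

# Theorem [theo:vol_distr_affine](a) predicts E[(r! V)^2] = (r+1) * prod_{j=1}^r (n-r+j)
exact = (r + 1) * np.prod(np.arange(n - r + 1, n + 1))

X = rng.standard_normal((N, r + 1, n))   # r+1 standard Gaussian vertices in R^n
M = X[:, 1:, :] - X[:, :1, :]            # edge vectors of each simplex
grams = M @ M.transpose(0, 2, 1)         # r x r Gram matrices
mc = np.linalg.det(grams).mean()         # Gram determinant = (r! V)^2

assert abs(mc / exact - 1) < 0.05        # Monte Carlo mean close to the exact value
```

With $n=5$ and $r=2$ the exact value is $3\cdot 4\cdot 5=60$, and the simulated mean agrees with it within the Monte Carlo error.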
\[theo:distance\_distr\] Let $X_1,\ldots,X_{r+1}$ be chosen according to one of the above four models and denote by ${\mathcal{D}}_{n,r}$ the distance from the origin to the $r$-dimensional affine subspace spanned by $X_1,\ldots,X_{r+1}$. - In the Gaussian model we have ${\mathcal{D}}_{n,r}^2{\stackrel{d}{=}}(r+1)^{-1}\chi_{n-r}^2$. - In the Beta model we have ${\mathcal{D}}_{n,r}^2{\stackrel{d}{=}}\beta_{{n-r\over 2},{\nu(r+1)+r(n-1)\over 2}}$. - In the Beta prime model we have ${\mathcal{D}}_{n,r}^2{\stackrel{d}{=}}\beta'_{{n-r\over 2},{\nu(r+1)\over 2}}$. - In the spherical model we have ${\mathcal{D}}_{n,r}^2{\stackrel{d}{=}}\beta_{{n-r\over 2},{r(n-1)\over 2}}$. The density of ${\mathcal{D}}_{n,r}$ in the cases (a)–(c) can be computed from a formula on page 16 in [@RubenMiles80]. In fact, for the Gaussian model we obtain that ${\mathcal{D}}_{n,r}$ has density $$h\mapsto c_{n,r}\,h^{n-r-1}\,e^{-{h^2(r+1)\over 2}}\,,\qquad h>0\,,$$ which implies (a). For the Beta model we obtain the density $$h\mapsto c_{n,r,\nu}\,h^{n-r-1}(1-h^2)^{{r(n+1)\over 2}+{(r+1)(\nu-2)\over 2}}\,,\qquad 0<h<1\,,$$ for ${\mathcal{D}}_{n,r}$ and (b) follows. Next, for the Beta prime model the density of ${\mathcal{D}}_{n,r}$ is given by $$h\mapsto c_{n,r,\nu}\,h^{n-r-1}(1+h^2)^{{r(n+1)\over 2}-{(r+1)(n+\nu)\over 2}}\,,\qquad h>0\,,$$ whence (c) follows. Finally, the spherical model follows from the Beta model in the limit, as $\nu\downarrow 0$. In fact, since the centred ball of radius $1$ can be regarded as a compact metric space, the family of probability measures $({\mathbb{P}}_\nu)_{\nu>0}$ with densities $f_\nu(|x|):={\rm const}(1-|x|^2)^{(\nu-2)/2}$, $\nu>0$, is tight for each $n\in{\mathbb{N}}$. Thus, $({\mathbb{P}}_\nu)_{\nu>0}$ is weakly sequentially compact, i.e., there exists a weakly convergent subsequence $({\mathbb{P}}_{\nu_n})_{n\in{\mathbb{N}}}$ with $\nu_n\downarrow 0$.
For each such sequence $\nu_n$ the limiting probability measure is easily seen to have the following two properties: (i) it is rotation invariant and (ii) it is concentrated on the boundary of the centred ball of radius $1$, that is, the radius $1$ sphere. In other words, the limit must coincide with the normalized spherical Lebesgue measure on that sphere. Now, as $\nu\downarrow 0$ and since $(x_1,\ldots,x_{r+1})\mapsto {\mathop{\mathrm{dist}}\nolimits}(0,{\mathop{\mathrm{aff}}\nolimits}(x_1,\ldots,x_{r+1}))$ is a bounded continuous function on the $(r+1)$-st Cartesian power of the unit ball, the density in (d) is the limit of the density in (b). Cumulants, Berry-Esseen bounds and moderate deviations ====================================================== In this section we shall concentrate on the Gaussian, the Beta and the spherical model, for which the random variables ${\mathcal{V}}_{n,r}$ have finite moments of all orders for any $n\in{\mathbb{N}}$ and $r\leq n$. Cumulants for logarithmic volumes --------------------------------- For a random variable $X$ with ${\mathbb E}[|X|^m]<\infty$ for some $m\in{\mathbb{N}}$, we write $c^m[X]$ for the $m$th order cumulant of $X$, that is, $$\begin{aligned} \label{DefinitionCumulant} c^m[X] = (-\mathfrak{i})^{m}\,{{{\rm d}}^m\over{{\rm d}}t^m}\log{\mathbb{E}}[\exp(\mathfrak{i}tX)]\Big|_{t=0}\,,\end{aligned}$$ where $\mathfrak{i}$ stands for the imaginary unit. It is well known that sharp bounds for cumulants lead to fine probabilistic estimates for the random variables involved. For the volume of a random simplex with Gaussian or Beta distributed vertices we shall establish the following cumulant bound. In what follows we shall write $a_n\sim b_n$ for two sequences $(a_n)_{n\in{\mathbb{N}}}$ and $(b_n)_{n\in{\mathbb{N}}}$ if $a_n/b_n\to 1$, as $n\to\infty$. Let us define the random variable ${\mathcal{L}}_{n,r}:=\log(r!{\mathcal{V}}_{n,r})$.
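Since all cumulants below are expressed through polygamma functions, it may help to see the mechanism on the simplest building block first: for $X=\log\chi_d^2$ one has $\log{\mathbb E}[{{\rm e}}^{zX}]=z\log 2+\log\Gamma(d/2+z)-\log\Gamma(d/2)$, so that $c^1[X]=\log 2+\psi(d/2)$ and $c^m[X]=\psi^{(m-1)}(d/2)$ for $m\geq 2$. The following Python sketch confirms the first two cumulants by simulation (the sample size and tolerances are arbitrary choices; for even $d$ we use the closed forms $\psi(m)=-\gamma+\sum_{k=1}^{m-1}k^{-1}$ and $\psi^{(1)}(m)=\pi^2/6-\sum_{k=1}^{m-1}k^{-2}$):

```python
import math
import numpy as np

d = 6  # even, so d/2 = 3 is an integer and psi, psi' have closed forms
rng = np.random.default_rng(1)
x = np.log(rng.chisquare(d, size=200_000))

# c^1 = log 2 + psi(d/2) and c^2 = psi'(d/2) for X = log(chi^2_d)
m = d // 2
euler_gamma = 0.5772156649015329
mean_exact = math.log(2.0) - euler_gamma + sum(1 / k for k in range(1, m))
var_exact = math.pi**2 / 6 - sum(1 / k**2 for k in range(1, m))

assert abs(x.mean() - mean_exact) < 0.02  # empirical first cumulant
assert abs(x.var() - var_exact) < 0.02    # empirical second cumulant
```

The higher cumulants $\psi^{(m-1)}(d/2)$ decay like $(m-2)!\,(d/2)^{1-m}$, which is precisely the behaviour exploited in the bounds below.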
\[thm:Cumulants\] Let $X_1,\ldots,X_{r+1}$ be chosen according to the Gaussian, the Beta or the spherical model presented in the previous section, and let $\alpha\in(0,1)$. - For the Gaussian model we have $${\mathbb E}{\mathcal{L}}_{n,r}\sim \frac{r}{2} \log{n}, \qquad\qquad {\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r} \sim \begin{cases} {r\over 2n} &: r=o(n)\\ {\frac 12 \log \frac 1 {1-\alpha}}&: r\sim \alpha n\\ \frac{1}{2} \log \left(\frac{n}{n-r+1}\right) &: n-r = o(n) \end{cases}$$ and, for $m\geq 3$, $$|c^m({\mathcal{L}}_{n,r})|\leq \begin{cases} C^m (m-1)! rn^{1-m} &: r=o(n) \text{ or } r\sim\alpha n\\ 2\,(m-1)!&: \text{for arbitrary } r(n)\,, \end{cases}$$ where $C\in(0,\infty)$ is a constant not depending on $n$ and $m$. - For the Beta model and the spherical model we have $$\begin{aligned} {\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r} &\sim \frac 12 \log \frac n{n-r} - \frac {r^2}{2n (r+1)} \sim \begin{cases} \frac {r}{2(r+1)n}&: r=o(\sqrt n)\\ \frac 12 \log \frac 1 {1-\alpha} -\frac \alpha 2&: r\sim \alpha n\\ \frac{1}{2} \log \left(\frac{n}{n-r+1}\right) &: n-r = o(n) \end{cases}\end{aligned}$$ and, for all $m\geq 3$ and $n\geq 4$, $$|c^m({\mathcal{L}}_{n,r})|\leq \begin{cases} C^m m!rn^{1-m} &: r=o(n)\text{ or }r\sim \alpha n\\ 2 \cdot 4^m m! &: \text{for arbitrary } r(n)\,, \end{cases}$$ where $C\in(0,\infty)$ is a constant not depending on $n$ and $m$. The proof of Theorem \[thm:Cumulants\] is to some extent canonical and roughly follows [@DöringEichelsbacher]. In particular, it is based on an asymptotic analysis, as $|z|\to\infty$, of the digamma function $\psi(z)=\psi^{(0)}(z) : = \frac{{{\rm d}}}{{{\rm d}}z} \log\Gamma(z)$, and the polygamma functions $$\psi^{(m)}(z) : = \frac{{{\rm d}}^m}{{{\rm d}}z^m}\psi(z) = \frac{{{\rm d}}^{m+1}}{{{\rm d}}z^{m+1}} \log\Gamma(z),\qquad m \in {\mathbb{N}}.$$ We start with the following lemma. \[lem:AsymptoticPolygamma\] Let $m\in{\mathbb{N}}$.
Then, as $|z|\to\infty$ in $|\arg z|<\pi-{\varepsilon}$, $$\label{eq:psi_asympt} \psi(z) = \log z + O(1/z)\quad\text{and}\quad \psi^{(m)}(z) =(-1)^{m-1}\,{(m-1)!\over z^m} + O(1/z^{m+1})\,.$$ Moreover, for all $z>0$, $$\label{eq:psi_ineq} |\psi^{(m)}(z)| \leq {(m-1)!\over z^m} + {m!\over z^{m+1}}\,.$$ The asymptotic relations can be found in [@Abramovitz], pp. 259–260. To prove the inequality, note that $$|\psi^{(m)}(z)| = \sum_{k=0}^\infty \frac { m!} {(z+k)^{m+1}} \leq \frac {m!}{z^{m+1}} + m!\int_z^\infty \frac {{\rm d}x}{x^{m+1}} = \frac {m!}{z^{m+1}} + \frac{(m-1)!}{z^m},$$ where we estimated the sum by the integral because the function $x\mapsto x^{-(m+1)}$, $x>0$, is decreasing. \[lem:Polygamma(const)\] As $n\to\infty$, one has $$\label{eq:lem:Polygamma_asympt} \frac{1}{2} \sum_{j=1}^{n} \psi\left(\frac{j}{2}\right) \sim {n\over 2} \log n, \quad \frac{1}{4} \sum_{j=1}^{n} \psi^{(1)}\left(\frac{j}{2}\right) = {1\over 2}\log {n} +c_1 + o(1),$$ where $c_1={1\over 2}(\gamma+1+{\pi^2\over 8})$ with the Euler-Mascheroni constant $\gamma$. Moreover, for all $m\ge 3$, $$\label{eq:lem:Polygamma_ineq} \frac{1}{2^{m}} \Big| \sum_{j=1}^{n} \psi^{(m-1)}\left(\frac{j}{2}\right) \Big| \le 2 (m-1)!\,.$$ The asymptotic relations  can essentially be found in [@DöringEichelsbacher] (where the constant $c_1$ has been computed explicitly). The first one follows from $\psi(z) = \log z + O(1/z)$ as $z\to\infty$ together with $\sum_{j=1}^n \log \frac j2 \sim n\log {n}$ as $n\to\infty$. To prove the second one, write $$\frac 14\sum_{j=1}^{n} \psi^{(1)}\left(\frac{j}{2}\right) -\frac 12 \log n = \frac 14\sum_{j=1}^{n} \left(\psi^{(1)}\left(\frac{j}{2}\right)-\frac 2j \right) + \frac 12 \left(\sum_{j=1}^n \frac 1j - \log n\right)$$ and observe that the series $\sum_{j=1}^{\infty} (\psi^{(1)}(\frac{j}{2})-\frac 2j)$ converges because $\psi^{(1)}(z)- \frac 1z = O(\frac 1 {z^2})$ as $z\to\infty$. 
The claim follows since $\sum_{j=1}^n \frac 1j -\log n$ converges to the Euler–Mascheroni constant $\gamma$. To prove inequality , use Lemma \[lem:AsymptoticPolygamma\] to get $$\frac{1}{2^{m}} \Big| \sum_{j=1}^{n} \psi^{(m-1)}\left(\frac{j}{2}\right) \Big| \leq \frac 1 {2^m} \sum_{j=1}^{\infty} \left(\frac{(m-2)!}{(j/2)^{m-1}} + \frac{(m-1)!}{(j/2)^m}\right) \leq (m-1)! \left(\frac 14\zeta(2) + \zeta(3)\right)$$ for all $m\geq 3$, where we used the inequality $(m-2)! \leq \frac 12 (m-1)!$. The constant in the brackets is smaller than $2$. Since the moments of ${\mathcal{V}}_{n,r}$, for both the Gaussian and the Beta model, involve the same product of fractions of Gamma functions, we prepare the proof of Theorem \[thm:Cumulants\] with the following lemma. We define $$S_{n,r}(z):=\sum_{j=1}^{r} \bigg[\log\Gamma\left({n-r+j+z\over 2}\right) - \log\Gamma\left({n-r+j\over 2}\right)\bigg],\qquad z>0.$$ \[lem:CumulantsPreparation\] - If $r=o(n)$ then, as $n\to\infty$, $${{{\rm d}}^m\over{{\rm d}}z^m}S_{n,r}(z)\Big|_{z=0} \sim \begin{cases} {r\over 2} \log {n} &: m=1 \\ {(-1)^{m}\over 2}\,(m-2)! \,r\, n^{-(m-1)} &: m\geq 2. \end{cases}$$ - If $r \sim \alpha n$ for some $\alpha\in(0,1)$ then, as $n\to\infty$, $${{{\rm d}}^m\over{{\rm d}}z^m}S_{n,r}(z)\Big|_{z=0} \sim \begin{cases} \frac {\alpha n}{2} \log n &: m=1\\ \frac 12 \log \frac 1{1-\alpha}&: m=2 \\ \frac{(-1)^{m} (m-3)!}{2\cdot n^{m-2}} \left(\frac 1 {(1-\alpha)^{m-2}} - 1\right)&: m\geq 3. \end{cases}$$ - If $n-r=o(n)$ then, as $n\to\infty$, $${{{\rm d}}^m\over{{\rm d}}z^m}S_{n,r}(z)\Big|_{z=0} \sim \begin{cases} {n\over 2}\log {n} &: m=1\\ {{1}\over 2}\log \frac {n}{n-r+1}&: m=2. \end{cases}$$ - For $m\ge 2$ and if $r=o(n)$ or $r\sim \alpha n$, $\alpha\in (0,1)$, then there is a constant $C$ which may depend on $\alpha$ (but does not depend on $m,n$) such that $$\begin{aligned} \left|{{{\rm d}}^m\over{{\rm d}}z^m}S_{n,r}(z)\Big|_{z=0}\right| \leq C^m (m-1)!
r n^{1-m}.\end{aligned}$$ - Finally, for $m\ge 3$ and without any conditions on $r$, we have $$\begin{aligned} \left|{{{\rm d}}^m\over{{\rm d}}z^m}S_{n,r}(z)\Big|_{z=0}\right| \le 2(m-1)!\,.\end{aligned}$$ Let us prove $(a)$, $(b)$, $(c)$ for $m=1$. We have $${{{\rm d}}\over{{\rm d}}z}S_{n,r}(z)\Big|_{z=0} = {1\over 2}\sum_{j=1}^r\psi\left(\frac{n-r+j}{2}\right) ={1\over 2}\sum_{j=1}^{n}\psi\left(\frac{j}{2}\right) - {1\over 2}\sum_{j=1}^{n-r}\psi\left(\frac{j}{2}\right),$$ and all three statements follow easily from the relation $\frac{1}{2} \sum_{j=1}^{n} \psi\left(\frac{j}{2}\right) \sim {n\over 2} \log n$; see Lemma \[lem:Polygamma(const)\]. Next we prove $(a)$, $(b)$, $(c)$ for $m\geq 2$. We have $${{{\rm d}}^m\over{{\rm d}}z^m}S_{n,r}(z)\Big|_{z=0} = \frac{1}{2^m} \sum_{j=1}^{r} \psi^{(m-1)}\left(\frac{n-r+j}{2}\right)$$ and again we can conclude $(a)$ by using Equation  of Lemma \[lem:AsymptoticPolygamma\]. To prove $(b)$ for $m=2$, apply the second asymptotics in  of Lemma \[lem:Polygamma(const)\] to get $$\begin{aligned} {{{\rm d}}^2\over{{\rm d}}z^2}S_{n,r}(z)\Big|_{z=0} &= \frac{1}{4} \sum_{j=1}^{r} \psi^{(1)}\left(\frac{n-r+j}{2}\right) = \frac 12 \log n +c_1 - \frac 12 \log (n-r)-c_1 + o(1)\\ &= \frac 12 \log \frac n {n-r} + o(1) = \frac 12 \log \frac 1 {1-\alpha} + o(1).\end{aligned}$$ To prove $(b)$ for $m\geq 3$, note that for $r\sim \alpha n$, $$\begin{aligned} &\frac{1}{2^m} \sum_{j=1}^{r} \psi^{(m-1)}\left(\frac{n-r+j}{2}\right) \sim \frac{1}{2^m} \sum_{k=n-r+1}^n\,\frac {(-1)^{m-2} (m-2)!} {(k/2)^{m-1}}\\ &= \frac{(-1)^m\,(m-2)!}{2} \bigg[\sum_{k=1}^n\,\frac {1} {k^{m-1}}-\sum_{k=1}^{n-r}\,\frac {1} {k^{m-1}}\bigg]\sim \frac{(-1)^{m} (m-3)!}{2\cdot n^{m-2}} \left(\frac 1 {(1-\alpha)^{m-2}} - 1\right),\end{aligned}$$ using the asymptotics for the tail of the Riemann zeta series. 
Finally, to prove $(c)$ for $m=2$ use the formula $\frac{1}{4} \sum_{j=1}^{r} \psi^{(1)}\left(\frac{n-r+j}{2}\right) = \frac 12 \log n + O(1)$ following from  to get $$\begin{gathered} {{{\rm d}}^2\over{{\rm d}}z^2}S_{n,r}(z)\Big|_{z=0} = \frac{1}{4} \sum_{j=1}^{r} \psi^{(1)}\left(\frac{n-r+j}{2}\right) = \frac 12 \log n +O(1) - \frac 12 \log (n-r+1) - O(1)\\ = \frac 12 \log \frac n {n-r+1} + O(1) \sim \frac 12 \log \frac n {n-r+1}\end{gathered}$$ because $\frac n {n-r+1}\to\infty$. We added the term $+1$ to make the formula work in the case $r=n$. Let us prove $(d)$. Since the function $|\psi^{(m-1)}(z)| = \sum_{k=0}^\infty \frac {(m-2)!} {(z+k)^{m}}$ is decreasing, we can write $$\left| {{{\rm d}}^m\over{{\rm d}}z^m}S_{n,r}(z)\Big|_{z=0}\right| = \frac{1}{2^m} \sum_{j=1}^{r} \left|\psi^{(m-1)}\left(\frac{n-r+j}{2}\right)\right| \leq \frac {r}{2^m} \left|\psi^{(m-1)}\left(\frac{n-r+1}{2}\right)\right|,$$ and the claim follows from the estimate $|\psi^{(m-1)}(z)| \leq 2 \cdot (m-1)! z^{1-m}$, $z\geq 1$, which is a consequence of Lemma \[lem:AsymptoticPolygamma\], together with $n-r+1 > n/C$ for a sufficiently large constant $C$. Let us prove $(e)$. If $m\ge 3$ and $r$ is arbitrary, we observe that the function $\psi^{(m-1)}(z)$, $z>0$, has the same sign as $(-1)^m$ and hence $$\begin{aligned} \left| {{{\rm d}}^m\over{{\rm d}}z^m}S_{n,r}(z)\Big|_{z=0}\right| &= \frac{1}{2^m} \sum_{j=1}^{r} \left|\psi^{(m-1)}\left(\frac{n-r+j}{2}\right)\right| \leq \frac{1}{2^m} \sum_{j=1}^{n} \left|\psi^{(m-1)}\left(\frac{j}{2}\right)\right|.\end{aligned}$$ Then, the result follows in view of inequality  of Lemma \[lem:Polygamma(const)\]. Thus, the proof is complete. Denote the moment generating function of ${\mathcal{L}}_{n,r}=\log (r! {\mathcal{V}}_{n,r})$ by $$M_{n,r}(z):={\mathbb{E}}[\exp(z{\mathcal{L}}_{n,r})]= {\mathbb E}[(r!{\mathcal{V}}_{n,r})^z].$$ We start with the Gaussian model.
Recalling the moment formula from Theorem \[theo:vol\_Simplices\_Moments\](a), we see that $$\begin{aligned} \log M_{n,r}(z) = S_{n,r}(z)+\frac{z}{2} \log(r+1) + \frac{zr}{2} \log 2\end{aligned}$$ and hence $$\begin{aligned} \frac{{{\rm d}}^m}{{{\rm d}}z^m} \log M_{n,r}(z) = \frac{{{\rm d}}^m}{{{\rm d}}z^m}S_{n,r}(z)+{\bf 1}_{\{m=1\}}\frac{1}{2} \log(r+1) + {\bf 1}_{\{m=1\}} \frac{r}{2} \log 2\end{aligned}$$ for all $m\in{\mathbb{N}}$. By taking $z=0$ it follows that $$c^m[{\mathcal{L}}_{n,r}] = \frac{{{\rm d}}^m}{{{\rm d}}z^m}S_{n,r}(z)\Big|_{z=0} +{\bf 1}_{\{m=1\}}\frac{1}{2} \log(r+1) + {\bf 1}_{\{m=1\}} \frac{r}{2} \log 2.$$ Using Lemma \[lem:CumulantsPreparation\] we immediately get the required asymptotic formulae for ${\mathbb E}{\mathcal{L}}_{n,r} = c^1[{\mathcal{L}}_{n,r}]$ and ${\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r} = c^2[{\mathcal{L}}_{n,r}]$. The estimates for the cumulants $c^m[{\mathcal{L}}_{n,r}]$, $m\geq 3$, follow from Lemma \[lem:CumulantsPreparation\] (d),(e). Next, we consider the Beta model and prove part (b) of the theorem. Recalling the moment formula from Theorem \[theo:vol\_Simplices\_Moments\](b) and denoting by $M_{n,r}(z)$ again the moment generating function of ${\mathcal{L}}_{n,r}$, we see that $$\begin{gathered} \log M_{n,r}(z) = S_{n,r}(z)+\log\Gamma\Big({r(n+\nu-2)+(n+\nu)\over 2}+\frac{(r+1)z}{2}\Big) \\ +(r+1)\log \Gamma\left(\frac{n + \nu}{2}\right) -\log \Gamma\Big({r(n+\nu-2)+(n+\nu)\over 2}+\frac{rz}{2}\Big)-(r+1)\log \Gamma\left(\frac{n+\nu}{2} + \frac{z}{2}\right).\end{gathered}$$ It follows that, for $m\in{\mathbb{N}}$, $\frac{{{\rm d}}^m}{{{\rm d}}z^m} \log M_{n,r}(z)$ equals $$\label{eq:AbleitungMnrZ} \begin{split} &\frac{{{\rm d}}^m}{{{\rm d}}z^m}S_{n,r}(z)+\Big({r+1\over 2}\Big)^{m}\psi^{(m-1)}\Big({r(n+\nu-2)+(n+\nu)\over 2}+\frac{(r+1)z}{2}\Big)\\ &-\Big({r\over 2}\Big)^{m}\psi^{(m-1)}\Big({r(n+\nu-2)+(n+\nu)\over 2}+\frac{rz}{2}\Big)-{r+1\over 2^{m}}\psi^{(m-1)}\left(\frac{n+\nu}{2} + \frac{z}{2}\right). 
\end{split}$$ Taking $z=0$, we obtain $$\begin{gathered} \label{eq:c_m_expression} c^m[{\mathcal{L}}_{n,r}] = \frac{{{\rm d}}^m}{{{\rm d}}z^m} S_{n,r}(z)\Big|_{z=0} + \Big({r+1\over 2}\Big)^{m}\psi^{(m-1)}\Big({r(n+\nu-2)+(n+\nu)\over 2}\Big)\\ -\Big({r\over 2}\Big)^{m}\psi^{(m-1)}\Big({r(n+\nu-2)+(n+\nu)\over 2}\Big)-{r+1\over 2^{m}}\psi^{(m-1)}\left(\frac{n+\nu}{2}\right).\end{gathered}$$ Let us compute the asymptotics of ${\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r}= c^2[{\mathcal{L}}_{n,r}]$ in the case $r=o(n)$. First of all, using the formula $\psi^{(1)}(z) = 1/z + O(1/z^2)$ as $z\to\infty$, we obtain $${{{\rm d}}^2\over{{\rm d}}z^2}S_{n,r}(z)\Big|_{z=0} = \frac{1}{4} \sum_{j=1}^{r} \psi^{(1)}\left(\frac{n-r+j}{2}\right) = \frac 14 \sum_{j=1}^r \frac {2}{n-r+j} + O\left(\frac {r}{n^2}\right) = \frac {H_n-H_{n-r}}2 + O\left(\frac {r}{n^2}\right),$$ where $H_n = \sum_{k=1}^n 1/k$ is the $n$-th harmonic number. Using the formula $H_n = \log n + \gamma + 1/(2n) + O(1/n^2)$ as $n\to\infty$, we arrive at $${{{\rm d}}^2\over{{\rm d}}z^2}S_{n,r}(z)\Big|_{z=0} = \frac 12 \log \frac{n}{n-r} + \frac 12 \left(\frac 1n - \frac 1 {n-r}\right) + O\left(\frac 1 {n^2}\right) + O\left(\frac {r}{n^2}\right) = \frac 12 \log \frac{n}{n-r} + O\left(\frac {r}{n^2}\right).$$ Again using the formula $\psi^{(1)}(z) = 1/z + O(1/z^2)$ as $z\to\infty$, we obtain $$\psi^{(1)}\Big({r(n+\nu-2)+(n+\nu)\over 2}\Big) = \frac 2 {n(r+1) + O(r)} + O\left(\frac {1}{n^2r^2}\right) =\frac 2 {n(r+1)} + O\left(\frac {1}{n^2r}\right)$$ and $$\psi^{(1)}\left(\frac{n+\nu}{2}\right) = \frac {2}{n} + O\left(\frac 1 {n^2}\right).$$ Recalling  and taking everything together, we obtain $$\begin{gathered} {\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r} = c^2[{\mathcal{L}}_{n,r}] = \frac 12 \log \frac{n}{n-r} + \frac {2r+1}{4}\frac 2 {n(r+1)} - \frac {r+1}{4}\frac {2}{n} + O\left(\frac r {n^2}\right) \\= \frac 12 \log \frac n{n-r} - \frac {r^2}{2n (r+1)}+ O\left(\frac r {n^2}\right).\end{gathered}$$ 
In the case $r\sim \alpha n$ we evidently have $$\lim_{n\to\infty} {\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r} = \frac 12 \log \frac 1 {1-\alpha} - \frac {\alpha}{2}.$$ In the case $r= o(n)$ observe that $\log \frac{n}{n-r} \geq \frac rn$, so that $\frac 12 \log \frac n{n-r} - \frac {r^2}{2n (r+1)} \geq \frac {r}{2n (r+1)}$. Thus, we have $\frac r {n^2}=o(\frac 12 \log \frac n{n-r} - \frac {r^2}{2n (r+1)})$ and we can conclude that $${\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r} \sim \frac 12 \log \frac n{n-r} - \frac {r^2}{2n (r+1)}.$$ Finally, observe that in the case $r=o(\sqrt n)$ we can use the Taylor expansion of the logarithm to get ${\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r} \sim \frac {r}{2(r+1)n}$, but this formula breaks down if $r$ is of order $\sqrt n$. This completes the proof of the asymptotics of ${\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r}$ in the cases $r=o(n)$ and $r\sim \alpha n$. Let us now compute the asymptotics of ${\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r}= c^2[{\mathcal{L}}_{n,r}]$ in the case $n-r = o(n)$.
Using the formula $\psi^{(1)}(z) = 1/z + O(1/z^2)$ as $z\to\infty$, we obtain $${{{\rm d}}^2\over{{\rm d}}z^2}S_{n,r}(z)\Big|_{z=0} = \frac{1}{4} \sum_{j=1}^{r} \psi^{(1)}\left(\frac{n-r+j}{2}\right) = \frac {H_n-H_{n-r}}2 + O\left(\frac 1n\right).$$ Using the formulas $H_n = \log n + O(1)$ and $H_{n-r} = \log (n-r+1) +O(1)$ (where $+1$ is needed to make the expression well-defined in the case $r=n$), we arrive at $${{{\rm d}}^2\over{{\rm d}}z^2}S_{n,r}(z)\Big|_{z=0} = \frac 12 \log \frac{n}{n-r+1} +O(1).$$ By the formula $\psi^{(1)}(z) = O(1/z)$ as $z\to\infty$, we have $$\psi^{(1)}\Big({r(n+\nu-2)+(n+\nu)\over 2}\Big) = O\left(\frac 1{n^2}\right), \quad \psi^{(1)}\left(\frac{n+\nu}{2}\right) = O\left(\frac 1 {n}\right).$$ Plugging everything into yields $${\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r} = c^2[{\mathcal{L}}_{n,r}] = \frac 12 \log \frac{n}{n-r+1} +O(1) \sim \frac 12 \log \frac{n}{n-r+1}$$ because $\frac {n}{n-r+1} \to\infty$, thus proving the required asymptotics of the variance. Next we prove the bounds on the cumulants assuming that $r=o(n)$ or $r\sim \alpha n$. Recall from Lemma \[lem:CumulantsPreparation\](d) the estimate $$\begin{aligned} \left|{{{\rm d}}^m\over{{\rm d}}z^m}S_{n,r}(z)\Big|_{z=0}\right| \leq C^m (m-1)! r n^{1-m}.\end{aligned}$$ Further, since $\nu\geq 0$, we have $${r(n+\nu-2)+(n+\nu)\over 2} \geq \frac {r(n-2)}{2}.$$ Since the function $|\psi^{(m-1)}(z)|$ is non-increasing, we have, using also the estimate $|\psi^{(m-1)}(z)| \leq 2 \cdot (m-1)! z^{1-m}$, $$\left|\psi^{(m-1)}\Big({r(n+\nu-2)+(n+\nu)\over 2}\Big)\right| \leq \left|\psi^{(m-1)}\Big(\frac{r(n-2)}{2}\Big) \right| \leq 2^m (m-1)! r^{1-m} (n-2)^{1-m}.$$ By the mean value theorem, we also have $(r+1)^m - r^m \leq m (r+1)^{m-1}$, hence $$\frac{(r+1)^m - r^m}{2^m}\left|\psi^{(m-1)}\Big({r(n+\nu-2)+(n+\nu)\over 2}\Big)\right| \leq m! \left(\frac{r+1}{r}\right)^{m-1}(n-2)^{1-m} \leq 4^m m! n^{1-m}$$ because $n-2\geq n/2$ for $n\geq 4$.
Similarly, by the non-increasing property of $|\psi^{(m-1)}(z)|$ and the estimate $|\psi^{(m-1)}(z)| \leq 2 \cdot (m-1)! z^{1-m}$, we have $${r+1\over 2^{m}}\left|\psi^{(m-1)}\left(\frac{n+\nu}{2}\right) \right| \leq {r+1\over 2^{m}} \left|\psi^{(m-1)}\left(\frac{n}{2}\right)\right| \leq 2r (m-1)! n^{1-m}.$$ Recalling and taking the above estimates together, we arrive at the required estimate $$|c^m[{\mathcal{L}}_{n,r}]| \leq C^m m! r n^{1-m}$$ for a sufficiently large constant $C>0$ not depending on $n$ and $m$. To prove the bound $|c^m[{\mathcal{L}}_{n,r}]| \leq 2 \cdot 4^m m!$ without restrictions on $r(n)$, we argue as above, except that we use part (e) of Lemma \[lem:CumulantsPreparation\] to bound the derivative of $S_{n,r}$: $$|c^m[{\mathcal{L}}_{n,r}]| \leq 2 (m-1)! + 4^m m! n^{1-m} + 2r (m-1)! n^{1-m} \leq 2 \cdot 4^m m!.$$ Finally, we consider the spherical model. Since the results for the Beta model are independent of the parameter $\nu$, they carry over to the spherical model which appears as a limiting case, as $\nu\downarrow 0$. Berry-Esseen bounds and moderate deviations for the log-volume {#SectionLDP} -------------------------------------------------------------- We introduce some terminology. One says that a sequence $(\nu_n)_{n \in {\mathbb{N}}}$ of probability measures on a topological space $E$ fulfils a large deviation principle (LDP) with speed $a_n$ and (good) rate function $I : E \rightarrow [0,\infty]$, if $I$ is lower semi-continuous, has compact level sets and if for every Borel set $B\subseteq E$, $$\begin{aligned} -\inf\limits_{x\in {\operatorname{int}}(B)} I(x) \leq \liminf\limits_{n \rightarrow \infty} a_n^{-1} \log \nu_n (B) \leq \limsup\limits_{n \rightarrow \infty} a_n^{-1} \log \nu_n (B) \leq -\inf\limits_{x\in {\operatorname{cl}}(B)} I(x)\,,\end{aligned}$$ where ${\operatorname{int}}(B)$ and ${\operatorname{cl}}(B)$ stand for the interior and the closure of $B$, respectively.
A sequence $(X_n)_{n \in {\mathbb{N}}}$ of random elements in $E$ satisfies an LDP with speed $a_n$ and rate function $I : E \rightarrow [0,\infty]$, if the family of their distributions does. Moreover, if the rescaling $a_n$ lies between that of a law of large numbers and that of a central limit theorem, one usually speaks of a moderate deviation principle (MDP) instead of an LDP with speed $a_n$ and rate function $I$. We shall say that a sequence of real-valued random variables $(X_n)_{n\in{\mathbb{N}}}$ satisfying ${\mathbb E}|X_n|^2<\infty$ for all $n\in{\mathbb{N}}$ fulfils a Berry-Esseen bound (BEB) with speed $(\varepsilon_n)_{n\in{\mathbb{N}}}$ if $$\sup_{t\in{\mathbb{R}}}\Big|{\mathbb{P}}\Big({X_n-{\mathbb E}[X_n]\over\sqrt{{\mathop{\mathrm{Var}}\nolimits}X_n}}\leq t\Big)-\Phi(t)\Big| \leq c\,\varepsilon_n\,,$$ where $c>0$ is a constant not depending on $n$ and $\Phi(\,\cdot\,)$ denotes the distribution function of a standard Gaussian random variable. \[thm:CLTMDPSimplices\] Let $X_1,\ldots,X_{r+1}$ be chosen according to the Gaussian, the Beta or the spherical model. - For the Gaussian model define $$\varepsilon_n := {1\over\sqrt{rn}}\text{ if $r=o(n)$ or $r\sim\alpha n$}\quad\text{and}\quad \varepsilon_n:={1\over \sqrt{\log \left(\frac{n}{n-r+1}\right)}}\text{ if $n - r = o(n)$},$$ where $\alpha\in(0,1)$. Then ${\mathcal{L}}_{n,r}$ satisfies a BEB with speed $\varepsilon_n$. Further, let $(a_n)_{n\in{\mathbb{N}}}$ be such that $a_n\to\infty$ and $a_n\varepsilon_n\to 0$, as $n\to\infty$. Then ${\mathcal{L}}_{n,r}$ satisfies an MDP with speed $a_n$ and rate function $I(x)={x^2\over 2}$. - For the Beta model and the spherical model define $$\varepsilon_n := {1\over\sqrt{n}}\text{ if $r=o(n)$},\quad \varepsilon_n:={1\over n}\text{ if $r\sim\alpha n$}\quad\text{and}\quad \varepsilon_n:={1\over \sqrt{\log \left(\frac{n}{n-r+1}\right)}}\text{ if $n - r = o(n)$}$$ with $\alpha\in(0,1)$. Then ${\mathcal{L}}_{n,r}$ satisfies a BEB with speed $\varepsilon_n$.
Further, let $(a_n)_{n\in{\mathbb{N}}}$ be such that $a_n\to\infty$ and $a_n\varepsilon_n\to 0$, as $n\to\infty$. Then ${\mathcal{L}}_{n,r}$ satisfies an MDP with speed $a_n$ and rate function $I(x)={x^2\over 2}$. Let us define the normalized random variable $\widetilde{{\mathcal{L}}}_{n,r}:=({\mathcal{L}}_{n,r}-{\mathbb E}[{\mathcal{L}}_{n,r}])/\sqrt{{\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r}}$. From Theorem \[thm:Cumulants\] we conclude that, for $m\geq 3$, $$|c^m(\widetilde{{\mathcal{L}}}_{n,r})| = \frac{|c^m({\mathcal{L}}_{n,r})|}{({\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r})^{m/2}} \leq \begin{cases} \frac{c_1^m (m-1)!}{(\sqrt {rn})^{m-2}} &: r=o(n) \text{ or } r\sim\alpha n\\ \frac{c_2^m (m-1)!}{\left(\sqrt{\log \frac{n}{n-r+1}}\right)^{m}} &: n - r = o(n) \end{cases}$$ in the Gaussian case and $$|c^m(\widetilde{{\mathcal{L}}}_{n,r})| \leq \begin{cases} {|c^m[{\mathcal{L}}_{n,r}]|\over(r/(2(r+1)n))^{m/2}} &: r=o(n)\\ {|c^m[{\mathcal{L}}_{n,r}]|\over(\frac{1}{2}\log(\frac{1}{1-\alpha}) - \frac{\alpha}{2})^{m/2}} &: r\sim\alpha n\\ {|c^m[{\mathcal{L}}_{n,r}]|\over(\frac{1}{2} \log \left(\frac{n}{n-r+1}\right) )^{m/2}} &: n - r = o(n) \end{cases} \leq \begin{cases} c_4^m\,m!\, \big({1\over\sqrt{n}}\big)^{m-1} &: r=o(n)\\ c_5^m\,m!\, \big({1\over n}\big)^{m-1} &: r\sim\alpha n\\ c_6^m\,m!\, \left(\frac{1}{\sqrt{\log \left(\frac{n}{n-r+1}\right)}}\right)^{m - 2} &: n - r = o(n) \end{cases}$$ for the Beta and the spherical model with constants $c_1,\ldots,c_6>0$ not depending on $m$ and $n$. The result follows now from [@DoeringEichelsbacher Theorem 1.1] and [@SaulisBuch Corollary 2.1].
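To illustrate the central limit behaviour numerically, one can simulate the Gaussian-model log-volume directly via the representation of Theorem \[theo:vol\_distr\_affine\] (a) and compare empirical quantiles of the standardized sample with their standard normal counterparts. The parameters, sample size and tolerances in the following Python sketch are illustrative choices for the regime $r=o(n)$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, N = 400, 20, 20_000  # regime r = o(n); N Monte Carlo replications

# Gaussian model: L_{n,r} = log(r! V_{n,r}) = (1/2)[log(r+1) + sum_j log chi^2_{n-r+j}]
L = 0.5 * np.log(r + 1) * np.ones(N)
for j in range(1, r + 1):
    L += 0.5 * np.log(rng.chisquare(n - r + j, size=N))

Z = (L - L.mean()) / L.std()  # empirically standardized log-volume

# Standard normal quantile checks: P(Z <= 0) ~ 0.5 and P(Z <= 1) ~ 0.8413
assert abs((Z <= 0).mean() - 0.5) < 0.02
assert abs((Z <= 1).mean() - 0.8413) < 0.02
```

For these parameters the third cumulant of ${\mathcal{L}}_{n,r}$ is already of negligible size compared with the variance, in line with the Berry-Esseen speed ${1/\sqrt{rn}}$.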
Starting with the cumulant bounds presented in Theorem \[thm:Cumulants\] one can also derive - concentration inequalities, - bounds for moments of all orders, - Cramér-Petrov type results concerning the relative error in the central limit theorem, - strong laws of large numbers for the random variables $\widetilde{{\mathcal{L}}}_{n,r}$ from the results presented in [@SaulisBuch Chapter 2] (see also [@GroteThäle; @GroteThäle2]). While in the three cases $r=o(n)$, $r\sim\alpha n$ and $n-r=o(n)$ we were able to derive precise Berry-Esseen bounds by using cumulant bounds, we can state a ‘pure’ central limit theorem for the log-volume in an even more general setup. The following result can be deduced directly by extracting suitable subsequences and then applying Theorem \[thm:CLTMDPSimplices\]. \[cor:log\_volume\_CLT\] Let $r=r(n)$ be an arbitrary sequence of integers such that $r(n)\leq n$ for any $n\in{\mathbb{N}}$. Further, for each $n\in{\mathbb{N}}$, let $X_1,\ldots,X_{r+1}$ be independent random points chosen according to the Gaussian, the Beta or the spherical model, and put ${\mathcal{L}}_{n,r}:=\log(r!{\mathcal{V}}_{n,r})$. Then, $$\begin{aligned} \frac{{\mathcal{L}}_{n,r} - {\mathbb{E}}[{\mathcal{L}}_{n,r}]}{\sqrt{{\mathop{\mathrm{Var}}\nolimits}{\mathcal{L}}_{n,r}}} {\overset{d}{\underset{n\to\infty}\longrightarrow}}Z,\end{aligned}$$ where $Z\sim{\mathcal{N}}(0,1)$ is a standard Gaussian random variable. Central and non-central limit theorem for the volume ---------------------------------------------------- After having investigated asymptotic normality for the log-volume of a random simplex, we turn now to its actual volume, that is, the random variable ${\mathcal{V}}_{n,r}$. \[Volume\] Let $X_1,\ldots,X_{r+1}$ be chosen according to the Gaussian model, the Beta model or the spherical model and let $\alpha\in(0,1)$. Let $Z\sim{\mathcal{N}}(0,1)$ be a standard Gaussian random variable. 1.
If $r=o(n)$, then for suitable normalizing sequences $a_{n,r}$ and $b_{n,r}$ the following convergence in distribution holds, as $n\to\infty$: $$\frac{{\mathcal{V}}_{n,r} - a_{n,r}}{b_{n,r}}{\overset{d}{\underset{n\to\infty}\longrightarrow}}Z.$$ 2. If $r\sim \alpha n$ for some $\alpha\in (0,1)$, then for a suitable normalizing sequence $b_{n,r}$ we have $$\frac{{\mathcal{V}}_{n,r}}{b_{n,r}} {\overset{d}{\underset{n\to\infty}\longrightarrow}}\begin{cases} e^{\sqrt{\frac 12 \log \frac 1 {1-\alpha}}\, Z} &: \text{in the Gaussian model}\\ e^{\sqrt{\frac 12 \log \frac 1 {1-\alpha} - \frac \alpha 2}\, Z} &: \text{in the Beta or spherical model}.\\ \end{cases}$$ In the third case, i.e., if $n-r=o(n)$, there is no non-trivial distributional limit theorem for the random variable ${\mathcal{V}}_{n,r}$ under affine re-scaling. The reason is that the variance of $\log {\mathcal{V}}_{n,r}$ tends to $+\infty$ in this case. The main ingredient in the proof of Theorem \[Volume\] in the case where $r=o(n)$ is the so-called ’Delta-Method’, which is well known and commonly used in statistics, cf. [@BickelDoksum Lemma 5.3.3]. From Corollary \[cor:log\_volume\_CLT\] we know that with the sequences $c_{n,r}= {\mathbb E}\log {\mathcal{V}}_{n,r}$ and $d_{n,r}= \sqrt{{\mathop{\mathrm{Var}}\nolimits}\log {\mathcal{V}}_{n,r}}$ it holds that $$\frac{\log {\mathcal{V}}_{n,r} - c_{n,r}}{d_{n,r}} {\overset{d}{\underset{n\to\infty}\longrightarrow}}Z.$$ By the Skorokhod–Dudley lemma [@Kallenberg Theorem 4.30], we can construct random variables ${\mathcal{V}}_{n,r}^*$ and $Z^*$ on a different probability space such that ${\mathcal{V}}_{n,r}^* \overset{d}{=} {\mathcal{V}}_{n,r}$, $Z^* \overset{d}{=} Z$, and $$Z_n^* := \frac{\log {\mathcal{V}}_{n,r}^* - c_{n,r}}{d_{n,r}} {\overset{a.s.}{\underset{n\to\infty}\longrightarrow}}Z^*.$$ So, we have ${\mathcal{V}}_{n,r}^* = e^{d_{n,r} Z_n^* + c_{n,r}}$, where $Z_n^*\to Z^*$ a.s., as $n\to\infty$. 
Consider first the Gaussian model in the case $r\sim \alpha n$. Then, by Theorem \[thm:Cumulants\](a) we have $$d_{n,r} =\sqrt{{\mathop{\mathrm{Var}}\nolimits}\log {\mathcal{V}}_{n,r}} \sim \sqrt{\frac 12 \log \frac 1 {1-\alpha}}.$$ With the aid of Slutsky’s lemma it follows that $$\frac{{\mathcal{V}}_{n,r}^*}{e^{c_{n,r}}} = e^{d_{n,r} Z_n^*} {\overset{a.s.}{\underset{n\to\infty}\longrightarrow}}e^{\sqrt{\frac 12 \log \frac 1 {1-\alpha}}\, Z^*}.$$ Passing back to the original probability space, we obtain the distributional convergence $$\frac{{\mathcal{V}}_{n,r}}{e^{c_{n,r}}}{\overset{d}{\underset{n\to\infty}\longrightarrow}}e^{\sqrt{\frac 12 \log \frac 1 {1-\alpha}}\, Z}.$$ The proof for the Beta or spherical model in the case $r\sim \alpha n$ is similar, only the expression for the asymptotic variance being different. Consider now the Gaussian model in the case $r=o(n)$. Then, by Theorem \[thm:Cumulants\](a), $$d_{n,r} =\sqrt{{\mathop{\mathrm{Var}}\nolimits}\log {\mathcal{V}}_{n,r}} {\overset{}{\underset{n\to\infty}\longrightarrow}}0.$$ Using the formula $\lim_{x\to 0} (e^x-1)/x = 1$ and the Slutsky lemma, we obtain $$\frac{\frac{{\mathcal{V}}_{n,r}^*}{e^{c_{n,r}}} - 1}{d_{n,r}} = \frac{ e^{d_{n,r} Z_n^*} -1}{d_{n,r}Z_n^*} \cdot Z_n^* {\overset{a.s.}{\underset{n\to\infty}\longrightarrow}}Z^*.$$ Passing back to the original probability space and taking $b_{n,r} = e^{c_{n,r}}d_{n,r}$ and $a_{n, r} =e^{c_{n,r}}$, we obtain the required distributional convergence. Mod-$\phi$ convergence {#sec:mod_phi} ====================== Definition ---------- Mod-$\phi$ convergence is a powerful notion that was introduced and studied in [@jacod_etal; @kowalski_nikeghbali; @delbaen_etal; @kowalski_najnudel_etal; @Feray1], to mention only some references. 
Once an appropriate version of mod-$\phi$ convergence has been established, one gets for free a whole collection of limit theorems including the central limit theorem, the local limit theorem, moderate and large deviations, and a Cramér–Petrov asymptotic expansion [@Feray1]. The aim of the present Section \[sec:mod\_phi\] is to establish mod-$\phi$ convergence for the log-volumes of the random simplices. Note that the mod-$\phi$ convergence we establish in the present section, together with the general results from [@Feray1], also implies some of the results we proved in the previous section by means of the cumulant method. On the other hand, we would like to emphasize that this is not the case if $r\sim \alpha n$, for example.\ There are many definitions of mod-$\phi$ convergence. Here, we use one of the strongest ones, cf. [@Feray1 Definition 1.1]. Consider a sequence of random variables $(X_n)_{n\in{\mathbb{N}}}$ with moment generating functions $\varphi_n(t) = {\mathbb E}[e^{tX_n}]$ defined on some strip $S= \{t\in \mathbb C: c_- < \text{Re}\, t < c_+\}$. The sequence $(X_n)_{n\in{\mathbb{N}}}$ *converges in the mod-$\phi$ sense*, where $\phi$ is an infinitely divisible distribution with moment generating function $\int_{-\infty}^{\infty} e^{tx}\phi(dx) = e^{\eta(t)}$, if $$\lim_{n\to\infty} \frac{{\mathbb E}[e^{tX_n}]}{e^{w_n \eta(t)}} = \psi(t)$$ locally uniformly on $S$, where $(w_n)_{n\in{\mathbb{N}}}$ is some sequence converging to $+\infty$, and $\psi(t)$ is an analytic function on $S$. As explained in the references cited above, mod-$\phi$ convergence roughly means that $X_n$ has approximately the same distribution as the $w_n$-th convolution power of the infinitely divisible distribution $\phi$. The “difference” between these distributions is measured by the “*limit function*” $\psi$ that plays a crucial rôle in the theory.
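As a simple illustration of the definition (a toy example of our own, in the spirit of the mod-Poisson examples in [@kowalski_nikeghbali]), let $X_n=P_n+B$, where $P_n$ has a Poisson distribution with parameter $\lambda_n\to\infty$ and $B$ is an independent random variable whose moment generating function is finite on a strip. Then

```latex
% Toy mod-Poisson example: eta(t) = e^t - 1 is the Poisson cumulant
% generating function, w_n = lambda_n, and the limit function psi is
% the moment generating function of the perturbation B.
\mathbb{E}\,e^{tX_n} = e^{\lambda_n(e^t-1)}\,\mathbb{E}\,e^{tB},
\qquad\text{hence}\qquad
\lim_{n\to\infty}\frac{\mathbb{E}\,e^{tX_n}}{e^{w_n\eta(t)}}
= \mathbb{E}\,e^{tB} =: \psi(t).
```

Here the limit function $\psi$ records exactly the fixed perturbation $B$ sitting on top of the growing Poisson part.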
The Barnes $G$-function ----------------------- The Barnes function is an entire function of the complex argument $z$ defined by the Weierstrass product $$G(z+1) = (2\pi)^{z/2} {{\rm e}}^{-\frac 12 (z+ (1+\gamma) z^2)} \prod_{k=1}^\infty \left(1+\frac zk\right)^k {{\rm e}}^{\frac {z^2}{2k} - z},$$ where $\gamma$ is the Euler constant. The Barnes $G$-function satisfies the functional equation $$G(z+1) = \Gamma(z) G(z).$$ By induction, one deduces that for all $n\in{\mathbb{N}}_0$, $$\label{eq:prod_gamma_functions} \prod_{k=1}^n \Gamma(k+z) = \frac{G(z+n+1)}{G(z+1)}.$$ We shall need the Stirling-type formula for $G$, see [@barnes p. 285], $$\label{eq:stirling_for_G} \log G(z+1) = \frac 12 z^2 \log z - \frac 34 z^2 + \frac z2 \log (2\pi) - \frac 1 {12} \log z + \zeta'(-1) + O(1/z),$$ uniformly as $|z|\to +\infty$ such that $|\arg z| < \pi-{\varepsilon}$, where $\zeta'(-1)$ is the derivative of the Riemann $\zeta$-function. The value of $\zeta'(-1)$ can be expressed through the Glaisher–Kinkelin constant, but it cancels in all our calculations because we use this expansion only via the following lemma. \[lem:barnes\_G\_diff\_asympt\] Let $|z|\to\infty$ such that $|\arg z| < \pi-{\varepsilon}$. Let also $a=a(z)\in {\mathbb{C}}$ be such that $a/z\to 0$.
Then, we have $$\log G(z+a+1) - \log G(z+1) = a \left(z\log z - z + \log \sqrt{2\pi}\right) + \frac 12 a^2 \log z + O\left(\frac {|a|^3+1}{z}\right).$$ Applying the Stirling-type formula for $G$ we obtain that $$\log G(z+a+1) - \log G(z+1) = \frac 12 A_n + B_n + C_n + D_n + O(1/z),$$ where $$\begin{aligned} A_n &= (z+a)^2 \log (z+a) -z^2 \log z\\ &= (z^2+a^2 + 2za) \left(\log z + \frac az - \frac {a^2}{2z^2} + O\left(\frac {a^3}{z^3}\right)\right) -z^2 \log z\\ &= za - \frac 12 a^2 + a^2 \log z + 2za \log z + 2a^2 + O\left(\frac {a^3}{z}\right),\\ B_n &= -\frac 34 \left((z+a)^2 - z^2 \right) = -\frac 34 a^2 - \frac 32 za, \\ C_n &= \frac 12 a \log (2\pi),\\ D_n &= -\frac 1 {12} (\log(z+a) - \log z) = O\left(\frac az\right).\end{aligned}$$ Taking everything together we get $$\log G(z+a+1) - \log G(z+1) = a \left(z\log z - z + \log \sqrt{2\pi}\right) + \frac 12 a^2 \log z + O\left(\frac {|a|^3+1}{z}\right)$$ and complete the proof of the lemma. Mod-$\phi$ convergence for fixed $r\in {\mathbb{N}}$ ---------------------------------------------------- Recall that $\mathcal V_{n,r}$ denotes the volume of an $r$-dimensional random simplex in ${\mathbb{R}}^n$ whose $r+1$ vertices are distributed according to one of the four models presented in Section 1. We define, as usual, ${\mathcal{L}}_{n,r} := \log (r! {\mathcal{V}}_{n,r})$. The next two propositions show that if $r\in{\mathbb{N}}$ is fixed, we have mod-$\phi$ convergence. \[prop:mod\_phi\_constant\_r\] Fix some $r\in{\mathbb{N}}$ and consider the Gaussian model. Then, as $n\to\infty$, the sequence $n({\mathcal{L}}_{n,r}- \frac r2 \log n-{1\over 2}\log(r+1))$ converges in the mod-$\phi$ sense with $\eta(t) =\frac{1}{2}( (t+1)\log (t+1) - t)$ and parameter $w_n = rn$, namely $$\lim_{n\to\infty} \frac{{\mathbb E}{{\rm e}}^{tn ({\mathcal{L}}_{n,r}- \frac r2 \log n-{1\over 2}\log(r+1))}}{{{\rm e}}^{rn \eta(t)}} = (t+1)^{-\frac {r(r+1)}{4} }$$ uniformly as long as $t$ stays in any compact subset of ${\mathbb{C}}\setminus(-\infty,-1)$.
An important formula we will often use describes the asymptotic behaviour of the Gamma function; it can be found in [@Abramovitz Eq. 6.1.39 on p. 257] or derived from the Stirling formula, and reads as follows. For fixed $a>0$, $b\in {\mathbb{R}}$ it holds that $$\begin{aligned} \label{GammaLimit1} \Gamma(az + b) \sim (2 \pi)^{1/2}\, \exp(-az)\, (az)^{az+b-1/2}, \quad \text{as} \quad |z| \rightarrow \infty, \; |\arg z| < \pi -{\varepsilon}.\end{aligned}$$ From the moment formula in Theorem \[theo:vol\_Simplices\_Moments\](a) we obtain $$\begin{aligned} {\mathbb E}e^{tn{\mathcal{L}}_{n,r}} = (r+1)^{tn\over 2}2^{tnr\over 2}\prod_{j=1}^r{\Gamma\Big({(t+1)n-r+j\over 2}\Big)\over\Gamma\Big({n-r+j\over 2}\Big)}.\end{aligned}$$ Using this formula we deduce that $$\begin{aligned} \prod_{j=1}^r{\Gamma\Big({(t+1)n-r+j\over 2}\Big)\over\Gamma\Big({n-r+j\over 2}\Big)} &\sim \prod_{j=1}^r e^{-{tn\over 2}}\Big({n\over 2}\Big)^{tn\over 2}(t+1)^{{(t+1)n\over 2}+{j-r-1\over 2}}\notag \\ &=e^{-{tnr\over 2}}\Big({n\over 2}\Big)^{rtn\over 2}(t+1)^{\big({(t+1)n\over 2}-{1\over 2}\big)r-{r(r-1)\over 4}}. \label{eq:tech1}\end{aligned}$$ Thus, $${\mathbb E}e^{tn {\mathcal{L}}_{n,r}} \sim (r+1)^{tn\over 2}e^{-{tnr\over 2}}n^{tnr\over 2}(t+1)^{\big({(t+1)n\over 2}-{1\over 2}\big)r-{r(r-1)\over 4}}.$$ Taking the logarithm and subtracting $\frac r2 \log n$ and ${1\over 2}\log(r+1)$, we conclude that $$\begin{aligned} \label{eq:GaussModelo(n)ModPhi} \log {\mathbb E}{{\rm e}}^{tn ({\mathcal{L}}_{n,r}- \frac r2 \log n-{1\over 2}\log(r+1))} = {nr\over 2}\Big((t+1)\log(t+1)-t\Big)-{r(r+1)\over 4}\log(t+1) + o(1)\end{aligned}$$ and the result follows. \[rem:ModPhiGaussr=o(n)\] From the previous proof it easily follows that the asymptotic relation remains valid if $r$ grows with $n$ in such a way that $r=o(n)$. This observation will be used below in the context of large deviation principles. \[prop:BetaMomentGeneratingFunction\] Fix some $r\in{\mathbb{N}}$ and consider the Beta or the spherical model.
Then, $n{\mathcal{L}}_{n,r}$ converges in the mod-$\phi$ sense with $$\eta(t) = {(r+1)(t+1)\over 2}\log((r+1)(t+1))-{r(t+1)+1\over 2}\log(r(t+1)+1)-{t+1\over 2}\log(t+1)$$ and parameter $w_n=n$, namely $$\lim_{n\to\infty}{{\mathbb E}e^{tn{\mathcal{L}}_{n,r}}\over e^{n\eta(t)}} = (1+t)^{{1-\nu(r+1)\over 2}-{r(r-1)\over 4}}\bigg({(r+1)(t+1)\over r(t+1)+1}\bigg)^{\nu(r+1)-2r-1\over 2}$$ uniformly as long as $t$ stays in any compact subset of ${\mathbb{C}}\setminus(-\infty,-1)$. From the moment formula in Theorem \[theo:vol\_Simplices\_Moments\](b) we have $$\begin{aligned} {\mathbb E}e^{tn{\mathcal{L}}_{n,r}} = \prod_{j=1}^r\Bigg[{\Gamma\Big({n-r+j\over 2}+{tn\over 2}\Big)\over\Gamma\Big({n-r+j\over 2}\Big)}{\Gamma\Big({n+\nu\over 2}\Big)\over\Gamma\Big({n+\nu\over 2}+{tn\over 2}\Big)}\Bigg]{\Gamma\Big({n+\nu\over 2}\Big)\over\Gamma\Big({n+\nu\over 2}+{tn\over 2}\Big)}{\Gamma\Big({r(n+\nu-2)+(n+\nu)\over 2}+{(r+1)tn\over 2}\Big)\over\Gamma\Big({r(n+\nu-2)+(n+\nu)\over 2}+{rtn\over 2}\Big)}. \end{aligned}$$ First of all, by the asymptotic formula for the Gamma function, $$\begin{aligned} {\Gamma\Big({n+\nu\over 2}\Big)\over\Gamma\Big({n+\nu\over 2}+{tn\over 2}\Big)} \sim (1+t)^{{1\over 2}-{\nu\over 2}-{(1+t)n\over 2}}\Big({n\over 2}\Big)^{-{tn\over 2}}\,e^{tn\over 2}. \end{aligned}$$ It follows from this that the first product in the moment formula asymptotically behaves like $$\prod_{j=1}^r\Bigg[{\Gamma\Big({n-r+j\over 2}+{tn\over 2}\Big)\over\Gamma\Big({n-r+j\over 2}\Big)}{\Gamma\Big({n+\nu\over 2}\Big)\over\Gamma\Big({n+\nu\over 2}+{tn\over 2}\Big)}\Bigg] \sim (1+t)^{-{r\nu\over 2}-{r(r-1)\over 4}}.$$ Using the same formula again, we obtain $$\begin{aligned} {\Gamma\Big({r(n+\nu-2)+(n+\nu)\over 2}+{(r+1)tn\over 2}\Big)\over\Gamma\Big({r(n+\nu-2)+(n+\nu)\over 2}+{rtn\over 2}\Big)}& \sim ((r+1)(t+1))^{{n(r+1)(t+1)\over 2}+{\nu(r+1)-2r-1\over 2}}\Big({n\over 2}\Big)^{{tn\over 2}}\,e^{-{tn\over 2}}\\ &\qquad\qquad\times (r(t+1)+1)^{-{n(r(t+1)+1)\over 2}-{\nu(r+1)-2r-1\over 2}}\,.
\end{aligned}$$ Thus, as $n\to\infty$, we get $$\begin{aligned} \log {\mathbb E}e^{tn{\mathcal{L}}_{n,r}} &= \bigg({1-\nu(r+1)\over 2}-{r(r-1)\over 4}-{(1+t)n\over 2}\bigg)\log(1+t) \\ &\qquad +\bigg({n(r+1)(t+1)\over 2}+{\nu(r+1)-2r-1\over 2}\bigg)\log((r+1)(t+1))\\ &\qquad -\bigg({n(r(t+1)+1)\over 2}+{\nu(r+1)-2r-1\over 2}\bigg)\log(r(t+1)+1) + o(1) \end{aligned}$$ and the result follows. Mod-$\phi$ convergence for the ExpGamma distribution ---------------------------------------------------- Many examples of mod-$\phi$ convergence are known in probability, number theory, statistical mechanics and random matrix theory. The most basic cases are probably the mod-Gaussian and mod-Poisson convergence, which can be found in [@jacod_etal; @kowalski_nikeghbali; @Feray1], but there are also examples of mod-Cauchy [@delbaen_etal; @kowalski_najnudel_etal] and even mod-uniform [@Feray1 §7.4] convergence. The aim of the present section is to add one more item to this list by proving a convergence modulo a tilted $1$-stable totally skewed distribution. Let $X_n$ be a random variable having a Gamma distribution with parameters $(n,1)$, that is, the probability density of $X_n$ is $\frac 1 {\Gamma(n)} x^{n-1} {{\rm e}}^{-x}$, $x>0$. The distribution of $\log X_n$ is called the ExpGamma distribution. The probability density of $-\log X_n$ is given by $$\frac{1}{\Gamma(n)} {{\rm e}}^{-{{\rm e}}^{-x}} {{\rm e}}^{-x n}, \quad x\in{\mathbb{R}},$$ and is the limiting probability density of the $n$-th upper order statistic in an i.i.d. sample of size $N\to\infty$ from a distribution in the max-domain of attraction of the Gumbel distribution, or, equivalently, the density of the $n$-th upper order statistic in the Poisson point process with intensity ${{\rm e}}^{-x}{{\rm d}}x$, $x\in{\mathbb{R}}$; see [@leadbetter_book Theorem 2.2.2 on p. 33]. It is easy to check that ${\mathbb E}\log X_n = \Gamma'(n)/\Gamma(n) =\psi(n)$, where $\psi$ denotes the digamma function.
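The identity ${\mathbb E}\log X_n=\psi(n)$ lends itself to a quick numerical sanity check (a sketch of our own; the truncated asymptotic expansion of $\psi$ below is an ad-hoc approximation, adequate for moderately large arguments):

```python
import math
import random

def digamma(x):
    # truncated asymptotic expansion of the digamma function psi(x);
    # adequate for moderately large x (here x = 10)
    return math.log(x) - 1 / (2 * x) - 1 / (12 * x**2) + 1 / (120 * x**4)

random.seed(7)
n, N = 10, 200_000
# X_n ~ Gamma(n, 1); the empirical mean of log X_n should be close to psi(n)
emp_mean = sum(math.log(random.gammavariate(n, 1.0)) for _ in range(N)) / N
```

With the seed fixed as above the empirical mean should agree with $\psi(10)\approx 2.2518$ up to an error of order $\sqrt{\psi'(n)/N}\approx 7\cdot 10^{-4}$.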
\[theo:exp\_gamma\_mod\_phi\] The sequence of random variables $n(\log X_n - \psi(n))$ converges in the mod-$\phi$ sense with $\eta(t) = (t+1)\log (t+1) - t$ and parameter $w_n=n$, namely $$\lim_{n\to\infty} \frac{{\mathbb E}{{\rm e}}^{tn(\log X_n - \psi(n))}}{ {{\rm e}}^{n ((t+1)\log (t+1) - t)}} = \frac {{{\rm e}}^{t/2}}{ \sqrt{t+1}}$$ uniformly as long as $t$ stays in any compact subset of ${\mathbb{C}}\backslash (-\infty,-1)$. By the properties of the Gamma distribution, we have $${\mathbb E}{{\rm e}}^{tn(\log X_n - \psi(n))} = {{\rm e}}^{-tn \psi(n)} {\mathbb E}X_n^{tn} = {{\rm e}}^{-tn \psi(n)} \frac{\Gamma(tn+n)}{\Gamma(n)}.$$ The Stirling formula states that $\Gamma(z) \sim \sqrt{2\pi/z}\, (z/{{\rm e}})^z$ uniformly as $|z|\to\infty$ in such a way that $|\arg z| < \pi -{\varepsilon}$. Using the Stirling formula together with the asymptotics $\psi(n) = \log n - \frac 1{2n} + o(\frac 1n)$, we obtain $${{\rm e}}^{-tn \psi(n)} \frac{\Gamma(tn+n)}{\Gamma(n)} \sim {{\rm e}}^{-tn(\log n - \frac 1 {2n})} \frac{\sqrt{\frac {2\pi}{tn+n}}\left(\frac{tn+n}{{{\rm e}}}\right)^n}{\sqrt{\frac {2\pi}{n}}\left(\frac{n}{{{\rm e}}}\right)^n} = \frac{{{\rm e}}^{t/2}}{\sqrt{t+1}} {{\rm e}}^{n ((t+1)\log (t+1) - t)},$$ which proves the claim. Consider an $\alpha$-stable random variable $Z_1\sim S_1 (\pi/2, -1,0)$ with $\alpha=1$, skewness $\beta= -1$, and scale $\sigma= \pi/2$, where we adopt the parametrization used in the book of [@samorodnitsky_taqqu_book]. 
It is known [@samorodnitsky_taqqu_book Proposition 1.2.12] that the cumulant generating function of this random variable is given by $$\log {\mathbb E}{{\rm e}}^{t Z_1} = t \log t, \quad {\operatorname{Re}}t \geq 0.$$ Note that ${\mathbb E}{{\rm e}}^{Z_1} = 1$ and consider an exponential tilt of $Z_1$, denoted $Z_2$, whose probability density is $${\mathbb{P}}[Z_2 \in {{\rm d}}x] = {{\rm e}}^x {\mathbb{P}}[Z_1\in {{\rm d}}x], \quad x\in{\mathbb{R}}.$$ Finally, observe that ${\mathbb E}Z_2 = {\mathbb E}[{{\rm e}}^{Z_1} Z_1] = (t^t)'|_{t=1} = 1$ and consider the centered version $Z:= Z_2-1$. The cumulant generating function of $Z$ is given by $$\log {\mathbb E}{{\rm e}}^{t Z} = (t+1) \log (t+1) - t, \quad {\operatorname{Re}}t\geq -1.$$ As an exponential tilt of an infinitely divisible distribution, $Z$ is itself infinitely divisible. Thus, in Theorem \[theo:exp\_gamma\_mod\_phi\] and Proposition \[prop:mod\_phi\_constant\_r\] we have mod-$\phi$ convergence modulo a tilted totally skewed $1$-stable distribution. Mod-$\phi$ convergence in the full dimensional case --------------------------------------------------- In this section we consider the full-dimensional case $r=n$, i.e., we are interested in the random variable ${\mathcal{L}}_{n,n}$. \[prop:mod\_phi\_full\_dim\] Consider the Gaussian model and let $m_n = \frac 12 (n \log n - n + \frac 12 \log n + \log (2^{3/2}\pi))$. Then, ${\mathcal{L}}_{n,n} - m_n$ converges in the mod-Gaussian sense (meaning that $\eta(t) = \frac 12 t^2$) with parameters $w_n = \frac 12 \log \frac n2$, namely $$\lim_{n\to\infty} \frac{{\mathbb E}{{\rm e}}^{t({\mathcal{L}}_{n,n} - m_n)}}{{{\rm e}}^{\frac 14 t^2\log \frac n2}} = \frac{G\left(\frac{1}{2}\right)}{G\left(\frac{1}{2} + \frac t2\right) G\left(1 + \frac t2\right)}.$$ The convergence is uniform as long as $t$ stays in any compact subset of ${\mathbb{C}}\backslash\{-1,-2,\ldots\}$.
In view of Theorem \[theo:vol\_Simplices\_Moments\](a) and , we can express the moment generating function of ${\mathcal{L}}_{n,n}$ in terms of the Barnes $G$-function as $$\label{eq:moment_gen_Y_n_n} {\mathbb E}{{\rm e}}^{t{\mathcal{L}}_{n,n}} = {\mathbb E}[(n! {\mathcal{V}}_{n,n})^t] = (n+1)^{\frac{t}{2}}\, 2^{\frac{tn}{2}}\, \frac{G\left(\frac{1}{2}\right)}{G\left(\frac{n+1}{2}\right)} \cdot \frac{G(1)}{G\left(\frac{n+2}{2}\right)} \cdot \frac{G\left(\frac{n+1}{2} + \frac t2\right)}{G\left(\frac{1}{2} + \frac t2\right)} \cdot \frac{G\left(\frac{n+2}{2} + \frac t2\right)}{G\left(1 + \frac t2\right)},$$ where $G(1) = 1$. For the function $$\label{eq:PsiDef} \psi(t) := \frac{G\left(\frac{1}{2}\right)}{G\left(\frac{1}{2} + \frac t2\right) G\left(1 + \frac t2\right)}$$ we have $$\begin{aligned} \log {\mathbb E}{{\rm e}}^{t{\mathcal{L}}_{n,n}} &= \frac{t}{2}\log(n+1) + \frac{tn}{2} \log 2 + \log \psi(t) + \log G\left(\frac{n-1}{2} +\frac t2 +1\right) - \log G\left(\frac{n-1}{2} +1\right)\\ &\qquad+ \log G\left(\frac{n}{2} +\frac t2 +1\right) - \log G\left(\frac{n}{2} +1\right).\end{aligned}$$ Applying Lemma \[lem:barnes\_G\_diff\_asympt\] twice and using the formula $$((n+b) \log (n+b) - (n+b)) - (n\log n - n) = b \log n + o(1),$$ where $b$ is any constant, we obtain $$\label{eq:GaussFallModPhiFull} \log {\mathbb E}{{\rm e}}^{t{\mathcal{L}}_{n,n}} = \log \psi(t) + \frac t2 \left(n \log n - n + \frac 12 \log n + \log (2^{3/2}\pi)\right) +\frac 14 {t^2} \log \frac n2 +o(1).$$ This completes the argument. \[rm:DalBorgoEtAl\] We notice that in the full dimensional, Gaussian case $r=n$ our random variables are equivalent to those considered in [@dalBorgo_etal], and our result can also be deduced from their Theorem 5.1. Nevertheless, we decided to include our independent and much shorter proof.\ Their paper deals with the determinant of certain random matrix models and has a completely different focus.
On the other hand, let us emphasize that even in this special case the distributions appearing in [@dalBorgo_etal] are in fact different from (but very close to) those we obtain. \[prop:mod\_phi\_full\_dimBeta\] Consider the Beta model with parameter $\nu>0$ or the spherical model (in which case $\nu=0$) and let $\widetilde{m}_n = {1\over 2}({1\over 2}\log n-n+1-\nu+\log(2^{3/2}\pi))$. Then, ${\mathcal{L}}_{n,n} - \widetilde{m}_n$ converges in the mod-Gaussian sense (meaning that $\eta(t) = \frac 12 t^2$) and parameters $w_n = \frac 12 \log \frac n2-\frac 12$, namely $$\lim_{n\to\infty} \frac{{\mathbb E}{{\rm e}}^{t({\mathcal{L}}_{n,n} - \widetilde{m}_n)}}{{{\rm e}}^{\frac 14 t^2(\log \frac n2-1)}} = \frac{G\left(\frac{1}{2}\right)}{G\left(\frac{1}{2} + \frac t2\right) G\left(1 + \frac t2\right)}.$$ The convergence is uniform as long as $t$ stays in any compact subset of ${\mathbb{C}}\backslash\{-1,-2,\ldots\}$. For the purposes of this proof let ${\mathcal{L}}_{n,n}^{\text{G}}$ denote the Gaussian analogue of ${\mathcal{L}}_{n,n}$. 
In view of the connection between the Gaussian and the Beta model, see Theorem \[theo:vol\_Simplices\_Moments\](a),(b), the moment generating function of ${\mathcal{L}}_{n,n}$ is given by $$\begin{gathered} \log {\mathbb E}{{\rm e}}^{t{\mathcal{L}}_{n,n}} = \log {\mathbb E}{{\rm e}}^{t{\mathcal{L}}_{n,n}^{\text{G}}} - \frac t2 \log (n+1) - \frac {tn}{2} \log 2 \\ + (n+1)\log\left({\Gamma\Big({n+\nu\over 2}\Big)\over\Gamma\Big({n\over 2}+{\nu + t\over 2}\Big)}\right) + \log \left({\Gamma\Big({n(n+\nu-1)+nt+ t + \nu \over 2}\Big)\over\Gamma\Big({n(n+\nu-1)+nt +\nu\over 2}\Big)}\right).\end{gathered}$$ Using a second-order Stirling approximation for the logarithms of the Gamma functions, we obtain $$\begin{aligned} (n+1)\log\left({\Gamma\Big({n+\nu\over 2}\Big)\over\Gamma\Big({n\over 2}+{\nu + t\over 2}\Big)}\right) = {(n+1)t\over 2}\log{2\over n}-{t\over 4}(t-2+2\nu) + o(1)\end{aligned}$$ and similarly $$\begin{aligned} \log \left({\Gamma\Big({n(n+\nu-1)+nt+ t + \nu \over 2}\Big)\over\Gamma\Big({n(n+\nu-1)+nt +\nu\over 2}\Big)}\right) = t\log n-{t\over 2}\log 2+o(1).\end{aligned}$$ Denoting by $\psi(t)$ the function defined in the proof of Proposition \[prop:mod\_phi\_full\_dim\] and using the expansion of $\log {\mathbb E}{{\rm e}}^{t{\mathcal{L}}_{n,n}^{\text{G}}}$ obtained there, we conclude, after simplification of the resulting terms, that $$\begin{aligned} \log {\mathbb E}e^{t{\mathcal{L}}_{n,n}} = \log\psi(t)+t\widetilde{m}_n + {t^2\over 4}\Big(\log{n\over 2}-1\Big) +o(1)\end{aligned}$$ from which the result follows. Case of fixed codimension ------------------------- Consider the case in which the codimension $n-r$ of the simplex stays fixed, while $n\to\infty$. Of course, if $n-r=0$, we recover the full-dimensional case. \[prop:mod\_phi\_full\_dim\_almost\] Consider the Gaussian model and let $m_n$ be the same as in Proposition \[prop:mod\_phi\_full\_dim\]. Let $d\in {\mathbb{N}}$ be fixed and take $r=n-d$, where $n\to\infty$.
Then, ${\mathcal{L}}_{n,r} - m_n$ converges in the mod-Gaussian sense (meaning that $\eta(t) = \frac 12 t^2$) with parameters $w_n = \frac 12 \log \frac n2$, namely $$\lim_{n\to\infty} \frac{{\mathbb E}{{\rm e}}^{t({\mathcal{L}}_{n,r} - m_n)}}{{{\rm e}}^{\frac 14 t^2\log \frac n2}} = \frac {G\left(\frac{d+1}{2}\right) \cdot G\left(\frac{d+2}{2}\right)} {2^{\frac{td}{2}}\, G\left(\frac{d+1}{2} + \frac t2\right) \cdot G\left(\frac{d+2}{2} + \frac t2\right) } .$$ The convergence is uniform as long as $t$ stays in any compact subset of ${\mathbb{C}}\backslash\{-d-1,-d-2,\ldots\}$. First, we observe that Theorem \[theo:vol\_distr\_affine\] implies the distributional representation $$\begin{aligned} \label{eq:DistributionalIdentityCL} {\mathcal{L}}_{n,n}-{1\over 2}\log(n+1) \overset{d}{=} \Big({\mathcal{L}}_{n-r,n-r}-{1\over 2}\log(n-r+1)\Big)+\Big({\mathcal{L}}_{n,r}'-{1\over 2}\log(r+1)\Big),\end{aligned}$$ where ${\mathcal{L}}_{n,r}'$ is a copy of ${\mathcal{L}}_{n,r}$ independent of ${\mathcal{L}}_{n-r,n-r}$. Since $n-r=d$, this implies that $$\begin{aligned} {\mathbb E}e^{t({\mathcal{L}}_{n,r}-m_n)} = \frac{{\mathbb E}{{\rm e}}^{t({\mathcal{L}}_{n,n} - m_n)}}{{\mathbb E}{{\rm e}}^{t {\mathcal{L}}_{d,d}}}\,e^{{t\over 2}\log(d+1)}\,e^{{t\over 2}\log\big({n-d+1\over n+1}\big)}.\end{aligned}$$ Applying Proposition \[prop:mod\_phi\_full\_dim\] to the numerator and  to the denominator, we conclude that $${\mathbb E}{{\rm e}}^{t({\mathcal{L}}_{n,r} - m_n)} \sim \frac {{{\rm e}}^{\frac 14 t^2\log \frac n2} \frac{G\left(\frac{1}{2}\right)}{G\left(\frac{1}{2} + \frac t2\right) G\left(1 + \frac t2\right)}} {(d+1)^{\frac{t}{2}}\, 2^{\frac{td}{2}}\, \frac{G\left(\frac{1}{2}\right)}{G\left(\frac{d+1}{2}\right)} \cdot \frac{G(1)}{G\left(\frac{d+2}{2}\right)} \cdot \frac{G\left(\frac{d+1}{2} + \frac t2\right)}{G\left(\frac{1}{2} + \frac t2\right)} \cdot \frac{G\left(\frac{d+2}{2} + \frac t2\right)}{G\left(1 + \frac t2\right)} }\cdot (d+1)^{t\over 2}\,,$$ which implies the claim. 
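Specializing Proposition \[prop:mod\_phi\_full\_dim\] (the case $d=0$ above) to the even integer $t=2$, the limit function can be evaluated in closed form: since $G(2)=1$ and $G(3/2)=\Gamma(1/2)\,G(1/2)$, it equals $1/\sqrt{\pi}$. Combined with the classical identity ${\mathbb E}\,{{\rm e}}^{2{\mathcal{L}}_{n,n}}={\mathbb E}[(n!{\mathcal{V}}_{n,n})^2]=(n+1)!$, this gives a convergence that can be checked numerically; a minimal sketch of our own, using only log-Gamma evaluations:

```python
import math

def log_ratio(n):
    # log of  E[exp(2 L_{n,n})] / exp(2 m_n + (t^2/4) log(n/2))  at t = 2,
    # where E[exp(2 L_{n,n})] = (n+1)!  in the Gaussian model
    m_n = 0.5 * (n * math.log(n) - n + 0.5 * math.log(n)
                 + math.log(2 ** 1.5 * math.pi))
    return math.lgamma(n + 2) - 2 * m_n - math.log(n / 2)

# the limit function at t = 2 is G(1/2) / (G(3/2) G(2)) = 1 / sqrt(pi)
target = -0.5 * math.log(math.pi)
vals = [log_ratio(n) for n in (10**3, 10**4, 10**5)]
```

The computed values approach $\log(1/\sqrt{\pi})\approx -0.5724$ at rate $O(1/n)$, in agreement with the proposition.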
\[propositionbeta\] Consider the Beta model with parameter $\nu>0$ or the spherical model (in which case $\nu=0$) and let $\tilde m_n$ be the same as in Proposition \[prop:mod\_phi\_full\_dimBeta\]. Let $d\in {\mathbb{N}}$ be fixed and take $r=n-d$, where $n\to\infty$. Then, ${\mathcal{L}}_{n,r} - \tilde{m}_n - \frac{d-1}{2} \log \frac n2$ converges in the mod-Gaussian sense (meaning that $\eta(t) = \frac 12 t^2$) with parameters $w_n = \frac 12 \log \frac n2-\frac 12$, namely $$\lim_{n\to\infty} \frac{{\mathbb E}{{\rm e}}^{t({\mathcal{L}}_{n,r} - \widetilde{m}_n - \frac{d-1}{2} \log \frac n2)}}{{{\rm e}}^{\frac 14 t^2(\log \frac n2-1)}} = \frac {G\left(\frac{d+1}{2}\right) \cdot G\left(\frac{d+2}{2}\right)} {2^{\frac{td}{2}}\, G\left(\frac{d+1}{2} + \frac t2\right) \cdot G\left(\frac{d+2}{2} + \frac t2\right) }.$$ The convergence is uniform as long as $t$ stays in any compact subset of ${\mathbb{C}}\backslash\{-d-1,-d-2,\ldots\}$. The computations are similar to those in the proof of Proposition \[prop:mod\_phi\_full\_dimBeta\], but slightly more involved. Again we let ${\mathcal{L}}_{n,r}^{\text{G}}$ be the Gaussian analogue of ${\mathcal{L}}_{n,r}$.
By Theorem \[theo:vol\_Simplices\_Moments\](a),(b), the moment generating function of ${\mathcal{L}}_{n,r}$ is given by $$\begin{gathered} \label{eq:L_n_n_gauss_vs_beta} \log {\mathbb E}{{\rm e}}^{t{\mathcal{L}}_{n,r}} = \log {\mathbb E}{{\rm e}}^{t{\mathcal{L}}_{n,r}^{\text{G}}} - \frac t2 \log (r+1) - \frac {tr}{2} \log 2 \\ + (r+1)\log\left({\Gamma\Big({n+\nu\over 2}\Big)\over\Gamma\Big({n\over 2}+{\nu + t\over 2}\Big)}\right) + \log \left({\Gamma\Big({r(n+\nu-2)+n+\nu\over 2}+{(r+1)t\over 2}\Big)\over\Gamma\Big({r(n+\nu-2)+n+\nu\over 2}+{rt\over 2}\Big)}\right).\end{gathered}$$ Using the Stirling series for the logarithm of the Gamma function, we obtain $$(r+1)\log\left({\Gamma\Big({n+\nu\over 2}\Big)\over\Gamma\Big({n\over 2}+{\nu + t\over 2}\Big)}\right) = {(n+1)t\over 2}\log{2\over n}-{t\over 4}(t-2+2\nu) + \frac{d-1}{2} t \log \frac n2 + o(1)$$ and $$\log \left({\Gamma\Big({r(n+\nu-2)+n+\nu\over 2}+{(r+1)t\over 2}\Big)\over\Gamma\Big({r(n+\nu-2)+n+\nu\over 2}+{rt\over 2}\Big)}\right) = t\log n-{t\over 2}\log 2+o(1).$$ Using the behavior of ${\mathcal{L}}_{n,r}^{\text{G}}$ stated in Proposition \[prop:mod\_phi\_full\_dim\_almost\], we obtain, after some transformations, $$\begin{aligned} &\log {\mathbb E}{{\rm e}}^{t{\mathcal{L}}_{n,r}}\\ &\qquad = \log\left( \frac {G\left(\frac{d+1}{2}\right) \cdot G\left(\frac{d+2}{2}\right)} {2^{\frac{td}{2}}\, G\left(\frac{d+1}{2} + \frac t2\right) \cdot G\left(\frac{d+2}{2} + \frac t2\right)}\right) + t\widetilde{m}_n + {t^2\over 4}\Big(\log{n\over 2}-1\Big) + \frac{d-1}{2} t \log \frac n2+ o(1),\end{aligned}$$ which yields the claim. Case of diverging codimension ----------------------------- In this section we consider the case when the codimension of the simplex goes to $+\infty$. \[prop:mod\_phi\_div\_codimension\] Consider the Gaussian model and let $m_n$ be the same as in Proposition \[prop:mod\_phi\_full\_dim\]. 
If $r=r(n)$ is such that $n-r \to \infty$ as $n\to\infty$, then $$\lim_{n\to\infty} \frac{{\mathbb E}{{\rm e}}^{t\big({\mathcal{L}}_{n,r} - (m_n-m_{n-r})-{1\over 2}\log\big({(r+1)(n-r)\over n}\big)\big)}}{{{\rm e}}^{\frac 14 t^2 \log \frac{n}{n-r}}} = 1.$$ If, additionally, $n-r=o(n)$, then we have mod-Gaussian convergence (meaning that $\eta(t)={1\over 2}t^2$) with parameters $w_n = \frac 1 2 \log \frac {n}{n-r} \to\infty$ and limiting function identically equal to $1$. From Proposition \[prop:mod\_phi\_full\_dim\] we know that $$\lim_{n\to\infty} \frac{{\mathbb E}{{\rm e}}^{t({\mathcal{L}}_{n,n} - m_n)}}{{{\rm e}}^{\frac 14 t^2\log \frac n2}} = \lim_{n\to\infty} \frac{{\mathbb E}{{\rm e}}^{t({\mathcal{L}}_{n-r,n-r} - m_{n-r})}}{{{\rm e}}^{\frac 14 t^2\log \frac {n-r}2}} = \frac{G\left(\frac{1}{2}\right)}{G\left(\frac{1}{2} + \frac t2\right) G\left(1 + \frac t2\right)}.$$ Using the distributional identity it follows that $$\begin{aligned} {\mathbb E}{{\rm e}}^{t\big({\mathcal{L}}_{n,r} - (m_n-m_{n-r})-{1\over 2}\log\big({(r+1)(n-r+1)\over n+1}\big)\big)} = \frac{{\mathbb E}{{\rm e}}^{t({\mathcal{L}}_{n,n} - m_n)}}{{\mathbb E}{{\rm e}}^{t({\mathcal{L}}_{n-r,n-r} - m_{n-r})}} \sim \frac {{{\rm e}}^{\frac 14 t^2\log \frac n2}}{{{\rm e}}^{\frac 14 t^2\log \frac {n-r}2}} \sim {{\rm e}}^{\frac 14 t^2\log \frac n{n-r}},\end{aligned}$$ which implies the claim after observing that $\log (n+1) = \log n + o(1)$ and $\log (n-r+1) = \log (n-r)+o(1)$. Observe also that if $n-r=o(n)$, then $w_n\to\infty$, as $n\to\infty$, which is otherwise not the case. \[propositionbeta2\] Consider the Beta model with parameter $\nu>0$ or the spherical model (in which case $\nu=0$) and let $m_n$ be the same as in Proposition \[prop:mod\_phi\_full\_dim\]. 
If $r=r(n)$ is such that $n-r=o(n)$ as $n\to\infty$, then, $$\lim_{n\to\infty} \frac{{\mathbb E}{{\rm e}}^{t\big({\mathcal{L}}_{n,r} - (m_n-m_{n-r}-{r+1\over 4n}(t-2+2\nu))- \frac 12 \log \frac{(n-r)(1+r)}{n^{1+r}}\big)}}{{{\rm e}}^{\frac 14 t^2\log{n\over n-r}}} = 1.$$ That is, we have mod-Gaussian convergence (meaning that $\eta(t)={1\over 2}t^2$) with parameters $w_n={1\over 2}\log{n\over n-r}$ and limiting function identically equal to $1$. Denote by ${\mathcal{L}}_{n,r}^{\text{G}}$ the Gaussian analogue of ${\mathcal{L}}_{n,r}$. Observe that relation  still holds. Regarding the first term in this relation, we know from Proposition \[prop:mod\_phi\_div\_codimension\] that $$\log {\mathbb E}{{\rm e}}^{t{\mathcal{L}}_{n,r}^{\text{G}}} = t(m_n-m_{n-r}) + {t\over 2}\log\left({(r+1)(n-r)\over n}\right) +\frac 14 t^2 \log \frac{n}{n-r} + o(1).$$ Again, a second-order Stirling expansion yields $$(r+1)\log\left({\Gamma\Big({n+\nu\over 2}\Big)\over\Gamma\Big({n\over 2}+{\nu + t\over 2}\Big)}\right) =(r+1) \frac t2 \log \frac 2n -{r+1\over n}{t\over 4}(t-2+2\nu)+o(r/n^2)$$ and $$\begin{aligned} \log \left({\Gamma\Big({r(n+\nu-2)+n+\nu\over 2}+{(r+1)t\over 2}\Big)\over\Gamma\Big({r(n+\nu-2)+n+\nu\over 2}+{rt\over 2}\Big)}\right) &= \frac t2 \log \left({r(n+\nu-2)+n+\nu\over 2}+{rt\over 2}\right)+o(1)\\ &= \frac t2 \log \left(\frac {(r+1)n}{2}\right)+o(1).\end{aligned}$$ Taking everything together, we obtain $$\begin{aligned} &\log {\mathbb E}{{\rm e}}^{t{\mathcal{L}}_{n,r}}\\ &\qquad= t\Big(m_n-m_{n-r}-{r+1\over 4n}(t-2+2\nu)\Big) + \frac t2 \log \frac{(n-r)(1+r)}{n^{1+r}} + \frac 14 t^2 \log \left(\frac n {n-r}\right) + o(1).\end{aligned}$$ This yields the claim, since $w_n={1\over 2}\log{n\over n-r}\to\infty$, as $n\to\infty$ by the assumption that $n-r=o(n)$. Large deviations ================ The purpose of this section is to derive large deviation principles, recall the definition at the beginning of Section \[SectionLDP\]. 
Again, we restrict to the Gaussian, the Beta and the spherical model, which admit finite moments of all orders. The Gaussian model ------------------ We start with the Gaussian model and recall the notation ${\mathcal{L}}_{n,r} := \log(r! {\mathcal{V}}_{n,r})$. Using the Gärtner–Ellis theorem we derive large deviation principles from the following proposition. \[prop:gaertner\_ellis\_cond\] - Let $r=o(n)$, as $n\to\infty$. Then, we have $$\lim_{n\to\infty} \frac 1 {rn} \log {\mathbb E}{{\rm e}}^{tn ({\mathcal{L}}_{n,r}- \frac r2 \log n - \frac{1}{2} \log (r+1))} = \frac 12 ((t+1)\log (t+1) - t).$$ - If $r\sim \alpha n$, $\alpha \in (0,1)$, we have $$\begin{aligned} \lim_{n\to\infty} \frac 1 {\alpha n^2} \log {\mathbb E}{{\rm e}}^{tn ({\mathcal{L}}_{n,r}- \frac{\alpha n}{2} (\log n + \log(1-\alpha)))} = \frac{2+ 2t - \alpha}{4} \log\left(\frac{1 + t - \alpha}{1 - \alpha}\right) - \frac{t}{2}.\end{aligned}$$ - Let $d\in {\mathbb{N}}$ and assume that $d = n - r$, as $n \rightarrow \infty$, and $m_n = \frac 12 (n \log n - n + \frac 12 \log n + \log (2^{3/2}\pi))$ as in Proposition \[prop:mod\_phi\_full\_dim\_almost\]. Then, we have $$\begin{aligned} \lim_{n\to\infty} \frac 1 {\frac 12 \log \frac{n}{2}} \log {\mathbb E}{{\rm e}}^{t ({\mathcal{L}}_{n,r}- m_n)} = \frac 12 t^2.\end{aligned}$$ - Let $r=r(n)$ be such that $n-r = o(n)$, as $n\rightarrow \infty$. Then, we have $$\begin{aligned} \lim_{n\to\infty} \frac 1 {\frac 12 \log \frac{n}{n-r}} \log {\mathbb E}{{\rm e}}^{t \big({\mathcal{L}}_{n,r}- (m_n - m_{n-r})-{1\over 2}\log\big({(r+1)(n-r+1)\over n+1}\big)\big)} = \frac 12 t^2.\end{aligned}$$ Part (a) is a consequence of Remark \[rem:ModPhiGaussr=o(n)\]. The proofs of parts (c) and (d) follow directly from the proofs of Propositions \[prop:mod\_phi\_full\_dim\_almost\] and \[prop:mod\_phi\_div\_codimension\] in the previous section, respectively. We turn now to the case that $r\sim\alpha n$.
Due to the asymptotic formula we obtain for all $\alpha \in (0,1)$, $t\ge 0$ and $j\in \mathbb{N}$ that $$\begin{aligned} \log\left( {\Gamma\big({(1+t - \alpha)n+j\over 2}\big)\over\Gamma\big({(1-\alpha)n+j\over 2}\big)} \right) &\sim \log\left(\frac{\exp(-\frac{(1+t-\alpha)n}{2}) \left(\frac{(1+t-\alpha)n}{2}\right)^{\frac{(1+t-\alpha)n}{2} + \frac{j-1}{2}}}{\exp(-\frac{(1-\alpha)n}{2}) \left(\frac{(1-\alpha)n}{2}\right)^{\frac{(1-\alpha)n}{2} + \frac{j-1}{2}}}\right)\\ &=\log\left( \exp\left(-\frac{t n}{2}\right) \left(\frac n2 \right)^{\frac{tn}{2}} \frac{\left(1+t - \alpha\right)^{\frac{(1+t-\alpha)n}{2}}}{\left(1-\alpha\right)^{\frac{(1-\alpha)n}{2}}} \left(\frac{1+t-\alpha}{1-\alpha}\right)^{\frac{j-1}{2}} \right)\\ &= -\frac{t n}{2} + \frac{tn}{2} \log\left(\frac n2\right) + \frac{(1+t-\alpha)n}{2} \log\left(1+t-\alpha\right)\\ &\qquad \qquad - \frac{(1-\alpha)n}{2} \log\left(1-\alpha\right) + \frac{j-1}{2} \log\left( \frac{1+t-\alpha}{1-\alpha}\right),\end{aligned}$$ as $n \rightarrow \infty$, and thus $$\begin{aligned} &\frac{1}{\alpha n^2} \log {\mathbb E}{{\rm e}}^{tn {\mathcal{L}}_{n,r}} = \frac{1}{\alpha n^2} \left[\frac{tn}{2}\log (\alpha n + 1) + \frac{t \alpha n^2}{2} \log 2 + \sum_{j=1}^{\alpha n} \log\left( {\Gamma\big({(1+t-\alpha)n+j\over 2}\big)\over\Gamma\big({(1-\alpha)n+j\over 2}\big)} \right)\right]\\ &\sim -\frac{t}{2} + \frac{t}{2} \log\left(n\right) + \frac{1+t-\alpha}{2} \log\left(1+t-\alpha\right) - \frac{1-\alpha}{2} \log\left(1-\alpha\right) + \frac{\alpha}{4} \log\left( \frac{1+t-\alpha}{1-\alpha}\right)\\ &= -\frac{t}{2} + \frac{t}{2} \log\left(n\right) + \frac{2+2t-\alpha}{4} \log\left(1+t-\alpha\right) - \frac{2-\alpha}{4} \log\left(1-\alpha\right)\,.\end{aligned}$$ This directly yields the result in the case $r \sim \alpha n$ in view of the moment formula for Gaussian simplices stated in Section \[sec:SectionModels\]. We turn now to the large deviation principles for the log-volume of Gaussian simplices. 
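Before stating them, note that the exact moment formula for Gaussian simplices invoked above implies, in its simplest instance, the classical second-moment identity ${\mathbb E}[(r!\,{\mathcal{V}}_{n,r})^2]=(r+1)\,n!/(n-r)!$, which can be checked by simulation. The sketch below (the pair $(n,r)$ and the sample size are arbitrary illustrative choices) compares a Monte Carlo estimate, computed via the Gram determinant of the edge vectors of the simplex, with this closed form.

```python
import numpy as np

def mc_second_moment(n, r, samples=200_000, seed=0):
    """Monte Carlo estimate of E[(r! V_{n,r})^2] for a Gaussian simplex:
    (r! V_{n,r})^2 is the Gram determinant of the edge vectors X_{j+1} - X_1."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((samples, r + 1, n))
    D = X[:, 1:, :] - X[:, :1, :]            # edge vectors, shape (samples, r, n)
    G = D @ D.transpose(0, 2, 1)             # Gram matrices, shape (samples, r, r)
    return np.linalg.det(G).mean()

def second_moment_exact(n, r):
    """Closed form E[(r! V_{n,r})^2] = (r + 1) * n! / (n - r)!."""
    val = float(r + 1)
    for j in range(n - r + 1, n + 1):
        val *= j
    return val

n, r = 8, 2
mc, exact = mc_second_moment(n, r), second_moment_exact(n, r)
assert abs(mc / exact - 1.0) < 0.02          # exact value here: 3 * 8 * 7 = 168
```

The same simulation strategy extends to higher moments, where the full product of Gamma-function ratios from the moment formula is needed.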
\[LDPGaus\] - Let $r=o(n)$, as $n\to\infty$. Then, ${1\over r}({\mathcal{L}}_{n,r}- \frac r2 \log n - \frac{1}{2} \log (r+1))$ satisfies a LDP with speed $rn$ and rate function $$I(x) = \frac 12 (e^{2x}-1) - x\,,\qquad x\in{\mathbb{R}}\,.$$ - If $r\sim \alpha n$, $\alpha \in (0,1)$, then, ${1\over \alpha n}({\mathcal{L}}_{n,r}- \frac{\alpha n}{2} (\log n + \log(1-\alpha)))$ satisfies a LDP with speed $\alpha n^2$ and rate function $$I(x) = \sup_{t \in \mathbb{R}} \left\{t x - \frac{2+ 2t - \alpha}{4} \log \left(\frac{1 + t - \alpha}{1 - \alpha}\right) + \frac{t}{2}\right\} \,,\qquad x\in{\mathbb{R}}\,.$$ - Let $d\in {\mathbb{N}}$ and assume that $d = n - r$, as $n \rightarrow \infty$, and $m_n = \frac 12 (n \log n - n + \frac 12 \log n + \log (2^{3/2}\pi))$. Then, $\frac{1}{\frac{1}{2} \log \frac{n}{2}}({\mathcal{L}}_{n,r}- m_n)$ satisfies a LDP with speed $\frac{1}{2} \log \frac{n}{2}$ and rate function $$I(x) = \frac 12 x^2\,,\qquad x\in{\mathbb{R}}\,.$$ - Let $r=r(n)$ be such that $n-r \rightarrow \infty$, as $n \rightarrow \infty$. If $n-r = o(n)$, as $n\rightarrow \infty$, then, $\frac{1}{\frac{1}{2} \log \frac{n}{n-r}}\big({\mathcal{L}}_{n,r}-(m_n - m_{n-r})-{1\over 2}\log\big({(r+1)(n-r+1)\over n+1}\big)\big)$ satisfies a LDP with speed $\frac{1}{2} \log \frac{n}{n-r}$ and rate function $$I(x) = \frac 12 x^2\,,\qquad x\in{\mathbb{R}}\,.$$ Let $r=o(n)$, as $n\to\infty$. Then, by the Gärtner–Ellis theorem (cf. Section 2.3 in [@Dembo]) and Proposition \[prop:gaertner\_ellis\_cond\], the random variables ${1\over r}({\mathcal{L}}_{n,r}-\frac r2 \log n - \frac{1}{2} \log (r+1))$ satisfy a LDP with speed $rn$ and rate function $$I(x) = \sup_{t\in{\mathbb{R}}}\big[tx-\frac 12 ((t+1)\log (t+1) - t)\big],$$ i.e., the Legendre-Fenchel transform of the function $\frac 12 ((t+1)\log (t+1) - t)$. For each $x\in{\mathbb{R}}$ the supremum is attained at $t=e^{2x}-1$, which yields the result of (a). The same argument implies the LDP for the other regimes of $r$ as well. 
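The Legendre–Fenchel transform computed in the proof of part (a) can also be checked numerically. The following sketch (the grid bounds are ad-hoc choices) compares a brute-force evaluation of the supremum $\sup_t\,[tx-\frac 12((t+1)\log(t+1)-t)]$ with the closed-form rate function $I(x)=\frac 12(e^{2x}-1)-x$, whose maximiser is $t=e^{2x}-1$.

```python
import math

def Lambda(t):
    """Limiting cumulant generating function from part (a):
    Lambda(t) = ((t+1) log(t+1) - t) / 2, defined for t > -1."""
    return 0.5 * ((t + 1) * math.log(t + 1) - t)

def I_numeric(x, grid=200_000):
    """Brute-force Legendre-Fenchel transform sup_t [t*x - Lambda(t)] over a
    grid in (-1, t_max] chosen large enough to contain t* = e^{2x} - 1."""
    t_max = math.exp(2 * abs(x)) + 10.0
    ts = (-1 + 1e-9 + (t_max + 1) * k / grid for k in range(grid + 1))
    return max(t * x - Lambda(t) for t in ts)

def I_closed(x):
    """Rate function from the theorem: I(x) = (e^{2x} - 1)/2 - x."""
    return 0.5 * (math.exp(2 * x) - 1) - x

for x in (-1.0, -0.3, 0.0, 0.5, 1.2):
    assert abs(I_numeric(x) - I_closed(x)) < 1e-4
```

Since the maximiser is interior and the objective is strictly concave in $t$, the grid approximation converges quadratically in the step size, so the agreement is far better than the tolerance used here.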
The Beta and the spherical model -------------------------------- Now, we turn to the Beta model with parameter $\nu > 0$ and the spherical model, i.e., $\nu = 0$, and recall that ${\mathcal{L}}_{n,r}:=\log(r!{\mathcal{V}}_{n,r})$, where ${\mathcal{V}}_{n,r}$ is the volume of the $r$-dimensional simplex with vertices $X_1,\ldots,X_{r+1}$ chosen according to the Beta or the spherical distribution, respectively. As in the Gaussian case, we start with the following proposition that will imply the large deviation principles. \[prop:gaertner\_ellis\_cond2\] - Let $r\in{\mathbb{N}}$ be fixed. Then, we have $$\lim_{n\to\infty} \frac 1 {n} \log {\mathbb E}{{\rm e}}^{tn {\mathcal{L}}_{n,r}} = \begin{cases} \eta(t) &: t\geq -1\\ +\infty &: \text{otherwise}, \end{cases}$$ where $\eta$ is the function from Proposition \[prop:BetaMomentGeneratingFunction\]. - If $r \sim \alpha n$, $\alpha \in (0,1)$, we have $$\begin{aligned} &\lim_{n\to\infty} \frac 1 {\alpha n^2} \log {\mathbb E}{{\rm e}}^{tn{\mathcal{L}}_{n,r}} = \begin{cases} \eta(t) &: t\geq -1\\ +\infty &: \text{otherwise}, \end{cases} \end{aligned}$$ where $\eta(t)$ is the function given by $$\eta(t):=\frac{2+2t-\alpha}{4} \log\left(1+t-\alpha\right) - \frac{2-\alpha}{4} \log\left(1-\alpha\right)- \frac{1+t}{2} \log(1+t)\,.$$ - Let $d\in {\mathbb{N}}$ and assume that $d = n - r$, as $n \rightarrow \infty$, and let $\widetilde{m}_n = {1\over 2}({1\over 2}\log n-n+1-\nu+\log(2^{3/2}\pi))$ as in Proposition \[prop:mod\_phi\_full\_dimBeta\]. Then, we have $$\begin{aligned} \lim_{n\to\infty} \frac 1 {\frac 12 (\log \frac{n}{2} - 1)} \log {\mathbb E}{{\rm e}}^{t ({\mathcal{L}}_{n,r}- \widetilde{m}_n - \frac{d-1}{2} \log \frac{n}{2})} = \frac{1}{2} t^2. \end{aligned}$$ - Let $r=r(n)$ be such that $n-r = o(n)$, and let $m_n = \frac 12 (n \log n - n + \frac 12 \log n + \log (2^{3/2}\pi))$ be as in Proposition \[prop:mod\_phi\_full\_dim\_almost\]. 
Then $$\begin{aligned} \lim_{n\to\infty} \frac 1 {\frac 12 \log \frac{n}{n-r}} \log {\mathbb E}{{\rm e}}^{t\big({\mathcal{L}}_{n,r} - (m_n-m_{n-r}-{r+1\over 4n}(t-2+2\nu))- \frac 12 \log \frac{(n-r)(1+r)}{n^{1+r}}\big)} = {1\over 2}t^2. \end{aligned}$$ For $t\geq -1$ the assertion in (a) follows from Proposition \[prop:BetaMomentGeneratingFunction\]. Recall from Theorem \[theo:vol\_distr\_affine\] that the distribution of ${\mathcal{V}}_{n,r}$ involves Beta random variables $Z:=\beta_{{\nu+r-j\over 2},{n-r+j\over 2}}$ with $j\leq r$. Writing $${\mathbb E}e^{{tn\over 2}\log Z} = {\mathbb E}Z^{tn\over 2} = c\,\int_0^{1} z^{{n-r+j\over 2}+{tn\over 2}-1}(1-z)^{{\nu+r-j\over 2}-1}\,{{\rm d}}z$$ we see that the exponent of $z$ is less than $-1$ for sufficiently large $n$ if $t<-1$. This implies that ${\mathbb E}e^{{tn\over 2}\log Z}\to+\infty$ and completes the proof of (a). Now, let us turn to the case $r\sim\alpha n$, $\alpha \in (0,1)$ in (b). As in the Gaussian setting, using the asymptotic formula we obtain, for all $\nu > 0$, $$\begin{aligned} (\alpha n+1) \log\left( {\Gamma\big({n+\nu\over 2}\big)\over\Gamma\big({(1+t)n+\nu\over 2}\big)} \right) &\sim (\alpha n + 1) \left(\frac{t n}{2} - \frac{tn}{2} \log\left(\frac n2\right) - \frac{(1+t)n + \nu -1}{2} \log(1+t) \right)\\ &\sim \frac{t \alpha n^2}{2} - \frac{t\alpha n^2}{2} \log\left(\frac n2\right) - \frac{(1+t)\alpha n^2 + \alpha n (\nu -1)}{2} \log(1+t), \end{aligned}$$ as $n\rightarrow \infty$, and for all $t\ge 0$, $$\begin{aligned} &\log\left( {\Gamma\big({\alpha n(n+\nu-2) + n + tn(\alpha n +1) +\nu\over 2}\big)\over\Gamma\big({\alpha n(n+\nu-2) + n + tn\alpha n +\nu\over 2}\big)} \right)\\ &\sim -\frac{t n}{2} + \frac{tn}{2} \log\left(\frac n2\right) + \frac{\alpha n(n+\nu-2) + n + tn(\alpha n +1) +\nu}{2} \log\left(\alpha (n+\nu-2) + 1 + t(\alpha n +1)\right)\\ &\qquad \qquad - \frac{\alpha n(n+\nu-2) + n + tn\alpha n +\nu}{2} \log\left(\alpha (n+\nu-2) + 1+ t\alpha 
n\right). \end{aligned}$$ Thus, by using the calculations made in the Gaussian case above, we conclude that $$\begin{aligned} &\frac{1}{\alpha n^2} \log {\mathbb E}{{\rm e}}^{tn{\mathcal{L}}_{n,r}}\\ &= \frac{1}{\alpha n^2}\left[ (\alpha n+1) \log\left( {\Gamma\big({n+\nu\over 2}\big)\over\Gamma\big({(1+t)n+\nu\over 2}\big)} \right)\right.\\ &\left.\qquad + \log\left( {\Gamma\big({\alpha n(n+\nu-2) + n + tn(\alpha n +1) +\nu\over 2}\big)\over\Gamma\big({\alpha n(n+\nu-2) + n + tn\alpha n +\nu\over 2}\big)} \right) + \sum_{j=1}^{\alpha n} \log\left( {\Gamma\big({(1+t-\alpha)n+j\over 2}\big)\over\Gamma\big({(1-\alpha)n+j\over 2}\big)} \right)\right]\\ &\sim \frac{t}{2} - \frac{t}{2} \log\left(\frac n2\right) - \frac{1+t}{2} \log(1+t) + \frac{1+t}{2} \log\left(\alpha (n+\nu-2) + 1 + t(\alpha n +1)\right)\\ &\qquad - \frac{1+t}{2} \log\left(\alpha (n+\nu-2) + 1 + t\alpha n\right) -\frac{t}{2} + \frac{t}{2} \log\left(\frac n2\right) + \frac{2+2t-\alpha}{4} \log\left(1+t-\alpha\right)\\ &\qquad - \frac{2-\alpha}{4} \log\left(1-\alpha\right)\\ &\sim - \frac{1+t}{2} \log(1+t) + \frac{2+2t-\alpha}{4} \log\left(1+t-\alpha\right) - \frac{2-\alpha}{4} \log\left(1-\alpha\right), \end{aligned}$$ as $n\rightarrow \infty$. This directly yields the result in the case where $r \sim \alpha n$, again taking into account the moment representation in the Beta model stated in Section \[sec:SectionModels\]. Since there is no dependence on the parameter $\nu$ in the result concerning the Beta model, the one for the spherical model follows by considering the limiting case $\nu \downarrow 0$, as seen several times before.\ The proofs of (c) and (d) follow directly from the proofs of Propositions \[propositionbeta\] and \[propositionbeta2\] in the previous section, respectively. Now, we are able to state the large deviation principles for the Beta and the spherical model. 
Their proofs follow the same lines as those in the Gaussian case presented above, using the Gärtner–Ellis theorem. For this reason we omit the details. \[LDPBeta\] - Let $r\in{\mathbb{N}}$ be fixed. Then, ${\mathcal{L}}_{n,r}$ satisfies a LDP with speed $n$ and rate function $$I(x) = \sup_{t\in{\mathbb{R}}}\big\{tx-\eta(t)\big\},$$ where $\eta$ is the function from Proposition \[prop:BetaMomentGeneratingFunction\]. - If $r\sim \alpha n$, $\alpha \in (0,1)$, then, ${1\over \alpha n}{\mathcal{L}}_{n,r}$ satisfies a LDP with speed $\alpha n^2$ and rate function $$I(x) = \sup_{t \in{\mathbb{R}}} \left\{t x - \eta(t) \right\} ,$$ where $\eta$ is the function from Proposition \[prop:gaertner\_ellis\_cond2\] (b). - Let $d\in {\mathbb{N}}$ and assume that $d = n - r$, as $n \rightarrow \infty$, and $\widetilde{m}_n = {1\over 2}({1\over 2}\log n-n+1-\nu+\log(2^{3/2}\pi))$. Then, $\frac{1}{\frac{1}{2}(\log \frac{n}{2} - 1)}({\mathcal{L}}_{n,r}- \widetilde{m}_n - \frac{d-1}{2} \log \frac{n}{2})$ satisfies a LDP with speed $\frac{1}{2}(\log \frac{n}{2} - 1)$ and rate function $$I(x) = \frac{1}{2} x^2 \,,\qquad x\in{\mathbb{R}}\,.$$ - Let $r=r(n)$ be such that $n-r = o(n)$, and let $m_n = \frac 12 (n \log n - n + \frac 12 \log n + \log (2^{3/2}\pi))$ be defined as in Proposition \[prop:mod\_phi\_full\_dim\_almost\]. Then, ${1\over {1\over 2}\log{n\over n-r}}\big({\mathcal{L}}_{n,r} - (m_n-m_{n-r}-{r+1\over 4n}(t-2+2\nu))- \frac 12 \log \frac{(n-r)(1+r)}{n^{1+r}}\big)$ satisfies a LDP with speed ${1\over 2}\log{n\over n-r}$ and rate function $$I(x) = {1\over 2}x^2 \,,\qquad x\in{\mathbb{R}}\,.$$ One can combine Theorem \[LDPBeta\] with the contraction principle from large deviation theory to obtain a LDP for ${\mathcal{V}}_{n,r}$, that is, for the volume of the random simplex itself in the cases where $r=o(n)$ and $r\sim \alpha n$ for some $\alpha\in(0,1)$. 
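To make the last remark explicit in the simplest regime (a sketch of the argument; the other regimes are analogous after the corresponding rescaling), let $r\in{\mathbb{N}}$ be fixed, so that ${\mathcal{L}}_{n,r}$ satisfies a LDP with speed $n$ and rate function $I$ by part (a). Since $x\mapsto {{\rm e}}^x$ is continuous, the contraction principle applied to $${\mathcal{V}}_{n,r} = {1\over r!}\,{{\rm e}}^{{\mathcal{L}}_{n,r}}$$ shows that ${\mathcal{V}}_{n,r}$ satisfies a LDP with the same speed $n$ and rate function $J(y)=I(\log(r!\,y))$ for $y>0$, and $J(y)=+\infty$ otherwise.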
Acknowledgement {#acknowledgement .unnumbered} --------------- JG has been supported by the German Research Foundation (DFG) via Research Training Group RTG 2131 *High dimensional Phenomena in Probability – Fluctuations and Discontinuity*. ZK and CT were supported by the DFG Scientific Network *Cumulants, Concentration and Superconcentration*. [99]{} M. Abramowitz and I.A. Stegun (1964). [*Handbook of Mathematical Functions*]{}. Dover Publications. W.J. Anderson (1986). On certain random simplices in $\mathbb{R}^n$. J. Multivariate Anal. **19**, 265–272. I. Bárány (2007). Random polytopes, convex bodies, and approximation. In: Weil, W. (Ed.), *Stochastic Geometry*, Lecture Notes in Mathematics **1892**, Springer. E.W. Barnes (1900). The theory of the G-function. Quart. J. Pure Appl. Math. **31**, 264–314. P.J. Bickel and K.A. Doksum (2001). [*Mathematical Statistics: Basic Ideas and Selected Topics. Vol. I*]{}. 2nd edition, Prentice Hall. D.P.T. Chu (1993). Random $r$-content of an $r$-simplex from beta-type-$2$ random points. Canad. J. Statist. **21**, 285–293. M. Dal Borgo, E. Hovhannisyan and A. Rouault (2017). Mod-Gaussian convergence for random determinants and random characteristic polynomials. arXiv:1707.00449. F. Delbaen, E. Kowalski and A. Nikeghbali (2015). Mod-[$\varphi$]{} convergence. Int. Math. Res. Not. IMRN **11**, 3445–3485. A. Dembo and O. Zeitouni (1993). [*Large Deviations. Techniques and Applications*]{}. Springer. H. Döring and P. Eichelsbacher (2013). Moderate deviations for the determinant of Wigner matrices. In: *Limit Theorems in Probability, Statistics and Number Theory*, Springer Proc. Math. Stat. **42**, Springer. H. Döring and P. Eichelsbacher (2013). Moderate deviations via cumulants. J. Theor. Probab. **26**, 360–385. V. Féray, P. Meliot and A. Nikeghbali (2017). *Mod-phi Convergence – Normality Zones and Precise Deviations*. SpringerBriefs in Probability and Mathematical Statistics. J. Grote and C. Thäle (2017+). 
Concentration and moderate deviations for Poisson polytopes and polyhedra. To appear in Bernoulli. J. Grote and C. Thäle (2016). Gaussian polytopes: a cumulant-based approach. arXiv:1602.06148. D. Hug (2013). Random Polytopes. In: Spodarev, E. (Ed.), *Stochastic Geometry, Spatial Statistics and Random Fields. Asymptotic Methods*, Lecture Notes in Mathematics **2068**, Springer. J. Jacod, E. Kowalski and A. Nikeghbali (2011). Mod-[G]{}aussian convergence: new limit theorems in probability and number theory. Forum Math. **23**(4), 3549–3587. O. Kallenberg (2002). *Foundations of [M]{}odern [P]{}robability*. Springer-Verlag, New York. E. Kowalski, J. Najnudel and A. Nikeghbali (2015). A characterization of limiting functions arising in [M]{}od-[$^*$]{} convergence. Electron. Commun. Probab. **20**, No. 79. E. Kowalski and A. Nikeghbali (2010). Mod-[P]{}oisson convergence in probability and number theory. Int. Math. Res. Not. IMRN **18**, 3549–3587. R. Leadbetter, G. Lindgren and H. Rootz[é]{}n (1982). *Extremes and related properties of random sequences and processes*. Springer, Berlin Heidelberg New York. H. Maehara (1980). On random simplices in product distributions. J. Appl. Probab. **17**, 553–558. A.M. Mathai (1982). On a conjecture in geometric probability regarding asymptotic normality of a random simplex. Ann. Probab. **10**, 247–251. A.M. Mathai (1999). Random $p$-content of a $p$-parallelotope in Euclidean $n$-space. Adv. in Appl. Probab. **31**, 343–354. A.M. Mathai (2001). Distributions of random volumes without using integral geometry techniques. In: *Probability and Statistical Models with Applications*, edited by Ch.A. Charalambides, M.V. Koutras, N. Balakrishnan. Chapman and Hall/CRC. R.E. Miles (1971). Isotropic random simplices. Adv. Appl. Probab. **3**, 353–382. Z. Kabluchko, D. Temesvari and C. Thäle (2017). Expected intrinsic volumes and facet numbers of random beta-polytopes. Preprint. M. Reitzner (2010). Random Polytopes. 
In: Kendall, W.S.; Molchanov, I. (Eds.), *New Perspectives in Stochastic Geometry*, Oxford University Press. A. Rényi and R. Sulanke (1963). Über die konvexe Hülle von $n$ zufällig gewählten Punkten. Z. Wahrsch. Verw. Geb. **2**, 75–84. H. Ruben (1977). The volume of a random simplex in an $n$-ball is asymptotically normal. J. Appl. Probab. **14**, 647–653. H. Ruben (1979). The volume of an isotropic random parallelotope. J. Appl. Probab. **16**, 84–94. H. Ruben and R.E. Miles (1980). A canonical decomposition of the probability measure of sets of isotropic random points in $\mathbb R^n$. J. Multivar. Anal. **10**, 1–18. G. Samorodnitsky and M. Taqqu (1994). [*Stable non-Gaussian Random Processes: Stochastic Models with Infinite Variance*]{}. Chapman and Hall/CRC. L. Saulis and V. A. Statulevičius (1991). *Limit Theorems for Large Deviations*. Kluwer Academic Publishers. R. Schneider and W. Weil (2008). [*Stochastic and Integral Geometry*]{}. Springer.
--- author: - 'Michael Duerr,' - 'Kai Schmidt-Hoberg,' - Sebastian Wild bibliography: - 'SIDM\_StableMediator.bib' title: 'Self-interacting dark matter with a stable vector mediator' --- Introduction {#sec:Introduction} ============ Decades of experimental efforts aiming at a discovery of dark matter (DM) via its non-gravitational interactions with particles of the Standard Model (SM) have led to stringent constraints on such couplings, in particular for the popular class of weakly interacting massive particles (WIMPs) [@Aprile:2017iyp; @Ackermann:2015zua; @Sirunyan:2017hci]. In contrast, DM self-interactions are largely unconstrained, potentially leading to significant changes in the astrophysical behaviour of DM [@Spergel:1999mh]. In fact large DM self-interactions may even be desirable to address a number of discrepancies found in comparing $N$-body simulations of collisionless cold DM with astrophysical observations at small scales (for a recent review see [@Tulin:2017ara]). In light of this, scenarios in which the DM dominantly couples to particles belonging to a *dark sector* have gained significant attention over the last years (see e.g. [@Feng:2008mu; @Foot:2014uba; @Berlin:2016gtr; @Evans:2017kti]). Interestingly, even a fully decoupled dark sector can lead to falsifiable predictions, e.g. to a change in the primordial abundances of elements produced during Big Bang Nucleosynthesis (BBN) [@Scherrer:1987rr; @Hufnagel:2017dgo] or to changes in the Cosmic Microwave Background (CMB) [@Poulin:2016nat; @Bringmann:2018jpr]. While large DM self-interactions at small relative velocities are required to address the small-scale problems, there exist rather strong constraints on the DM self-scattering cross section in high-velocity systems such as galaxy clusters [@Markevitch:2003at; @Randall:2007ph; @Peter:2012jh; @Rocha:2012jg; @Kahlhoefer:2013dca; @Harvey:2015hha; @Kaplinghat:2015aga]. 
A scattering cross section which increases towards smaller velocities is therefore preferred observationally. This behaviour is naturally achieved if a light scalar or vector particle mediates this interaction [@Ackerman:mha; @Feng:2009mn; @Buckley:2009in; @Feng:2009hw; @Loeb:2010gj; @Aarssen:2012fx; @Tulin:2013teo; @Kaplinghat:2015aga]. At the same time the DM relic abundance can naturally be set via thermal freeze-out of DM into these mediators. However, in their simplest forms, these light mediator scenarios are under strong pressure from observations: a vector mediator $Z_\text{D}$ leads to $s$-wave annihilation and if it predominantly decays into SM states such as electrons or photons, the energy injection from late-time annihilations $\psi \bar \psi \rightarrow Z_\text{D} Z_\text{D} \rightarrow \text{SM}$ generically violates the stringent bounds obtained from the CMB [@Bringmann:2016din; @Cirelli:2016rnw]. For a scalar mediator, on the other hand, the annihilation is $p$-wave suppressed such that bounds from the CMB are avoided. Nevertheless, strong bounds from direct detection experiments on the coupling to SM states imply late decays of the scalar, which in turn can spoil the successful predictions of standard BBN [@Kaplinghat:2013yxa; @Kainulainen:2015sva; @BBNfuture]. A number of possibilities to circumvent these bounds have been discussed for both the vector and scalar cases. To avoid constraints for the vector mediator one possibility is to have decays into light hidden sector states such as sterile neutrinos, which do not lead to reionisation. In such a setup where DM is converted to [*dark*]{} radiation, bounds from both BBN [@Hufnagel:2017dgo] as well as the CMB [@Bringmann:2018jpr] can be avoided. Another option would be to have asymmetric DM [@Baldes:2017gzu] or to avoid thermalisation of the visible and hidden sectors, in which case freeze-in production [@Bernal:2015ova] can set the relic abundance and constraints can be circumvented. 
Suppressing the scattering cross section relevant for direct detection makes it possible to construct viable models also for scalar mediators [@Blennow:2016gde; @Kahlhoefer:2017umn]. In this work we study the possibility that the vector mediator $Z_\text{D}$ is stable, in which case the annihilation process $\psi \bar \psi \rightarrow Z_\text{D} Z_\text{D}$ obviously does not lead to energy injection during recombination. The stability can be achieved either by simply postulating that the kinetic mixing of $Z_\text{D}$ with the SM gauge fields is highly suppressed, or in fact by demanding a dark charge conjugation symmetry [@Ma:2017ucp]. However, in this minimal setup $Z_\text{D}$ freezes out while still being relativistic and, being stable, would overclose the Universe. Recently, it has been pointed out [@Ma:2017ucp] that the abundance of a stable vector mediator $Z_\text{D}$ could be sufficiently reduced via annihilations into a lighter state long after the freeze-out of $\psi$. In fact, there is a natural motivation to introduce one more particle in the dark sector: if $Z_\text{D}$ obtains its mass from the breaking of a local $U(1)$ symmetry, the theory contains a *dark Higgs boson* $h_\text{D}$, which (at least at tree-level) has a mass similar to the corresponding gauge boson. For $m_{h_\text{D}} < m_{Z_\text{D}}$, the annihilation $Z_\text{D} Z_\text{D} \rightarrow h_\text{D} h_\text{D}$ can then suppress the late-time $Z_\text{D}$ abundance, and for non-zero mixing between the SM and the dark Higgs boson the latter may decay before dominating the energy density of the Universe. By construction, the CMB constraints arising from $\psi \bar \psi \rightarrow Z_\text{D} Z_\text{D}$ are avoided; furthermore, the coupling structure of the theory does not permit the annihilation of $\psi \bar \psi$ into a pair of (unstable) dark Higgs bosons $h_\text{D}$ at tree-level. 
However, the presence of the annihilation channel $\psi \bar \psi \rightarrow Z_\text{D} h_\text{D}$ with the subsequent decay of $h_\text{D}$ still leads to the injection of SM energy into the CMB, and depending on the values of the different couplings involved, this potentially reintroduces the corresponding constraints. Furthermore, the late-time annihilation of the subdominant DM component $Z_\text{D}$ into a pair of dark Higgs bosons can also leave its imprint on the CMB, which is actually well-known to be highly sensitive to even very small annihilation cross sections for DM particles with masses in the MeV range [@Slatyer:2015jla]. In light of these considerations, we perform a detailed and comprehensive study of the phenomenological viability of this scenario, i.e. a weak-scale DM particle $\psi$ coupled to a stable vector mediator $Z_\text{D}$, which itself acts as a subdominant DM component. After describing the model in section \[sec:Model\], we discuss the relevant annihilation channels of the two DM species and the corresponding calculation of thermal freeze-out in section \[sec:DMAnnihilation\]. In particular, we point out the importance of the conversion processes between the two DM species $\psi$ and $Z_\text{D}$, influencing their cosmological abundances. In section \[sec:constraints\], we first discuss bounds from CMB spectral distortions and BBN on the late-time decay of the dark Higgs boson $h_\text{D}$, before examining the impact of the energy injection during recombination induced by the annihilations of $\psi$ and $Z_\text{D}$. We present our results in section \[sec:Results\], where we pay special attention to the question whether it is possible to have sufficiently strong self-interactions of DM to resolve the small-scale problems mentioned previously, while being consistent with all constraints from the CMB and BBN. Finally, we conclude in section \[sec:conclusions\]. 
Additional material can be found in appendices \[app:FullLagrangian\] and \[app:relicdensity\]. A simple model {#sec:Model} ============== We extend the SM gauge group by a ‘dark’ gauge symmetry $U(1)_\text{D}$, and introduce a vector-like Dirac fermion $\psi$ as well as a complex scalar $\sigma$ charged under this new symmetry. These dark sector particles are singlets under the SM gauge group, and all SM fields are assumed to transform trivially under $U(1)_\text{D}$. The dark gauge symmetry is then spontaneously broken by a vacuum expectation value (vev) of $\sigma$, resulting in a massive dark gauge boson $Z_\text{D}$ as well as a real scalar $h_\text{D}$. More precisely, prior to symmetry breaking of the SM and dark gauge group, the Lagrangian of the model is given by $$\mathcal{L} = \mathcal{L}_{\widetilde{\text{SM}}} + \mathcal{L}_\text{D} \left( \psi, Z_\text{D}^\mu, \sigma \right) - V \left( \sigma, \Phi \right) \,, \label{eq:L}$$ with $\mathcal{L}_{\widetilde{\text{SM}}}$ denoting the SM Lagrangian excluding the Higgs potential. The term containing the fermion and gauge boson interactions is given by $$\mathcal{L}_\text{D} \left( \psi, Z_\text{D}^\mu, \sigma \right) = i \bar{\psi} \gamma_\mu D^\mu \psi - m_\psi \bar{\psi} \psi + \left( D^\mu \sigma \right)^\ast \left( D_\mu \sigma \right) - \frac{1}{4} F^{\mu\nu}_\text{D} F_{\mu\nu}^\text{D} , \label{eq:LD}$$ where $$\begin{aligned} D^\mu \psi &= \left(\partial^\mu - i g_\psi Z^\mu_\text{D} \right) \psi, \\ D^\mu \sigma &= \left(\partial^\mu - i g_\text{D} Z^\mu_\text{D} \right) \sigma, \\ F^{\mu\nu}_\text{D} &= \partial^\mu Z_\text{D}^\nu - \partial^\nu Z_\text{D}^\mu.\end{aligned}$$ The $U(1)_\text{D}$ charges (times the gauge coupling) $g_\psi$ and $g_\text{D}$ of the fields $\psi$ and $\sigma$ will be treated as independent parameters of the model. Notice that the mass term of the vector-like fermion $\psi$ is gauge invariant, and is thus already present prior to symmetry breaking. 
Crucially, we have not included a kinetic mixing term $\propto F_\text{D}^{\mu \nu} B_{\mu \nu}$ in eq. , where $B_{\mu \nu}$ denotes the SM hypercharge field strength tensor. After the breaking of $U(1)_\text{D}$ (see below), the presence of this term would allow the massive gauge boson $Z_\text{D}$ to decay into SM states such as $e^\pm$ pairs or photons; as already mentioned in the introduction and explained in more detail in section \[sec:CMB\], basically all of the parameter space of the model leading to significant self-interactions of DM would then be excluded due to constraints on energy injection from DM annihilations during recombination. As pointed out recently in [@Ma:2017ucp], DM self-interactions might still be viable in such a scenario if the light mediator is stable. From a purely phenomenological point of view, one can thus simply postulate that the dimensionless coupling parameter controlling the kinetic mixing is sufficiently small. For the range of masses of $Z_\text{D}$ considered in this work, a kinetic mixing of the order $\kappa \simeq 10^{-20}$ is necessary to achieve a lifetime equal to the age of the Universe, with stringent bounds from the CMB requiring even smaller values of $\kappa$ [@Poulin:2016anj]. Notice that the choice of the kinetic mixing being exactly zero is actually stable under quantum corrections: there are no fermions in the model which are charged both under $U(1)_\text{D}$ as well as under a SM gauge symmetry, and hence all loop-induced contributions to the mixing of $Z_\text{D}$ with the SM gauge bosons vanish. Alternatively, as pointed out recently in [@Ma:2017ucp], the kinetic mixing term can be forbidden by imposing a *dark charge conjugation symmetry*, rendering $Z_\text{D}$ absolutely stable (as long as $m_{Z_\text{D}} < 2 m_\psi$). 
In the same way as there is the familiar charge conjugation operator $\mathcal{C}$ associated with the SM $U(1)_\text{em}$ group, the dark charge conjugation operator $\mathcal{C}_\text{D}$ changes the signs of the $U(1)_\text{D}$ charges $g_\psi$ and $g_\text{D}$, and furthermore replaces $\sigma$ by $\sigma^\ast$, $Z_\text{D}^\mu$ by $-Z_\text{D}^\mu$ as well as $\psi$ by the charge-conjugated spinor $\psi^\text{C}$. If, in contrast to $\mathcal{C}$, nature is symmetric with respect to dark charge conjugation, the kinetic mixing operator $F_\text{D}^{\mu \nu} B_{\mu \nu}$ is forbidden. Notice that this symmetry is still present after the spontaneous breaking of $U(1)_\text{D}$ via a vev of $\sigma$. Finally, in the Lagrangian given by eq. , $V \left( \sigma, \Phi \right)$ denotes the most general scalar potential involving the SM singlet $\sigma$ and the SM Higgs doublet $\Phi$: $$V \left( \sigma, \Phi \right) = -\mu_\text{D}^2 \sigma^\ast \sigma + \frac{1}{2} \lambda_\text{D} \left( \sigma^\ast \sigma \right)^2 - \mu_h^2 \Phi^\dagger \Phi + \frac12 \lambda_h \left( \Phi^\dagger \Phi \right)^2+ \lambda_{h\text{D}} \left( \sigma^\ast \sigma \right) \left( \Phi^\dagger \Phi \right) \,. \label{eq:scalarpotential_beforeSB}$$ After spontaneous breaking of the electroweak and dark gauge symmetry, the scalar fields can be parametrised in unitary gauge as $$\sigma = (v_\text{D} + H_\text{D})/\sqrt{2} \text{ and } \Phi = (0, (v_h + H)/\sqrt{2})^T. \label{eq:scalars_afterSB}$$ In the following, we eliminate $\lambda_\text{D}$ and $\lambda_h$ from the scalar potential  by using $v_h \simeq \unit[246]{GeV}$ and treating the dark Higgs vev $v_\text{D}$ as a free parameter. For a given choice of the gauge coupling $g_\text{D}$, the latter is in one-to-one correspondence with the gauge boson mass $m_{Z_\text{D}} = g_\text{D} v_\text{D}$. 
The presence of the portal term proportional to $\lambda_{h \text{D}}$ in the scalar potential leads to a mixing of $H_\text{D}$ and $H$; we denote the corresponding mass eigenstates by $h_\text{D}$ and $h$. Assuming $\lambda_{h\text{D}} \ll 1$, $m_{h_\text{D}} \ll m_h$, the mixing angle is given by $\theta \simeq \lambda_{h\text{D}} v_\text{D} v_h/m_h^2$, where $m_h \simeq \unit[125]{GeV}$ is the mass of the SM Higgs boson $h$. While in the absence of the kinetic mixing term $Z_\text{D}$ is stable, the dark Higgs boson $h_\text{D}$ can decay into SM particles with a rate proportional to $\theta^2$. Further details, in particular the full Lagrangian including the scalar potential after symmetry breaking can be found in appendix \[app:FullLagrangian\]. For the purpose of our phenomenological analysis, a point in the parameter space of the model after symmetry breaking is then fully specified by the free parameters $$m_{Z_\text{D}}, m_\psi, m_{h_\text{D}}, g_\text{D}, g_\psi, \lambda_{h\text{D}} \,.$$ Note that as long as the dimensionless couplings $g_\text{D}$ and $\lambda_\text{D}$ are of order one, $m_{Z_\text{D}}$ and $m_{h_\text{D}}$ are expected to be of the same order of magnitude. On the other hand, the tree-level mass of $\psi$ is not related to the breaking of $U(1)_\text{D}$, and thus can be naturally at a different scale. Annihilation channels of dark matter and freeze-out calculation {#sec:DMAnnihilation} =============================================================== The scenario introduced in the previous section involves two stable neutral particles which contribute to the observed density of DM: the Dirac fermion $\psi$ as well as the massive gauge boson $Z_\text{D}$. In the following, we discuss the main qualitative aspects of the freeze-out process of these DM particles; additional technical details of our numerical implementation can be found in appendix \[app:relicdensity\]. 
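Before turning to the annihilation channels, the small-angle mixing formula quoted in the previous section can be cross-checked numerically. The sketch below (all parameter values are illustrative, not fits to data) diagonalises the tree-level mass-squared matrix in the $(H, H_\text{D})$ basis, whose off-diagonal entry is $\lambda_{h\text{D}} v_\text{D} v_h$; working to leading order in the mixing, the diagonal entries are identified with $m_h^2$ and $m_{h_\text{D}}^2$.

```python
import numpy as np

# Illustrative parameters in GeV; not a fit to data.
v_h, v_D = 246.0, 10.0
m_h, m_hD = 125.0, 0.1
lam_hD = 1e-6

# Tree-level mass-squared matrix of (H, H_D) in the small-mixing limit,
# with the portal-induced off-diagonal entry lam_hD * v_D * v_h.
delta = lam_hD * v_D * v_h
M2 = np.array([[m_h**2, delta],
               [delta,  m_hD**2]])

vals, vecs = np.linalg.eigh(M2)          # eigh returns eigenvalues in ascending order
heavy = vecs[:, np.argmax(vals)]         # mass eigenstate identified with the SM-like h
theta_num = abs(heavy[1] / heavy[0])     # tan(theta) ~ theta for small mixing
theta_approx = lam_hD * v_D * v_h / m_h**2

assert abs(theta_num / theta_approx - 1.0) < 1e-2
```

The relative deviation between the exact and approximate angles is of order $m_{h_\text{D}}^2/m_h^2$, which is why the approximation works so well deep in the hierarchical regime $m_{h_\text{D}} \ll m_h$ considered in the text.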
![Feynman diagrams visualising the annihilation channels $\psi \bar{\psi} \to Z_\text{D} Z_\text{D}$ (left) and $ \psi \bar{\psi} \to Z_\text{D} h_\text{D}$ (right). The corresponding cross sections are given in Eqs.  and .[]{data-label="fig:psi_annihilation"}](./figs/psipsitoZDZD.pdf "fig:") ![Feynman diagrams visualising the annihilation channels $\psi \bar{\psi} \to Z_\text{D} Z_\text{D}$ (left) and $ \psi \bar{\psi} \to Z_\text{D} h_\text{D}$ (right). The corresponding cross sections are given in Eqs.  and .[]{data-label="fig:psi_annihilation"}](./figs/psipsitoZDhD.pdf "fig:") We focus our analysis on regions in parameter space where $m_{Z_\text{D}} \ll m_\psi$: this is a necessary condition for obtaining a self-interaction cross section of $\psi$ which is large enough to lead to interesting astrophysical signatures. The heavy DM particle $\psi$ can then self-annihilate via two possible channels (see Fig. \[fig:psi\_annihilation\] for the corresponding Feynman diagrams): $$\begin{aligned} \psi \bar{\psi} \to Z_\text{D} Z_\text{D} \quad \text{with } (\sigma v)_{\psi \bar{\psi} \to Z_\text{D} Z_\text{D}}^\text{tree} \simeq \frac{g_\psi^4}{16 \pi m_\psi^2} \,, \label{eq:psipsiZDZD}\\ \psi \bar{\psi} \to Z_\text{D} h_\text{D} \quad \text{with } (\sigma v)_{\psi \bar{\psi} \to Z_\text{D} h_\text{D}}^\text{tree} \simeq \frac{g_\text{D}^2 g_\psi^2}{64 \pi m_\psi^2} \,, \label{eq:psipsiZDhD}\end{aligned}$$ where the tree-level expressions $(\sigma v)^\text{tree}$ for the annihilation cross sections are given in the limit $m_{Z_\text{D}} \ll m_\psi$ and $v \ll 1$. Note that the latter process leads to significant constraints from the CMB via the decay of the dark Higgs into SM states (see Section \[sec:CMB\]), which have not been considered in [@Ma:2017ucp]. The annihilation of $\psi \bar \psi$ into a pair of dark Higgs bosons, on the other hand, is strongly suppressed as it only proceeds via a one-loop diagram and furthermore vanishes in the $s$-wave limit $v \to 0$. 
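For orientation, the two tree-level rates above can be transcribed directly into a few lines of code. The sketch below (illustrative parameter values; natural units, cross sections in GeV$^{-2}$) makes explicit that the relative weight of the potentially CMB-dangerous channel $\psi \bar{\psi} \to Z_\text{D} h_\text{D}$ is simply $g_\text{D}^2/(4 g_\psi^2)$, independent of $m_\psi$.

```python
import math

def sv_psipsi_to_ZDZD(g_psi, m_psi):
    """Tree-level s-wave cross section of psi psibar -> Z_D Z_D
    in the limit m_ZD << m_psi (natural units, GeV^-2)."""
    return g_psi**4 / (16 * math.pi * m_psi**2)

def sv_psipsi_to_ZDhD(g_D, g_psi, m_psi):
    """Tree-level s-wave cross section of psi psibar -> Z_D h_D
    in the same limit (natural units, GeV^-2)."""
    return g_D**2 * g_psi**2 / (64 * math.pi * m_psi**2)

g_D, g_psi, m_psi = 0.1, 0.5, 100.0      # illustrative values (masses in GeV)
ratio = sv_psipsi_to_ZDhD(g_D, g_psi, m_psi) / sv_psipsi_to_ZDZD(g_psi, m_psi)

# Relative weight of the CMB-dangerous channel: g_D^2 / (4 g_psi^2).
assert math.isclose(ratio, g_D**2 / (4 * g_psi**2))
```

This makes the parametric tension discussed later tangible: reducing $g_\text{D}$ suppresses the dangerous channel, but the same coupling controls the $Z_\text{D}$ depletion and the dark Higgs sector.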
In our numerical calculation, we take into account Sommerfeld enhancement in the annihilation processes (\[eq:psipsiZDZD\]) and (\[eq:psipsiZDhD\]), arising from the multiple exchange of $Z_\text{D}$ bosons in the $\psi \bar \psi$ initial state (see appendix \[app:relicdensity\] for details). Moreover, for $m_{h_\text{D}} < m_{Z_\text{D}}$ the massive gauge boson $Z_\text{D}$ can annihilate via $$\begin{aligned} Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D} \quad \text{with } (\sigma v)_{ Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}} \simeq \frac{g_\text{D}^4 \sqrt{1-r} \left( 44 - 20 r + 9 r^2 - 8 r^3 + 2 r^4 \right)}{9 \pi m_{Z_\text{D}}^2 \left( 8 - 6 r + r^2 \right)^2 } \,, \label{eq:ZDZDhdhD}\end{aligned}$$ where $r = m_{h_\text{D}}^2 / m_{Z_\text{D}}^2$.[^1] The corresponding Feynman diagrams are shown in Fig. \[fig:ZD\_annihilation\].

![Feynman diagrams depicting the annihilation of the massive gauge boson $Z_\text{D}$. The cross section for the process $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$ is given in Eq. .[]{data-label="fig:ZD_annihilation"}](./figs/ZDZDtohDhD1.pdf "fig:") ![](./figs/ZDZDtohDhD2.pdf "fig:") ![](./figs/ZDZDtohDhD3.pdf "fig:")

At high temperatures, these annihilation processes lead to chemical equilibrium between the dark sector particles $\psi$, $Z_\text{D}$ and $h_\text{D}$.
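The $r$-dependence of this cross section is easy to explore numerically (again a sketch in natural units, not the paper's code):

```python
import math

# (sigma v) for Z_D Z_D -> h_D h_D as given in the text, with
# r = m_hD^2 / m_ZD^2.  Masses in GeV, result in GeV^-2.

def sigma_v_ZDZD_to_hDhD(g_D, m_ZD, m_hD):
    r = m_hD**2 / m_ZD**2
    num = g_D**4 * math.sqrt(1 - r) * (44 - 20*r + 9*r**2 - 8*r**3 + 2*r**4)
    den = 9 * math.pi * m_ZD**2 * (8 - 6*r + r**2)**2
    return num / den
```

For $r \to 0$ this reduces to $44\, g_\text{D}^4/(576 \pi m_{Z_\text{D}}^2)$, and it vanishes at the kinematic threshold $r \to 1$.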
Furthermore, in the following we assume the portal coupling $\lambda_{h \text{D}}$ to be sufficiently large such that the initial temperatures of the dark and visible sectors are identical; the precise choice for $\lambda_{h \text{D}}$ will be discussed in more detail in section \[sec:BBN\]. The cosmological evolution of the DM particles $\psi$ and $Z_\text{D}$ down to smaller temperatures is then described by a set of two coupled Boltzmann equations for the number densities $n_\psi$ and $n_{Z_\text{D}}$. As described in more detail in appendix \[app:relicdensity\], we compute the present-day abundances $\Omega_\psi h^2$ (defined to be the sum of the abundances of $\psi$ and $\bar \psi$) and $\Omega_{Z_\text{D}} h^2$ by solving these equations numerically using a modified version of `MicrOMEGAs v4.3.5` [@Belanger:2006is; @Belanger:2014vza], additionally taking into account the Sommerfeld enhancement as well as thermal decoupling of the dark and visible sectors at a temperature $T_\text{dec}$. Qualitatively, the freeze-out process can be understood as follows: at $T \simeq m_\psi/25$, the annihilation processes given in eqs.  and  stop being efficient, and the heavy DM particle $\psi$ freezes out, i.e. $n_\psi/s$ becomes constant. However, the lighter DM particle $Z_\text{D}$ remains in chemical equilibrium with the dark Higgs boson down to much smaller temperatures $T \simeq m_{Z_\text{D}} / x_{f}$, with $x_f \simeq 15-50$. The precise value of $x_f$, and thus the final abundance of $Z_\text{D}$, depends on the strength of various annihilation channels: besides the usual self-annihilation $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$, processes involving the already frozen-out DM particle $\psi$ must also be taken into account, leading to additional terms in the Boltzmann equation for $n_{Z_\text{D}}$.
Concretely, these are the conversion process $\psi Z_\text{D} \to \psi h_\text{D}$ as well as the annihilation channels $\psi \bar \psi \to Z_\text{D} Z_\text{D}$ and $\psi \bar \psi \to Z_\text{D} h_\text{D}$. Notice that even though during the freeze-out of $Z_\text{D}$ the latter processes are already too weak to keep $\psi$ in equilibrium, they can nevertheless be important for the evolution of $n_{Z_\text{D}}$. A more detailed discussion of this point can be found in appendix \[app:relicdensity\].

Observational constraints {#sec:constraints}
=========================

Bounds on the decay of $h_\text{D}$ from CMB spectral distortions and BBN {#sec:BBN}
---------------------------------------------------------------

Being in thermal equilibrium with the SM heat bath at early times, the dark Higgs boson $h_\text{D}$ generically has a significant abundance prior to its decay. As we are interested in a scenario with $m_{h_\text{D}} < m_{Z_\text{D}} \lesssim \unit[100]{MeV}$, it decays dominantly either into $e^+ e^-$ (for $m_{h_\text{D}} > 2 m_e$) or into $\gamma \gamma$ (for $m_{h_\text{D}} < 2 m_e$), with a lifetime $\tau_{h_\text{D}}$ taken from [@Bezrukov:2009yw; @Alekhin:2015byh]. If these decay products are injected at redshifts $z \lesssim 2 \times 10^6$, they do not fully thermalise with the background photons, and thus lead to spectral distortions in the CMB [@Zeldovich:1969ff; @Hu:1993gc; @Chluba:2011hw]. In the context of our scenario, this excludes all regions of parameter space with $\tau_{h_\text{D}} \gtrsim \unit[10^5]{s}$ [@Poulin:2016anj].[^2] Even for a scalar portal coupling $\lambda_{h\text{D}}$ of order one, this bound is generically violated if the dark Higgs has a mass below $2 m_e$ and thus can only decay into a pair of photons at one loop.
As we still want to keep $m_{h_\text{D}} < m_{Z_\text{D}}$ in order to allow the annihilation process $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$ to deplete the abundance of $Z_\text{D}$, in the following we will fix $m_{h_\text{D}} = \unit[1.5]{MeV} > 2 m_e$, and only consider vector boson masses $m_{Z_\text{D}} \gtrsim \unit[2]{MeV}$. Notice that as long as $m_{Z_\text{D}} \gtrsim m_{h_\text{D}}$, the precise value of the dark Higgs boson mass does not impact the phenomenology elsewhere, in particular neither the CMB constraints on energy injection from DM annihilation during recombination nor the self-interaction cross section of $\psi$. When presenting our results in section \[sec:Results\], we will indicate in which regions of parameter space the lifetime of $h_\text{D}$ for this choice of $m_{h_\text{D}}$ nevertheless exceeds $\unit[10^5]{s}$, and is thus excluded by the constraints on CMB spectral distortions. The decay of $h_\text{D}$ in the early Universe is also constrained by the excellent agreement of the observed primordial abundances of light elements with the predictions from BBN. In general, BBN can be affected by additional stable or decaying particles present at temperatures $T \lesssim \unit[10]{MeV}$ [@Cyburt:2015mya; @Patrignani:2016xqp]. More specifically, the scenario discussed in this work potentially modifies the primordial nuclear abundances in two ways:

(i) If the dark Higgs $h_\text{D}$ decays well after BBN, its electromagnetic decay products can photo-disintegrate nuclei, in particular deuterium and helium.

(ii) If $Z_\text{D}$ and/or $h_\text{D}$ are still in thermal equilibrium at $T \simeq \unit[10]{MeV}$, they provide a contribution to $\Delta N_\text{eff}$ and thus enhance the expansion rate during BBN.
The first bound potentially constrains regions of parameter space where $\tau_{h_\text{D}} \gtrsim \unit[10^4]{s}$; for smaller lifetimes, the cascade of the electromagnetic decay products caused by interactions with CMB photons leads to a cutoff of the corresponding photon spectrum below the photo-disintegration threshold $E_\text{dis} = \unit[2.2]{MeV}$ of deuterium [@Jedamzik:2006xz; @Berger:2016vxi]. However, for our choice $m_{h_\text{D}} = \unit[1.5]{MeV}$, motivated above by the constraints on CMB spectral distortions, the electromagnetic cascade induced by the electrons and positrons produced in the decay of $h_\text{D}$ in any case only leads to photons with energies below $E_\gamma \simeq \unit[0.75]{MeV}$, which are unable to photo-disintegrate nuclei even for lifetimes $\tau_{h_\text{D}} \gg \unit[10^4]{s}$. Consequently, for our choice of $m_{h_\text{D}}$, the BBN bound (i) is automatically avoided.

![Reaction rate $\Gamma_{h_\text{D} h_\text{D} \rightarrow \text{SM SM}}$ for different choices of $\lambda_{h\text{D}}$. As explained in the text, choosing $\lambda_{h\text{D}} \lesssim 4 \times 10^{-4}$ leads to thermal decoupling of the dark and visible sectors prior to the QCD phase transition, and thus to a significantly reduced contribution of the dark sector particles to $\Delta N_\text{eff}$.[]{data-label="fig:hDhD_rates"}](./figs/reaction_rates.pdf)

The constraint (ii) from the increased Hubble rate during BBN depends critically on the temperature of the dark sector $T_\text{D}$ at $T \simeq \unit[10]{MeV}$. The process most relevant for keeping the dark and visible sectors in thermal contact (leading to $T_\text{D} = T$) is the annihilation of the dark Higgs $h_\text{D}$ into SM particles. The corresponding reaction rate $\Gamma_{h_\text{D} h_\text{D} \to \text{SM SM}}(T)$ as a function of temperature is shown in Fig. \[fig:hDhD\_rates\] for different choices of the parameter $\lambda_{h\text{D}}$ appearing in the scalar potential .
For $T \gtrsim m_h/2$, the dominant process establishing equilibrium is the production of an on-shell SM Higgs boson in the $s$-channel, which, even for rather small values of $\lambda_{h\text{D}}$, guarantees chemical equilibrium at these temperatures. For smaller $T$, this process is exponentially suppressed and the annihilation rate $\Gamma_{h_\text{D} h_\text{D} \rightarrow \text{SM SM}}$ rapidly decreases[^3], until eventually the dark and visible sectors decouple at a temperature $T_\text{dec}$, which we define via $\Gamma_{h_\text{D} h_\text{D} \rightarrow \text{SM SM}}(T_\text{dec}) = H(T_\text{dec})$. As can be seen from Fig. \[fig:hDhD\_rates\], by choosing $\lambda_{h\text{D}} \lesssim 4 \times 10^{-4}$, this decoupling happens prior to the QCD phase transition, i.e. $T_\text{dec} \gtrsim \unit[500]{MeV}$. The visible sector is then heated with respect to the dark sector, reducing the relative contribution of the dark sector particles to the energy density. Quantitatively, the impact of $Z_\text{D}$ and $h_\text{D}$ on the Hubble rate during BBN can be parametrised in terms of the equivalent number of additional neutrino species: $$\begin{aligned} \Delta N_\text{eff} (T \simeq \unit[10]{MeV}) \simeq 4 \cdot \left( \frac{g_\text{SM} (\unit[10]{MeV})}{g_\text{SM} (T_\text{dec})} \right)^{4/3} \lesssim 0.27 \,. \label{eq:DeltaNeff}\end{aligned}$$ Here we conservatively assumed that both $Z_\text{D}$ and $h_\text{D}$ are relativistic degrees of freedom during BBN; for $m_{Z_\text{D}} \gtrsim \unit[10]{MeV}$ the abundance of $Z_\text{D}$ during BBN is already Boltzmann suppressed, and the contribution to $\Delta N_\text{eff}$ is even smaller. Using the most recent information on the baryon-to-photon ratio inferred from the CMB as well as updated nuclear reaction rates, the upper limit on extra radiation during BBN is found to be $\Delta N_\text{eff} < 0.2 \, (0.36)$ at $2\sigma \, (3\sigma)$ [@Cyburt:2015mya].
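A quick evaluation of the estimate in eq.  (with illustrative inputs: $g_\text{SM}(\unit[10]{MeV}) = 10.75$ is the standard SM value, while $g_\text{SM}(T_\text{dec})$ depends on where exactly decoupling happens relative to the QCD transition):

```python
# Sketch of the Delta N_eff estimate: the dark sector carries the
# equivalent of ~4 extra neutrino species if it never decouples,
# diluted by the heating of the visible sector between T_dec and BBN.

def delta_Neff(g_SM_dec, g_SM_BBN=10.75):
    """Equivalent extra neutrino species at T ~ 10 MeV."""
    return 4.0 * (g_SM_BBN / g_SM_dec) ** (4.0 / 3.0)
```

Decoupling well above the QCD phase transition, where $g_\text{SM} \approx 80$–$90$, brings the dark-sector contribution down to $\Delta N_\text{eff} \approx 0.25$–$0.3$.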
Given the significant impact of systematic uncertainties on deriving this limit, we conclude that the maximal contribution to $\Delta N_\text{eff}$ predicted by our scenario, as given by eq. , might be in (mild) tension with BBN observations, but is certainly not robustly ruled out. A detailed analysis of BBN constraints on MeV-scale particles decaying into SM states, going beyond the simple estimate of $\Delta N_\text{eff}$ via eq. , will appear elsewhere [@BBNfuture]. As outlined above, this conclusion holds as long as $\lambda_{h\text{D}} \lesssim 4 \times 10^{-4}$, such that the dark and visible sectors decouple before the QCD phase transition. On the other hand, if $\lambda_{h\text{D}}$ is chosen too small, the lifetime of the dark Higgs boson can exceed $\unit[10^{5}]{s}$, violating the bound from CMB spectral distortions discussed at the beginning of this section. In order to weaken this constraint as much as possible, we fix $\lambda_{h\text{D}} = 4 \times 10^{-4}$ in the following, i.e. we choose the maximal value compatible with the constraint on the Hubble rate during BBN.[^4] With this value of $\lambda_{h\text{D}}$, the lifetime of the dark Higgs will exceed $\tau_{h_\text{D}} = \unit[10^{5}]{s}$ in some parts of the parameter regions considered in the numerical analysis in Sec. \[sec:Results\]. We indicate the corresponding regions in all plots, but note that they are independently excluded by other constraints.

CMB constraints on energy injection during recombination {#sec:CMB}
--------------------------------------------------------

The prime motivation for postulating the stability of $Z_\text{D}$ has been to avoid the constraints arising from energy injection during recombination due to the annihilation process $\psi \bar \psi \to Z_\text{D} Z_\text{D}$.
However, in our scenario the heavy DM particle can also annihilate via $\psi \bar{\psi} \rightarrow Z_\text{D} h_\text{D}$, potentially reintroducing the CMB constraints through the subsequent decays of the dark Higgs boson into SM states. Moreover, late-time annihilations $Z_\text{D} Z_\text{D} \rightarrow h_\text{D} h_\text{D}$ also lead to energy injection into the CMB, which, depending on the fraction of DM made up of $Z_\text{D}$, might likewise be in conflict with observations. The annihilation cross section for $\psi \bar{\psi} \rightarrow Z_\text{D} h_\text{D}$ during recombination is given by $$\begin{aligned} (\sigma v)_{\psi \bar{\psi} \to Z_\text{D} h_\text{D}}^\text{CMB} \equiv S_s(v) \cdot (\sigma v)_{\psi \bar{\psi} \to Z_\text{D} h_\text{D}}^\text{tree}, \label{eq:sigmavCMB_psipsi}\end{aligned}$$ where $(\sigma v)_{\psi \bar{\psi} \to Z_\text{D} h_\text{D}}^\text{tree}$ is the tree-level cross section given in eq. , and $S_s(v)$ is the $s$-wave Sommerfeld enhancement factor corresponding to the multiple exchange of $Z_\text{D}$ in the initial state, which is provided in eq. . The relative velocity $v$ during recombination entering eq.  can be conservatively estimated by using an upper bound on the kinetic decoupling temperature of DM from Lyman-$\alpha$ observations [@Croft:1997jf; @Croft:2000hs], resulting in [@Bringmann:2016din] $$v \lesssim 2 \times 10^{-7} \left( \frac{m_\psi}{\unit[100]{GeV}}\right)^{-1/2}.$$ We have explicitly confirmed that the precise value of $v$ does not affect our results as long as it satisfies this bound, since the Sommerfeld enhancement is already saturated for these velocities.
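For reference, the velocity bound and its mass scaling can be written as (a sketch, not part of our numerical setup):

```python
# Upper bound on the DM relative velocity during recombination,
# inferred from the Lyman-alpha limit on the kinetic decoupling
# temperature: v < 2e-7 * (m_psi / 100 GeV)^(-1/2), in units of c.

def v_max_recombination(m_psi_GeV):
    return 2e-7 * (m_psi_GeV / 100.0) ** -0.5
```

At such small velocities the Sommerfeld factor $S_s(v)$ is saturated, so the exclusion conditions that follow do not depend on the precise value of $v$.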
A given point in parameter space is then excluded by CMB data if $$\begin{aligned} \label{eq:sigmav_NNX_bound} \frac12 \cdot (\sigma v)_{\psi \bar{\psi} \to Z_\text{D} h_\text{D}}^\text{CMB} > (\sigma v)_{4e^\pm}^{\text{(upper bound)}}\left( m_\psi \right) \cdot \left(\frac{\Omega_\text{DM} h^2}{\Omega_{\psi} h^2}\right)^2 \,.\end{aligned}$$ Here the factor $1/2$ on the left hand side accounts for the fact that due to the stability of $Z_\text{D}$ only half of the energy is transferred into electrons and positrons affecting reionisation. Furthermore, $(\sigma v)_{4e^\pm}^{\text{(upper bound)}}(m_\psi)$ is the upper bound on the annihilation cross section of DM into a final state containing two electrons and two positrons, obtained under the assumption that $\psi$ constitutes all of the observed DM. We take this bound as a function of $m_{\psi}$ from [@Slatyer:2015jla], after multiplying it by a factor of two due to the Dirac nature of $\psi$. Finally, the last factor in eq.  takes into account the suppression of the bound if $\psi$ does not constitute all of the observed DM, with $\Omega_\text{DM} h^2 \simeq 0.12$ being the total DM abundance [@Ade:2015xua]. Similarly, the energy injection during recombination due to annihilations of $Z_\text{D}$ excludes parts of the parameter space where $$\begin{aligned} (\sigma v)_{Z_\text{D} Z_\text{D} \rightarrow h_\text{D} h_\text{D}} > (\sigma v)_{4e^\pm}^{\text{(upper bound)}} \left( m_{Z_\text{D}} \right) \cdot \left(\frac{\Omega_\text{DM} h^2}{\Omega_{Z_\text{D}} h^2}\right)^2 \,, \label{eq:sigmav_ZDZDX_bound}\end{aligned}$$ with $(\sigma v)_{Z_\text{D} Z_\text{D} \rightarrow h_\text{D} h_\text{D}}$ given by eq. . Notice that in contrast to the self-annihilation of $\psi$, for the values of $m_{Z_\text{D}}/m_{h_\text{D}}$ considered in this work this process is not subject to Sommerfeld enhancement, and the corresponding cross section can simply be evaluated in the limit $v \rightarrow 0$. 
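Schematically, the two exclusion conditions can be expressed as simple predicates (hypothetical helper names; `sigma_v_bound` stands for the tabulated upper limit on the $2e^+ 2e^-$ final state from the reference above):

```python
# Sketch of the CMB exclusion conditions.  All cross sections must be
# supplied in the same units; omega_* are relic abundances (Omega h^2).

OMEGA_DM = 0.12  # total observed DM abundance

def excluded_psi(sigma_v_cmb, sigma_v_bound, omega_psi):
    # Factor 1/2: only the h_D half of the final state injects energy,
    # since the Z_D in psi psibar -> Z_D h_D is stable.
    return 0.5 * sigma_v_cmb > sigma_v_bound * (OMEGA_DM / omega_psi) ** 2

def excluded_ZD(sigma_v_ZDZD_hDhD, sigma_v_bound, omega_ZD):
    return sigma_v_ZDZD_hDhD > sigma_v_bound * (OMEGA_DM / omega_ZD) ** 2
```

A subdominant component is penalised by $(\Omega_\text{DM} h^2/\Omega h^2)^2$ on the right-hand side, i.e. the bound on its cross section relaxes quadratically, because the injected power scales with the squared number density.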
Self-interactions of dark matter {#sec:selfinteractions}
--------------------------------

Via its coupling to the light mediator $Z_\text{D}$, the DM particle $\psi$ can experience significant rates of self-scattering, even for weak couplings $g_\psi \lesssim 1$ [@Buckley:2009in; @Feng:2009hw]. This process can have important consequences for the distribution of DM in various astrophysical systems: it can transform cuspy profiles of DM halos into cored ones [@Yoshida:2000uw; @Dave:2000ar] or, more generally, lead to a large diversity of DM profiles once baryonic effects are taken into account [@Kamada:2016euw]. It may even lead to spectacular displacement signatures in merging galaxy clusters [@Williams:2011pm; @Dawson:2011kf; @Kahlhoefer:2013dca] if the scattering cross section is only mildly suppressed at large velocities (see [@Tulin:2017ara] for a recent review on the subject). For a large class of astrophysical objects, a good proxy for the impact of DM self-interactions is the momentum transfer cross section $\sigma_\text{T}$, defined via [@Kahlhoefer:2013dca; @Kahlhoefer:2017umn] $$\begin{aligned} \label{eq:transferCrossSection} \sigma_\text{T} &\equiv \frac{1}{2} \left( \sigma_\text{T}^{\psi \psi} + \sigma_\text{T}^{\psi \bar \psi} \right) \,, \text{ with } \nonumber \\ \sigma_\text{T}^{{\psi \psi},{\psi \bar \psi}} &\equiv 2 \pi \int_{-1}^1 \left(\frac{\text{d}\sigma}{\text{d}\Omega}\right)^{{\psi \psi},{\psi \bar \psi}} \left(1 - \left| \cos\theta \right| \right) \text{d}\cos\theta \,.\end{aligned}$$ Here, $(\text{d}\sigma/\text{d}\Omega)^{\psi \psi}$ and $(\text{d}\sigma/\text{d}\Omega)^{\psi \bar \psi}$ denote the differential cross sections for elastic scattering of $\psi \psi$ and $\psi \bar \psi$, respectively. We compute these by adapting the procedure outlined in [@Kahlhoefer:2017umn] for DM interacting with a scalar mediator to the case of a vector mediator.
In particular, we take into account non-perturbative effects related to the multiple exchange of $Z_\text{D}$ by solving the corresponding Schrödinger equation for a Yukawa-like scattering potential, properly taking into account the quantum indistinguishability of identical particles participating in the scattering process. For $g_\psi^2 m_\psi/(4 \pi m_{Z_\text{D}}) \ll 1$, the non-perturbative effects are negligible and our results match the analytical expressions given in [@Kahlhoefer:2017umn] for the Born regime (which are identical for scalar and vector mediators). On the other hand, for $m_\psi v/m_{Z_\text{D}} \gtrsim 5$ solving the Schrödinger equation is no longer feasible, and we employ the results from [@Cyr-Racine:2015ihg] for the scattering cross section in the classical regime. Crucially, in the regime where non-perturbative effects are important, the momentum transfer cross section $\sigma_\text{T}$ is typically enhanced at small velocities $v$ of the DM particles. Hence, one naturally expects larger effects of the DM self-scattering process in systems with small velocity dispersions such as dwarf galaxies (where $v \simeq 30\,\text{km}\,\text{s}^{-1}$), and it is correspondingly easier to satisfy the upper bounds on $\sigma_\text{T}$ from observations of galaxy clusters (where $v \simeq 1000\,\text{km}\,\text{s}^{-1}$). However, both the cross section required to transform cusps in dwarf galaxies into cored profiles [@Vogelsberger:2012ku; @Rocha:2012jg; @Zavala:2012us; @Tulin:2013teo; @Kaplinghat:2015aga] and the largest value of $\sigma_\text{T}/m_\psi$ compatible with constraints from merging galaxy clusters [@Randall:2007ph; @Kahlhoefer:2015vua; @Harvey:2015hha; @Wittman:2017gxn] are still under debate.
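For orientation, in the Born regime mentioned above the momentum transfer cross section has a standard closed form for a Yukawa potential (a sketch of that textbook expression, not our full numerical treatment):

```python
import math

# Born-regime momentum-transfer cross section for Yukawa scattering,
# valid for g_psi^2 m_psi / (4 pi m_ZD) << 1; identical for scalar
# and vector mediators.  Masses in GeV, v in units of c, result in
# GeV^-2.

def sigma_T_born(g_psi, m_psi, m_ZD, v):
    alpha = g_psi**2 / (4 * math.pi)
    R2 = (m_psi * v / m_ZD) ** 2
    bracket = math.log(1 + R2) - R2 / (1 + R2)
    return 8 * math.pi * alpha**2 / (m_psi**2 * v**4) * bracket
```

For $m_\psi v \ll m_{Z_\text{D}}$ this approaches the velocity-independent contact limit $4\pi \alpha^2 m_\psi^2/m_{Z_\text{D}}^4$; the enhancement at small $v$ discussed in the text only sets in once non-perturbative effects become relevant.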
In light of this, and in order to bracket all of the potentially interesting range of momentum transfer cross sections at small scales, in section \[sec:Results\] we will show which regions in parameter space lead to $0.1\,\text{cm}^2/\text{g} < \sigma_\text{T}/m_\psi < 10 \, \text{cm}^2/\text{g}$ at $v \simeq 30\,\text{km}\,\text{s}^{-1}$, and use the rather conservative upper bound $\sigma_\text{T}/m_\psi < 1 \,\text{cm}^2/\text{g}$ at the scale of galaxy clusters, $v \simeq 1000\,\text{km}\,\text{s}^{-1}$.[^5]

Results {#sec:Results}
=======

Impact of CMB constraints
-------------------------

![Constraints in the $g_\text{D}$–$m_{Z_\text{D}}$ plane for $m_\psi = \unit[1]{GeV}$ (upper left panel), $m_\psi = \unit[10]{GeV}$ (upper right panel), $m_\psi = \unit[100]{GeV}$ (lower left panel), and $m_\psi = \unit[1000]{GeV}$ (lower right panel). In each case we fix $m_{h_\text{D}} = \unit[1.5]{MeV}$, and choose the coupling $g_\psi$ such that $\Omega_{\psi} h^2 + \Omega_{Z_\text{D}} h^2 \simeq 0.12$. In the orange shaded regions the DM density exceeds the observed value irrespective of the value of $g_\psi$. The regions of parameter space excluded by CMB constraints on late-time energy injection are given in blue for the process $\psi \bar{\psi} \to Z_\text{D} h_\text{D}$ and in red for $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$. In the grey shaded areas, the lifetime of the dark Higgs $h_\text{D}$ exceeds $\unit[10^{5}]{s}$, leading to significant spectral distortions in the CMB. Note that the range of $m_{Z_\text{D}}$ shown for $m_\psi = \unit[1]{GeV}$ (upper left plot) is smaller than in the rest of the panels. \[fig:gDmZDConstraints\]](./figs/gDmZDConstraints_mhD1p5MeV_mN1GeV.pdf "fig:") ![](./figs/gDmZDConstraints_mhD1p5MeV_mN10GeV.pdf "fig:")\
![](./figs/gDmZDConstraints_mhD1p5MeV_mN100GeV.pdf "fig:") ![](./figs/gDmZDConstraints_mhD1p5MeV_mN1000GeV.pdf "fig:")

The CMB constraints on energy injection during recombination as discussed in section \[sec:CMB\] are illustrated in Fig. \[fig:gDmZDConstraints\], where we show the parameter space spanned by the gauge coupling $g_\text{D}$ and the light DM mass $m_{Z_\text{D}}$ for different values of the mass of the heavy DM particle, $m_\psi = 1$, $10$, $100$ and $\unit[1000]{GeV}$. Following the discussion in section \[sec:BBN\], in order to evade constraints from spectral distortions of the CMB as well as from BBN as much as possible, we fix the mass of the dark Higgs boson to $m_{h_\text{D}} = \unit[1.5]{MeV}$, with the precise value being irrelevant to the CMB constraints on energy injection during recombination.
Notice that with this choice one has $m_{h_\text{D}} < m_{Z_\text{D}}$ in all regions of parameter space shown in Fig. \[fig:gDmZDConstraints\], as required for the annihilation channel $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$ to be kinematically allowed. Lastly, the gauge coupling $g_\psi$ is fixed separately for each combination of $g_\text{D}$, $m_{Z_\text{D}}$ and $m_\psi$ by the requirement that $\psi$ and $Z_\text{D}$ together account for all of the observed DM, i.e. $\Omega_{\text{DM}} h^2 \equiv \Omega_{\psi} h^2 + \Omega_{Z_\text{D}} h^2 \simeq 0.12$, following the discussion in section \[sec:DMAnnihilation\]. The black dashed curves show contours of constant values of $\Omega_{Z_\text{D}} h^2/\Omega_{\text{DM}} h^2$, i.e. the fraction of DM composed of $Z_\text{D}$. This fraction grows towards smaller values of $g_\text{D}$, until at some point the cross section for $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$ \[which scales as $g_\text{D}^4$, see eq. \] gets so small that, irrespective of the choice of the gauge coupling $g_\psi$ controlling the relic density of $\psi$, the abundance of $Z_\text{D}$ alone overcloses the Universe. These regions of parameter space are shaded orange in the different panels of Fig. \[fig:gDmZDConstraints\]. In the blue shaded regions in Fig. \[fig:gDmZDConstraints\], the energy injection from late-time annihilations $\psi \bar \psi \to Z_\text{D} h_\text{D}$ is excluded by CMB data, as defined in eq. . Analogously, we show in red which parts of the parameter space are excluded by the CMB constraint on the annihilation process $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$, cf. eq. . Finally, the grey-shaded regions are excluded on the basis of the lifetime of the dark Higgs boson ($\tau_{h_\text{D}} > \unit[10^{5}]{s}$), assuming $\lambda_{h \text{D}} = 4 \times 10^{-4}$ as discussed at the end of section \[sec:BBN\]. For all values of $m_\psi$ shown in the different panels of Fig.
\[fig:gDmZDConstraints\], we find that the annihilations of the heavy and light DM particles constrain complementary regions in parameter space: the energy injection induced by the annihilation process $\psi \bar \psi \to Z_\text{D} h_\text{D}$ constrains regions of parameter space with larger values of the gauge coupling $g_\text{D}$, while the bound derived from the annihilation of the lighter DM candidate $Z_\text{D}$ is most relevant for smaller $g_\text{D}$. This can be readily understood as follows: the annihilation cross section for $\psi \bar \psi \to Z_\text{D} Z_\text{D}$ (which does not lead to constraints from the CMB) scales with $g_\psi^4$, while $\psi \bar \psi \to Z_\text{D} h_\text{D}$ (which leads to the blue shaded exclusion regions in Fig. \[fig:gDmZDConstraints\]) is proportional to $g_\psi^2 g_\text{D}^2$. For sufficiently small values of $g_\text{D}$, the main annihilation channel of $\psi$ both during freeze-out and recombination is then given by the former process, and hence the CMB constraint from annihilations of $\psi$ becomes less and less important. On the other hand, for small values of $g_\text{D}$ the abundance of the lighter DM particle $Z_\text{D}$ is dominantly set by the annihilation process $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$ (see appendix \[app:relicdensity\]), leading to $\Omega_{Z_\text{D}} h^2 \propto g_\text{D}^{-4}$. The corresponding bound from the CMB thus becomes weaker for larger values of $g_\text{D}$, as the suppression of the $Z_\text{D}$ abundance overcompensates for the rise of the cross section towards larger values of the coupling: $(\Omega_{Z_\text{D}} h^2)^2 \times (\sigma v)_{Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}} \propto g_\text{D}^{-8} \times g_\text{D}^4 = g_\text{D}^{-4}$. Interestingly, for all values of $m_\psi$ considered in Fig. \[fig:gDmZDConstraints\], there remains a region of intermediate values of $g_\text{D}$ which is not constrained by either of the CMB constraints.
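This power counting can be made explicit in a toy numerical check (normalisations are arbitrary):

```python
# Toy check of the scaling argument above: with Omega_ZD ~ gD^-4 and
# (sigma v) ~ gD^4, the CMB figure of merit Omega_ZD^2 * (sigma v)
# falls off as gD^-4, so the bound weakens towards larger gD.

def cmb_figure_of_merit(g_D):
    omega_ZD = g_D**-4      # relic abundance scaling of Z_D
    sigma_v = g_D**4        # Z_D Z_D -> h_D h_D cross section scaling
    return omega_ZD**2 * sigma_v
```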
Concretely, for $m_\psi = \unit[1]{GeV}$ (upper left panel), couplings in the interval $4 \times 10^{-3} \lesssim g_\text{D} \lesssim 10^{-2}$ are viable for $m_{Z_\text{D}} \sim \unit[2]{MeV}$, while all values of $g_\text{D}$ are excluded for $m_{Z_\text{D}} \gtrsim \unit[20]{MeV}$. Note that for this value of $m_\psi$, the region excluded by the process $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$ becomes independent of the value of the coupling $g_\text{D}$ for the largest $Z_\text{D}$ masses shown. As discussed in more detail in appendix \[app:relicdensity\], this is a result of additional annihilation channels significantly enhancing the abundance of $Z_\text{D}$ in this region of parameter space. For larger values of $m_\psi$, we observe that the CMB constraint from the annihilation of $\psi$ extends to significantly smaller values of $g_\text{D}$ for specific values of $m_{Z_\text{D}}$. This is due to the resonant Sommerfeld enhancement of the annihilation process $\psi \bar{\psi} \to Z_\text{D} h_\text{D}$, occurring for $3 g_\psi^2 m_\psi = 2 \pi^3 n^2 m_{Z_\text{D}}$, with $n \in \mathbb{N}$ [@Cassel:2009wt] (see appendix \[app:relicdensity\] for more details). The larger the mass ratio $m_\psi/m_{Z_\text{D}}$, the closer these resonances lie in parameter space, which becomes particularly visible in the lower right panel of Fig. \[fig:gDmZDConstraints\], corresponding to $m_\psi = \unit[1]{TeV}$. In this case the values of $m_{Z_\text{D}}$ which are excluded or allowed by CMB constraints lie extremely close to each other.[^6] We also note that when $g_\text{D}$ approaches the smallest value $g_\text{D}^\text{(min)}$ compatible with $\Omega_\text{DM} h^2 = 0.12$, the resonance peaks of the CMB constraint on $\psi \bar{\psi} \to Z_\text{D} h_\text{D}$ bend upwards.
This is because in the limit $g_\text{D} \to g_\text{D}^\text{(min)}$, one has to lower the abundance of $\psi$ to ever smaller values in order to match the total DM abundance, implying increasingly larger values of $g_\psi$. [^7] Thus, for fixed $m_{\psi}$, the resonance condition in this limit is satisfied for increasingly larger values of $m_{Z_\text{D}}$.

![Constraints and regions of significant DM self-interaction cross section in the $m_{Z_\text{D}}$–$m_\psi$ plane for $g_\text{D} = 10^{-3}$ (upper left panel), $g_\text{D} = 5 \times 10^{-3}$ (upper right panel), $g_\text{D} = 10^{-2}$ (lower left panel) and $g_\text{D} = 10^{-1}$ (lower right panel). The coupling $g_\psi$ is fixed to reproduce the relic density where possible. As in Fig. \[fig:gDmZDConstraints\], in the orange shaded regions one has $\Omega_\text{DM} h^2 > 0.12$, while the blue and red shaded regions indicate which parts of the parameter space are excluded by CMB constraints on energy injection from annihilation of $\psi$ and $Z_\text{D}$, respectively. In addition, we show in light (dark) green the combination of parameters leading to a self-interaction cross section of $\psi$ at the scale of dwarf galaxies in the range $0.1\, \text{cm}^2/\text{g} < \sigma_\text{T}/m_\psi < 1 \,\text{cm}^2/\text{g}$ ($1 \,\text{cm}^2/\text{g} < \sigma_\text{T}/m_\psi < 10\, \text{cm}^2/\text{g}$). The bound $\sigma_\text{T}/m_\psi \lesssim 1\, \text{cm}^2/\text{g}$ on the scale of galaxy clusters is satisfied in all of the parameter space shown in this figure.[]{data-label="fig:mZDmNConstraints"}](./figs/mZDmNConstraints_mhD1_5MeV_gD0_001.pdf "fig:") ![](./figs/mZDmNConstraints_mhD1_5MeV_gD0_005.pdf "fig:")\
![](./figs/mZDmNConstraints_mhD1_5MeV_gD0_01.pdf "fig:") ![](./figs/mZDmNConstraints_mhD1_5MeV_gD0_1.pdf "fig:")

Viability of significant dark matter self-interactions
------------------------------------------------------

Finally, in Fig. \[fig:mZDmNConstraints\] we present our results in the parameter space spanned by the masses $m_{Z_\text{D}}$ and $m_{\psi}$ of the two DM particles. From top left to bottom right, the four panels correspond to $g_\text{D} = 10^{-3}$, $5 \times 10^{-3}$, $10^{-2}$ and $10^{-1}$.
Again, we fix $m_{h_\text{D}} = \unit[1.5]{MeV}$ and determine $g_\psi$ in each point of the parameter space by requiring the total DM density to be equal to the observed value. As in Fig. \[fig:gDmZDConstraints\], in the orange shaded regions the density of $Z_\text{D}$ is so large that $\Omega_{\text{DM}} h^2 > 0.12$ for all values of $g_\psi$. The blue and red shaded regions denote which combinations of parameters are excluded by the CMB constraint on energy injection from the annihilation of $\psi$ and $Z_\text{D}$, respectively.[^8] In addition, we show in light and dark green the regions of parameter space leading to a self-interaction cross section of $\psi$ at the scale of dwarf galaxies in the range of $0.1\, \text{cm}^2/\text{g} < \sigma_\text{T}/m_\psi < 1 \,\text{cm}^2/\text{g}$ and $1 \,\text{cm}^2/\text{g} < \sigma_\text{T}/m_\psi < 10\, \text{cm}^2/\text{g}$, respectively. As outlined in section \[sec:selfinteractions\], those values of $\sigma_\text{T}/m_\psi$ can potentially address the shortcomings of collisionless cold DM at small scales. On the other hand, the bound $\sigma_\text{T}/m_\psi \lesssim 1\,\text{cm}^2/\text{g}$ on the scale of galaxy clusters as discussed in section \[sec:selfinteractions\] is satisfied for the complete range of parameters shown in Fig. \[fig:mZDmNConstraints\], and is thus not visible in the plots. From the upper left panel of Fig. \[fig:mZDmNConstraints\] (corresponding to $g_\text{D} = 10^{-3}$) it follows that for sufficiently small values of $g_\text{D}$, all of the parameter space leading to the interesting range of DM self-interaction cross sections at the scale of dwarf galaxies is excluded by CMB constraints on energy injection from the annihilation process $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$. 
As already discussed above, this is a consequence of $Z_\text{D}$ contributing in a non-negligible way to the observed amount of DM for small values of $g_\text{D}$; concretely, we find $\Omega_{Z_\text{D}}/\Omega_\text{DM} \gtrsim 0.2$ for $g_\text{D} = 10^{-3}$. On the other hand, the lower right panel of Fig. \[fig:mZDmNConstraints\] corresponding to $g_\text{D} = 0.1$ shows that if $g_\text{D}$ is sufficiently large, the bounds from the annihilation of $Z_\text{D}$ are irrelevant, but then most of the parameter space leading to the desired values of the self-interaction cross section $\sigma_\text{T}/m_\psi$ is excluded by CMB constraints arising from the annihilation process $\psi \bar \psi \to Z_\text{D} h_\text{D}$. However, for intermediate values of the gauge coupling, such as $g_\text{D} = 10^{-2}$ shown in the lower left panel of Fig. \[fig:mZDmNConstraints\], we indeed find regions in parameter space leading to $1 \,\text{cm}^2/\text{g} < \sigma_\text{T}/m_\psi < 10\, \text{cm}^2/\text{g}$ on the scale of dwarf galaxies and $\sigma_\text{T}/m_\psi < 1 \,\text{cm}^2/\text{g}$ on the scale of galaxy clusters, while being consistent with the CMB bounds on the energy injection from the annihilation of $\psi$ and $Z_\text{D}$. Concretely, for $g_\text{D} = 10^{-2}$ this requires $m_{Z_\text{D}} \lesssim \unit[10]{MeV}$,[^9] as well as a combination of $m_\psi$ and $m_{Z_\text{D}}$ sufficiently far away from one of the resonances corresponding to the narrow blue shaded regions in the plot. Let us remark again that even though for large values of $m_\psi$ the resonances are extremely dense in parameter space, the regions in between the resonance peaks are not excluded by CMB observations. 
Conclusions {#sec:conclusions}
===========

After years of theoretical and experimental efforts aiming at a better understanding of the astrophysical behaviour of DM at small scales, self-interacting DM remains one of the most compelling explanations for the apparent discrepancies found between observations and $N$-body simulations of collisionless cold DM. Realising the desired self-interaction cross section $\sigma_\text{T}/m_\psi \simeq 1\,\text{cm}^2/\text{g}$ within a perturbative scenario of weak-scale DM requires the presence of a light mediator with a mass of $\unit[(0.1 - 100)]{MeV}$. However, two of the most basic incarnations of this general setup, a fermionic DM candidate coupled to an unstable scalar or vector mediator, are strongly disfavoured by the combination of data from direct detection experiments, CMB constraints on energy injection during recombination, as well as BBN constraints on late-time decaying particles. In this article, we considered a scenario in which a *stable vector mediator* $Z_\text{D}$ is responsible for the self-interactions of the fermionic DM particle $\psi$ [@Ma:2017ucp]. This immediately saves the model from CMB constraints on the annihilation process $\psi \bar \psi \to Z_\text{D} Z_\text{D}$. In order to suppress the cosmological abundance of the vector mediator to a level compatible with observations, we have introduced one more particle in the dark sector, a dark Higgs boson $h_\text{D}$ which is assumed to be lighter than $Z_\text{D}$. Besides being the natural by-product of the spontaneous breaking of a dark $U(1)$ gauge symmetry giving rise to the mass of the vector mediator, the dark Higgs opens up the annihilation channel $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$, which we have shown can easily be efficient enough for $Z_\text{D}$ to constitute only a subdominant fraction of the observed DM.
However, this setup is also subject to constraints from the CMB: the annihilation processes $\psi \bar \psi \to Z_\text{D} h_\text{D}$ as well as $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$ together with the subsequent decay of the dark Higgs can lead to significant energy injection during recombination. Interestingly, we find that these two processes constrain complementary parts of the model parameter space, with the former being important only for sufficiently large values of the dark gauge coupling $g_\text{D}$ of the dark Higgs boson, and the latter for considerably smaller values. Most importantly, our results show that for a broad range of DM masses $m_\psi$ and $m_{Z_\text{D}}$, intermediate values of the gauge coupling $g_\text{D}$ ranging from $\sim 5 \times 10^{-3}$ to $\sim 10^{-1}$ are compatible with CMB constraints. Furthermore, we have discussed the constraints arising from the late-time decays of the thermally produced dark Higgs bosons. In order to evade the stringent bounds from CMB spectral distortions, the dark Higgs has to decay with a lifetime $\tau_{h_\text{D}} \lesssim \unit[10^5]{s}$, implying a mass $m_{h_\text{D}} > 2 m_e$. We have also discussed the possible impact of our scenario on the primordial abundances of light nuclei. For sufficiently small masses $m_{h_\text{D}} \lesssim \unit[4]{MeV}$, the decay products of the dark Higgs are not energetic enough to photo-disintegrate even the most weakly bound nucleus (deuterium), and consequently there are no BBN constraints arising from late-time changes of the nuclear abundances. In addition, by setting the scalar coupling which is responsible for the mixing of the dark and SM Higgs boson to a value below $\simeq 4 \times 10^{-4}$, the dark and visible sectors thermally decouple before the QCD phase transition, leading to a suppressed value of $\Delta N_\text{eff} \lesssim 0.27$ associated with the presence of $Z_\text{D}$ and $h_\text{D}$ in the thermal bath.
Given all systematic uncertainties, this additional contribution to the energy density during BBN is still compatible with observations of primordial abundances. Finally, we investigated whether the parts of parameter space which are compatible with all these constraints can lead to the desired range of values of the self-interaction cross section of DM at small scales. Indeed, we find that for a gauge coupling $g_\text{D} \simeq 10^{-2}$, it is possible to obtain $1\,\text{cm}^2/\text{g} \lesssim \sigma_\text{T}/m_\psi \lesssim 10\,\text{cm}^2/\text{g}$ at the scale of dwarf galaxies and $\sigma_\text{T}/m_\psi \lesssim 1\,\text{cm}^2/\text{g}$ at the scale of galaxy clusters, while simultaneously being consistent with all CMB constraints on late-time energy injection as well as with BBN observations. In summary, our results show that if the scenario of DM interacting via an MeV-scale vector mediator is (minimally) extended by a dark Higgs boson breaking the dark gauge symmetry, it is indeed possible to restore the phenomenological viability of this setup in addressing the small-scale problems of the standard cold DM paradigm. Interestingly, the allowed range of parameters is already significantly narrowed down by current CMB and BBN observations, and could be further probed by future improvements of upper limits on the DM annihilation cross section at late times. In fact, the recent EDGES observation of an absorption feature in the 21 cm spectrum [@Bowman:2018yin], if confirmed, might already be able to supersede the CMB constraints on the DM annihilation cross section [@DAmico:2018sxd; @Liu:2018uzy; @Cheung:2018vww]. Depending on the strength of the Sommerfeld enhancement at the relevant redshift $z \simeq 17$, the idea of DM self-interactions induced by the exchange of a light vector mediator as discussed in this article might thus be further probed in the near future.
We thank Camilo Garcia-Cely for useful discussions and Felix Kahlhoefer for valuable comments on the manuscript. This work is supported by the German Science Foundation (DFG) under the Collaborative Research Center (SFB) 676 Particles, Strings and the Early Universe as well as the ERC Starting Grant ‘NewAve’ (638528).

Full Lagrangian {#app:FullLagrangian}
===============

In this appendix, we provide details of the Lagrangian  after the breaking of the dark and SM gauge symmetries by means of eq. . The portal term $\propto \lambda_{h \text{D}}$ appearing in eq.  leads to a mixing of the scalar degrees of freedom $H_\text{D}$ and $H$. We define the mass eigenstates $h_\text{D}$ and $h$ via $$\begin{aligned} h_\text{D} &= - H \sin \theta + H_\text{D} \cos \theta \, ,\nonumber \\ h &= H \cos \theta + H_\text{D} \sin \theta \,,\end{aligned}$$ with the mixing angle $\theta$ given by $$\theta \simeq \lambda_{h\text{D}} v_\text{D} v_h/m_h^2 \,,$$ assuming $\lambda_{h\text{D}} \ll 1$ and $m_{h_\text{D}} \ll m_h$.
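As a quick numerical sanity check of this leading-order expression, one can compare it against the exact diagonalisation of the $2\times 2$ scalar mass matrix, whose off-diagonal entry is $\lambda_{h\text{D}} v_h v_\text{D}$ at leading order. The parameter values below are purely illustrative and not taken from the text:

```python
import math

# Illustrative values in GeV; lam and v_D are assumptions for this sketch.
lam, v_h, v_D = 1e-4, 246.0, 10.0
m_h, m_hD = 125.0, 0.005

# Off-diagonal entry of the scalar mass-squared matrix in the (H, H_D) basis.
off = lam * v_h * v_D

# Exact 2x2 mixing angle: tan(2*theta) = 2*off / (m_h^2 - m_hD^2).
theta_exact = 0.5 * math.atan2(2 * off, m_h**2 - m_hD**2)

# Small-mixing approximation from the text: theta ~ lam * v_D * v_h / m_h^2.
theta_approx = lam * v_D * v_h / m_h**2

# The two agree to well below a percent for lam << 1 and m_hD << m_h.
assert abs(theta_exact - theta_approx) / theta_approx < 1e-2
```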
Trading the parameters $\mu_\text{D}$ and $\mu_h$ of the Higgs potential for the physical masses $m_{h_\text{D}}$ and $m_h$, and replacing $v_\text{D}$ by $m_{Z_\text{D}}/g_\text{D}$, the scalar potential including only the leading terms in an expansion in $\lambda_{h\text{D}}$ reads $$\begin{aligned} V_{\text{broken}} (h, h_\text{D}) &\simeq \frac12 m_h^2 h^2 + \frac{m_h^2}{2 v_h} h^3 + \frac{m_h^2}{8 v_h^2} h^4 + \frac12 m_{h_\text{D}}^2 h_\text{D}^2 + \frac{g_\text{D} m_{h_\text{D}}^2}{2 m_{Z_\text{D}}} h_\text{D}^3 + \frac{g_\text{D}^2 m_{h_\text{D}}^2}{8 m_{Z_\text{D}}^2} h_\text{D}^4 \nonumber \\ &\quad +\frac12 \lambda_{h\text{D}} v_h h h_\text{D}^2 - \frac{\lambda_{h\text{D}} m_{Z_\text{D}}}{g_\text{D}} h^2 h_\text{D} + \frac14 \lambda_{h\text{D}} h^2 h_\text{D}^2 \nonumber \\ &\quad + \frac{\lambda_{h\text{D}} g_\text{D} v_h m_{h_\text{D}}^2}{2 m_{Z_\text{D}} m_h^2} h h_\text{D}^3 - \frac{\lambda_{h\text{D}} m_{Z_\text{D}}}{2 g_\text{D} v_h} h^3 h_\text{D} \,.\end{aligned}$$ The full Lagrangian after symmetry breaking is then finally given by $$\begin{aligned} \mathcal{L} &\simeq \mathcal{L}_{\widetilde{\text{SM}}} \big|_{H \rightarrow h - \theta h_\text{D}} - \frac{1}{4} F^{\mu\nu}_\text{D} F_{\mu\nu}^D + \frac12 m_{Z_\text{D}}^2 Z_\text{D}^\mu Z_{D\mu} + i \bar{\psi} \gamma_\mu \partial^\mu \psi + g_\psi \bar{\psi} \gamma_\mu Z_\text{D}^\mu \psi - m_\psi \bar{\psi} \psi \nonumber \\ &\quad\quad + g_\text{D} m_{Z_\text{D}} (h_\text{D} + \theta h) Z_\text{D}^\mu Z_{D\mu} + \frac12 g_\text{D}^2 (h_\text{D} + \theta h)^2 Z_\text{D}^\mu Z_{D\mu} \nonumber \\ &\quad\quad+\frac12 (\partial^\mu h_\text{D}) (\partial_\mu h_\text{D}) - V_{\text{broken}} (h, h_\text{D}) \,. \label{eq:fullL_afterSB}\end{aligned}$$ Notice that here we neglect the modifications proportional to $\theta^2$ of couplings of $h$ to SM fields. 
The couplings of $h_\text{D}$ to the SM gauge bosons $Z$ and $W$, as well as to the SM fermions $f$ are given by $$\begin{gathered} \mathcal{L}_{\widetilde{\text{SM}}} \big|_{H \rightarrow h - \theta h_\text{D}} \supset \theta \left( \sum_f \frac{m_f}{v_h} \bar f f h_\text{D} \right) \\ + \frac{\theta m_Z^2}{2 v_h^2} \times \left( -2 v_h h_\text{D} - 2 h h_\text{D} + \theta h_\text{D}^2\right) \left( Z_\mu Z^\mu + 2 \cos^2 \theta_W W_\mu^+ W^{\mu-}\right),\end{gathered}$$ with $\theta_W$ the Weinberg angle.

Relic density calculation {#app:relicdensity}
=========================

In this appendix we describe in detail our method for calculating the relic abundances of the two DM particles $\psi$ and $Z_\text{D}$ for a given point in parameter space. In particular, we discuss the treatment of Sommerfeld enhancement during freeze-out, the importance of DM conversion and semi-annihilation processes, as well as the chemical decoupling of the dark and visible sector during or after DM freeze-out. We implemented the Lagrangian of the model with `FeynRules v2.3.24` [@Alloul:2013bka] and generated `CalcHEP` [@Belyaev:2012qa] model files to be imported into `MicrOMEGAs v4.3.5` [@Belanger:2006is; @Belanger:2014vza]. However, we find that due to the large mass hierarchy between the initial and final state particles, e.g. in the annihilation process $\psi \bar \psi \rightarrow Z_\text{D} Z_\text{D}$, the calculation of the annihilation cross sections using `CalcHEP` faces numerical problems related to the polarisation sums over the light massive vector particles.[^10] We therefore compute all relevant annihilation cross sections analytically and pass them to `MicrOMEGAs` for further use in the numerical solution of the Boltzmann equations.
In doing so, we also take into account the Sommerfeld enhancement in the annihilation processes $\psi \bar \psi \rightarrow Z_\text{D} Z_\text{D}$ and $\psi \bar \psi \rightarrow Z_\text{D} h_\text{D}$, arising from the multiple exchange of the light vector boson $Z_\text{D}$ in the initial state [@sommerfeld]. In practice, we compute the $s$- and $p$-wave contributions to the corresponding annihilation cross sections at tree level, and multiply them with enhancement factors $S_s$ and $S_p$, respectively. Following [@Cassel:2009wt; @Iengo:2009ni; @Slatyer:2009vg], we approximate the Yukawa potential generated by the exchange of $Z_\text{D}$ by a Hulthén potential, leading to $$\begin{aligned} S_s &= \frac{\pi}{a} \frac{\sinh (2 \pi a c)}{\cosh (2 \pi a c) - \cos (2 \pi \sqrt{c - a^2 c^2})} \; , \label{eq:Ss}\\ S_p &= \frac{(c-1)^2 + 4 \, a^2 c^2}{1 + 4 \, a^2 c^2} \times S_s \; ,\end{aligned}$$ where $a = 2 \pi v/g_\psi^2$ and $c = 3 g_\psi^2 m_\psi/(2 \pi^3 m_{Z_\text{D}})$. The Boltzmann equations for the number densities $n_\psi$ and $n_{Z_\text{D}}$ are then given by $$\begin{aligned} \left( \frac{\text{d}n_\psi}{\text{d}t} + 3 H n_\psi \right)\bigg|_{T \gg m_{Z_\text{D}}} \simeq &- \left( \langle \sigma v\rangle_{\psi \bar \psi \to Z_\text{D} Z_\text{D}} + \langle \sigma v\rangle_{\psi \bar \psi \to Z_\text{D} h_\text{D}} \right) \left( n_\psi^2 - \overline{n}_\psi^2\right)\,, \label{eq:Boltzmann_npsi}\\[0.3cm] \left( \frac{\text{d}n_{Z_\text{D}}}{\text{d}t} + 3 H n_{Z_\text{D}} \right) \bigg|_{T \lesssim m_{Z_\text{D}} \ll m_\psi} \simeq &- \langle \sigma v\rangle_{Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}} \left( n_{Z_\text{D}}^2 - \overline{n}_{Z_\text{D}}^2\right) \nonumber\\ &+ \left( \langle \sigma v\rangle_{\psi \bar \psi \to Z_\text{D} Z_\text{D}} + \frac12 \langle \sigma v\rangle_{\psi \bar \psi \to Z_\text{D} h_\text{D}} \right) n_{\psi}^2 \nonumber \\ &-\frac12 \langle \sigma v\rangle_{\psi Z_\text{D} \to \psi h_\text{D}} \left(n_{Z_\text{D}} 
- \overline{n}_{Z_\text{D}}\right) n_\psi \,, \label{eq:Boltzmann_nZD}\end{aligned}$$ with $\overline{n}_\psi$ and $\overline{n}_{Z_\text{D}}$ denoting number densities in equilibrium, and $H$ being the Hubble rate. For the sake of the following discussion, in these expressions (but not in our numerical calculation[^11]) we have set $\overline{n}_{Z_\text{D}} \simeq n_{Z_{\text{D}}}$ during freeze-out of $\psi$, as well as $\overline{n}_\psi \simeq 0$ during the freeze-out process of $Z_{\text{D}}$. Under these assumptions, which are fulfilled to good accuracy as long as $m_{Z_\text{D}} \ll m_\psi$, the Boltzmann equation for $n_\psi$ takes the same form as in the standard scenario of a single DM particle and can be solved independently of the evolution of $n_{Z_\text{D}}$. On the other hand, the final abundance of the lighter DM particle $Z_\text{D}$ can be significantly affected by the additional terms in eq.  involving the heavy DM particle $\psi$ (see also [@Ahmed:2017dbb]). For the case of the annihilation processes $\psi \bar \psi \to Z_{\text{D}} Z_{\text{D}} $ and $\psi \bar \psi \to Z_{\text{D}} h_{\text{D}}$, this can be qualitatively understood by considering the ratio of the second and first term in the Boltzmann equation, evaluated at the temperature $T_f$ where the annihilation process $Z_{\text{D}} Z_{\text{D}} \to h_{\text{D}} h_{\text{D}}$ falls out of equilibrium: $$\begin{aligned} \kappa_{\psi \bar \psi} &\equiv \frac{\left( \langle \sigma v\rangle_{\psi \bar \psi \to Z_\text{D} Z_\text{D}} + \frac12 \langle \sigma v\rangle_{\psi \bar \psi \to Z_\text{D} h_\text{D}} \right) \cdot n_{\psi}^2(T_f)}{\langle \sigma v\rangle_{Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}} \cdot n_{Z_\text{D}}^2(T_f)} \nonumber \\ &= \frac{\left( \langle \sigma v\rangle_{\psi \bar \psi \to Z_\text{D} Z_\text{D}} + \frac12 \langle \sigma v\rangle_{\psi \bar \psi \to Z_\text{D} h_\text{D}} \right) \cdot Y_{\psi}^2(T_f)}{\langle \sigma v\rangle_{Z_\text{D} Z_\text{D} 
\to h_\text{D} h_\text{D}} \cdot Y_{Z_\text{D}}^2(T_f)} \,, \label{eq:kappa}\end{aligned}$$ where in the second line we replaced the number densities $n$ by the yields $Y = n/s$. If $\kappa_{\psi \bar \psi} \gtrsim 1$, the standard calculation for the freeze-out of $Z_\text{D}$ only taking into account the annihilation process $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$ fails, as the residual annihilations of $\psi$ contribute significantly to the production of $Z_\text{D}$ around the freeze-out temperature $T_f$. In regions of parameter space where $\psi$ is the dominant component of DM, the numerator of eq.  can be estimated by setting the total annihilation cross section of $\psi$ to $\langle \sigma v \rangle_{\text{thermal}} \simeq 4.4 \times 10^{-26} \, \text{cm}^3\,\text{s}^{-1}$, and the yield $Y_\psi$ to the value corresponding to $\Omega_\psi h^2 \simeq 0.12$. Furthermore, an approximate expression for $Y_{Z_\text{D}}(T_f)$ can be obtained from the semi-analytical solution to the standard one-particle Boltzmann equation [@Kolb:1990vq]: $$\begin{aligned} Y_{Z_\text{D}}(T_f) &\simeq \frac{3.79}{\sqrt{g_\star} M_\text{P} m_{Z_\text{D}} \sigma_0} \log \left( \frac{0.11}{\sqrt{g_\star}} M_\text{P} m_{Z_\text{D}} \sigma_0 \right) \,,\end{aligned}$$ where $g_\star \simeq 10.75$ denotes the SM degrees of freedom at $T_f$, $M_\text{P}$ is the Planck mass, and $\sigma_0 \equiv (\sigma v)^{v \rightarrow 0}_{Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}}$. Finally, after inserting the analytical expression for $\sigma_0$ given in eq.  we obtain $$\begin{aligned} \kappa_{\psi \bar \psi} \simeq 0.029 \cdot \left( \frac{g_\text{D}}{10^{-2}} \right)^4 \left( \frac{m_\psi}{\text{GeV}} \right)^{-2} \cdot \left( 1 + 0.16 \log \left[ \frac{g_\text{D}}{10^{-2}} \right] - 0.040 \log \left[ \frac{m_{Z_\text{D}}}{\text{MeV}} \right] \right)^{-2}\,, \label{eq:kappa_eval}\end{aligned}$$ assuming $m_{h_\text{D}} \ll m_{Z_\text{D}}$. 
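This estimate is straightforward to evaluate numerically; the sketch below (the helper name is ours, and natural logarithms are assumed in the bracket) reproduces the parameter scaling discussed in the following:

```python
import math

def kappa_estimate(g_D, m_psi_GeV, m_ZD_MeV):
    """Approximate ratio of the psi-annihilation source term to the
    Z_D self-annihilation term at Z_D freeze-out, following the analytic
    estimate above (natural logarithms assumed)."""
    prefactor = 0.029 * (g_D / 1e-2) ** 4 / m_psi_GeV ** 2
    bracket = (1 + 0.16 * math.log(g_D / 1e-2)
                 - 0.040 * math.log(m_ZD_MeV))
    return prefactor / bracket ** 2

# For m_psi = 1 GeV and m_ZD = 40 MeV, kappa approaches O(1)
# around g_D ~ 2e-2, where the residual annihilations of psi start
# to matter for the abundance of Z_D.
print(kappa_estimate(1e-2, 1.0, 40.0))  # ~0.04
print(kappa_estimate(2e-2, 1.0, 40.0))  # ~0.5
```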
Clearly, for sufficiently large $g_\text{D}$ and small $m_\psi$ one has $\kappa_{\psi \bar \psi} \gtrsim 1$, indicating that the annihilation processes of the heavy DM particle $\psi$ should indeed be taken into account in the calculation of the relic abundance of $Z_\text{D}$. ![Relic abundance of $Z_\text{D}$ as a function of $g_\text{D}$, assuming $m_\psi = \unit[1]{GeV}$ and $m_{Z_\text{D}} = \unit[40]{MeV}$. The red dotted curve only takes into account the annihilation $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$, the blue dashed curve in addition the self-annihilation of $\psi$, and the black solid curve corresponds to the full calculation including all terms of eq. .[]{data-label="fig:OmegaPlot"}](./figs/OmegaPlot_1.pdf) These simple analytical considerations are confirmed using our full numerical approach of solving the Boltzmann equation via `MicrOMEGAs`. In Fig. \[fig:OmegaPlot\] we show the relic abundance of $Z_\text{D}$ as a function of the coupling $g_\text{D}$, fixing for concreteness $m_\psi = \unit[1]{GeV}$ and $m_{Z_\text{D}} = \unit[40]{MeV}$. The red dotted curve corresponds to a calculation where only the annihilation process $Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}$ is taken into account; as expected, the corresponding abundance scales as $\Omega_{Z_\text{D}} h^2 \propto 1/\langle \sigma v\rangle_{Z_\text{D} Z_\text{D} \to h_\text{D} h_\text{D}} \propto g_\text{D}^{-4}$. On the other hand, the blue dashed curve shows the abundance obtained by additionally including the terms in the Boltzmann equation accounting for the self-annihilation of $\psi$. The two calculations deviate significantly once $g_\text{D} \gtrsim 2\times 10^{-2}$, well compatible with the simple estimate based on eq. . Lastly, the solid black curve furthermore takes into account the conversion process $\psi Z_\text{D} \to \psi h_\text{D}$, which impacts the calculation mainly for intermediate values of $g_\text{D}$. 
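The Hulthén-approximation enhancement factors $S_s$ and $S_p$ entering these calculations can be sketched as follows (helper names are ours); `cmath` is used so that the square root remains well-defined when $a^2 c > 1$:

```python
import cmath
import math

def S_s(a, c):
    """s-wave Sommerfeld factor for a Hulthen potential,
    with a = 2*pi*v/g_psi**2 and c = 3*g_psi**2*m_psi/(2*pi**3*m_ZD)."""
    x = 2 * math.pi * a * c
    cos_term = cmath.cos(2 * math.pi * cmath.sqrt(c - a**2 * c**2)).real
    return (math.pi / a) * math.sinh(x) / (math.cosh(x) - cos_term)

def S_p(a, c):
    """p-wave Sommerfeld factor."""
    return ((c - 1) ** 2 + 4 * a**2 * c**2) / (1 + 4 * a**2 * c**2) * S_s(a, c)

# Resonances occur for c = n**2 (n = 1, 2, ...); at small a (small velocity)
# the on-resonance s-wave enhancement grows like 1/a**2.
print(S_s(1e-3, 1.0))  # on the n = 1 resonance: ~1e6
print(S_s(1e-2, 0.5))  # off resonance: ~7.8
```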
Finally, we take into account the impact of the thermal decoupling of the visible and dark sector on the abundances of $\psi$ and $Z_\text{D}$. As explained in section \[sec:BBN\], in order to evade the bounds from BBN and CMB spectral distortions as much as possible, we fix $\lambda_{h \text{D}} \simeq 4 \times 10^{-4}$ in our analysis. Then, as shown in Fig. \[fig:hDhD\_rates\], the dark and visible sector decouple at $T_\text{dec} \simeq \unit[500]{MeV}$. Assuming separate entropy conservation in both sectors for $T < T_\text{dec}$, the dark sector temperature $T_\text{D}$ as a function of the photon temperature $T$ evolves according to $$\xi(T) \equiv \frac{T_\text{D}(T)}{T}=\left(\frac{g_{\ast S} (T)}{g_{\ast S} (T_\text{dec})} \, \frac{g_{\ast S}^\text{D} (T_\text{dec})}{g_{\ast S}^\text{D} (T_\text{D})}\right)^\frac{1}{3} \,,$$ where $g_{\ast S}(T)$ and $g_{\ast S}^\text{D}(T_\text{D})$ denote the entropy degrees of freedom in the visible and dark sector at a given temperature. For the range of particle masses considered in our analysis, $Z_\text{D}$ always freezes out after the decoupling of the two sectors, and so does $\psi$ for $m_\psi \lesssim \unit[12]{GeV}$. Following [@Feng:2008mu], we take this into account by applying separate correction factors $\xi(T_\text{fo})$ to the relic abundances of $\psi$ and $Z_\text{D}$ obtained from a calculation assuming equal temperatures in both sectors, where $T_\text{fo}$ is the freeze-out temperature of $\psi$ or $Z_\text{D}$, respectively. Note that we implicitly assume $h_\text{D}$ to be a relativistic degree of freedom to ensure $g_{\ast S}^\text{D} (T_\text{D}) > 0$; possible corrections to the abundance of $Z_\text{D}$ in situations where all particles in the dark sector have become non-relativistic during freeze-out (see e.g. [@Pappadopulo:2016pkp]) are left for future work. [^1]: This expression differs from the one given in Ref. [@Ma:2017ucp]. 
[^2]: This bound can be circumvented if the dark Higgs is stable on cosmological timescales and sufficiently light such that it does not contribute significantly to the present-day density of DM. In fact all CMB bounds from late time energy injection will be evaded in this case. In the following we do not further consider this part of the parameter space, and focus on the case where $m_{h_\text{D}}$ and $m_{Z_\text{D}}$ are of similar order of magnitude. [^3]: For $T \lesssim 5\,$GeV, the light SM quarks are no longer the appropriate degrees of freedom in the thermal bath. Following [@Cline:2013gha], in this regime the annihilation cross section for $h_\text{D} h_\text{D} \to \text{SM SM}$ at a given center-of-mass energy $\sqrt{s}$ can be expressed in terms of the width of a (hypothetical) scalar particle with mass $m_\star = \sqrt{s}$, which in turn we take from [@Alekhin:2015byh]. [^4]: This choice of $\lambda_{h\text{D}}$ leads to an invisible decay width $\Gamma_{h \rightarrow h_\text{D} h_\text{D}} = \lambda_{h\text{D}}^2 v_h^2/(16 \pi m_h) \simeq 3.8 \times 10^{-4} \times \Gamma_h^\text{tot}$ of the SM Higgs, which is well below the constraint from the latest LHC data [@Khachatryan:2016whc]. Furthermore, depending on the vev $v_\text{D}$ of the scalar field $\sigma$, the corresponding mixing angle $\theta$ of the dark Higgs boson can be in the range where it might significantly alter the duration of the neutrino pulse from SN1987a [@Raffelt:1987yu; @Krnjaic:2015mbs]. However, in view of the still large systematic uncertainties inherent in deriving the corresponding bounds, we do not consider them in the following discussion; a dedicated analysis of this point would certainly be interesting. 
[^5]: Both the preferred range for $\sigma_\text{T}/m_\psi$ at small scales as well as the upper bound at scales of galaxy clusters have been derived assuming that all of the observed DM is self-interacting, while in our scenario $Z_\text{D}$ does not experience significant self-interactions. However, as we will see in section \[sec:Results\], in all regions of the parameter space where the self-interaction cross section of $\psi$ is within the range of interest, one has $\Omega_{Z_\text{D}} h^2 \ll \Omega_\psi h^2 \simeq 0.12$, and hence the astrophysical behaviour of DM is dominated by the properties of $\psi$ alone.

[^6]: We note that for parameter points precisely on top of one of the Sommerfeld resonances, the calculation of the DM relic abundance might be affected by late-time annihilations not taken into account in our analysis [@vandenAarssen:2012ag; @Binder:2017lkj].

[^7]: The required values of $g_\psi$ can become non-perturbative once $\Omega_{Z_\text{D}}$ is close to the observed DM relic density. While this may lead to a Landau pole below the Planck scale, it does not remove additional parts of viable parameter space, as the regions concerned are already robustly excluded by the CMB constraints on $Z_\text{D}$ annihilation.

[^8]: The small discontinuity of the orange and red shaded region at $m_{\psi} \simeq \unit[12]{GeV}$ visible in some of the panels of Fig. \[fig:mZDmNConstraints\] is an artefact of our approximate treatment of the impact of the chemical decoupling of the visible and dark sector on the relic density of $Z_\text{D}$, c.f. appendix \[app:relicdensity\]. A more precise treatment would lead to a smooth transition between the regions of different $m_\psi$, without affecting any of our conclusions.

[^9]: In these regions the values of $g_\psi$ are always within the perturbative regime, $g_\psi \in [0.01,0.5]$.

[^10]: See appendix C.2 of Ref. [@Belyaev:2012qa] for a detailed discussion of this point.
[^11]: `MicrOMEGAs` solves the full Boltzmann equations in the temperature interval `[Tstart,Tend]`. In order to make sure that the freeze-out of $Z_\text{D}$ occurs within this range of temperatures even for the smallest values of $m_{Z_\text{D}}$ considered in this work, we lower `Tend` from the default value $ \unit[10^{-3}]{GeV}$ to $ \unit[10^{-6}]{GeV}$.
--- abstract: 'In this work we deal with parameter estimation in a latent variable model, namely the multiple-hidden i.i.d. model, which is derived from multiple alignment algorithms. We first provide a rigorous formalism for the homology structure of $k$ sequences related by a star-shaped phylogenetic tree in the context of multiple alignment based on indel evolution models. We discuss possible definitions of likelihoods and compare them to the criterion used in multiple alignment algorithms. Existence of two different Information divergence rates is established and a divergence property is shown under additional assumptions. This would yield consistency for the parameter in parametrization schemes for which the divergence property holds. We finally extend the definition of the multiple-hidden i.i.d. model and the results obtained to the case in which the sequences are related by an arbitrary phylogenetic tree. Simulations illustrate different cases which are not covered by our results.' author: - 'Ana Arribas-Gil [^1]' bibliography: - 'biblio\_mult\_align.bib' title: 'Parameter Estimation in multiple-hidden i.i.d. models from biological multiple alignment' ---

Introduction
============

Biological sequence alignment is one of the fundamental tasks in bioinformatics. Sequences are aligned to identify regions of similarities that can be used to determine structural and functional motifs in a sequence, to infer gene functions or to derive evolutionary relationships between sequences. Aligning two sequences, which are supposed to descend from a common ancestor, consists in retrieving the places where substitutions, insertions and deletions have occurred during evolution. The first alignment methods, namely score-based methods, used dynamic programming algorithms with fixed score parameters to find an optimal alignment (see Durbin [*et al.*]{}, 1998, for an overview).
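As a concrete illustration of such a score-based method, here is a minimal global-alignment dynamic program in the spirit of Needleman–Wunsch; the score parameters are arbitrary placeholders, which is exactly the kind of fixed, hand-chosen quantity whose objectivity is at issue here:

```python
def nw_score(x, y, match=1, mismatch=-1, gap=-2):
    """Optimal global alignment score under fixed, illustrative scores."""
    n, m = len(x), len(y)
    # F[i][j]: best score for aligning x[:i] against y[:j].
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if x[i - 1] == y[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # substitution / match
                          F[i - 1][j] + gap,     # gap in y
                          F[i][j - 1] + gap)     # gap in x
    return F[n][m]

print(nw_score("GATT", "GAT"))  # three matches and one gap: 3*1 - 2 = 1
```

Evolution-model-based methods replace these fixed scores by probabilities derived from an explicit model of substitutions and indels, such as the TKF91 model discussed next.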
But since an alignment aims at reconstructing the evolutionary history of the sequences, choosing these score parameters objectively, so that they have an evolutionary meaning, is an important issue. @TKF1 proposed the first rigorous model of sequence evolution including [*indels*]{} (insertions and deletions), referred to as the TKF91 model. Based on this model, they were the first to provide a maximum likelihood approach to jointly estimate the alignment of a pair of DNA sequences and the evolution parameters. The alignment problem in this context fits into the pair hidden Markov model (pair-HMM), as first described in @Durbin, ensuring the existence of efficient algorithms based on dynamic programming methods to compute the likelihood of two sequences and retrieve an alignment. That is one of the reasons why TKF91-based alignment methods have become popular. Indeed, they have been further developed in @HeinWiuf, @Metzler1, @Metzler2 and @Miklos among others, despite the long-standing lack of theoretical support for the estimation procedures in this framework. @Argamat were the first to study the statistical properties of parameter estimation procedures in pair-HMMs. In recent years these methods have also been extended to the case of multiple alignment. In this context we deal with more than two sequences and we have to take into account the evolutionary relationships between the sequences, which are represented by a phylogenetic tree. Multiple alignment methods applying the TKF91 model on a tree are for instance those of @Steel, @HolmesBruno, @Hein03 and @Hein4. They generalize pair-HMMs to more complex hidden variable models and propose maximum likelihood or Bayesian approaches for the joint estimation of evolution parameters and multiple alignments given a phylogenetic tree.
However, since both the alignment and the phylogenetic tree aim at reconstructing the evolutionary history of the sequences, estimating the alignment from a fixed phylogenetic tree may bias the result. The ideal procedure would consist in jointly estimating alignments and phylogenetic trees from a set of unaligned sequences. This problem has recently been tackled, in the context of indel evolution models, by @Metzler3, @Hein5 and @StatAlign. However, as was long the case for pair-HMMs, no theoretical support is provided for the estimation procedures in any of these contexts. This work is concerned with the study of statistical properties of parameter estimation in latent variable models derived from multiple alignment algorithms where the phylogenetic tree relating the observed sequences is supposed to be known. The paper is organized as follows. In Section 2, we motivate the problem, discuss some models of sequence evolution and describe the homology structure in the context of multiple alignment of a set of sequences related by a star-shaped phylogenetic tree and evolving under the TKF91 model of sequence evolution. In Section 3 we present the multiple-hidden i.i.d. model on a star tree. We discuss possible definitions of likelihoods and compare them with the criterion which is actually considered in multiple alignment algorithms. We analyze the case in which only two sequences are considered to show that our model is consistent with the pair-HMM. In Section 4, we investigate asymptotic properties of estimators under the hidden i.i.d. model for the definitions of likelihoods that we have considered. We first prove the existence of [*Information divergence rates*]{}, which are the differences between the limiting values of the log-likelihoods at the (unknown) true parameter and at another parameter value. We then prove that they are uniquely minimized at the true value of the parameter (divergence property) for some parametrization schemes.
Following classical arguments, this would yield consistency for the parameter in those cases in which the divergence property holds. In Section 5 we extend the definitions of the multiple-hidden i.i.d. model and the results obtained to the general case in which the sequences are related by an arbitrary phylogenetic tree. Finally, in Section 6, we illustrate via some simulations the behavior of the divergence rates in different cases in which the divergence property is not established. The paper ends with a discussion of this work. Motivation: models of sequence evolution and the homology structure =================================================================== In the multiple alignment problem the observations consist of $k$ ($k>2$) sequences $X^1_{1:n_1},..., X^k_{1:n_k}$, where $n_i$ is the length of sequence $i$ and $X^i_{1:n_i}=X^i_1\dots X^i_{n_i}$, with values in a finite alphabet ${\cal A}$ (for instance ${\mathcal{A}}=\{A,C,G,T\}$ for DNA sequences). It is assumed that the sequences are related by a phylogenetic tree, that is, a tree where the nodes represent the sequences and the edges represent the evolutionary relationships between them. The observed sequences are placed at the $k$ leaves of the tree, whereas the inner nodes stand for ancestral (non-observable) sequences. The most ancestral sequence is placed at the root, ${\cal R}$, of the tree. The choice of the root assigns to each edge a direction (from the root to the leaves) and to each inner node its descendant nodes, but since the evolutionary process between the sequences is usually assumed to be time reversible, the placement of the root node is irrelevant (cf. Thatte, 2006). A path from the root to a leaf represents the evolution of the ancestral sequence, through time and through a series of intermediate sequences, leading to the corresponding observed sequence. The evolution on each edge (from its *parent* node to its *child* node) is described by some evolution process.
We assume that the same evolution process works on every edge of the tree. A main hypothesis is that the evolution processes working on two edges with the same *parent* node are independent, i.e. a sequence evolves independently towards each one of its descendants. Models of sequence evolution ---------------------------- Mutations in a sequence during the evolution process can be produced by many different factors. However, there are two evolutionary events that play a major role: substitutions of a nucleotide by a different one in a given position of a sequence, and insertions or deletions of single positions or sequence fragments. The process of substitutions has been studied in depth for years, and is usually taken to be a continuous time Markov chain on the state space of nucleotides (Felsenstein, 2004; Tavaré, 1986). The process of insertions and deletions has not received the same attention and there is more room for discussion. @TKF1 proposed in a pioneering paper the first indel evolution model, and since then many variants have been considered. The importance of this model is that it makes the alignment fit into the concept of pair-HMM, as we have already mentioned. In the pair-HMM for pairwise sequence alignment the indel process and the substitution process are combined to model the whole evolution process. Indeed, the hidden Markov chain corresponds to what we usually call the *bare* alignment, that is, an alignment without specification of the particular nucleotides at each position of the sequences. Conditionally on a realization of this hidden process, the observed sequences are emitted according to the substitution model (see Durbin [*et al.*]{}, 1998, and Arribas-Gil [*et al.*]{}, 2006, for details). So, in the pair-HMM the indel evolution process characterizes the hidden stochastic process of the alignment, whereas the substitution process corresponds to the emission functions of the observed sequences.
As we will see, that is also the case for the multiple alignment model that we study in this paper. Since the asymptotic properties of estimators in such a model are more related to the structure of the hidden process than to the emission functions, which can take a general form (see Arribas-Gil [*et al.*]{}, 2006), we will focus our attention on the indel process. ### The TKF91 model Let us briefly recall how the TKF91 model works on pairwise alignments. This model is formulated in terms of *links* and associated letters. To each *link* is associated a letter that undergoes changes, independently of other letters, according to a reversible substitution process. The insertion and deletion process is described by a birth-death process on these *links*. A *link* and its associated letter are deleted at rate $\mu>0$. While a *link* is present it gives rise to new *links* at rate $\lambda$. A new *link* is placed immediately to the right of the *link* from which it originated, and the associated letter is chosen from the stationary distribution of the substitution process. At the very left of the sequence is a so-called immortal *link* that never dies and gives rise to new *links* at rate $\lambda$. We need the death rate per *link* to exceed the birth rate per *link* for an equilibrium distribution of sequence lengths to exist. Indeed, if $\lambda < \mu$ then the equilibrium distribution of sequence length is geometric with parameter $\lambda / \mu$. Let $p^H_n(t)$ be the probability that a normal *link* survives and has $n$ descendants, including itself, after a time $t$. Let $p^N_n(t)$ be the probability that a normal *link* dies but leaves $n$ descendants after a time $t$. Finally let $p^I_n(t)$ be the probability that an immortal *link* has $n$ descendants, including itself, after a time $t$. Here $H$ stands for homologous, $N$ for non-homologous and $I$ for immortal.
We have: $$\label{TKFprocess} \begin{array}{llll} p^H_n(t) &= & e^{-\mu t}[1-\lambda \beta(t)][\lambda \beta(t)]^{n-1} & \mbox{for } \,\, n\geq 1 \\ p^N_n(t) &= & \mu \beta(t) & \mbox{for }\,\,n=0 \\ &=& [1-e^{-\mu t}-\mu \beta(t)][1-\lambda \beta(t)][\lambda \beta(t)]^{n-1} & \mbox{for } \,\,n\geq 1\\ p^I_n(t) &= & [1-\lambda \beta(t)][\lambda \beta(t)]^{n-1} & \mbox{for } \,\, n\geq 1 \end{array}$$ where $$\beta(t)=\frac{1-e^{(\lambda-\mu)t}}{\mu-\lambda e^{(\lambda-\mu )t}}.$$ Conceptually, $e^{-\mu t}$ is the probability of ancestral residue survival, $\lambda \beta(t)$ is the probability of more insertions given one or more existent descendants and $\kappa(t):=\frac{1-e^{-\mu t}-\mu \beta(t)}{1-e^{-\mu t}}$ is the probability of insertion given that the ancestral residue did not survive. See @TKF1 for details. If we want to investigate the asymptotic properties of parameter estimators we must consider observed sequences of growing lengths. However, this is not possible under the hypothesis of the TKF91 model. Indeed, the ancestral sequence length distribution depends on $\lambda/\mu$, and so, for a given value of these parameters we cannot make the ancestral sequence length tend to infinity. As one would expect (and as we will show later) the lengths of the observed sequences are equivalent to the length of the root sequence, so under this setup we cannot expect to observe infinitely long sequences. Following the ideas in Metzler (2003), we will consider the case in which the TKF91 model can produce long sequences, that is, the case where $\lambda=\mu$. With this configuration, finite-length sequences are to be considered as cut out of much longer sequences between known homologous positions. The length of the ancestral sequence is now considered to be non-random. We will denote by $q^H_n(t)$ and $q^N_n(t)$ the probability distributions of the number of descendants for a normal *link* under these assumptions.
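As a quick numerical sanity check, the two possible fates of a normal *link* in (\[TKFprocess\]) (surviving with $n\geq 1$ descendants, or dying with $n\geq 0$ descendants) must exhaust all probability. The sketch below is our own code, with arbitrarily chosen rates $\lambda<\mu$ and time $t$; the truncation of the geometric tail at $n=2000$ is also ours:

```python
import math

def beta(lam, mu, t):
    """beta(t) from the TKF91 model (requires lambda < mu)."""
    e = math.exp((lam - mu) * t)
    return (1.0 - e) / (mu - lam * e)

def p_H(n, lam, mu, t):
    """P(normal link survives with n descendants, incl. itself), n >= 1."""
    b = beta(lam, mu, t)
    return math.exp(-mu * t) * (1 - lam * b) * (lam * b) ** (n - 1)

def p_N(n, lam, mu, t):
    """P(normal link dies leaving n descendants), n >= 0."""
    b = beta(lam, mu, t)
    if n == 0:
        return mu * b
    return (1 - math.exp(-mu * t) - mu * b) * (1 - lam * b) * (lam * b) ** (n - 1)

lam, mu, t = 0.8, 1.0, 0.5
# The fate of a single normal link is exhaustively described by
# {survives with n >= 1 descendants} and {dies with n >= 0 descendants}:
total = p_N(0, lam, mu, t) + sum(p_H(n, lam, mu, t) + p_N(n, lam, mu, t)
                                 for n in range(1, 2000))
print(total)  # ~ 1.0
```

The same check applies to $p^I_n(t)$, which is simply a geometric distribution on $n\geq 1$.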
We do not need to consider the distribution for the immortal *link* anymore, since now all the positions on the observed sequences are descendants of normal *links*.\ Since $\lim_{\mu \rightarrow \lambda}\beta(t)=\frac{t}{1+\lambda t}$ we get $$\begin{aligned} \label{newprocess} q^H_n(t) =\lim_{\mu \rightarrow \lambda} p^H_n(t) & =& e^{-\lambda t}\frac{1}{1+\lambda t}\left(\frac{\lambda t}{1+\lambda t}\right)^{n-1}\hspace{2.3cm} \mbox{for } \,\, n\geq 1 \nonumber\\ q^N_n(t) =\lim_{\mu \rightarrow \lambda} p^N_n(t) &= & \frac{\lambda t}{1+\lambda t} \hspace{5.4cm} \mbox{for }\,\,n=0 \\ &= & \left(\frac{1}{1+\lambda t}-e^{-\lambda t}\right)\frac{1}{1+\lambda t} \left(\frac{\lambda t}{1+\lambda t}\right)^{n-1} \,\, \mbox{for } \,\,n\geq 1 \nonumber \end{aligned}$$ The main drawback of the TKF91 model is that insertions and deletions can only be produced at one nucleotide at a time. More realistic indel evolution models based on the TKF91 model are, for instance, those of @TKF2, @Miklos or @Ana_Metzler. For the sake of simplicity, in this work we will just consider the TKF91 indel model. However, the homology structure and the multiple-hidden i.i.d. model presented here can be extended to the case in which other indel models are considered. A star tree ----------- Let us now consider a $k$-star phylogenetic tree, that is, a tree with a root, $k$ leaves and no inner nodes. See Figure \[star\] for an example. We will denote by $t_i$, $i=1,\dots,k$, the branch lengths, that is, the evolutionary time separating each sequence from the root. In this context, an alignment of the $k$ sequences and the root consists of a composition of the $k$ pairwise alignments of the root with each of the observed sequences. This is done as follows. Two characters $X^i_j$ and $X^l_h$ will be aligned in the same column if and only if they are homologous to the same character of the root sequence.
So there is a column for each nucleotide at the root containing all its homologous positions on the leaves, and between two columns of this kind, there is one column for each inserted position on the leaves between the two corresponding nucleotide positions at the root. Insertions to the root sequence occur independently on each sequence and we assume that the probability of having two insertions on different sequences at the same time is 0. That is why insertion columns are composed of one nucleotide position in one of the sequences and gaps in all the others. [Figure \[star\]: a star-shaped phylogenetic tree with root sequence ${\cal R}$: `ACCT` and six observed sequences $X^1$: `ACCGGT`, $X^2$: `ACT`, $X^3$: `ACT`, $X^4$: `GCCAT`, $X^5$: `CCT`, $X^6$: `ACCT` at the leaves, with branch lengths $t_1,\dots,t_6$; the figure also shows the six pairwise alignments of ${\cal R}$ with each leaf and the multiple alignment obtained by composing them.] We know that under the TKF91 indel model the pairwise alignment is a Markov chain on the state space $\{\stackrel{\texttt{B}} {\texttt{{\scriptsize B}}},\stackrel{\_}{\stackrel{\tiny \phantom{.}}{\texttt{{\scriptsize B}}}}, \stackrel{\texttt{{\scriptsize B}}}{\stackrel{\texttt{{\tiny \phantom{.}}}}{\_}} \}$ (see Metzler [*et al.*]{}, 2001).
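The composition rule just described can be sketched in code. The encoding below (pairwise alignments as lists of (root position, leaf character) pairs, and the helper `compose`) is a hypothetical convention of ours, not taken from any alignment package:

```python
def compose(root_len, pairwise):
    """Compose pairwise root-leaf alignments into a multiple alignment.

    `pairwise[i]` is the alignment of the root with leaf i, as a list of
    (root position or None, leaf character or None): (r, x) means leaf
    character x is homologous to root position r; (r, None) means root
    position r was deleted in leaf i; (None, x) means x was inserted.
    Root positions are 1-based and increasing; insertion rows follow the
    root position they occur after (0 = before the first root position).
    """
    k = len(pairwise)
    hom = [dict() for _ in range(k)]   # hom[i][r] = conserved character
    ins = [dict() for _ in range(k)]   # ins[i][r] = chars inserted after r
    for i, aln in enumerate(pairwise):
        last_r = 0
        for r, x in aln:
            if r is not None:
                last_r = r
                if x is not None:
                    hom[i][r] = x
            else:
                ins[i].setdefault(last_r, []).append(x)
    columns = []
    for r in range(root_len + 1):
        if r > 0:  # one column per root position, holding all its homologs
            columns.append([hom[i].get(r, '-') for i in range(k)])
        for i in range(k):  # one column per inserted character
            for x in ins[i].get(r, []):
                columns.append(['-'] * i + [x] + ['-'] * (k - i - 1))
    return columns

# Example: root of length 2; leaf 1 has an insertion 'G' after root
# position 1, leaf 2 lost root position 2.
print(compose(2, [[(1, 'A'), (None, 'G'), (2, 'C')],
                  [(1, 'A'), (2, None)]]))
# → [['A', 'A'], ['G', '-'], ['C', '-']]
```

Insertion columns between two homologous columns are emitted leaf by leaf; as noted above, their relative order is irrelevant.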
Let us make precise that from now on the word alignment will denote either the whole alignment, that is, the reconstruction of the whole evolution process, including substitutions, of a set of sequences, or, as in this case, the *bare* alignment, that is, the reconstruction of the indel process only. We recall that when we model the alignment of a set of sequences as a hidden variable model, it is the *bare* alignment that corresponds to the hidden process. In contrast to the pairwise alignment case, when we apply the TKF91 indel evolution model to multiple alignment we do not get a Markov chain on the set of all possible multiple alignment columns. In fact, Markov models for multiple alignment exist but states do not exactly correspond to alignment columns. Indeed, insertion states in these models describe not only an insertion on one sequence but also a kind of “memory" of what is happening in other sequences (see Holmes and Bruno, 2001, and Hein [*et al.*]{}, 2003, for instance). This is because the Markov dependence for pairwise root-leaf alignments applies independently on each sequence due to the branch independence of the evolution process. Thus, an alignment column describing an insertion on sequence $i$ depends on the last column of the alignment describing any evolutionary event on sequence $i$, but there may be several alignment columns describing insertions on other sequences between these two columns. See Figure \[mapa\] for an illustration.\ ![*Alignment Markov chain for a star tree with two leaves and sequences evolving under the TKF91 evolution model as described by @HolmesBruno. The bubbles represent the states of the chain. stands for a base on the ancestral sequence and for a base on any of the observed sequences. Letters in brackets appear on insertion states and stand for the last event recorded on each sequence. The left column in insertion states is the true state whereas the right one is its representation in the alignment.
There are two different states to represent an insertion on the second sequence since they must also represent the fate (conservation or deletion) of the root nucleotide in the first sequence. That is because transitions from an insertion on the second sequence to an insertion on the first sequence depend on the fate of the last root nucleotide on the first sequence. Insertions on the second sequence are written in the alignment before insertions on the first sequence. The probabilities on the edges are those of (\[TKFprocess\]) and $\alpha(t)=e^{-\mu t}$.*[]{data-label="mapa"}](mapa_nuevo2.eps){width="9cm" height="7.4cm"} So one could say that insertions to the root sequence break the Markov dependence between alignment columns. Also, the order of the insertions between two homologous positions is irrelevant, the only important fact being which positions are homologous to which (see for instance the multiple alignment in Figure \[star\] where the insertion columns are completely exchangeable). Then, the interesting object is not the alignment but the homology structure, essentially an alignment of homologous positions with specification of the number of insertions on each sequence between any two homologous positions. The homology structure can be described in terms of the nucleotides at the root sequence. Indeed the homology structure is just the sequence of root positions in which we specify, for each ancestral residue, its fate (whether it has survived or been deleted) and all the insertions that occurred to its right in each one of the observed sequences (see Figure \[hom\_str\] for an example).
The homology structure is, as the *bare* alignment, a reconstruction of the indel process of a set of sequences.\ [Figure \[hom\_str\]: the *bare* alignment of the six sequences of Figure \[star\] (with `B` standing for a base and `-` for a gap) and the corresponding homology structure.]\ In the TKF91 indel model, evolution on each *link* is independent of evolution on other *links* (see Thorne [*et al.*]{}, 1991). That is why the homology structure under these models can be described as a sequence of i.i.d. random variables as we will see in the next section.\ The homology structure on a star tree ------------------------------------- Consider a $k$-star phylogenetic tree ${\mathcal{T}}$ with branch lengths $t_1,\dots,t_k$. The homology structure of the sequences related by ${\mathcal{T}}$ is a sequence of independent and identically distributed random variables $\{{\varepsilon}_n\}_{n\geq 1}$. The variable ${\varepsilon}_n$ represents the fate of the $n$-th ancestral sequence character (or fragment, if we consider fragment indel evolution models). Its distribution will depend on the chosen indel evolution model. Under the TKF91 indel evolution model $\{{\varepsilon}_n\}_{n\geq 1}$ is a sequence of i.i.d. random variables on $${\mathcal{E}}^k=\left\{ (e(1),e(2))=(\delta^{1:k},a^{1:k}) \,|\,\delta^i\in\{0,1\} ,\, a^i\geq 0,\, i=1,\dots,k \right\}.$$ The first column of ${\varepsilon}_n$ corresponds to the homologous positions to the $n$-th ancestral character. If it is conserved in sequence $i$, $i=1,\dots,k$, then ${\varepsilon}^i_n(1)=1$, else ${\varepsilon}^i_n(1)=0$. It is possible for an ancestral character to have been deleted in all the observed sequences (${\varepsilon}_n(1)=0_k$, where $0_k$ stands for the $k$-dimensional vector with all components equal to 0). The second column of ${\varepsilon}_n$ represents the number of insertions on the observed sequences between the $n$-th and the $(n+1)$-th ancestral sequence characters.
It is possible to have no insertions in any of the observed sequences between two homologous positions (${\varepsilon}_n(2)=0_k$). See Figure \[hom\_str\] for an example of a homology structure. Due to the branch independence, the law of ${\varepsilon}_n$, $n\geq 1$, under the TKF91 indel model, is given by $$\label{loi_e} \mathbb{P}_{\lambda}\left({\varepsilon}_n\!=\!(\delta^{1:k},a^{1:k})\right)\!=\!\prod_{i=1}^k \!\left(q^H_{a^i+1}(t_i)\right)^{{1\! \mathrm{l}\{ \delta^i=1\}}}\! \left(q^N_{a^i}(t_i)\right)^{{1\! \mathrm{l}\{ \delta^i=0\}}}\!\!,\,\,\,\,\, (\delta^{1:k}\!,a^{1:k})\!\in \!{\mathcal{E}}^k\!.$$ Conditionally on the result of the indel process (the *bare* alignment), nucleotides on the observed sequences are emitted according to some substitution process. In practice, most nucleotide substitution processes are described by a continuous time Markov chain defined on ${\mathcal{A}}$ and depending on the branch lengths (see Felsenstein, 2004, for instance). Let us denote by $\nu$ the stationary law of this process and by $p_t(\cdot,\cdot)$ the transition probability matrix for a transition time $t>0$. Then, for $n\geq1$, if ${\varepsilon}_n=(\delta^{1:k},a^{1:k})$, $r=\sum_{i=1}^k \delta^i$ nucleotides are emitted in the conserved positions according to the joint probability distribution $h_J$, $J=\{i|\delta^i=1\}$, on ${\cal A}^r$, with $$\label{subs_Markov} h_{\{i_1,\dots,i_r\}}(x^{i_1},\dots,x^{i_r})=\sum_{R\in {\mathcal{A}}} \nu(R)\prod_{j=1}^{r}p_{t_{i_j}} (R,x^{i_j}),$$ where $R$ represents the unknown ancestral nucleotide. Note that $h_J$ does not depend only on the cardinality of $J$, but also on its elements via the branch lengths $\{t_i\}_{i=1,\dots,k}$. In the inserted positions, $\sum_{i=1}^k a^i$ nucleotides are emitted independently and identically distributed according to the probability distribution $f(\cdot) =\nu(\cdot).$ In classical substitution processes there is independence between the different sites of the ancestral sequence.
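For concreteness, the joint emission law (\[subs\_Markov\]) can be computed explicitly once a substitution process is chosen. The sketch below assumes the Jukes-Cantor model (uniform stationary law $\nu$), a standard but arbitrary choice on our part, and checks that $h_J$ is indeed a probability distribution on ${\cal A}^r$:

```python
import math
from itertools import product

A = "ACGT"

def jc_transition(t):
    """Jukes-Cantor transition probabilities p_t(R, x): an assumed example
    of a reversible substitution process with uniform stationary law nu."""
    e = math.exp(-4.0 * t / 3.0)
    return {(R, x): (0.25 + 0.75 * e) if x == R else 0.25 * (1.0 - e)
            for R, x in product(A, A)}

def h(branch_times):
    """Joint emission h_J of (subs_Markov) for the conserved positions on
    the leaves in J, obtained by summing out the ancestral nucleotide R."""
    nu = {R: 0.25 for R in A}
    ps = [jc_transition(t) for t in branch_times]
    return {xs: sum(nu[R] * math.prod(p[(R, x)] for p, x in zip(ps, xs))
                    for R in A)
            for xs in product(A, repeat=len(branch_times))}

hJ = h([0.3, 0.7, 1.1])   # J of size 3, with these (arbitrary) branch lengths
print(sum(hJ.values()))   # h_J sums to 1 over A^3
```

By stationarity and reversibility, each one-dimensional marginal of $h_J$ is $\nu$ itself, which gives a second easy check.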
That means that conditionally on $\{{\varepsilon}_n\}_{n\geq 1}$, the emissions of nucleotides on the observed sequences at different instants (positions of the ancestral sequence) are independent and equally distributed as described below. The multiple-hidden i.i.d. model on a star tree =============================================== We present in this section the *multiple-hidden i.i.d.* model, where *multiple* refers to the number ($>2$) of observed sequences and *i.i.d.* to the nature of the hidden process, by analogy to the name of the pair-hidden Markov model. The homology structure of $k$ sequences evolving under the TKF91 indel evolution model and a particular substitution model, as described in the preceding section, is a particular parametrization of this model. Consider a sequence of i.i.d. random variables $\{{\varepsilon}_n\}_{n\geq 1}$ on the state space $${\mathcal{E}}^k=\left\{ (e(1),e(2))=(\delta^{1:k},a^{1:k}) \,|\,\delta^i\in\{0,1\} ,\, a^i\in{\mathbb{N}},\,\, i=1,\dots,k \right\}$$ with distribution $\pi$. The process $\{{\varepsilon}_n\}_{n\geq 1}$ generates a random walk $\{Z_n\}_{n \geq 0}$ with values on ${\mathbb{N}}^k$ by letting $Z_0 =0_k$ and $Z_n =\sum_{1\leq j\leq n}[{\varepsilon}_j(1)+{\varepsilon}_j(2)]$ for $n\geq 1$. The coordinate random variables corresponding to $Z_n$ at position $n$ are denoted by $(Z^1_n,\dots,Z^k_n)$ ([*i.e.*]{} $Z_n=(Z^1_n,\dots,Z^k_n)$). In the homology structure context they represent the length of each observed sequence up to position $n$ on the ancestral sequence. Let us now describe the emission of the observed sequences, which take values in a finite alphabet ${\cal A}$. We distinguish two kinds of emissions, joint emissions across $k$ or a smaller number of sequences (corresponding to ${\varepsilon}_n(1)$) and single emissions (corresponding to ${\varepsilon}_n(2)$). For $n\geq1$, if ${\varepsilon}_n=(\delta^{1:k},a^{1:k})$ then a vector of $r=\sum_{i=1}^k \delta^i$ r.v.
is emitted according to some probability distribution $h_J$, $J=\{i|\delta^i=1\}$, on ${\cal A}^r$ and $\sum_{i=1}^k a^i$ r.v. $\{ X^i_{1:a^i}, a^i \geq 1 \}$, $i=1,\dots,k$, are emitted according to the following scheme: $\{X^i_j\}^{i=1,k}_{j=1,a^i}$ are independent and identically distributed from some probability distribution $f$ on ${\cal A}$. Conditionally on the process $\{{\varepsilon}_n\}_{n\geq 1}$, the random variables emitted at different instants are independent. The whole multiple-hidden i.i.d. model is described by the parameter $\theta=(\pi,\, \{h_J\}_{J\subseteq K},\,f)\in \Theta$, where $K=\{1,\dots,k\}$. We do not consider the branch lengths as a component of the parameter and assume they are known. The conditional distribution of the observations given a homology structure $e_{1:n}=(e_j)_ {1\leq j\leq n}=((\delta^{1:k}_j,a^{1:k}_j))_{1\leq j\leq n}$ is given by $$\begin{aligned} \label{conditional2} &&{\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n}| {\varepsilon}_{1:n}=e_{1:n}, \{{\varepsilon}_m\}_{m>n}, \{X^i_{n_i}\}_{i\in K, n_i>Z^{i}_n})= {\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n} | {\varepsilon}_{1:n}=e_{1:n} ) \nonumber\\ &=& \prod_{j=1}^n {\mathbb{P}_\theta}({\mathbb{X}}_{Z_{j-1}+1_k:Z_j} | {\varepsilon}_j=e_j ) \nonumber \vspace{-0.2cm}\\ &=&\prod_{j=1}^n \Big\{ h_{\{i|\delta_j^i=1\}} \big(\{X^i_{Z^i_{j-1}+1}\}_{i|\delta^i_j=1}\big) \prod_{i=1}^k \prod_{s=1}^{a^i_j} f\big(X^i_{Z^i_{j-1} +\delta^i_j +s}\big) \Big\}\end{aligned}$$ where $1_k$ stands for the $k$-dimensional vector with all components equal to 1 and ${\mathbb{X}}_{1_k:Z_n}= (X^1_{1:Z^{1}_n}, \dots, X^k_{1:Z^{k}_n})$. This notation can be confusing since it is possible to have $Z^{i}_{j-1}+1 > Z^{i}_{j}$ for some $i\in K$ and for some $j\geq 1$. However when writing ${\mathbb{X}}_{Z_{j-1}+1_k: Z_{j}}$ we will only be considering the variables corresponding to those sequences $i\in K$ for which $Z^{i}_{j-1}+1 \leq Z^{i}_{j}$.
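The hidden part of this generative scheme is easy to simulate under the TKF91 parametrization (\[newprocess\]). The sketch below is our own code, with arbitrary $\lambda$ and branch lengths and with emissions omitted; it draws the i.i.d. steps branch by branch and tracks the random walk $Z_n$:

```python
import math
import random

def sample_eps(lam, t, rng):
    """Draw (delta, a) for one branch under the lambda = mu TKF91 step law
    (newprocess): delta = 1 if the ancestral character survives in this
    sequence, a = number of insertions to its right."""
    r = lam * t / (1.0 + lam * t)       # lambda * beta(t) in the limit
    surv = math.exp(-lam * t)           # P(ancestral character survives)
    u = rng.random()
    if u < surv:                        # survives, m >= 1 total descendants
        delta = 1
    elif u < surv + r:                  # dies leaving no descendant
        return 0, 0
    else:                               # dies, m >= 1 descendants
        delta = 0
    m = 1                               # m ~ Geometric(1 - r) on {1, 2, ...}
    while rng.random() < r:
        m += 1
    return delta, m - delta             # a = m - delta insertions

rng = random.Random(0)
lam, times, n = 1.0, [0.4, 1.0, 2.5], 20000
Z = [0, 0, 0]
for _ in range(n):
    for i, t in enumerate(times):
        d, a = sample_eps(lam, t, rng)
        Z[i] += d + a
print([z / n for z in Z])  # each ratio close to 1
```

Since the mean step is one per coordinate, the empirical ratios $Z_n^i/n$ stay close to 1, consistent with the observed sequence lengths growing like the ancestral length.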
The complete distribution ${\mathbb{P}_\theta}$ is given by $$\begin{aligned} &&{\mathbb{P}_\theta}({\varepsilon}_{1:n}=e_{1:n},{\mathbb{X}}_{1_k:Z_n})= {\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n} | {\varepsilon}_{1:n}=e_{1:n} ) {\mathbb{P}_\theta}({\varepsilon}_{1:n}=e_{1:n} ) \\ &=& {\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n} | {\varepsilon}_{1:n}=e_{1:n} ) \prod_{j=1}^n {\mathbb{P}_\theta}({\varepsilon}_j=e_j)= {\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n} | {\varepsilon}_{1:n}=e_{1:n} ) \prod_{j=1}^n \pi(e_j)\end{aligned}$$ We denote by ${\mathbb{P}_\theta}$ (and ${\mathbb{E}_\theta}$) the induced probability distribution (and corresponding expectation) on ${\mathcal{E}}^{{\mathbb{N}}} \times ({\mathcal{A}}^{{\mathbb{N}}})^k $ and $\theta_0=(\pi_0,\, \{h_{0_J}\}_{J\subseteq K},\,f_0)$ the true parameter corresponding to the distribution of the observations (we shall abbreviate to ${\mathbb{P}_{0}}$ and ${\mathbb{E}_{0}}$ the probability distribution and expectation under parameter $\theta_0$). Observations and likelihoods ---------------------------- As in the pair-HMM (see Arribas-Gil [*et al.*]{}, 2006) there are different interpretations of what the observations represent in this model, and thus different definitions for the log-likelihood of the observed sequences $(X^1_{1:n_1},\dots,X^k_{1:n_k})$. However, the difference with the pair-HMM is that in the multiple-hidden i.i.d. model we suppose that the observed sequences are cut out of much longer sequences between known homologous positions. This implies that any interpretation of what observations represent must assume that the underlying process $\{{\varepsilon}_n\}_{n \geq 1}$ passes through the points $0_k$ and $(n_1,\dots,n_k)$. One may consider that what we observe are sequences that have evolved from an ancestral sequence of length $n$ so that the likelihood should be ${\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n})$ $={\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n},Z_n)$.
This term is computed by summing, over all possible homology structures from an ancestral sequence of length $n$, the probability of observing the sequences and a homology structure. Let us define ${\mathcal{E}}_{n_1,\dots,n_k}$ as the set of all possible homology structures of $k$ sequences of lengths $n_1,\dots,n_k$: $${\mathcal{E}}_{n_1,\dots,n_k}=\{ e\in({\mathcal{E}}^k)^n ;\,\, n\in {\mathbb{N}}, \,\, \sum_{j=1}^{n}|e_j|=(n_1,\dots,n_k) \}.$$ For any homology structure $e\in {\mathcal{E}}_{n_1,\dots,n_k}$, if $e \in (\mathcal{E}^k)^n$, then $n$ is the length of the path $e$ and is denoted by $|e|$. In the homology structure context, $|e|$ stands for the length of the ancestral sequence. So we have $${\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n})={\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n},Z_n)=\sum_{e \in {\mathcal{E}}_{Z_n};|e|=n} {\mathbb{P}_\theta}({\varepsilon}_{1:n}=e,{\mathbb{X}}_{1_k:Z_n}).$$ We would then define the log-likelihood $\ell_n(\theta)$ as $$\label{lt} \ell_n (\theta) = \log {\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n}) , \quad n \geq 1 .$$ But since the underlying process $\{Z_n\}_{n \geq 0}$ is not observed, the quantity $\ell_n(\theta)$ is not a measurable function of the observations. More precisely, the length $n$ at which the observation is made is not observed itself.
However, if one decides that $(X^1_{1:n_1},\dots,X^k_{1:n_k})$ corresponds to the observation of the emitted sequences at a point of the hidden process $Z_n=(Z^i_n)_{i=1,\dots,k}$ and some [*unknown*]{} “ancestral length" $n$, one does not use $\ell_n(\theta)$ as a log-likelihood, but rather $$\label{qt} w_n (\theta) = \log {Q_\theta}({\mathbb{X}}_{1_k:Z_n}) , \quad n \geq 1$$ where for any integers $n_i, i=1,\dots,k$ $${Q_\theta}(X^1_{1:n_1},\dots,X^k_{1:n_k}) ={\mathbb{P}_\theta}( \exists m \geq 1, Z_m =(n_1,\dots,n_k) ; X^1_{1:n_1},\dots,X^k_{1:n_k} ).$$ In other words, ${Q_\theta}$ is the probability of the observed sequences under the assumption that the underlying process $\{{\varepsilon}_n\}_{n \geq 1}$ passes through the point $(n_1,\dots,n_k)$. But the length of the ancestral sequence remains unknown when computing ${Q_\theta}$. This gives the formula: $$\label{Qnm2} {Q_\theta}(X^1_{1:n_1},\dots,X^k_{1:n_k}) =\sum_{e \in {\mathcal{E}}_{n_1,\dots,n_k}} {\mathbb{P}_\theta}({\varepsilon}_{1:|e|}=e,X^1_{1:n_1},\dots,X^k_{1:n_k}).$$ Let us stress that we have $$w_n(\theta) =\log {\mathbb{P}_\theta}(\exists m\geq 1, Z_m =(Z^{i}_n)_{i=1,\dots,k}; X^1_{1:Z^{1}_n}, \dots, X^k_{1:Z^{k}_n}), \quad n\geq 1,$$ meaning that the length of the ancestral sequence is not necessarily $n$, but is in fact unknown. In the homology structure context, ${Q_\theta}$ is the quantity that is computed by the multiple alignment algorithms (see for instance Holmes and Bruno, 2001, Steel and Hein, 2001, or Lunter [*et al.*]{}, 2003) and which is used as the likelihood in biological applications. The most widespread application is to use this quantity to co-estimate alignments and phylogenetic trees in a Bayesian framework via MCMC calculations (cf. Fleissner [*et al.*]{}, 2005; Lunter [*et al.*]{}, 2005; Novák [*et al.*]{}, 2008). Indeed, algorithms that perform this joint estimation compute, at each iteration, the likelihood of sequences for a given phylogenetic tree.
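For very short sequences, formula (\[Qnm2\]) can be evaluated by direct enumeration. The sketch below uses a made-up two-sequence parametrization (all probabilities are arbitrary, and the all-zero step is excluded so that the enumeration is finite) and cross-checks the result against a dynamic program for $\mathbb{P}_\theta(\exists m,\, Z_m=(n_1,n_2))$, which is what summing ${Q_\theta}$ over all sequence pairs of fixed lengths must give:

```python
from itertools import product

# Hypothetical parametrization with k = 2 and alphabet {0, 1}; each hidden
# step eps = (delta1, delta2, a1, a2). No step is (0, 0, 0, 0), so every
# homology structure joining two finite sequences has finite length.
pi = {(1, 1, 0, 0): 0.5, (1, 0, 0, 0): 0.15, (0, 1, 0, 0): 0.15,
      (1, 1, 0, 1): 0.1, (1, 1, 1, 0): 0.1}
f = {0: 0.5, 1: 0.5}                                        # insertion law
h12 = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}  # joint emission
h1 = h2 = f                                 # one conserved position only

def Q(x, y):
    """Brute-force Q_theta(x, y): sum over all homology structures e with
    sum_j |e_j| = (len(x), len(y)), as in eq. (Qnm2)."""
    def rec(i, j):
        if (i, j) == (len(x), len(y)):
            return 1.0
        total = 0.0
        for (d1, d2, a1, a2), p in pi.items():
            s1, s2 = d1 + a1, d2 + a2
            if i + s1 > len(x) or j + s2 > len(y):
                continue
            em = 1.0
            if d1 and d2:
                em *= h12[(x[i], y[j])]
            elif d1:
                em *= h1[x[i]]
            elif d2:
                em *= h2[y[j]]
            for s in range(a1):             # inserted characters, sequence 1
                em *= f[x[i + d1 + s]]
            for s in range(a2):             # inserted characters, sequence 2
                em *= f[y[j + d2 + s]]
            total += p * em * rec(i + s1, j + s2)
        return total
    return rec(0, 0)

# Consistency check: summing Q over all pairs of sequences of lengths
# (n1, n2) must equal P(exists m: Z_m = (n1, n2)), a DP on Z alone.
n1, n2 = 3, 2
mass = sum(Q(x, y) for x in product((0, 1), repeat=n1)
                   for y in product((0, 1), repeat=n2))
R = {(0, 0): 1.0}
for i in range(n1 + 1):
    for j in range(n2 + 1):
        if (i, j) != (0, 0):
            R[(i, j)] = sum(p * R.get((i - d1 - a1, j - d2 - a2), 0.0)
                            for (d1, d2, a1, a2), p in pi.items())
print(mass)   # equals R[(n1, n2)]
```

In practice multiple alignment algorithms compute ${Q_\theta}$ by dynamic programming over $Z$ rather than by enumeration; the brute force above only serves as a correctness check of the formula.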
Thus, asymptotic properties of the criterion ${Q_\theta}$, and their consequences for the asymptotic properties of the estimators derived from ${Q_\theta}$, are of primary interest. We will look for asymptotic results as $n\to \infty$; to derive from them asymptotic results as $n_i \to \infty$, we need to establish some relationship between $n$ and $n_1,\dots,n_k$. From our definition of the multiple-hidden i.i.d. model, it is clear that there exists no deterministic relationship between the length of the hidden sequence and the lengths of the observed sequences. However, in the multiple alignment problem, a natural assumption is that very large insertions and deletions occur rarely, and thus the length of the root sequence should be equivalent to the lengths of the observed sequences. In fact we have the following result.\ \[esp=1\] In the multiple-hidden i.i.d. model on a star tree under the TKF91 indel evolution process, that is, when $\pi$ is the distribution given by (\[loi\_e\]), for any $\lambda >0$ we have $Z_n^i \sim n$, $i=1,\dots,k$, $\mathbb{P}_{\lambda}$-almost surely. [**Proof.**]{} For all $i=1,\dots,k$ and for all $n\geq 1$ we have that $$Z_n^i= \sum_{j=1}^n ({\varepsilon}^i_j(1) + {\varepsilon}^i_j(2))$$ where $\{{\varepsilon}^i_j\}_{j\geq 1}$ are i.i.d. Moreover, from (\[newprocess\]) we have, for any $\lambda >0$ $$\begin{gathered} \mathbb{E}_{\lambda}\,[{\varepsilon}^i_j(1) + {\varepsilon}^i_j(2)]\\ =\displaystyle\sum_{m\geq 1} m \!\left\{\mathbb{P}_{\lambda}({\varepsilon}^i_j(1) + {\varepsilon}^i_j(2)=m, {\varepsilon}^i_j(1)=0)\!+\!\mathbb{P}_{\lambda}({\varepsilon}^i_j(1) + {\varepsilon}^i_j(2)=m, {\varepsilon}^i_j(1)=1)\right\}\\ =\displaystyle\sum_{m\geq 1} m \left\{ q_m^N(t_i) + q_m^H(t_i)\right\}\hfill \phantom{w}\\ =\displaystyle\sum_{m\geq 1} m \left\{\!\!\left(\!\frac{1}{1+\lambda t_i}-e^{-\lambda t_i}\!\right)\frac{1}{1+\lambda t_i}\left(\!\frac{\lambda t_i}{1+\lambda t_i}\!\right)^{m-1} \!\!\!+\!
e^{-\lambda t_i}\frac{1}{1+\lambda t_i}\left(\!\frac{\lambda t_i}{1+\lambda t_i}\!\right)^{m-1}\!\right\}\\=1.\hfill \phantom{w}\end{gathered}$$ The result now follows from the strong law of large numbers. \ According to this lemma, under the TKF91 indel evolution model, asymptotic results for $n \to \infty$ will imply equivalent ones for $n_i \to \infty,\, i=1,\dots,k$. Let us state an assumption that guarantees the same result for the general multiple-hidden i.i.d. model. [**Assumption 1.**]{} In the multiple-hidden i.i.d. model on a star tree, ${\mathbb{E}_\theta}\,[{\varepsilon}_n(1)+{\varepsilon}_n(2)]=1_k$, for $n\geq 1$, for any $\theta\in\Theta$. The case of two sequences ------------------------- Let us consider the case in which $k=2$. It is clear that the general multiple-hidden i.i.d. model and the pair-HMM are different in this case. However, in the context of the alignment of two sequences evolving under the TKF91 model, the two models are equivalent. Indeed, in the pairwise alignment we consider that one of the sequences is the ancestor of the other one, but since the TKF91 model is time reversible, this is equivalent to considering that both sequences evolve from a common unknown ancestor. First of all, let us remark that the likelihood (${Q_\theta}$) of two sequences $x_{1:n}$ and $y_{1:m}$ is the same under the two models. Let $t$ be the evolution time between both sequences, that is, the sum of the evolution times between the root and each one of the sequences, $t_1+t_2$, in the multiple alignment setup.
Consider for the pair-HMM the following transition matrix: $$\begin{gathered} \label{mult_2seq} \begin{array}{ccccc} \qquad \qquad \quad D & &\qquad\qquad \quad \quad \quad H & &\qquad \quad \quad\quad V \end{array}\\ \begin{array}{c} D\vspace{0.425cm}\\ H\vspace{0.425cm}\\ V \end{array} \left( \begin{array}{ccccc} \displaystyle{\frac{\alpha(t)}{(1+\lambda t)}} & & \displaystyle{\frac{1-\alpha(t)}{(1+\lambda t)}}& & \displaystyle{\frac{\lambda t}{(1+\lambda t)}}\vspace{0.175cm}\\ (1-\kappa(t))\alpha(t)& &(1-\kappa(t))(1-\alpha(t)) & & \kappa(t)\vspace{0.175cm}\\ \displaystyle{\frac{\alpha(t)}{(1+\lambda t)}} & & \displaystyle{\frac{1-\alpha(t)}{(1+\lambda t)}}& & \displaystyle{\frac{\lambda t}{(1+\lambda t)}} \end{array}\right)\qquad\quad \phantom{Q}\end{gathered}$$\ where $D$, $H$ and $V$ stand for diagonal, horizontal and vertical movements respectively, with the notations of @Argamat, and $\alpha(t)=e^{-\lambda t}$, $\kappa(t)=1-\frac{\lambda t}{(1+\lambda t)(1-\alpha(t))}$. It is easy to show that the probability of a homology structure (under the multiple-hidden i.i.d. model) is just the sum of the probabilities of all possible alignments (under the pair-HMM) leading to that homology structure. Hence, summing over all possible alignments of the two sequences is equivalent to summing over all possible homology structures. Finally, note that for the transition matrix in (\[mult\_2seq\]) the stationary probabilities of insertions and deletions are the same, that is, $p=q$ with the notations of @Argamat. That means that we are in the case where the *main direction* of the alignment, that is, its expectation under the pair-HMM, is always the straight line from $(0,0)$ to $(n,n)$ for every value of the parameter. This is also the case in the multiple-hidden i.i.d. model as we have shown in Lemma \[esp=1\].
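The claims of this subsection are easy to check numerically: the rows of (\[mult\_2seq\]) sum to one, the stationary law of the chain puts equal mass on the $H$ and $V$ states (so that $p=q$), and the truncated series of Lemma \[esp=1\] sums to one. A short sketch (the values of $\lambda$ and $t$ are arbitrary test values, not fitted quantities):

```python
import math

def tkf91_pair_transitions(lam, t):
    """Transition matrix over the states (D, H, V) of (mult_2seq)."""
    alpha = math.exp(-lam * t)
    kappa = 1 - lam * t / ((1 + lam * t) * (1 - alpha))
    dv_row = [alpha / (1 + lam * t), (1 - alpha) / (1 + lam * t),
              lam * t / (1 + lam * t)]
    h_row = [(1 - kappa) * alpha, (1 - kappa) * (1 - alpha), kappa]
    return [dv_row, h_row, list(dv_row)]   # D and V rows are identical

def stationary(P, n_iter=1000):
    """Stationary law of a row-stochastic matrix by power iteration."""
    v = [1 / 3] * 3
    for _ in range(n_iter):
        v = [sum(v[i] * P[i][j] for i in range(3)) for j in range(3)]
    return v

lam, t = 1.0, 0.8                          # arbitrary test values
P = tkf91_pair_transitions(lam, t)
assert all(abs(sum(row) - 1) < 1e-12 for row in P)   # each row is a distribution
pi = stationary(P)
assert abs(pi[1] - pi[2]) < 1e-9           # p = q: insertions and deletions balance

# Lemma esp=1: one expected descendant per ancestral position,
# i.e. sum_m m (q_m^N(t) + q_m^H(t)) = 1 (series truncated at m = 200)
r = lam * t / (1 + lam * t)
qN = lambda m: (1 / (1 + lam * t) - math.exp(-lam * t)) / (1 + lam * t) * r ** (m - 1)
qH = lambda m: math.exp(-lam * t) / (1 + lam * t) * r ** (m - 1)
assert abs(sum(m * (qN(m) + qH(m)) for m in range(1, 200)) - 1) < 1e-9
```

The equality $p=q$ reflects the *main direction* property discussed above: on average the alignment advances along the diagonal from $(0,0)$ to $(n,n)$.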
Information divergence rates in the star tree model {#Inf_Div} =================================================== Definition of Information divergence rates ------------------------------------------ In this section we prove the convergence of the normalized *log-likelihoods* $\ell_n(\theta)$ and $w_n(\theta)$. Let us define $$\begin{gathered} \Theta_{0} =\left\{\theta \in \Theta \,\, | \,\, \pi(e)>0, \, \, h_J(x^{1:|J|})>0, \,\, f(y)>0, \right.\\ \left.\,\forall e \in {\mathcal{E}}^{k}, \,\,\forall x^{1:|J|} \in{\cal A}^{|J|},\,\, \forall J\subseteq K,\,\,\forall y\in{\cal A}\right\}.\end{gathered}$$ We shall always assume that $\theta_{0}\in \Theta_{0}$. \[thdivergence2\] The following holds for any $\theta\in\Theta_0$: - $ n^{-1} \ell_{n} (\theta)$ converges ${\mathbb{P}_{0}}$-almost surely and in $\mathbb{L}_1$, as $n$ tends to infinity to $$\ell (\theta ) = \lim_{n\rightarrow \infty}\frac{1}{n}{\mathbb{E}_{0}}\left(\log {\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n}) \right) = \sup_{n}\frac{1}{n}{\mathbb{E}_{0}}\left(\log {\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n}) \right).$$ - $ n^{-1} w_{n} (\theta)$ converges ${\mathbb{P}_{0}}$-almost surely and in $\mathbb{L}_1$, as $n$ tends to infinity to $$w (\theta ) = \lim_{n\rightarrow \infty}\frac{1}{n}{\mathbb{E}_{0}}\left(\log {Q_\theta}({\mathbb{X}}_{1_k:Z_n}) \right) = \sup_{n}\frac{1}{n}{\mathbb{E}_{0}}\left(\log {Q_\theta}({\mathbb{X}}_{1_k:Z_n}) \right).$$ Using the terminology of @Argamat, we then define the Information divergence rates: $\forall \theta \in \Theta_0, \; D (\theta \vert \theta_0)=w (\theta_0)-w (\theta) \quad \text{and} \quad D^{*}(\theta \vert \theta_0)=\ell (\theta_0)-\ell (\theta).$ We recall that $D^{*}$ is what is usually called the Information divergence rate in Information Theory: it is the limit of the normalized Kullback-Leibler divergence between the distributions of the observations at the true parameter value and another parameter value.
However, we also call $D$ an Information divergence rate since ${Q_\theta}$ may be interpreted as a likelihood.\ \ [**Proof of Theorem \[thdivergence2\].**]{} This proof is similar to the proof of Theorem 1 in @Argamat. We shall use the following version of the sub-additive ergodic Theorem due to @Kingman to prove point [*i)*]{}. A similar proof may be written for [*ii)*]{} and is left to the reader.\ Let $(W_{s,t})_{0\leq s <t}$ be a sequence of random variables such that 1. For all $m<n$, $W_{0,n}\geq W_{0,m}+ W_{m,n}$, 2. For all $l>0$, the joint distributions of $(W_{m+l,n+l})_{0\leq m <n}$ are the same as those of $(W_{m,n})_{0\leq m <n}$, 3. ${\mathbb{E}_{0}}(W_{0,1}) > -\infty$. Then $\lim_{n}\! n^{\mbox{\tiny$-1$}} W_{0,n}$ exists almost surely. If moreover the sequences $(\!W_{m+l,n+l})_{l>0}$ are ergodic, then the limit is almost surely deterministic and equals $\sup_{n}\! n^{\mbox{\tiny$-1$}} \mathbb{E}_0(\!W_{0,n}\!)$. If moreover ${\mathbb{E}_{0}}(W_{0,n})\leq An$, for some constant $A\geq 0$ and all $n$, then the convergence holds in $\mathbb{L}_1$. We apply this theorem to the process $$W_{m,n}= \log {\mathbb{P}_\theta}({\mathbb{X}}_{Z_m+1_k:Z_n}), \quad 0\leq m<n.$$ Note that since $Z_0=0_k$ is deterministic, we have $W_{0,n} = \log {\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n})$. Super-additivity (namely point 1.)
follows since for any $0\leq m<n$, $$\begin{gathered} {\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n})= \sum_{\substack{e \in {\mathcal{E}}_{Z_n}\\|e|=n}} {\mathbb{P}_\theta}({\varepsilon}_{1:n}=e_{1:n},\,{\mathbb{X}}_{1_k:Z_n})\\ \phantom{w}\geq \sum_{\substack{e \in {\mathcal{E}}_{Z_m}\\|e|=m}} \sum_{\substack{e' \in {\mathcal{E}}_{Z_n-Z_m}\\|e'|=n-m}} {\mathbb{P}_\theta}({\varepsilon}_{1:m}=e_{1:m},\, {\varepsilon}_{m+1:n}=e'_{1:n-m}, \,{\mathbb{X}}_{1_k:Z_n})\hfill \phantom{w}\\ \geq \sum_{\substack{e \in {\mathcal{E}}_{Z_m}\\|e|=m}} \sum_{\substack{e' \in {\mathcal{E}}_{Z_n-Z_m}\\|e'|=n-m}} {\mathbb{P}_\theta}({\varepsilon}_{m+1:n}=e'_{1:n-m},\, {\mathbb{X}}_{Z_m+1_k:Z_n}) \times {\mathbb{P}_\theta}({\varepsilon}_{1:m}=e_{1:m},\, {\mathbb{X}}_{1_k:Z_m})\\ ={\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_m}) \times {\mathbb{P}_\theta}({\mathbb{X}}_{Z_m+1_k:Z_n})\end{gathered}$$ so that we get $ W_{0,n} \geq W_{0,m} +W_{m,n}$, for any $0\leq m<n$. To understand the distribution of $(W_{m,n})_{0\leq m <n}$, note that $W_{m,n}$ only depends on trajectories of the random walk going from the point $(Z^{1}_{m},\dots,Z^{k}_m)$ to the point $(Z^{1}_{n},\dots,Z^{k}_n)$ with length $n-m$. Since the variables $({\varepsilon}_n)_{n\geq 1}$ are i.i.d., one gets that the distribution of $(W_{m,n})$ is the same as that of $(W_{m+l,n+l})$ for any $l$, so that point $2.$ holds. Point $3.$ comes from: $$\begin{gathered} {\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_1})=\sum_{\substack{e \in {\mathcal{E}}_{Z_1}\\|e|=1}} {\mathbb{P}_\theta}({\varepsilon}_{1}=e)\,{\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_1}|{\varepsilon}_{1}=e)\\ =\sum_{\substack{e \in {\mathcal{E}}_{Z_1}\\|e|=1}}\pi(e) \left\{h_{\{i|\delta_1^i=1\}} \big(\{X^i_1\}_{i|\delta^i_1=1}\big) \prod_{i=1}^k \prod_{s=1}^{a^i_1} f\big(X^i_{\delta^i_1 +s}\big) \right\}>0\end{gathered}$$ ${\mathbb{P}_{0}}$-almost surely, since $\theta \in \Theta_0$, provided that $Z^i_1\geq 1$ for some $i\in K$. 
So ${\mathbb{E}_{0}}(W_{0,1} ) = {\mathbb{E}_{0}}\log {\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_1})>-\infty$. Let us fix $0\leq m <n$. The proof that $W^{m,n}=(W_{m+l,n+l})_{l>0}$ is ergodic is the same as that of @Leroux (Lemma 1). Let $T$ be the shift operator, so that if $u=(u_l)_{l\geq 0}$, the sequence $Tu$ is defined by $(Tu)_{l}=u_{l+1}$ for any $l\geq 0$. Let $B$ be an event which is $T$-invariant. We need to prove that ${\mathbb{P}_{0}}(W^{m,n} \in B)$ equals $0$ or $1$. For any integer $i$, there exists a cylinder set $B_{j_i}$, depending only on the coordinates $u_l$ with $-j_i \leq l \leq j_i$ for some sub-sequence $(j_i)$, such that ${\mathbb{P}_{0}}(W^{m,n} \in B\Delta B_{j_i})\leq 1/2^i$. Here, $\Delta$ denotes the symmetric difference between sets. Since $W^{m,n}$ is stationary and $B$ is $T$-invariant: $$\begin{aligned} {\mathbb{P}_{0}}\left(W^{m,n} \in B\Delta B_{j_i}\right)= {\mathbb{P}_{0}}\left(T^{2j_i}W^{m,n} \in B\Delta B_{j_i}\right) ={\mathbb{P}_{0}}\left(W^{m,n} \in B\Delta T^{-2j_i}B_{j_i}\right).\end{aligned}$$ Let $\tilde{B}=\cap_{i\geq 1}\cup_{h\geq i}T^{-2j_h}B_{j_h} $. Borel-Cantelli’s Lemma leads to ${\mathbb{P}_{0}}(W^{m,n} \in B\Delta \tilde{B})=0$, so that ${\mathbb{P}_{0}}(W^{m,n} \in B)={\mathbb{P}_{0}}(W^{m,n} \in \tilde{B})={\mathbb{P}_{0}}(W^{m,n} \in B \cap \tilde{B})$. Now, conditional on $({\varepsilon}_n)_{n\in{\mathbb{N}}}$, the random variables $(W_{m+l,n+l})_{l> 0}$ are strongly mixing. Indeed, $W_{m+l,n+l}$ only depends on a finite number of the other $(W_{m+k,n+k})$, $k>0$, namely $(W_{m+k,n+k})_{k=\max(1,m+l-n+1),\dots,n+l-m}$.
Then the $0-1$ law for strongly mixing processes (see Sucheston, 1963) implies that for any fixed sequence $e$ with values in $({\mathcal{E}}^k)^{{\mathbb{N}}}$, the probability ${\mathbb{P}_{0}}(W^{m,n} \in \tilde{B}\vert ({\varepsilon}_n)_n=e)$ equals $0$ or $1$, so that $${\mathbb{P}_{0}}\left(W^{m,n} \in \tilde{B}\right)={\mathbb{P}_{0}}\left(({\varepsilon}_n)_n \in C\right)$$ where $C$ is the set of sequences $e$ such that ${\mathbb{P}_{0}}(W^{m,n} \in \tilde{B}\vert ({\varepsilon}_n)_n=e)=1$. But it is easy to see that $C$ is $T$-invariant. Indeed, if $e\in C$ then, since $W^{m,n}$ is stationary and $\tilde{B}$ invariant, $$\begin{gathered} 1={\mathbb{P}_{0}}(W^{m,n} \in \tilde{B}\vert ({\varepsilon}_n)_n=e)={\mathbb{P}_{0}}(TW^{m,n} \in \tilde{B}\vert ({\varepsilon}_n)_n=Te)\\={\mathbb{P}_{0}}(W^{m,n} \in \tilde{B}\vert ({\varepsilon}_n)_n=Te)\end{gathered}$$ so that $Te\in C$. Now, since $({\varepsilon}_n)_{n\geq 1}$ is an i.i.d. process, it is ergodic, so ${\mathbb{P}_{0}}\left(({\varepsilon}_n)_n \in C\right)$ equals $0$ or $1$. This concludes the proof of ergodicity of the sequence $W^{m,n}$. Finally, note that for any $n\geq 0$ the random variable $W_{0,n}$ is nonpositive, so that ${\mathbb{E}_{0}}(W_{0,n})\leq 0$, the last condition of the theorem holds with $A=0$, and the convergence of $\{n^{-1}W_{0,n}\}$ in $\mathbb{L}_1$ follows. Divergence properties of Information divergence rates ----------------------------------------------------- Information divergence rates should be nonnegative: this is proved below. They should also be positive at parameter values different from the true one; we prove this only on a particular subset of the parameter set. Let us define the set $$\Theta_{marg}=\left\{\theta\in\Theta_0\;:\;h^i_{J}=f, \,\forall J\subseteq K,\,\forall i\in J\right\},$$ where $h^i_{J}$ denotes the $i$-th marginal of $h_J$. \[contrast2\] Information divergence rates satisfy: - For all $\theta \in \Theta_0$, $D(\theta \vert \theta_0) \geq 0$ and $D^{*} (\theta \vert \theta_0) \geq 0$.
- If $\theta_0$ and $\theta$ are in $\Theta_{marg}$, $D(\theta \vert \theta_0) > 0$ and $D^{*}(\theta \vert \theta_0) > 0$ as soon as $f\neq f_0$. Note that from Assumption 1 the expectation of ${\varepsilon}_n(1)+{\varepsilon}_n(2)$, $n\geq 1$, is the same for any value of the parameter. Thus, we cannot establish the positivity of the information divergence rates for values of $\theta$ for which the expectation of the hidden process is different from that under $\theta_0$, as is done for pair-HMMs (Theorem 2 of Arribas-Gil [*et al.*]{}, 2006). Also note that when we consider classical Markovian substitution processes for the emission laws, as described in (\[subs\_Markov\]), the parameter always lies in $\Theta_{marg}$, since the marginal emission distributions are equal to the stationary distribution of the Markov process.\ \ [**Proof.**]{} Since for all $n$, $${\mathbb{E}_{0}}\left(\log {\mathbb{P}_{0}}({\mathbb{X}}_{1_k:Z_{n}}) \right)- {\mathbb{E}_{0}}\left(\log {\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_{n}}) \right)$$ is a Kullback-Leibler divergence, it is nonnegative, and the limit $D^{*}(\theta \vert \theta_0)$ is also nonnegative. Let us prove that $D(\theta \vert \theta_0)$ is also nonnegative. To compute the value of the expectation ${\mathbb{E}_{0}}[w_{n} (\theta )]$, note that the set of all possible values of $Z_n$ is ${\mathbb{N}}^k$.
Then, $$\begin{gathered} {\mathbb{E}_{0}}[w_{n} (\theta)]\\=\!\sum_{(n_1,\dots,n_k)\in {\mathbb{N}}^k } \sum_{(x^i_{1:n_i})_{i=1,\dots,k}} \!\!{\mathbb{P}_{0}}\big(Z_{n} =(n_1,\dots,n_k), X^1_{1:n_1}=x^1_{1:n_1},\dots,X^k_{1:n_k}=x^k_{1:n_k} \big) \\ \phantom{=\sum_{(n_1,\dots,n_k)\in {\mathbb{N}}^k } \sum_{(x^i_{1:n_i})_{i=1,\dots,k}} {\mathbb{P}_{0}}\big(Z_{n} =(n_1,\dots,n_k), X^1_{1:n_1},,}\times\log {Q_\theta}(x^1_{1:n_1},\dots,x^k_{1:n_k} ).\end{gathered}$$ Now, by definition, $$D\left(\theta \vert \theta_0 \right ) = \lim_{n\rightarrow +\infty} \frac{1}{n} {\mathbb{E}_{0}}\left(\log \frac{{Q_{\theta_0}}({\mathbb{X}}_{1_k:Z_n})}{{Q_\theta}({\mathbb{X}}_{1_k:Z_n})}\right).$$ By using Jensen’s inequality, $${\mathbb{E}_{0}}\!\left(\!\log \frac{{Q_\theta}({\mathbb{X}}_{1_k:Z_n})}{{Q_{\theta_0}}({\mathbb{X}}_{1_k:Z_n})}\!\right)\! \leq \log {\mathbb{E}_{0}}\! \left(\frac{{Q_\theta}({\mathbb{X}}_{1_k:Z_n})}{{Q_{\theta_0}}({\mathbb{X}}_{1_k:Z_n})}\right)\!= \log {\mathbb{E}_{0}}\! \left[ {\mathbb{E}_{0}}\!\left(\frac{{Q_\theta}({\mathbb{X}}_{1_k:Z_n})}{{Q_{\theta_0}}({\mathbb{X}}_{1_k:Z_n})}\right)\!\big| Z_n \right]\!.$$ Now, for all $(n_1,\dots,n_k )\in {\mathbb{N}}^k$ $$\begin{gathered} {\mathbb{E}_{0}}\left(\frac{{Q_\theta}({\mathbb{X}}_{1_k:Z_n})}{{Q_{\theta_0}}({\mathbb{X}}_{1_k:Z_n})}\big| Z_n=(n_1,\dots,n_k)\right)\\ =\!\sum_{(x^i_{1:n_i})_{i=1}^k}\!\!\!{\mathbb{P}_{0}}\big(Z_{n}\!=\!(n_1,\mbox{\tiny $\dots$},n_k), X^1_{1:n_1}\!\!=x^1_{1:n_1},\mbox{\tiny $\dots$},X^k_{1:n_k}\!\!=x^k_{1:n_k} \big) \frac{{Q_\theta}(x^1_{1:n_1},\mbox{\tiny $\dots$},x^k_{1:n_k} )}{{Q_{\theta_0}}(x^1_{1:n_1},\mbox{\tiny $\dots$},x^k_{1:n_k} )}\\ \stackrel{(a)}{\leq}\sum_{(x^i_{1:n_i})_{i=1}^k} {\mathbb{P}_\theta}\big(\exists m\geq 1, Z_m =(n_1,\dots,n_k),\, X^1_{1:n_1}=x^1_{1:n_1},\dots,X^k_{1:n_k}=x^k_{1:n_k}\big)\phantom{\stackrel{(a)}{\leq}\sum_{(x^i_{1:n_i})_{i=1,\dots,k}} }\\ \phantom{\stackrel{(a)}{\leq}\sum_{(x^i_{1:n_i})_{i=1,\dots,k}} {\mathbb{P}_\theta}\big(\exists 
m\geq 1, Z_m =xxxxx)}={\mathbb{P}_\theta}\big(\exists m\geq 1, Z_m =(n_1,\dots,n_k)\big)\leq 1\end{gathered}$$ where $(a)$ comes from expression (\[Qnm2\]). Thus, ${\mathbb{E}_{0}}\left[ {\mathbb{E}_{0}}\left(\frac{{Q_\theta}({\mathbb{X}}_{1_k:Z_n})}{{Q_{\theta_0}}({\mathbb{X}}_{1_k:Z_n})}\right)\big| Z_n \right] \leq 1$, and $$\lim_{n\rightarrow +\infty}\frac{1}{n}\left(w_{n}(\theta)-w_{n}(\theta_0) \right)\leq \liminf_{n\rightarrow +\infty}\frac{1}{n} \log {\mathbb{E}_{0}}\left[ {\mathbb{E}_{0}}\left(\frac{{Q_\theta}({\mathbb{X}}_{1_k:Z_n})}{{Q_{\theta_0}}({\mathbb{X}}_{1_k:Z_n})}\big| Z_n \right)\right]\leq 0.$$ So finally $$\forall \theta \in \Theta_0, \;D(\theta \vert \theta_0) \geq 0.$$ Let us now consider the case where $\theta_0$ and $\theta$ are in $\Theta_{marg}$. Let us remark that for any $\theta \in \Theta_{marg}$ we have $$\begin{gathered} \label{re} {\mathbb{P}_\theta}\big(Z_n=(n_1,\dots,n_k),\,X^1_{1:n_1}=x^1_{1:n_1}\big)\\ =\sum_{(x^i_{1:n_i})_{i=2}^k}{\mathbb{P}_\theta}\big(Z_n =(n_1,\dots,n_k),\,X^1_{1:n_1}=x^1_{1:n_1},\dots,X^k_{1:n_k}=x^k_{1:n_k}\big) \phantom{{\mathbb{P}_\theta}\big(Z_n=(n_1,\dots,n_k),\,X^1_{1:n_1}=x^1_{1:n_1}\big)}\\ =\sum_{\substack{e \in {\mathcal{E}}_{n_1,\dots,n_k}\\|e|=n}} \sum_{(x^i_{1:n_i})_{i=2}^k}{\mathbb{P}_\theta}({\varepsilon}_{1:n} =e,\, X^1_{1:n_1}=x^1_{1:n_1},\dots,X^k_{1:n_k}=x^k_{1:n_k}) \phantom{{\mathbb{P}_\theta}\big(Z_n=(n_1,\dots,n_k),\,X^1_{1:n_1}=x^1_{1:n_1}\big)}\\ =\sum_{\substack{e \in {\mathcal{E}}_{n_1,\dots,n_k}\\|e|=n}}\sum_{(x^i_{1:n_i})_{i=2}^k}{\mathbb{P}_\theta}({\varepsilon}_{1:n} =e)\,{\mathbb{P}_\theta}( X^1_{1:n_1}=x^1_{1:n_1},\dots,X^k_{1:n_k}=x^k_{1:n_k}|{\varepsilon}_{1:n} =e)\phantom{wwwwww}\\ ={\mathbb{P}_\theta}\big(Z_n=(n_1,\dots,n_k)\big)f^{\otimes n_1} (x^1_{1:n_1})\end{gathered}$$ where the last equality comes from (\[conditional2\]). 
In the same way, for any $\theta \in \Theta_{marg}$ we have that ${\mathbb{P}_\theta}(\exists m\geq 1, Z_m=(n_1,\dots,n_k), X^1_{1:n_1}=x^1_{1:n_1})={\mathbb{P}_\theta}(\exists m\geq 1, Z_m=(n_1,\dots,n_k))f^{\otimes n_1} (x^1_{1:n_1})$. This is also true for any other sequence $X^i_{1:n_i}$, $i=1,\dots,k$. Then, using Jensen’s inequality and the definition of ${Q_\theta}$, $$\begin{gathered} {\mathbb{E}_{0}}\left( \log \frac{{Q_\theta}({\mathbb{X}}_{1:Z_n}) } {{Q_{\theta_0}}({\mathbb{X}}_{1:Z_n})}\right) \\ = \sum_{(n_1\mbox{\tiny $\dots$},n_k) \in {\mathbb{N}}^k} \sum_{(x^i_{1:n_i})_{i=1,\mbox{\tiny $\dots$},k}} {\mathbb{P}_{0}}\big(Z_n =(n_1\mbox{\tiny $\dots$},n_k), \, X^1_{1:n_1}=x^1_{1:n_1},\mbox{\tiny $\dots$},X^k_{1:n_k}=x^k_{1:n_k}\big) \phantom{wwwwwwwwwwwwwww}\\\times \log \frac{{Q_\theta}(x^1_{1:n_1},\mbox{\tiny $\dots$},x^k_{1:n_k}) } {{Q_{\theta_0}}(x^1_{1:n_1},\mbox{\tiny $\dots$},x^k_{1:n_k})}\quad \!\!\leq \! \!\!\sum_{(n_1\mbox{\tiny $\dots$},n_k) \in {\mathbb{N}}^k} \sum_{x^1_{1:n_1}} {\mathbb{P}_{0}}\big(Z_n=(n_1,\mbox{\tiny $\dots$},n_k),\,X^1_{1:n_1}=x^1_{1:n_1}\big) \\ \times\log \mbox{{\fontsize{14.4}{18}\selectfont}$\left(\sum_{(x^i_{1:n_i})_{i=2}^k} \!\!\! \frac{{\mathbb{P}_{0}}\big(Z_n =(n_1,\mbox{\tiny $\dots$},n_k),\, X^1_{1:n_1}=x^1_{1:n_1},\mbox{\tiny $\dots$},X^k_{1:n_k}=x^k_{1:n_k}\big){Q_\theta}(x^1_{1:n_1},\mbox{\tiny $\dots$},x^k_{1:n_k} ) } {{\mathbb{P}_{0}}\big(Z_n =(n_1\mbox{\tiny $\dots$},n_k),\,X^1_{1:n_1}=x^1_{1:n_1}\big) {Q_{\theta_0}}(x^1_{1:n_1},\mbox{\tiny $\dots$},x^k_{1:n_k}) }\!
\right)$} \\ \leq \sum_{(n_1\mbox{\tiny $\dots$},n_k) \in {\mathbb{N}}^k} \sum_{x^1_{1:n_1}} {\mathbb{P}_{0}}\big(Z_n=(n_1,\mbox{\tiny $\dots$},n_k)\big) f_0^{\otimes n_1} (x^1_{1:n_1})\phantom{sssssssssssssssssssssssssssssssssssssssssss}\\ \phantom{sssssssssssssssssssssssss}\times \log \left(\frac{ {\mathbb{P}_\theta}\big(\exists m\geq 1, Z_m=(n_1,\mbox{\tiny $\dots$},n_k)\big) f^{\otimes n_1} (x^1_{1:n_1}) } {{\mathbb{P}_{0}}\big(Z_n =(n_1,\mbox{\tiny $\dots$},n_k)\big) f_0^{\otimes n_1} (x^1_{1:n_1}) } \right)\!,\end{gathered}$$ where the last inequality comes from (\[re\]) and the fact that $${\mathbb{P}_{0}}\big(Z_n =(n_1,\dots,n_k),\, X^1_{1:n_1}=x^1_{1:n_1},\dots,X^k_{1:n_k}=x^k_{1:n_k}\big) \leq {Q_{\theta_0}}(x^1_{1:n_1},\dots,x^k_{1:n_k}).$$ Thus, we have $$\begin{gathered} -D(\theta|\theta_0)\leq \limsup_{n \to +\infty} \frac{1}{n} \sum_{(n_1\dots,n_k) \in {\mathbb{N}}^k} {\mathbb{P}_{0}}(Z_n=(n_1,\dots,n_k)) \phantom{sssssssssssssssssssssssssssssssssss}\\ \phantom{sssssssssssss}\times \Big\{\log \frac {{\mathbb{P}_\theta}\big(\exists m\geq 1, Z_m=(n_1,\dots,n_k)\big)} {{\mathbb{P}_{0}}\big(Z_n =(n_1,\dots,n_k)\big)}+ n_1\sum_x f_0(x)\log \frac{f(x)}{f_0(x)} \Big\}\\ \leq \limsup_{n \to +\infty} \frac{1}{n} \Big\{\log \sum_{(n_1\dots,n_k) \in {\mathbb{N}}^k} {\mathbb{P}_\theta}\big(\exists m\geq 1, Z_m=(n_1,\dots,n_k)\big)\phantom{sssssssssssssssssssssssss}\\ \phantom{sssssssssssssssssss}+ \sum_{(n_1\dots,n_k) \in {\mathbb{N}}^k} {\mathbb{P}_{0}}\big(Z_n=(n_1,\dots,n_k)\big) n_1 \sum_x f_0(x)\log \frac{f(x)}{f_0(x)}\Big\}\\ \leq \limsup_{n \to +\infty} \frac{1}{n} \! \Big\{\! {\mathbb{E}_{0}}[Z^1_n]\!\sum_x \!f_0(x)\log \frac{f(x)}{f_0(x)}\! \Big\}\!=\! \limsup_{n \to +\infty} \frac{1}{n} \Big\{n\! \sum_x f_0(x)\log \frac{f(x)}{f_0(x)} \Big\}\!<\!0,\end{gathered}$$ as soon as $f\neq f_0$, since ${\mathbb{E}_{0}}[Z^1_n]=n$ from Assumption 1. The proof for $D^{*}$ follows along the same lines.
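The strict inequality concluding the proof rests on the elementary fact that $\sum_x f_0(x)\log\big(f(x)/f_0(x)\big)=-\mathrm{KL}(f_0\,\|\,f)$ is strictly negative whenever $f\neq f_0$ (Gibbs' inequality), as the following minimal check illustrates (the two toy distributions are arbitrary):

```python
import math

def mean_log_ratio(f0, f):
    """sum_x f0(x) log(f(x)/f0(x)) = -KL(f0 || f); zero iff f = f0 (Gibbs)."""
    return sum(p0 * math.log(p / p0) for p0, p in zip(f0, f) if p0 > 0)

f0 = [0.25, 0.25, 0.25, 0.25]   # hypothetical emission laws on a 4-letter alphabet
f = [0.40, 0.30, 0.20, 0.10]
assert mean_log_ratio(f0, f) < 0            # forces D(theta | theta_0) > 0
assert abs(mean_log_ratio(f0, f0)) < 1e-15  # and vanishes only at f = f0
```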
\ It would be interesting to prove the uniqueness of the maximum of the functions $\ell(\theta)$ and $w(\theta)$ at the true value of the parameter $\theta_0$. If that were true, the consistency of maximum likelihood and Bayesian estimators would be obtained with classical arguments (see Arribas-Gil [*et al.*]{}, 2006). In Section \[simus\] we investigate the behavior of the functions $\ell(\theta)$ and $w(\theta)$ via some simulations. Extension to the case of an arbitrary tree ========================================== Let us now consider an arbitrary phylogenetic tree, that is, a tree with inner nodes such as the one in Figure \[arb\_tree\] (a). Without loss of generality we can assume that we deal with a binary tree (the number of edges going out from every inner node is equal to two) in which the length of the path from the root to each leaf is the same for every leaf in the tree. There is an example of this kind of tree in Figure \[arb\_tree\] (b). Indeed, we will only use this fact to simplify notations, since it allows us to describe the evolutionary behavior of any internal node in a general way and to define the model in a simpler manner. Otherwise, the state space of the hidden process would depend on the particular structure of the tree, but the results given in this section still hold.
(60,30) (30,25)[(-5,-3)[15]{}]{} (15,16)[(-2,-3)[8]{}]{} (15,16)[(2,-3)[8]{}]{} (15,16)[(0,-1)[12]{}]{} (30,25)[(5,-3)[35.5]{}]{} (30,25)[(2,-3)[6]{}]{} (36,15.5)[(-2,-3)[7.7]{}]{} (36,15.5)[(2,-3)[3]{}]{} (39,11)[(-2,-3)[4.8]{}]{} (39,11)[(4,-3)[9.5]{}]{} (29,27)[${\cal R}$]{}(14,15)[$\bullet$]{} (9.4,15.6)[$N^1$]{} (35,15)[$\bullet$]{} (29.6,14.6)[$N^2$]{} (5,0)[$X^1$]{} (13,0)[$X^2$]{} (21,0)[$X^3$]{} (26.8,0)[$X^4$]{} (38,10.5)[$\bullet$]{} (40.1,10.9)[$N^3$]{} (32.5,0)[$X^5$]{} (47.1,0)[$X^6$]{} (64.2,0)[$X^7$]{} (28,32)[(a)]{} (60,30) (30,25)[(-5,-2)[14]{}]{} (16,19)[(-4,-3)[7]{}]{} (16,19)[(4,-3)[7]{}]{} (9,14)[(-2,-5)[4]{}]{} (9,14)[(2,-5)[4]{}]{} (23,14)[(-2,-5)[4]{}]{} (23,14)[(2,-5)[4]{}]{} (30,25)[(5,-2)[14]{}]{} (44,19)[(-4,-3)[7]{}]{} (44,19)[(4,-3)[7]{}]{} (51,14)[(-2,-5)[4]{}]{} (51,14)[(2,-5)[4]{}]{} (37,14)[(-2,-5)[4]{}]{} (37,14)[(2,-5)[4]{}]{} (29,27)[${\cal R}$]{}(15,18.5)[$\bullet$]{} (12.9,20.3)[$N^1$]{} (8.5,13)[$\bullet$]{} (6,15)[$N^3$]{} (22,13)[$\bullet$]{} (24,15)[$N^4$]{} (3,0)[$X^1$]{} (11,0)[$X^2$]{} (17,0)[$X^3$]{} (25,0)[$X^4$]{} (43,18.5)[$\bullet$]{} (45,20)[$N^2$]{} (50,13)[$\bullet$]{} (51.5,15)[$N^6$]{} (36.5,13)[$\bullet$]{} (34,15)[$N^5$]{} (53.6,0)[$X^8$]{} (45.6,0)[$X^7$]{} (39.5,0)[$X^6$]{} (31.6,0)[$X^5$]{} (28,32)[(b)]{} The multiple-hidden i.i.d. model on a binary tree with $k$ observed sequences, ${\cal T}_{2,k}$, is defined as follows. Consider a sequence of i.i.d. random variables $\{{\varepsilon}_n\}_{n\geq 1}$ on the state space $$\begin{gathered} {\mathcal{E}}^{{\cal T}_{2,k}}= \left\{ e \in {\cal M}_{(2^k-1),2m}\, ,m \in {\mathbb{N}};\,\, e^{h}_{p}\in \{(1,0), (0,0)\}, \phantom{{\mathcal{E}}^2}\right.\\ \left. 
e^{\{a_i,i,i'\}}_{p}\in \{(1,0)\!\times \!{\mathcal{E}}^2\!,\, 0_{3,2}\},\, \mbox{\small $p\!=\!1,\dots,m, \forall h\!\in \!I,\forall i,i'\!\in\!O, i\sim i'$}\right\}\end{gathered}$$ where ${\cal M}_{a,b}$ denotes the set of all $a$-by-$b$ matrices with entries in ${\mathbb{N}}$, $0_{3,2}$ denotes the $3$-by-$2$ null matrix, $I$ denotes the set of internal nodes (unobserved sequences) of the tree and $O$ denotes the set of external nodes (observed sequences) of the tree. For an observed sequence $i$, $a_i$ stands for its direct ancestor, that is, the sequence that is placed in its closest internal node. For two observed sequences $i$ and $i'$, we write $i\sim i'$ if they share the same direct ancestor (that is, $a_i=a_{i'}$). For $e$ in ${\mathcal{E}}^{{\cal T}_{2,k}}$, $e^{\{i,j,h\}}_{p}$ is the sub-matrix of $e$ composed of rows $i,j,h$ and columns $2(p-1)+1$ and $2p$. An element $e$ of ${\mathcal{E}}^{{\cal T}_{2,k}}$ represents the fate of a nucleotide in the root sequence and all the insertions produced at the different levels of the tree. It is a finite sequence of $(2^k-1)$-by-$2$ matrices, in which each row represents one node (sequence) of the tree. We will assume that the first row represents the sequence at the root. The first $(2^k-1)$-by-$2$ matrix represents the fate of the nucleotide at the root in the first column (whether it is conserved, 1, or deleted, 0, in each one of the sequences) and the number of insertions produced to its right in the observed sequences (second column). The difference with the star-tree case is that now we may also have (non-observed) insertions in the internal sequences. They appear in the following $(2^k-1)$-by-$2$ matrices, represented by 1 in the corresponding position of the first column, where we also represent the fate of the inserted nucleotide and the number of insertions produced to its right in the corresponding descendant sequences.
The rows of $e$ that are not affected by that insertion (because the corresponding sequences are not descendants of that internal sequence) may represent the fate of another inserted nucleotide in a different internal sequence. That is why in the same $(2^k-1)$-by-$2$ matrix we may represent independent events in different rows. Indeed, the events represented in two different rows $i$ and $j$ of the same $(2^k-1)$-by-$2$ matrix are independent if the row corresponding to the closest common ancestor of $i$ and $j$ in that matrix takes the value $(0,0)$, and they are dependent if it takes the value $(1,0)$. There is an example of an element of ${\mathcal{E}}^{{\cal T}_{2,k}}$ in Figure \[Ex\_arb\_tree\]. [**Homology structure** ***Bare* alignment**]{}\ -------- ---- ---- ---- ---- ---- ---- ---- ---- $R$: 10 00 00 00 00 00 00 00 $N^1$: 10 00 00 10 00 00 10 00 $N^3$: 10 10 00 10 00 00 10 10 $X^1$: 12 10 00 10 00 00 10 10 $X^2$: 10 00 00 11 00 00 12 01 $N^4$: 10 00 00 10 10 10 10 00 $X^3$: 10 00 00 10 10 10 11 00 $X^4$: 11 00 00 11 11 01 00 00 $N^2$: 10 00 00 10 00 00 00 00 $N^5$: 10 10 10 10 00 00 00 00 $X^5$: 10 10 10 10 00 00 00 00 $X^6$: 01 10 01 11 00 00 00 00 $N^6$: 10 10 00 10 10 00 00 00 $X^7$: 10 10 00 00 10 00 00 00 $X^8$: 10 02 00 10 12 00 00 00 -------- ---- ---- ---- ---- ---- ---- ---- ---- For $e\in {\mathcal{E}}^{{\cal T}_{2,k}}$ such that $e \in {\cal M}_{(2^k-1),2m}$, $m \in {\mathbb{N}}$, we will write $|e|=m$. Also, for any $(2^k-1)$-by-$2$ submatrix $e_p$, $1\leq p \leq |e|$, we will write $\|e_p\|=e_p(1)+e_p(2)$, that is, the sum of the two columns of $e_p$. $e^{obs}$ will denote the $k$-by-$2|e|$ matrix whose rows are the rows of $e$ corresponding to the observed sequences. For any internal node (non-observed sequence) $i\in I$, $d_i$ will denote the set of the two direct descendants of $i$, and $D_i$ will denote the set of all the descendants of $i$ which are observed sequences.
Also, for any sequence $i$, and any $p$, $1\leq p \leq |e|$, such that $e_p^{a_i}(1)=1$, we will denote $\overline{\|e_p^i\|}=\sum_{r=p}^q \|e_r^i\|$, where $q$ is such that $e_r^{a_i}(1)=0$ for $r=p+1,\dots,q-1$ and $e_q^{a_i}(1)=1$. $\overline{\|e_p^i\|}$ represents the total number of descendants in sequence $i$ of the given nucleotide from sequence $a_i$. If $i$ is one of the two direct descendants of the root, then $\overline{\|e_1^i\|}=\sum_{r=1}^{|e|} \|e_r^i\|$. Note that if $i$ stands for an observed sequence (external node), for any $p$ such that $e_p^{a_i}(1)=1$, $\overline{\|e_p^i\|}=\|e_p^i\|$. The same notations apply to the random process $\{{\varepsilon}_n\}_{n\geq 1}$. In the case in which we consider the TKF91 indel model, due to the branch independence, the law of ${\varepsilon}_n$ is given by $$\label{arb_loi_e} \mathbb{P}_{\lambda}\left({\varepsilon}_n\!=\!e\right)\!=\!\prod_{p=1}^{|e|}\prod_{\substack{i=2;\\ \mbox{\tiny$e_p^{a_i}\!(1)\!\!=\!\!1$}}}^{2^k-1} \left(\!q^H_{\substack{\phantom{i}\\ \mbox{\tiny$\overline{\|e_p^i\|}$}}}(t_i)\!\right)^{\!\!{1\! \mathrm{l}\{ e_p^{i}(1)=1\}}}\!\! \left(\!q^N_{\substack{\phantom{i}\\ \mbox{\tiny$\overline{\|e_p^i\|}$}}}(t_i)\!\right)^{\!\!{1\! \mathrm{l}\{ e_p^{i}(1)=0\}}}\!\!, \, e\in {\mathcal{E}}^{{\cal T}_{2,k}}, n\geq 1$$\ where $t_i$ represents the evolutionary time between sequences $i$ and $a_i$. In the general case, we will denote by $\pi$ the law of ${\varepsilon}_n$. As in the star tree case, the process $\{{\varepsilon}_n\}_{n\geq 1}$ generates a random walk $\{Z_n\}_{n \geq 0}$ with values in ${\mathbb{N}}^k$ by letting $Z_0 =0_k$ and $Z_n =\sum_{1\leq j\leq n} \sum_{1\leq p\leq |{\varepsilon}_j|}\|{\varepsilon}_{j_p}^{obs} \|$ for $n\geq 1$. The coordinate random variables corresponding to $Z_n$ at position $n$ are denoted by $(Z^1_n,\dots,Z^k_n)$ ([*i.e.*]{} $Z_n=(Z^1_n,\dots,Z^k_n)$). Let us now describe the emission of the observed sequences, which take values in a finite alphabet ${\cal A}$.
We distinguish two kinds of emissions: joint emissions across $k$ or a smaller number of sequences (corresponding to ${\varepsilon}_{n_p}(1)$, $1\leq p \leq |{\varepsilon}_n|$) and single emissions (corresponding to ${\varepsilon}_{n_p}(2)$, $1\leq p \leq |{\varepsilon}_n|$). For $n\geq1$, and for $1\leq p \leq |{\varepsilon}_n|$, if ${\varepsilon}_{n_p}^{\tau}(1)=1$ and ${\varepsilon}_{n_p}^{a_\tau}(1)=0$ for any $\tau \in I$, then a vector of $r=|\{i\in D_{\tau}|{\varepsilon}_{n_p}^{i}(1)=1\}|$ random variables is emitted according to some probability distribution $h_J$, $J=\{i\in D_{\tau}|{\varepsilon}_{n_p}^{i}(1)=1\}$, on ${\cal A}^r$ and $\sum_{i\in D_{\tau}} {\varepsilon}_{n_p}^{i}(2)$ random variables $\{ X^i_{1:{\varepsilon}_{n_p}^{i}(2)}\}$, $i\in D_{\tau}$, are emitted according to the following scheme: $\{X^i_j\}^{i\in D_{\tau}}_{1,{\varepsilon}_{n_p}^{i}(2)}$ are independent and identically distributed from some probability distribution $f$ on ${\cal A}$. In practice, the emission law $h$ may take into account the emissions in internal sequences. Consider, for instance, the emission in the first column of the homology structure of Figure \[Ex\_arb\_tree\]. If we deal with a classical Markovian substitution model, with stationary distribution $\nu$ and transition probability matrix $p_t(\cdot,\cdot)$, the emission of nucleotides $x^1,\dots,x^5,x^7,x^8$ in sequences $X^1,\dots,X^5,X^7,X^8$ would have probability $$\begin{gathered} h_{\{1,\dots,5,7,8\}}(x^1,\dots,x^5,x^7,x^8) \\ =\sum_{R \in {\mathcal{A}}} \nu(R)\times \left\{ \left( \sum_{\tau_1 \in A} p_{{s_1}} (R,\tau_1) \left[\sum_{\tau_3 \in A} p_{{s_3}} (\tau_1,\tau_3) p_{t_1}(\tau_3,x^1) p_{t_2}(\tau_3,x^2) \right] \right.\right. \hspace{3cm}\\ \phantom{a}\hspace{6cm} \left.
\times \left[\sum_{\tau_4 \in A} p_{{s_4}} (\tau_1,\tau_4) p_{t_3}(\tau_4,x^3) p_{t_4}(\tau_4,x^4) \right] \right)\\ \times \left( \sum_{\tau_2 \in A} p_{{s_2}} (R,\tau_2) \left[\sum_{\tau_5 \in A} p_{{s_5}} (\tau_2,\tau_5) p_{t_5}(\tau_5,x^5) \right] \right.\hspace{6cm}\\ \phantom{a}\hspace{5cm} \left. \left. \times \left[\sum_{\tau_6 \in A} p_{{s_6}} (\tau_2,\tau_6) p_{t_7}(\tau_6,x^7) p_{t_8}(\tau_6,x^8) \right] \right) \right\}\end{gathered}$$ where $R$ represents the nucleotide in the root, $\tau_i$ the nucleotide in internal sequence $N^i$, $s_i$ the evolution time to internal sequence $N^i$ from its direct ancestor and $t_i$ the evolution time to observed sequence $X^i$ from its direct ancestor. As in the star tree case, conditionally on the process $\{{\varepsilon}_n\}_{n\geq 1}$, the random variables emitted at different instants are independent. The whole multiple-hidden i.i.d. model is described by the parameter $\theta=(\pi,\,\{h_J\}_{J\subseteq K},\,f)\in \Theta$. The conditional distribution of the observations given a homology structure $e_{1:n}=(e_j)_{1\leq j\leq n}$ reads $$\begin{aligned} \label{arbicondi} &&{\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n} | {\varepsilon}_{1:n}=e_{1:n} ) = \prod_{j=1}^n {\mathbb{P}_\theta}({\mathbb{X}}_{Z_{j-1}+1_k:Z_j} | {\varepsilon}_j=e_j ) \nonumber\\ &=&\prod_{j=1}^n \prod_{p=1}^{|e_j|} \Big\{ \prod_{\substack{\tau\in I;\\ \mbox{\tiny$e_{j_p}^{\tau}\!(1)\!\!=\!\!1$}\\ \mbox{\tiny $e_{j_p}^{a_{\tau}}\!(1)\!\!=\!\!0$}}} h_{\{i\in D_{\tau} | e_{j_p}^{i}(1)=1\}} \left(\{X^{i}_{Z^{i}_{j-1}+\sum_{r=1}^{p-1}\|e_{j_p}^{i}\|+1}\}_{\{i\in D_{\tau} | e_{j_p}^{i}(1)=1\}}\right) \Big\} \nonumber\\ && \hspace{1cm}\times \Big\{ \prod_{i\in O} \prod_{s=1}^{e_{j_p}^i(2)} f\big(X^i_{Z^i_{j-1}+\sum_{r=1}^{p-1}\|e_{j_p}^i\|+e_{j_p}^i(1) +s}\big) \Big\}.\end{aligned}$$ And the complete distribution ${\mathbb{P}_\theta}$ is given by $$\begin{aligned} &&{\mathbb{P}_\theta}({\varepsilon}_{1:n}=e_{1:n},{\mathbb{X}}_{1_k:Z_n})= 
{\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_n} | {\varepsilon}_{1:n}=e_{1:n} ) \prod_{j=1}^n \pi(e_j).\end{aligned}$$ At this point we can define the parameter set $\Theta_0$, likelihoods $\omega_n (\theta)$ and $\ell_n(\theta)$ and divergence rates $D (\theta \vert \theta_0)$ and $D^{*}(\theta \vert \theta_0)$ in the same way as in the star-tree case. Indeed, Theorem 1 also holds in this case. Moreover, since we do not exploit any specific characteristic of $\pi$ or the emission laws to prove this result, the proof is exactly the same as the one given in Section \[Inf\_Div\]. The only slight difference appears when proving point 3, but it is clear that ${\mathbb{P}_\theta}({\mathbb{X}}_{1_k:Z_1})>0$ also holds in this case for $\theta \in \Theta_0$. By analogy with the star-tree case, we will introduce an assumption to ensure that asymptotic results for $n \to \infty$ will imply equivalent ones for $n_i \to \infty,\, i=1,\dots,k$. It also guarantees that ${\mathbb{E}_\theta}[Z_n]=n$, for $n\in {\mathbb{N}}$, as required to prove Theorem 2. In the multiple-hidden i.i.d. model on a binary tree ${\mathbb{E}_\theta}\,\big[\displaystyle\sum_{p=1}^{|{\varepsilon}_n|}\|{\varepsilon}_{n_p}^{obs}\|\big]=1_k$, for $n\geq 1$, for any $\theta\in\Theta$. This assumption holds for the multiple-hidden i.i.d. model under the TKF91 indel evolution process, as shown in the following lemma. \[esp=1arbitrary\] In the multiple-hidden i.i.d. model on a binary tree under the TKF91 indel evolution process, for any $\lambda >0$ we have $Z_n^i \sim n$, $i=1,\dots,k$, $\mathbb{P}_{\lambda}$-almost surely. We have already proved this result in the case of a star phylogenetic tree (Lemma \[esp=1\]), that is, when we have a tree without internal nodes. 
Now the idea of the proof is that if at each level of the tree the expectation of the number of nucleotides descending (conserved plus inserted) from a single nucleotide in the parent sequence is 1, then the expectation of the total number of nucleotides at each observed sequence descending from a single nucleotide in the root sequence will also be 1. Let us prove this recursively. Let $L$ be the total number of levels of the tree, that is, the number of edges between the root and an observed sequence (in the case of a binary tree, $L=\log_2 k$). For each observed sequence $i\in Obs$, we will denote by $a_i^l$, $l=1,\dots,L$, the $l$-th ancestor of $i$, beginning at the direct ancestor and ending at the root of the tree. For all $i\in Obs$ and for all $n\geq 1$ we have that $$Z_n^i=\sum_{1\leq j\leq n} \sum_{1\leq p\leq |{\varepsilon}_j|}\|{\varepsilon}_{j_p}^i \|$$ where $\{{\varepsilon}^i_j\}_{j\geq 1}$ are i.i.d. Moreover, we have, for any $\lambda >0$ $$\begin{aligned} & &\!\!\!\!\!\!\!\!\!\!\!\mathbb{E}_{\lambda}\!\left[\sum_{p=1}^{|{\varepsilon}_j|}\|{\varepsilon}_{j_p}^i \|\right] \stackrel{(a)}{=}\mathbb{E}_{\lambda}\!\!\left[\sum_{p=1}^{\sum_{q=1}^{|{\varepsilon}_j|}\|{\varepsilon}^{a_i^1}_{j_q}\|}\!\!\!\|{\varepsilon}_{j_p}^i \|\right]=\mathbb{E}_{\lambda}\left\{ \mathbb{E}_{\lambda}\left[\sum_{p=1}^{\sum_{q=1}^{|{\varepsilon}_j|} \|{\varepsilon}^{a_i^1}_{j_q}\|}\!\!\!\|{\varepsilon}_{j_p}^i \| \,\left| \sum_{q=1}^{|{\varepsilon}_j|} \|{\varepsilon}^{a_i^1}_{j_q}\|\right.\right]\!\right\}\\ &=&\mathbb{E}_{\lambda}\left[ \sum_{p=1}^{\sum_{q=1}^{|{\varepsilon}_j|} \|{\varepsilon}^{a_i^1}_{j_q}\|} \mathbb{E}_{\lambda}\left[\|{\varepsilon}_{j_p}^i \| \right]\right]\stackrel{(b)}{=} \mathbb{E}_{\lambda}\left[ \sum_{p=1}^{\sum_{q=1}^{|{\varepsilon}_j|} \|{\varepsilon}^{a_i^1}_{j_q}\|} 1\right]=\mathbb{E}_{\lambda}\left[ \sum_{q=1}^{|{\varepsilon}_j|} \|{\varepsilon}^{a_i^1}_{j_q}\|\right]\\ &=&\mathbb{E}_{\lambda}\left[ \sum_{q=1}^{|{\varepsilon}_j|} 
\|{\varepsilon}^{a_i^2}_{j_q}\|\right]=\dots=\mathbb{E}_{\lambda}\left[ \sum_{q=1}^{|{\varepsilon}_j|} \|{\varepsilon}^{a_i^{L-1}}_{j_q}\|\right]\stackrel{(c)}{=}1\end{aligned}$$ where (a) comes from the fact that $\|e_{j_p}^{i}\|\neq 0$ only for those $p$ such that ${\varepsilon}_{j_p}^{a^1_i}(1)=1$, and (b) comes from Lemma \[esp=1\]. Finally, for any $i \in Obs$, $\sum_{q=1}^{|{\varepsilon}_j|} \|{\varepsilon}^{a_i^{L-1}}_{j_q}\|$ is just the number of descendants (conserved plus inserted nucleotides) of the nucleotide in the root in one of its direct children. The expectation of this quantity is again 1 by Lemma \[esp=1\]. The result then follows from the strong law of large numbers. \ Finally, to prove that Theorem 2 also holds in the case of an arbitrary tree, we need to show that for any $\theta \in \Theta_{marg}$ (same definition as in Section \[Inf\_Div\]) and for any observed sequence $i$ $${\mathbb{P}_\theta}\big(Z_n=(n_1,\dots,n_k),\,X^i_{1:n_i}=x^i_{1:n_i}\big)={\mathbb{P}_\theta}\big(Z_n=(n_1,\dots,n_k)\big)f^{\otimes n_i} (x^i_{1:n_i}).$$ But this can be easily shown from expression (\[arbicondi\]) in the same way as in (\[re\]). Then the asymptotic results obtained in Section \[Inf\_Div\] are also valid when the phylogenetic tree has a general form. Simulations {#simus} =========== For the simulations we have considered a 3-star phylogenetic tree, the simplest non-trivial example of multiple alignment. The branch lengths, or evolutionary distances from the ancestral sequence to the observed sequences, are all set to $1$. Let us recall that this distance is not the real time of evolution between sequences but a measure given in terms of the expected number of evolutionary events per site. Indeed, under the TKF91 indel evolution model $\lambda t$ is the expected number of indels per site between two sequences at distance $t$. 
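The mean-one branching property that drives the law-of-large-numbers argument of Lemma \[esp=1arbitrary\] can be illustrated with a short toy simulation. The sketch below is ours and uses assumed distributions: the true TKF91 descendant laws $q^H$, $q^N$ are replaced by an arbitrary mean-one law on $\{0,1,2\}$, and all function names are illustrative.

```python
import random

rng = random.Random(0)

def descendants(rng):
    """Toy stand-in for the number of descendants (conserved plus inserted)
    of a single parent nucleotide in a child sequence; any distribution
    with expectation 1 illustrates the argument."""
    return rng.choices([0, 1, 2], weights=[1, 2, 1])[0]  # mean = 1

def leaf_descendants(rng, levels):
    """Total descendants at an observed sequence, `levels` edges below the
    root, of one root nucleotide: compose the branching level by level,
    as in the recursive step of the proof."""
    count = 1
    for _ in range(levels):
        count = sum(descendants(rng) for _ in range(count))
    return count

# Z_n accumulates the leaf descendants of n ancestral positions.
n, levels = 20000, 3
z_n = sum(leaf_descendants(rng, levels) for _ in range(n))
print(z_n / n)  # close to 1, illustrating Z_n^i ~ n almost surely
```

Since the expectation is preserved at every level, the composed count also has mean 1, and the empirical ratio $Z_n^i/n$ concentrates around 1.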
The distribution of the hidden process has been taken to be the distribution of the homology structure under the TKF91 indel evolution model, that is, $\{{\varepsilon}_n\}_{n\geq 1}$ are independent and identically distributed as in (\[loi\_e\]). However, we have used the equivalent multiple-HMM scheme (see for instance Hein [*et al.*]{}, 2003, and Figure \[mapa\]) to simulate the sequences. Indeed, in practice it is easier to simulate from a finite state Markov chain than from our i.i.d. variables on ${\mathbb{N}}^3$. The number of states of the Markov chain for three sequences is 15 ($2^4-1$). The simulated sequences have been used to compute the quantities $\ell(\theta)$ and $w(\theta)$. The log-likelihood $\omega_n(\theta)$ has been computed with the Forward algorithm for multiple-HMM (cf. Durbin [*et al.*]{}, 1998). Note that this algorithm computes the log-likelihood by summing over all possible alignments of the three sequences. However, since a homology structure is just a set of alignments, this is equivalent to summing over all possible homology structures, and the final result is exactly $\omega_n(\theta)$. The time complexity of an unoptimized version of this algorithm is $O(15^2 n_1 n_2 n_3)$, where $n_1$, $n_2$ and $n_3$ are the lengths of the observed sequences. Computation of $\ell_n(\theta)$ is done with a modified version of the Forward algorithm that takes into account the length of the ancestral sequence. The time complexity now grows to $O(15\, n\, n_1 n_2 n_3)$. This is the reason why we limited the simulations to 3 sequences. The emission distributions chosen for the simulations, $\{h_{J}\}_{J\subseteq \{1,2,3\}}$ and $f$, are defined by the substitution model described below. 
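For reference, the forward recursion underlying these likelihood computations can be sketched for an ordinary single-chain HMM; the multiple-HMM version runs the analogous recursion over the 15 joint indel states and the three sequence indices, at the cost stated above. The transition and emission values below are toy numbers of our own choosing, not the TKF91 parametrization.

```python
import math

def forward_loglik(init, trans, emit, obs):
    """Log-likelihood of an observation sequence under a discrete HMM,
    computed by the forward recursion: alpha[s] accumulates the joint
    probability of the observations so far and the current state s."""
    n_states = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n_states)]
    for x in obs[1:]:
        alpha = [
            emit[s][x] * sum(alpha[r] * trans[r][s] for r in range(n_states))
            for s in range(n_states)
        ]
    return math.log(sum(alpha))

# Toy two-state chain emitting letters from {0, 1} (illustrative values only).
init = [0.5, 0.5]
trans = [[0.9, 0.1], [0.2, 0.8]]
emit = [[0.8, 0.2], [0.3, 0.7]]
print(forward_loglik(init, trans, emit, [0, 1, 1, 0]))
```

The recursion sums over all hidden state paths in linear time in the sequence length, which is exactly why summing over all alignments (equivalently, homology structures) remains tractable.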
The substitution model ---------------------- For the whole simulation procedure we consider the following pairwise Markovian substitution model: $$p_t(x,y) =\left\{ \begin{array}{ll} (1-e^{-\alpha t})\nu(y) & \text{ if } x\neq y\\ (1-e^{-\alpha t})\nu(x) +e^{-\alpha t} & \text{ otherwise, } \end{array} \right .$$ where $\alpha >0$ is called the substitution rate, $t$ is the evolutionary distance, and for every letter $x$, $\nu(x)$ equals the equilibrium probability of $x$. This model is known as the Felsenstein81 substitution model (Felsenstein, 1981). We will take $f(\cdot)=\nu(\cdot)$. We define the emission function $h$ as $$h_{J}((x_i)_{i\in J})=\sum_{R\in {\mathcal{A}}} \nu(R)\prod_{i \in J}p_{t_{i}} (R,x_i)$$ for all $J \subseteq \{1,2,3\}$. The equilibrium probability distribution $\nu(\cdot)$ is assumed to be known and will not be part of the parameter. Then we have $f(\cdot)=f_0(\cdot)$. We will set it to $\{\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4}\}$ for the whole simulation procedure. The unknown parameter is $\theta=(\lambda,\alpha)$. Simulation results ------------------ We have computed the functions $\ell(\theta)$ and $w(\theta)$ for two different values of $\theta_0$: - $\lambda_0=0.02$, $\alpha_0=0.1$ and - $\lambda_0=0.01$, $\alpha_0=0.08$. The substitution rate is much larger than the insertion-deletion rate, and both are quite small, in line with biological expectations.\ ![*On top: $w$ and $\ell$ for parametrization ($\lambda_0=0.02,\alpha_0=0.1$). On bottom: cuts of $\ell$ and $w$ for $\alpha=\alpha_0$ fixed and for $\lambda=\lambda_0$ fixed.*[]{data-label="w"}](w1.eps "fig:"){width="6.95cm"}![*On top: $w$ and $\ell$ for parametrization ($\lambda_0=0.02,\alpha_0=0.1$). On bottom: cuts of $\ell$ and $w$ for $\alpha=\alpha_0$ fixed and for $\lambda=\lambda_0$ fixed.*[]{data-label="w"}](l1.eps "fig:"){width="6.95cm"}\ ![*On top: $w$ and $\ell$ for parametrization ($\lambda_0=0.02,\alpha_0=0.1$). 
On bottom: cuts of $\ell$ and $w$ for $\alpha=\alpha_0$ fixed and for $\lambda=\lambda_0$ fixed.*[]{data-label="w"}](wlcortes1.eps "fig:"){width="13.9cm"} ![*On top: $w$ and $\ell$ for parametrization ($\lambda_0=0.01,\alpha_0=0.08$). On bottom: cuts of $\ell$ and $w$ for $\alpha=\alpha_0$ fixed and for $\lambda=\lambda_0$ fixed.*[]{data-label="w2"}](w3.eps "fig:"){width="6.95cm"}![*On top: $w$ and $\ell$ for parametrization ($\lambda_0=0.01,\alpha_0=0.08$). On bottom: cuts of $\ell$ and $w$ for $\alpha=\alpha_0$ fixed and for $\lambda=\lambda_0$ fixed.*[]{data-label="w2"}](l3.eps "fig:"){width="6.95cm"}\ ![*On top: $w$ and $\ell$ for parametrization ($\lambda_0=0.01,\alpha_0=0.08$). On bottom: cuts of $\ell$ and $w$ for $\alpha=\alpha_0$ fixed and for $\lambda=\lambda_0$ fixed.*[]{data-label="w2"}](wl_cortes3.eps "fig:"){width="13.9cm"} The graphs of $\ell(\theta)$ and $w(\theta)$ for these parameterizations are shown in Figures \[w\] and \[w2\]. For the first parametrization we can see that $w(\theta)$ seems to attain its maximum at ($\lambda_0,\alpha_0$) (Figure \[w\], top left). For $\ell(\theta)$ this is less evident, and the same is true for both functions under the second parametrization. However, when looking at the cuts of $w(\theta)$ and $\ell(\theta)$ for $\alpha=\alpha_0$ and $\lambda=\lambda_0$, we observe that in both parameterizations both functions seem to attain their maxima near $\lambda_0$ and $\alpha_0$, respectively. We remark that in the two examples the functions $\ell(\theta)$ and $w(\theta)$ are very close to each other. Discussion ========== The main contribution of this work is to provide a probabilistic and statistical background to parameter estimation in the multiple alignment of sequences based on a rigorous model of evolution. We describe the homology structure of $k$ sequences related by a star-shaped phylogenetic tree as a sequence of i.i.d. random variables whose distribution is determined by the evolution process. 
Given the observed sequences, the homology structure is a latent (non-observable) process. We formally define the latent variable model that [*emits*]{} the observed sequences, namely the multiple-hidden i.i.d. model. We discuss possible definitions of likelihoods in comparison with the quantities computed by multiple alignment algorithms. Our main results are given in Theorems \[thdivergence2\] and \[contrast2\], where we first prove the convergence of normalized log-likelihoods and identify cases where a divergence property holds. We then extend the definition of the model and the results obtained to the case of an arbitrary phylogenetic tree. Despite the positive results that we obtain, it is not yet possible to validate the estimation of evolution parameters under the multiple-hidden i.i.d. model in every situation. However, the simulation studies that we present to investigate situations that are not covered by Theorem \[contrast2\] provide encouraging results. Acknowledgments {#acknowledgments .unnumbered} =============== The author would like to thank Elisabeth Gassiat from Université Paris-Sud (France) and Catherine Matias from Génopole, CNRS (France), for fruitful advice and helpful comments. The author was partially supported by the Spanish Ministerio de Ciencia e Innovación through project ECO2008-05080 and by Comunidad de Madrid - Universidad Carlos III (Spain) through project CCG08-UC3M/HUM-4467. [^1]: Departamento de Estadística. Universidad Carlos III de Madrid. C/ Madrid 126, 28903 Getafe, Spain. E-mail: aarribas@est-econ.uc3m.es
--- abstract: 'We prove a variant of the Beauville–Bogomolov decomposition for weakly ordinary, or generally globally $F$-split, varieties $X$ with $K_X \sim 0$, in characteristic $p>0$. If the assumption $K_X \sim 0$ is replaced by $-K_X$ being semi-ample, we show the weaker statement that all closed fibers of the Albanese morphism are isomorphic. In the appendix written jointly with Giulio Codogni, we also deduce the singular version of the latter statement, in characteristic zero. Finally, we apply our main theorem to draw consequences for the behavior of rational points and fundamental groups of weakly ordinary $K$-trivial varieties in positive characteristic.' address: - 'École Polytechnique Fédérale de Lausanne, Chair of Algebraic Geometry MA C3 625 (Bâtiment MA), Station 8, CH-1015 Lausanne' - 'École Polytechnique Fédérale de Lausanne, Chair of Algebraic Geometry MA C3 585 (Bâtiment MA), Station 8, CH-1015 Lausanne' author: - Zsolt Patakfalvi - Maciej Zdanowicz bibliography: - 'includeNice.bib' title: 'On the Beauville–Bogomolov decomposition in characteristic $p\geq 0$' --- Introduction ============ Statements for smooth varieties {#sec:results_smooth_case} ------------------------------- A weak version of the classical Beauville–Bogomolov decomposition (see [@Bogomolov_The_decomposition_of_Kahler_manifolds_with_a_trivial_canonical_class; @Beauville_Varietes_Kahleriennes_dont_la_premiere_classe_de_Chern_est_nulle]) was already observed by Calabi for Kähler manifolds in [@Calabi_On_Kahler_manifolds_with_vanishing_canonical_class]. It states that for every smooth projective variety $X$ defined over $\bC$ with $K_X \sim 0$ there exists a finite étale cover $$\label{eq:weak_BB_classical} V \times B \to X,$$ where $B$ is an abelian variety and $V$ is a smooth projective simply connected variety with $K_V \sim 0$. Already this statement has substantial corollaries concerning the geometry of varieties with trivial canonical class. 
For example, it directly implies that the fundamental group of such a variety is virtually abelian, that is, admits a finite index abelian subgroup. Instead of requiring $V$ to be simply connected, the decomposition can alternatively be characterized, up to an étale cover of $V$, by using the notion of *augmented irregularity*. By definition, the augmented irregularity of $X$ is $$\label{eq:augmented_irregularity} \wh{q}(X) := \max \left\{ \left. \dim \Alb_{X'} \right| X' \to X \text{ is a finite \'etale morphism}\right\},$$ for which one needs to prove that the above maximum exists. This follows from Calabi’s original statement, or from [@Kawamata_Characterization_of_abelian_varieties Thm 1] in a more general singular setting. Then the alternative characterization mentioned above is obtained by requiring that $\wh{q}(X)=\dim B$ and $\wh{q}(V)=0$. In the present article, we show a positive characteristic version of the decomposition using the augmented irregularity. *Our base field $k$ is perfect and of characteristic $p>0$*, and we also need the following positive characteristic notion: a projective variety $X$ over $k$ is *weakly ordinary* if the action of the absolute Frobenius morphism $F_X$ on $H^{\dim X} ( X, \sO_X)$ is bijective. This is a genericity notion. That is, being weakly ordinary is an open condition in positive equicharacteristic and it is typically dense in moduli. Additionally, it is conjectured to be dense over mixed characteristic bases that are finite type over $\bZ$ [@Mustata_Srinivas_Ordinary_varieties_and]. For a smooth projective weakly ordinary variety $X$ with $K_X \sim 0$ we define the *augmented irregularity* just as in , where the existence of the maximum is guaranteed by [@Ejiri_When_is_the_Albanese_morphism_an_algebraic_fiber_space_in_positive___characteristic? Thm 1.1]. 
Also, already when $X$ is smooth, our decomposition statement uses the notion of strongly $F$-regular singularities, which is a characteristic $p$ class of mild singularities. In particular, it is contained both in the class of klt and rational singularities. We refer to the many surveys on the topic for a detailed introduction to the notions of $F$-singularities [@Schwede_Tucker_A_survey_of_test_ideals; @Patakfalvi_Schwede_Tucker_Positive_characteristic_algebraic_geometry; @Patakfalvi_Frobenius_techniques_in_birational_geometry] \[thm:smooth\_BB\_decomp\] \[cor:beauville\_bogomolov\] Let $X$ be a smooth projective variety over $k$, such that $K_X \sim 0$ and $X$ is weakly ordinary. Then there is a composition $$B \times V \to Z \to X$$ of two finite covers, such that 1. $Z \to X$ is étale, $B \times V \to Z$ is a torsor under $\prod_{i=1}^{\wh{q}(X)} \mu_{p^{j_i}}$ for some integers $j_i \geq 0$, 2. $B$ is an abelian variety with $\dim B = \wh{q}(X)$, and 3. $V$ is a weakly ordinary projective Gorenstein variety over $k$ with strongly $F$-regular singularities, $K_V \sim 0$ and $\wh{q}(V)=0$. \[rem:differences\_to\_char\_zero\] There are two major differences between this result and the original characteristic zero statement mentioned above: 1. \[itm:differences\_to\_char\_zero:inseparable\] the cover $B \times V \to X$ is not étale as in characteristic zero, but has an infinitesimal part as well, and 2. \[itm:differences\_to\_char\_zero:singularities\] $V$ is not necessarily smooth, but has only strongly $F$-regular singularities. These two phenomena are in fact interconnected, as both are caused by the possible presence of non-reduced polarized automorphism groups. However, we do not know an example of a smooth projective weakly ordinary variety $X$ with $K_X \sim 0$ for which either of these phenomena occurs. On the other hand, we do have mildly singular examples for which such a phenomenon occurs: see , which satisfies the assumptions of our singular statements in . 
In we state precise properties of a hypothetical singular Calabi–Yau with a $\mu_p$ action under which the quotient is a smooth Calabi–Yau; the existence of such a variety would yield a smooth example of both of the above phenomena. In fact, one can make the statement of slightly more precise. That is, according to , by adding the following additional requirements, the decomposition of still exists: the action of $G:=\prod_{i=1}^n \mu_{p^{j_i}}$ on $Y$ is the diagonal action induced by an action on $V$ and an action on $B$, respectively, such that 1. $G$ acts freely and faithfully on $B$, and 2. $G$ acts faithfully on $V$. \[rem:weak\_ordinary\_global\_F\_split\] If $X$ is a normal, projective variety over $k$ such that $K_X \sim 0$, then $X$ is weakly ordinary if and only if it is globally $F$-split, that is, if and only if the structure morphism $\sO_X \to F_* \sO_X$ splits as a homomorphism of $\sO_X$-modules. This is the framework in which the statement of generalizes to the case when $K_X \equiv 0$. Additionally, in this framework one can also allow singularities. Hence, in , and in general in most parts of the article, the weakly ordinary condition will be replaced with global $F$-splitting. In , we are also able to prove the positive characteristic version of the main result of [@Cao_Albanese_maps_of_projective_manifolds_with_nef_anticanonical_bundles], with the caveat that if we are not over a finite field or its algebraic closure, then we have to replace nef by semi-ample. The latter result uses , which might be of interest independently as well. These statements are also impossible to state completely without the use of the local and global $F$-singularity notions. The notion of global $F$-splitting generalizes weak ordinarity, as explained in , and strong $F$-regularity generalizes smoothness, as mentioned before . 
\[thm:smooth\_isotriviality\] Let $f : X \to T$ be a surjective morphism from a smooth projective variety to a smooth projective curve with strongly $F$-regular general fiber such that either 1. \[situation:smooth\_isotriviality:semi\_ample\] $-K_{X/T}$ is semi-ample, or 2. \[situation:smooth\_isotriviality:finite\_field\] $k \subseteq \obF_p$ and $-K_{X/T}$ is nef. Then $f$ has isomorphic closed fibers over $\ok$. Luckily, in the situation of , we can prove that the general fibers of the Albanese morphism are strongly $F$-regular (see ). This, together with , then leads to the following statement: \[thm:smooth\_isotriviality\_Albanese\] Let $X$ be a smooth projective globally $F$-split variety such that either 1. \[situation:smooth\_isotriviality\_Albanese:semi\_ample\] $-K_X$ is semi-ample, or 2. \[situation:smooth\_isotriviality\_Albanese:finite\_field\] $k \subseteq \obF_p$ and $-K_X$ is nef. Then the Albanese morphism $f \colon X \to A$ has isomorphic closed fibers over $\ok$. \[rem:trivializes\_over\_torsor\_smooth\] The general version of provided in makes a more precise statement: $f$ gets trivialized over a specific torsor under the polarized automorphism group of (any) fiber. We conclude the present section with a few applications of our results. We start with a conjecture that mirrors conjectures over number fields for Calabi-Yau varieties: \[conj:rational\_points\] If $X$ is a smooth projective Calabi-Yau variety over $\bF_q$ (that is, $H^i(X, \sO_X)=0$ for $0<i<\dim X$), then $X(\bF_q) \neq \emptyset$. We are able to deduce from our main results a statement in the direction of : \[special case of \] If $X$ is a smooth projective weakly ordinary $3$-fold over $\bF_q$ with $K_X \sim 0$, $\wh{q}(X)\neq 0$ and $q \geq 83$, then $X(\bF_q) \neq \emptyset$. 
Our second application is towards the conjecture, the characteristic zero counterpart of which is well known, that $K$-trivial smooth projective varieties have virtually abelian étale fundamental groups. \[special case of \] If $X$ is a smooth projective weakly ordinary variety with $K_X \sim 0$ and $\hat{q}(X)=\dim X -2$, then $\pi_1^{\'et}(X)$ is virtually abelian. We note that in the full statements, that is, in and in , we are also able to include the $K_X$ numerically trivial case by assuming $F$-purity instead of weak ordinarity. Statements for singular varieties {#sec:results_singular_case} --------------------------------- As explained before, the main recipe for turning the statements concerning smooth varieties provided in into statements about singular varieties is to replace every occurrence of smooth by strongly $F$-regular, and every occurrence of weakly ordinary by globally $F$-split. Hence for we obtain: \[thm:BB\_decomp\] Let $(X,\Delta)$ be a globally $F$-split projective pair over $k$ with strongly $F$-regular singularities, such that $K_X + \Delta \equiv 0$. Then there is a composition $Y \to Z \to X$ of two finite covers such that $Z \to X$ is quasi-étale, $Y \to Z$ is a torsor under $\prod_{i=1}^{\wh{q}(X)} \mu_{p^{j_i}}$ for some integers $j_i \geq 0$, and such that $$(Y, \Delta_Y) \cong (V,\Delta_V) \times B,$$ where 1. $B$ is an abelian variety with $\dim B = \wh{q}(X)$. 2. $(V, \Delta_V)$ is a globally $F$-split projective pair over $k$ with strongly $F$-regular singularities, $K_V + \Delta_V \equiv 0$ and $\wh{q}(V)=0$. The proof of the theorem actually works in a more general setting. We managed to show the following result under the assumption that $-K_X-\Delta$ is nef/semi-ample. \[thm:isotriviality\_Albanese\] Let $(X,\Delta)$ be a projective globally $F$-split pair with strongly $F$-regular singularities such that $K_X + \Delta$ is $\bQ$-Cartier with index prime-to-$p$, and either 1. 
\[situation:smooth\_isotriviality\_Albanese:semi\_ample\] $-(K_X + \Delta)$ is semi-ample, or 2. \[situation:smooth\_isotriviality\_Albanese:finite\_field\] $k \subseteq \obF_p$ and $-(K_X +\Delta)$ is nef. Then, $(X_t,\Delta_t) \cong (X_s, \Delta_s)$ for every $s,t \in A\left(\ok\right)$. While proving , we also obtain the following theorem concerning fibrations over curves. \[thm:isotriviality\] Let $f : (X,\Delta) \to T$ be a surjective morphism from a projective normal pair to a smooth projective curve such that $\Delta$ is an effective $\bQ$-divisor, $K_X + \Delta$ is $\bQ$-Cartier, the general fiber $(X_t, \Delta_t)$ is strongly $F$-regular and either 1. \[situation:smooth\_isotriviality:semi\_ample\] $-(K_{X/T}+ \Delta)$ is semi-ample, or 2. \[situation:smooth\_isotriviality:finite\_field\] $k \subseteq \obF_p$ and $-(K_{X/T} + \Delta)$ is nef. Then, $(X_t,\Delta_t) \cong (X_s, \Delta_s)$ for every $s,t \in T\left(\ok\right)$. \[remark:intro\_historical\] [<span style="font-variant:small-caps;">Historical remarks (in characteristic zero):</span>]{} As mentioned above, the original smooth version of the Beauville–Bogomolov decomposition was shown in [@Bogomolov_The_decomposition_of_Kahler_manifolds_with_a_trivial_canonical_class; @Beauville_Varietes_Kahleriennes_dont_la_premiere_classe_de_Chern_est_nulle]. The singular Beauville–Bogomolov decomposition has recently attracted a serious amount of attention, which culminated in the series of papers [@Greb_Kebekus_Petternel_Singular_Spaces_with_Trivial_Canonical_Class; @Greb_Guenancia_Kebekus_Klt_varieties_with_trivial_canonical_class_-_Holonomy__differential___forms__and_fundamental_groups; @Druel_A_decomposition_theorem_for_singular_spaces_with_trivial_canonical_class___of_dimension_at_most_five; @Horing_Peternell_Algebraic_integrability_of_foliations_with_numerically_trivial_canonical___bundle] leading to the full decomposition theorem for klt varieties with numerically trivial canonical class. 
The weak Beauville–Bogomolov decomposition was shown even in the singular logarithmic setting in the papers [@Kawamata_Characterization_of_abelian_varieties; @Ambro_The_moduli_b-divisor_of_an_lc-trivial_fibration]. For the additional statement about trivialization over a flat torsor, mentioned in , we refer to , and . Statements in characteristic zero --------------------------------- Using the techniques developed during the positive characteristic considerations, we also managed to prove the following theorem. Let $(X,\Delta)$ be a klt $\bQ$-factorial pair over an algebraically closed field of characteristic zero such that $-K_X-\Delta$ is nef. Then the Albanese morphism $\pi \colon (X,\Delta) \to \Alb_X$ is an isotrivial fibration. This generalizes to the singular and log setting the recent result of Cao [@Cao_Albanese_maps_of_projective_manifolds_with_nef_anticanonical_bundles] resolving in the projective case the question posed in [@Demailly_Peternell_Schneider_Compact_Kahler_Manifolds_with_Hermitian_semipositive] (see ). The problem stated by Demailly, Peternell and Schneider originates from the works [@Campana_Peternell_Projective_manifolds_whose_tangent_bundles_are_numerically_effective; @Demailly_Peternell_Schneider_Compact_Complex_manifolds_with_numerically_effective_tangent_bundles] concerning the structure of smooth varieties with nef tangent bundle. Outline of the proof -------------------- In the following section, we give an outline of the proof of . This includes all the techniques necessary to get the most general results, i.e., the statements of . However, for the purpose of clarity we avoid some technical difficulties. Let $X$ be a smooth weakly ordinary projective variety defined over an algebraically closed field $k$ of characteristic $p>0$. Suppose that the canonical divisor satisfies the condition $K_X \sim 0$. 
### Albanese morphism and augmented irregularity in characteristic $p$ {#ss:intro_albanese_morphism_irregularity} We begin our proof with the application of the results of Ejiri provided in [@Ejiri_When_is_the_Albanese_morphism_an_algebraic_fiber_space_in_positive___characteristic?]. For this purpose, we first apply the standard result of Mehta and Ramanathan [@Mehta_Ramanathan_Frobenius_splitting_and_cohomology_vanishing_for_Schubert_varieties Proposition 9] to see that a weakly ordinary variety satisfying the condition $K_X \sim 0$ is globally $F$-split. Then, using [@Ejiri_When_is_the_Albanese_morphism_an_algebraic_fiber_space_in_positive___characteristic? Theorem 1.1 and Theorem 1.2] we see that the Albanese morphism $X \to \Alb_X$ is in fact a relatively normal and $F$-split surjective algebraic fibre space. Consequently, we know that the general fibres of $X \to \Alb_X$ are normal, and all the fibres are $F$-pure and hence reduced. Moreover, since the condition of $F$-splitting is preserved under étale covers, this also implies that the augmented irregularity as defined in is finite. We may therefore take a Galois étale cover $Z \to X$ such that $\dim \Alb_Z = \wh{q}(X)$. We note that, as $K_Z\sim 0$ and $Z$ is globally $F$-split, the above features of $X \to \Alb_X$ also hold for $Z \to \Alb_Z$. We shall prove that the Albanese morphism $Z \to \Alb_Z$ becomes a product after taking an étale cover and a further diagonalizable torsor over the base. ### General fibers are strongly $F$-regular The standard tools to control the behaviour of fibrations such as $Z \to \Alb_Z$ are the semi-positivity results for the relative canonical sheaves provided in characteristic $p$ in the paper of the first author [@Patakfalvi_Semi_positivity_in_positive_characteristics]. The ubiquitous requirement for the application of the aforementioned results is the strong $F$-regularity of the general fibres of the investigated morphism. 
The arguments above only show that the fibres are $F$-pure. In order to improve the situation and conclude that the general fibres of $Z \to \Alb_Z$ are strongly $F$-regular, it suffices to show that the Frobenius pullbacks $Z^e = Z \times_{F^e_{\Alb_Z}} \Alb_Z$ are strongly $F$-regular along the generic fibres of the natural projections $Z^e \to \Alb_Z$, for every $e > 0$ [@Patakfalvi_Schwede_Zhang_F_singularities_in_families]. The natural tool now is the theory of test ideals $\tau(Y) \subseteq \cO_Y$, associated to normal varieties $Y$, developed by Hochster and Huneke (see [@Hochster_Huneke_Tight_closure_and_strong_F-regularity] for the original work and [@Schwede_Tucker_A_survey_of_test_ideals] for a comprehensive survey). These ideals control strong $F$-regularity of $Y$ in the sense that the condition $\tau(Y) = \cO_Y$ is satisfied exactly along the locus where $Y$ is strongly $F$-regular. In our situation, to prove that $\tau(Z^e) = \cO_{Z^e}$ along the generic fibre of the projection we apply the transformation rule for test ideals under finite maps provided in [@Schwede_Tucker_On_the_behavior_of_test-ideals_under_finite_morphisms] to the relative Frobenius $F^e_Z \colon Z \to Z^e$. We emphasize that, as required by the results of *loc. cit.*, $Z^e$ is normal along the generic fibre of the projection (see the middle of ). For the precise argument, we refer to . ### Flatness As a next step towards the proof, in we show that the Albanese morphism of $Z$ is in fact flat. For this purpose, we mimic the characteristic zero arguments given in [@Lu_Tu_Zhang_Zheng_On_semistability_of_Albanese_maps Theorem]. Our contribution here is mainly the realization that in characteristic $p$ there are appropriate semi-positivity results (see ) to execute the strategy. To sum up, at this point we know that the morphism $Z \to \Alb_Z$, which we intend to prove becomes a product, is a flat algebraic fibre space with strongly $F$-regular general fibre.
### Restriction to curves We consequently proceed to the proof of the next approximation of the desired result, that is, we show that the family $Z \to \Alb_Z$ is isotrivial over a general complete intersection curve $T \subset \Alb_Z$ that goes through an arbitrarily fixed closed point $t_{\specialpt} \in \Alb_Z$ and a general fixed closed point $0 \in \Alb_Z$. We set $V = T \times_{\Alb_Z} Z$. Since all the fibres of $Z \to \Alb_Z$ are reduced, and the general fibre is normal and strongly $F$-regular, we see that $V$ is normal and the morphism $V \to T$ is a flat fibration with strongly $F$-regular general fibre. Moreover, using the base change formulas for the relative canonical divisor we see that $K_{V/T} \sim (K_{Z/\Alb_Z})_{ | V} \sim 0$. We note that in general this part of the argument requires a little care. We provide the relevant base change results for the relative canonical divisor in . The details of the argument are provided as part of the main proof given in . ### Numerical flatness {#ss:intro_numerical_flatness} In order to prove that the morphism $f \colon V \to T$ is isotrivial we first show that there exists an appropriate $f$-ample divisor $L$ on $V$ such that the relative section sheaves $f_*\cO_V(mL)$, for $m>0$, satisfy a certain notion of triviality called *numerical flatness*. A vector bundle $\cE$ on a smooth projective scheme $X$ is numerically flat if both $\cE$ and $\cE^\vee$ are nef. We refer to for a more detailed description of the notion. In our context, we prove that the sheaves $f_*\cO_V(mL)$ are numerically flat if $L$ is an $f$-ample divisor such that $L^{d+1} = 0$, where $d+1 = \dim V$. For the detailed proof we refer to . In this paragraph we give a brief description of the arguments. First of all, we show that an $f$-ample divisor $L$ satisfying $L^{d+1} = 0$ is in fact nef. For this, it is enough to show that $L + f^* \varepsilon H$ is nef for a fixed ample divisor $H$ on $T$ and for every $\varepsilon>0$.
The point is that a Riemann-Roch computation shows that in this case there is an effective $\Gamma \sim_{\bQ} L + f^* \varepsilon H$, see . Then, semi-positivity theory applied to $\varepsilon' \Gamma = (K_V + \Delta_V + \varepsilon' \Gamma) + (-K_V - \Delta_V)$ yields the above nefness using that $-K_V - \Delta_V \sim_{\bQ} 0$, see and . Here $\Delta_V$ is a natural boundary on $V$ realizing the linear equivalence $K_{X/T}|_V \sim_{\bQ} K_V + \Delta_V$, see . Having shown the nefness of $L$, the nefness of $f_* \sO_V(mL)$ follows from standard semi-positivity theory again, see . Second, we show that $f_* \sO_V(mL)$ is anti-nef. So, assume the contrary. According to , this is equivalent to the lowest piece $\sE$ of the Harder-Narasimhan filtration of $F^{l,*} f_* \sO_V(mL)$ having positive degree for every $l \gg 0$. Now, by choosing $l \gg 0$ we may assume that $\sE$ is strongly semi-stable, and then $\sE^{\otimes r}$ is semi-stable for every $r>0$ [@Langer_Semistable_sheaves_in_positive_characteristic Thm 6.1]. The main idea is that by going to $r \gg 0$, via the multiplication map $\sE^{\otimes r} (-t) \to (f_*\sO_V(rmL))(-t)$ this yields a section in $H^0(T, f_*\sO_V(rmL)(-t)) \cong H^0(V, rmL - V_t)$ that should not exist as $L^{d+1} = 0$, see . ### Isotriviality for finite fields {#ss:intro_isotriviality_over_finite_fields} In characteristic $p>0$, numerical flatness turns out to be a particularly strong notion if the underlying variety $X$ is defined over the algebraic closure of a finite field. More precisely, using the results of Langer and the classical theorem of Lange and Stühler one can prove that a numerically flat bundle defined on a variety $X/\overline{\FF}_q$ is trivializable on a cover $Y \to X$ which is a composition of a finite étale morphism and a power of the Frobenius (see ).
Assuming that the variety $V$ is defined over the algebraic closure of a finite field and taking such a cover $\tau \colon S \to T$ for the bundle $f_*\cO_V(mL)$, where $m$ is chosen so that the natural multiplication maps $\Sym^d f_*\cO_V(mL) \to f_*\cO_V(dmL)$ are surjective, we see that the relative canonical ring $$R_{V_S/S}(mL_S) = \bigoplus_{d \in \NN} f_{S*}\cO_{V_S}(dmL_S) \expl{\isom}{flat base change} \bigoplus_{d \in \NN} \tau^*f_*\cO_V(dmL)$$ consists of numerically flat bundles which are quotients of trivial bundles. Such bundles are in fact trivial themselves and hence, since $S$ is a projective curve, $R_{V_S/S}(mL_S)$ arises as the pullback of a ring defined over the base field. This implies that $V_S \to S$ is a product family, and hence gives isotriviality over curves $T$ if the initial variety $X$ is defined over the algebraic closure of a finite field. The precise statements are presented in . Similar arguments actually lead to the proof of . ### Reduction to finite fields In order to reduce to the case where the base field is the algebraic closure of a finite field, we use the spreading out technique and the base change properties of a suitable relative polarized isomorphism scheme. We first observe that the above isotriviality result implies that the natural map $$\Isom_T\left((V,mL),(V_t \times_k T,mL_t \times_k T)\right) \to T,$$ where $t \in T$ is a $k$-rational base point, is surjective if the base field $k$ is the algebraic closure of a finite field. In order to get a similar statement in general, for an arbitrary perfect base field $k$, we take a spreading out $(\cV \to \cT,\cL,\sigma \colon \Spec(R) \to \cT)$ over a finitely generated $\FF_q$-algebra $R$ of the morphism $V \to T$ along with the divisor $L$ and a choice of a base point $t \in T$.
We consider the relative isomorphism scheme $$\cI = \Isom_{\cT}\left((\cV,m\cL),(\cV_{\sigma} \times_{\Spec(R)} \cT, m\cL_{\sigma} \times_{\Spec(R)} \cT)\right) \to \cT.$$ By the base change property of isomorphism schemes, we see that the result of the previous section implies that the morphism $\cI \to \cT$ defined over $R$ is surjective when restricted to every closed point of $\Spec(R)$. By a standard scheme-theoretic argument, this yields surjectivity at the geometric generic point, and hence the required isotriviality even in the polarized setting. We note that the number $m$ showing up in can be chosen in advance, before the spreading out, so that it yields surjectivity of the multiplication map for every finite field reduction. ### From isotriviality over curves to isotriviality over $\Alb_Z$ and to the splitting over a flat cover First, we observe that the choice of a line bundle $L$ in and of the number $m$ in can in fact be performed uniformly, see and . Then, we consider the isomorphism scheme $$I = \Isom_{\Alb_Z}\left((Z,mL_Z),(\Alb_Z \times_k Z_0,\Alb_Z \times_k mL_0)\right)$$ where $0 \in \Alb_Z$ is a fixed $k$-rational base point and $L_0 = L_{Z|Z_0}$. By the base-change properties of the isomorphism scheme, and the isotriviality over curves $T \subseteq \Alb_Z$ explained above, the image of $I\to \Alb_Z$ contains every such $T$. However, as $t_{\specialpt} \in T$ was arbitrarily fixed, this means that $\pi : I \to \Alb_Z$ is surjective. Additionally, by using again the base-change properties of the isomorphism scheme, we see that the morphism $Z \to \Alb_Z$ becomes a product (even in the polarized sense) after the base change along $\pi$. The formal version of the argument is provided in the proof of . We remark that in the logarithmic setting the actual argument is quite delicate. We provide the details of the necessary base change results for the logarithmic version of the isomorphism scheme in .
### Finiteness of automorphism groups The above argument does not say anything about the structure of the trivializing cover $\pi \colon I \to \Alb_Z$. In order to rectify this situation, we observe that $\pi$ is in fact a torsor under the polarized automorphism scheme $G = \Aut(Z_0,L_0)$ of the fibre $Z_0$. By the previous considerations we see that $Z_0$ is a strongly $F$-regular, Gorenstein variety satisfying $K_{Z_0} \sim 0$. In we show that polarized automorphism schemes of such varieties are in fact finite group schemes. The proof is by contradiction. Assuming otherwise, we infer that $G$ admits a subgroup isomorphic to the additive or multiplicative group acting on $Z_0$ with generically trivial stabilizers. Using the classical observation of Rosenlicht [@Rosenlicht_Some_basic_theorems_on_algebraic_groups Theorem 2 and 10], this easily implies that $Z_0$ is ruled, which gives a contradiction via a simple argument based on Kodaira dimension. We emphasize that the last part of the argument requires our strong bounds on the singularities of $Z_0$. ### Nori fundamental group scheme We are now ready to finish the proof. In the previous section, we showed that the trivializing morphism $I \to \Alb_Z$ is in fact a torsor under a finite flat group scheme over $k$. Such objects were extensively studied by Nori in his works concerning generalizations of étale fundamental groups. In particular, in [@Nori_Fundamental_Group_Scheme_Of_An_Abelian_Variety Proposition] it is proven that the reduced part of every torsor under a finite flat group scheme over an abelian variety $A$ is dominated by the $A[n]$-torsor given by the multiplication map $[n] \colon A \to A$, for some $n \in \NN$. Applied to our situation, this means that the morphism $I_{\red} \to \Alb_Z$ is covered by the multiplication map $[n] \colon \Alb_Z \to \Alb_Z$, which in turn implies that $Z \to \Alb_Z$ becomes a product after taking a base change under $[n]$.
We conclude the proof by observing that $[n]$ is in fact a composition of an étale morphism and a diagonalizable torsor because $\Alb_Z$ is $F$-split. The details of the above argument are provided in . Acknowledgements ---------------- The authors would like to thank Piotr Achinger, Javier Carvajal-Rojas, János Kollár, Adrian Langer, Max Lieblich, Yuya Matsumoto, Karl Schwede and Burt Totaro for useful conversations and remarks. During the work on the article the authors were supported by grant \#200021/169639 of the Swiss National Science Foundation. This material is partially based upon the work of the first author supported by the National Science Foundation under Grant No. DMS-1440140 while the author was in residence at the Mathematical Sciences Research Institute in Berkeley, California, during the Spring of 2019 semester.
--- abstract: | The Burrows-Wheeler-Transform (BWT) is a reversible string transformation which plays a central role in text compression and is fundamental in many modern bioinformatics applications. The BWT is a permutation of the characters, which is in general more compressible and allows several different query types to be answered more efficiently than the original string. It is easy to see that not every string is a BWT image, and exact characterizations of BWT images are known. We investigate a related combinatorial question. In many applications, a sentinel character \$ is added to mark the end of the string, and thus the BWT of a string ending with $\$$ contains exactly one \$-character. Given a string $w$, we ask in which positions, if any, the \$-character can be inserted to turn $w$ into the BWT image of a word ending with $\$$. We show that this depends only on the standard permutation of $w$ and present an $\mathcal{O}(n \log n)$-time algorithm for identifying all such positions, improving on the naive quadratic-time algorithm. We also give a combinatorial characterization of such positions and develop bounds on their number and value.[^1] author: - Sara Giuliani - 'Zsuzsanna Lipt[á]{}k' - Romeo Rizzi title: When a Dollar Makes a BWT --- Introduction {#sec:introduction} ============ The Burrows-Wheeler-Transform (BWT), introduced by Burrows and Wheeler in 1994 [@BurrowsWheeler94], is a reversible string transformation which is fundamental in string compression and is at the core of many of the most frequently used bioinformatics tools [@bwa; @bowtie; @soap2]. The BWT, a permutation of the characters of the original string, is particularly well compressible if the original string has many repeated substrings, thus making it highly relevant for natural language texts and for biological sequence data.
This is due to what is sometimes referred to as the [*clustering effect*]{} [@RosoneS13]: repeated substrings cause equal characters to be grouped together, resulting in longer runs of the same character than in the original string, and as a result, in higher compressibility. Given a word (or string) $v$ over a finite ordered alphabet, the BWT is a permutation of the characters of $v$, such that position $i$ contains the last character of the $i$th ranked rotation of $v$, with respect to lexicographic order, among all rotations of $v$. For example, the BWT of the word ${\tt banana}$ is ${\tt nnbaaa}$, see Fig. \[fig:bwt-ex\] (left). A fundamental property of the BWT is that it is [*reversible*]{}: Given a BWT image $w$, a word $v$ such that ${\textrm{BWT}}(v)=w$ can be found in linear time in the length of $w$, and $v$ is unique up to rotation [@BurrowsWheeler94]. Sorted rotations of [banana]{}: [abanan]{}, [anaban]{}, [ananab]{}, [banana]{}, [nabana]{}, [nanaba]{}. Sorted rotations of ${\tt nanana}$: [ananan]{}, [ananan]{}, [ananan]{}, [nanana]{}, [nanana]{}, [nanana]{}. Sorted rotations of ${\tt nanana\$}$: [\$nanana]{}, [a\$nanan]{}, [ana\$nan]{}, [anana\$n]{}, [na\$nana]{}, [nana\$na]{}, [nanana\$]{}. The BWT is defined for every word, even if not all of its rotations are distinct; this is the case, for example, with the word [nanana]{}, whose BWT is [nnnaaa]{}, see Fig. \[fig:bwt-ex\] (center). (Words for which all rotations are distinct are called [*primitive*]{}.) On the other hand, not every word is a BWT image, i.e. not every word is the BWT of some word. For example, [banana]{} is not the BWT of any word.
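The rotation-based definition above translates directly into code. The following is a naive quadratic-time illustration of our own (not the linear-time construction mentioned below), which sorts all rotations and reads off the last column:

```python
def bwt(v: str) -> str:
    """Naive BWT: sort all rotations of v and read off the last column."""
    n = len(v)
    rotations = sorted(v[i:] + v[:i] for i in range(n))
    return "".join(r[-1] for r in rotations)

print(bwt("banana"))    # -> nnbaaa
print(bwt("nanana"))    # -> nnnaaa
print(bwt("nanana$"))   # -> annnaa$  ('$' sorts before all letters)
```

The examples reproduce the three rotation lists of Fig. \[fig:bwt-ex\]; in Python the character `$` happens to compare smaller than all lowercase letters, matching the sentinel convention used later.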
It can be decided algorithmically whether a given word $w$ is a BWT image, by slightly modifying the above-mentioned reversal algorithm: if $w$ is not a BWT image, then the algorithm terminates in ${\mathcal{O}}(n)$ time with an error message, where $n$ is the length of $w$. Combinatorial characterizations of BWT images are also known [@MantaciRS03; @LShur11]: whether $w$ is a BWT image depends on the number and characteristics of the cycles of its [*standard permutation*]{} (see Sec. \[sec:basics\]). In particular, $w$ is the BWT image of a primitive word if and only if its standard permutation is cyclic. Moreover, a necessary condition is that the runlengths of $w$ be co-prime [@LShur11]. In many situations, it is convenient to append a sentinel character $\$$ to mark the end of the word $v$; this sentinel character is defined to be lexicographically smaller than all characters from the given alphabet. For example, ${\textrm{BWT}}({\tt nanana\$}) = {\tt annnaa\$}$, see Fig. \[fig:bwt-ex\] (right). Clearly, all rotations of $v\$$ are distinct, thus the inverse of the BWT becomes unique, due to the condition that the sentinel character must be at the end of the word. In other words, given a word $w$ with exactly one occurrence of $\$$, there exists at most one word $v$ such that $w = {\textrm{BWT}}(v\$)$. In this paper, we ask the following combinatorial question: Given a word $w$ over alphabet $\Sigma$, in which positions, if any, can we insert the $\$$-character such that the resulting word is the BWT image of some word $v\$$? We call such positions [*nice.*]{} Returning to our earlier examples: there are two nice positions for the word [annnaa]{}, namely $3$ and $7$: [an\$nnaa]{} and [annnaa\$]{} are BWT images. However, there is none for the word [banana]{}: in no position can $\$$ be inserted such that the resulting word becomes a BWT image. We are interested both in characterizing nice positions for a given word $w$, and in computing them.
Note that using the BWT reversal algorithm, these positions can be computed naively in ${\mathcal{O}}(n^2)$ time. Our results are the following: - we show that the question which positions are nice depends only on the standard permutation of $w$; - we present an ${\mathcal{O}}(n \log n)$ time algorithm to compute all nice positions of an $n$-length word $w$; and - we give a full combinatorial characterization of nice positions, via certain subsets which form what we call [*pseudo-cycles*]{} of the standard permutation of $w$. Related work ------------ The BWT has been the subject of intense research in the last two decades, from compression [@Manzini01; @FerraginaGMS05; @KaplanV07; @KaplanLV07], algorithmic [@CrochemoreGKL15; @LouzaGT17; @PolicritiP18], and combinatorial [@GiancarloRS07; @RestivoR11; @DaykinGGLLLP18] points of view (mentioning just a tiny selection from the recent literature). It has also been extended in several ways. One of these, the extended BWT, generalizes the BWT to a multiset of strings [@MantaciRRS07; @MantaciRRS08; @BonomoMRRS13], with successful applications to several bioinformatics problems [@MantaciRRS08; @CoxJRS12; @PrezzaPSR19]. A very recent development is the introduction of Wheeler graphs [@GagieMS17], a generalization of a fundamental underlying property of the BWT to data other than strings. There has been much recent work on inferring strings from different data structures built on strings (sometimes called reverse engineering), and/or just deciding whether such a string exists, given the data structure itself. For instance, this question has been studied for directed acyclic word graphs (DAWGs) and suffix arrays [@BannaiIST03], prefix tables [@ClementCR09], LCP-arrays [@KarkkainenPP17], Lyndon arrays [@DaykinFHIS18], and suffix trees [@IIBT14; @StarikovskayaV15; @CazauxR14].
A number of papers study which permutations are suffix arrays of some string [@HeMR05; @SchurmannS08; @KucherovTV13], giving a full characterization in terms of the standard permutation. The analogous question for BWT images was answered fully in [@MantaciRS03] for strings over binary alphabets, and in [@LShur11] for strings over general alphabets. In [@LShur11], the authors also asked the question of which strings can be “blown up” to become a BWT: Given the runs (blocks of equal characters) in $w$, when does a BWT image exist whose runs follow the same order, but each run can be of the same length or longer than the corresponding one in $w$? The authors fully characterize such strings, showing that the non-existence of a global ascent in $w$ is a necessary and sufficient condition. Another work treating a question related to ours is [@MantaciRRRS17], where the authors ask and partially answer the question of which strings are fixpoints of the BWT. ### Overview. {#overview. .unnumbered} The paper is organized as follows. In Section \[sec:basics\] we provide the necessary background and terminology. Next we present our algorithm for computing all nice positions of a string $w$ (Section \[sec:algo\]). In Section \[sec:characterization\], we give a complete characterization of nice positions, followed in Section \[sec:parity\] by some bounds on the number and value of nice positions of a word. In Section \[sec:results\] we give some experimental results. We close with a discussion and outlook in Section \[sec:conclusion\]. Some additional examples and technical details are contained in the Appendix. Basics {#sec:basics} ====== In this section we give the necessary terminology and notation. Words ----- Let $\Sigma$ be a finite ordered alphabet. A [*word*]{} (or [*string*]{}) over $\Sigma$ is a finite sequence of elements from $\Sigma$ (also called [*characters*]{}). We write words as $w=w_1\cdots w_n$, with $w_i$ the $i$th character, and $|w|=n$ its [*length*]{}.
Note that we index words from $1$. The [*empty string*]{} is the only string of length $0$ and is denoted ${\varepsilon}$. The set of all words over $\Sigma$ is denoted $\Sigma^*$. The concatenation $w=uv$ of two words $u,v$ is defined by $w = u_1\cdots u_{|u|}v_1\cdots v_{|v|}$. Let $w=uxv$, with $u,x,v$ possibly empty. Then $u$ is called a [*prefix*]{}, $x$ a [*factor*]{} (or [*substring*]{}), and $v$ a [*suffix*]{} of $w$. A factor (prefix, suffix) $u$ of $w$ is called [*proper*]{} if $u\neq w$. For a word $u$ and an integer $k\geq 1$, $u^k = u\cdots u$ denotes the $k$-fold concatenation of $u$. A word $w$ is called [*primitive*]{} if $w=u^k$ implies $k=1$. A [*run*]{} in a word $w$ is a maximal substring of the form $a^k$ for some $a\in\Sigma$, and a [*runlength*]{} is the length of such a maximal substring. Two words $w, w'$ are called [*conjugates*]{} if there exist words $u,v$, possibly empty, such that $w=uv$ and $w'=vu$. Conjugacy is an equivalence relation, and the set of all words which are conjugates of $w$ constitutes $w$’s [*conjugacy class*]{}. Given a word $w=w_1\cdots w_n$, the [*$i$th rotation*]{} of $w$ is $w_i\cdots w_nw_1\cdots w_{i-1}$. Clearly, two words are conjugates if and only if one is a rotation of the other. The set of all words over $\Sigma$ is totally ordered by the [*lexicographic order:*]{} Let $v,w \in \Sigma^*$, then $v \leq_{{\textrm{lex}}} w$ if $v$ is a prefix of $w$, or there exists an index $j$ s.t. for all $i<j$, $v_i = w_i$, and $v_j < w_j$ according to the order on $\Sigma$. A word $w$ of length $n$ is called [*Lyndon*]{} if it is lexicographically strictly smaller than all of its conjugates $v\neq w$. In the context of string data structures, it is often necessary to mark the end of words in a special way. To this end, let $\$ \not\in \Sigma$ be a new character, called [*sentinel*]{}, and set $\$ < a$ for all $a\in \Sigma$.
Let ${\Sigma^*_{\$}}$ denote the set of all words over $\Sigma$ with an additional $\$$ at the end. The mapping $w\mapsto w\$$ is a bijection from $\Sigma^*$ to ${\Sigma^*_{\$}}$. Clearly, every word in ${\Sigma^*_{\$}}$ is primitive. Permutations ------------ Let $n$ be a positive integer. A [*permutation*]{} is a bijection from $\{1,2,\ldots,n\}$ to itself. Permutations are often written using the two-line notation $\bigl(\begin{smallmatrix} 1 & 2 & \ldots & n \\ \pi(1) & \pi(2) & \ldots & \pi(n) \end{smallmatrix}\bigr) $. A [*cycle*]{} in a permutation $\pi$ is a minimal subset $C \subseteq \{1,\ldots,n\}$ with the property that $\pi(C)=C$. A cycle of length $1$ is called a [*fixpoint*]{}, and one of length $2$ a [*transposition*]{}. Every permutation can be decomposed uniquely into disjoint cycles, giving rise to the [*cycle representation*]{} of a permutation $\pi$, i.e. as a composition of the cycles in the cycle decomposition of $\pi$. For example, $\pi = \bigl( \begin{smallmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 4 & 2 & 5 & 6 & 3 & 1 \end{smallmatrix}\bigr) = (1 \: 4 \: 6)(2)(3 \: 5)$. Permutations whose cycle decomposition consists of just one cycle are called [*cyclic*]{}. A fundamental theorem about permutations says that every permutation $\pi$ can be written as a product (composition) of transpositions, and that the length of any sequence of transpositions whose product is $\pi$ is either always even or always odd: this is called the [*parity*]{} of the permutation. The [*sign*]{} ${\textit{sgn}}(\pi)$ of a permutation $\pi$ is defined as $1$ if $\pi$ is even, and as $(-1)$ if it is odd; equivalently, ${\textit{sgn}}(\pi) = (-1)^m$, where $\pi = \prod_{i=1}^m \tau_i$ for some transpositions $\tau_i$. The sign of a cycle of $m$ elements is $(-1)^{m-1}$, since any cycle $C= (x_1, \ldots, x_m)$ can be written as $C = (x_1, x_2)(x_2,x_3)\cdots (x_{m-1},x_m)$, i.e. $C$ is the product of $m-1$ transpositions.
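The cycle decomposition and the resulting sign can be computed mechanically; the following sketch (our helper names, with permutations given as lists of one-based values) verifies the example $\pi = (1 \: 4 \: 6)(2)(3 \: 5)$ above:

```python
def cycle_decomposition(pi):
    """Cycles of a permutation pi, given as a list of 1-based values."""
    n = len(pi)
    seen, cycles = [False] * (n + 1), []
    for start in range(1, n + 1):
        if not seen[start]:
            cycle, j = [], start
            while not seen[j]:          # follow the orbit of `start`
                seen[j] = True
                cycle.append(j)
                j = pi[j - 1]
            cycles.append(tuple(cycle))
    return cycles

def sign(pi):
    """sgn(pi) = (-1)^(n - c), where c is the number of cycles."""
    return (-1) ** (len(pi) - len(cycle_decomposition(pi)))

pi = [4, 2, 5, 6, 3, 1]            # two-line form of (1 4 6)(2)(3 5)
print(cycle_decomposition(pi))     # -> [(1, 4, 6), (2,), (3, 5)]
print(sign(pi))                    # -> -1, i.e. an odd permutation
```

The sign formula used here is the one derived next in the text: with $n=6$ elements and $c=3$ cycles, ${\textit{sgn}}(\pi) = (-1)^{6-3} = -1$.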
Moreover, if $\pi = \prod_{i=1}^c C_i$ is the cycle decomposition of permutation $\pi$ of $\{1,\ldots, n\}$, then ${\textit{sgn}}(\pi) = \prod_{i=1}^c {\textit{sgn}}(C_i) = (-1)^{n - c}$. For more details on permutations, see [@Bona12]. Finally, given a word $w$, the [*standard permutation*]{} of $w$, denoted $\sigma_w$, is the permutation defined by: $\sigma_w(i) < \sigma_w(j)$ if and only if either $w_i < w_j$, or $w_i = w_j$ and $i<j$. For example, the standard permutation of [banana]{} is $\bigl( \begin{smallmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 4 & 1 & 5 & 2 & 6 & 3 \end{smallmatrix}\bigr)$. Burrows-Wheeler-Transform ------------------------- It is easiest to define the Burrows-Wheeler-Transform (BWT) [@BurrowsWheeler94] via a construction: Let $v\in \Sigma^*$ with $|v|=n>0$, and let $M$ be an $n \times n$-matrix containing as rows all $n$ rotations of $v$ (not necessarily distinct) in lexicographic order (see Fig. \[fig:bwt-ex\]). Then $w={\textrm{BWT}}(v)$ is the last column of $M$. If $v$ is primitive, then this is equivalent to saying that $w=w_1\cdots w_n$ such that $w_i$ equals the last character of the $j$th rotation of $v$, where the $j$th rotation has rank $i$ among all rotations of $v$ w.r.t. lexicographic order. Linear-time construction algorithms of the BWT are well-known [@RosoneS13], and the BWT is [*reversible*]{}: Given a word $w$ which is the BWT of some word $v$, $v$ can be recovered from $w={\textrm{BWT}}(v)$, uniquely up to its conjugacy class, again in linear time. We briefly recap the algorithm for the case where $w$ is the BWT of a primitive word $v$, since this is the case we will need in the following. 
The algorithm is based on the following insights about the matrix $M$: (1) the last character in each row is the one [*preceding*]{} the first character in the same row, (2) since the rows are rotations of the same word, every character in the last column occurs also in the first column, (3) the first column lists the characters of $v$ in lexicographical order, and (4) the $i$th occurrence of character $c$ in the last column of $M$ corresponds to the $i$th occurrence of character $c$ in the first column, more precisely: If $j$ and $k$ are the positions of the $i$th $c$ in the last and first columns respectively, and the $j$th row of matrix $M$ is $x_1\cdots x_{n-1}x_n$, then the $k$th row is $x_nx_1\cdots x_{n-1}$. This last property can be used to define a mapping from the last to the first column, called [*LF-mapping*]{} [@BurrowsWheeler94], which assigns to each position $j$ the corresponding position $k$ in the first column—this is, in fact, the standard permutation of the last column. Now, given $w$ which is the BWT of a word, such a word $v$ can be reconstructed, from last character to first, by iteratively applying the standard permutation $\sigma_w$, and noting that $w_{\sigma_w(i)}$ is the character preceding $w_i$ in $v$. In other words, $w_1=v_n$ and $w_{\sigma_w^i(1)} = v_{n-i}$ for $1\leq i \leq n-1$. Problem statement and first results ----------------------------------- Let $w\in \Sigma^*$, $w= w_1\cdots w_n$ and $1\leq i \leq n+1$. We denote by ${\textit{dol}}(w,i)$ the $(n+1)$-length word $w_1 \cdots w_{i-1}\$w_i \cdots w_n$, i.e. the word which results from inserting $\$$ into $w$ in position $i$. Whenever $w$ is clear from the context, we denote by $\sigma_i$ the standard permutation of ${\textit{dol}}(w,i)$. Finally, we refer to a position $i$ as [*nice*]{} if ${\textit{dol}}(w,i) \in {\textrm{BWT}}({\Sigma^*_{\$}})$, i.e. if there exists a word $v\in \Sigma^*$ such that ${\textrm{BWT}}(v\$)={\textit{dol}}(w,i)$.
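The standard permutation and the reversal procedure described above can be sketched as follows (our helper names; the reversal assumes $w$ is the BWT of a primitive word, the case recapped above):

```python
def standard_permutation(w: str):
    """sigma_w as a list of 1-based values: result[i-1] = sigma_w(i)."""
    order = sorted(range(len(w)), key=lambda i: (w[i], i))  # stable on ties
    sigma = [0] * len(w)
    for rank, i in enumerate(order, start=1):
        sigma[i] = rank
    return sigma

def inverse_bwt_primitive(w: str) -> str:
    """Recover some v with BWT(v) = w (unique up to rotation), using
    w_1 = v_n and w_{sigma^i(1)} = v_{n-i}, i.e. iterating sigma from 1."""
    sigma = standard_permutation(w)
    v, j = [w[0]], 1
    for _ in range(len(w) - 1):
        j = sigma[j - 1]
        v.append(w[j - 1])
    return "".join(reversed(v))

print(standard_permutation("banana"))   # -> [4, 1, 5, 2, 6, 3]
print(inverse_bwt_primitive("nnbaaa"))  # -> abanan, a rotation of banana
```

Sorting index/character pairs is an $\mathcal{O}(n \log n)$ stand-in for the linear-time counting construction; ties on equal characters are broken by position, exactly as in the definition of $\sigma_w$.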
We can now state the problem we treat in this paper: > [**Dollar-BWT Problem:** ]{} Given a word $w \in \Sigma^*$, $|w|=n$, compute all nice positions of $w$, i.e. all $1\leq i \leq n+1$ such that ${\textit{dol}}(w,i) \in {\textrm{BWT}}({\Sigma^*_{\$}})$. The following statement was proved originally in a slightly weaker version in [@MantaciRS03], and in the current form in [@LShur11]: \[thm:MRS03\] For a string $v\in \Sigma^*,$ ${\textrm{BWT}}(v) = a_1^c\cdots a_m^c$ for some $c\geq 1$, if and only if $v = u^c$ with ${\textrm{BWT}}(u) = a_1\cdots a_m$. From this the authors of [@LShur11] obtain the following beautiful result: \[thm:LShur\] A word $w\in \Sigma^*$ is a BWT image if and only if the number of cycles of $\sigma_w$ equals the greatest common divisor of the runlengths of $w$. In the following, we will need the explicit form of the standard permutation of BWT images. \[coro:formofsigma\] If $w$ is the BWT of a word $v\in \Sigma^*$ then $\sigma_w$ has the following form, where $c\geq 1$ and $m = n/c$: $$\label{eq:powerForm} \sigma_w = {(1, e_1, \ldots, e_m)(2, e_1+1, \ldots, e_m+1)\ldots(c, e_1+c-1, \ldots, e_m+c-1)}.$$ Moreover, for $c=1$, it holds that $w$ is the BWT of a primitive word if and only if $\sigma_w$ is cyclic. Let $c$ be the greatest common divisor of the runlengths of $w$. By Thm. \[thm:MRS03\], there exists a primitive word $u$ s.t. $v = u^c$. If $c=1$, then, by Thm. \[thm:LShur\], $\sigma_w$ is cyclic, as claimed. Otherwise, let $1\leq i \leq |w|$ and $i-1 = kc +r$, with $0\leq r < c$ be the unique decomposition of $i-1$ modulo $c$. It follows from the definition of the standard permutation that $$\label{eq:tauc} \sigma_w(i) = \sigma_w(kc+r+1) = \sigma_w(kc+1) + r,$$ since $w_i = w_{i-1} = \ldots = w_{kc+1}$, i.e. position $i$ and position $kc+1$ lie in the same run. But this implies that the standard permutation of $w$ has the form , as claimed. 
For $c=1$, the reverse implication follows from applying the BWT reversal algorithm: if $\sigma_w$ is cyclic, then the output is a word of length $|w|$. Note that this direction is not true for $c>1$: e.g. $(13)(24)$ is the standard permutation of $bbaa$, but also of $cdab$, and the latter is not a BWT image. Let $w\in \Sigma^*$. Since, for every $i$, the character $\$$ appears exactly once in ${\textit{dol}}(w,i)$, from Thm. \[thm:LShur\] we immediately get the following: \[lemma:iNice\] For $w\in \Sigma^n$ and $1 \leq i \leq n+1$, $i$ is nice if and only if $\sigma_i$ is cyclic. We use a bipartite graph $G_w$ to visualize the standard permutation of $w$ (see Fig. \[fig:constraints15\]). The top row corresponds to $w$, and the bottom row to the characters of $w$ in alphabetical order. When $w$ is a BWT image, this implies that the top row corresponds to the last column of matrix $M$, and the bottom row to the first. (This graph is therefore sometimes called -graph.) Let us refer to the nodes in the top row as $x_1,\ldots, x_n$ and to those in the bottom row as $y_1,\ldots, y_n$. Nodes $x_i$ are labeled by character $w_i$, and nodes $y_i$ are labeled by the characters of $w$ in lexicographic order. We connect $(x_i,y_j)$ if and only if $i=j$ or $j = \sigma_w(i)$. It is easy to see that the node set of any cycle $S$ in $G_w$ has the form $\{x_k,y_k \mid k\in {\cal I}\}$ for some ${\cal I} \subseteq \{1,\ldots, n\}$, and that $S$ is a cycle in $G_w$ if and only if ${\cal I}$ is a cycle in $\sigma_w$. Now observe what happens when we insert a dollar into $w$ in position $i$ (see Fig. \[fig:constraints15\]). For positions $j$ which are smaller than $i$, their image is incremented by one; $i$ is mapped to $1$; and for positions $j$ to the right of $i$, both $j$ and its image $\sigma_w(j)$ are shifted to the right by one.
Formally: \[lemma:sigma2sigmai\] Let $w\in \Sigma^n$, $1\leq i \leq n+1$, $\sigma_w$ the standard permutation of $w$, and $\sigma_i$ the standard permutation of ${\textit{dol}}(w,i)$. Then $$\label{eq:sigma2sigma_i} \sigma_i(j) = \begin{cases} \sigma_w(j) + 1 & \text{ if } j<i, \\ 1 & \text{ if } j=i, \text{ and } \\ \sigma_w(j-1) + 1 & \text{ if } j>i. \end{cases}$$ Immediate from the definition. Algorithm {#sec:algo} ========= Given a word $w$, it is easy to compute all nice positions of $w$ by inserting $\$$ in each position $i$ and running the BWT reversal algorithm, in a total of ${\mathcal{O}}(n^2)$ time. Here we present an ${\mathcal{O}}(n \log n)$ time algorithm for the problem. The underlying idea is that, if we know $\sigma_i$, the standard permutation of ${\textit{dol}}(w,i)$, then it is not too difficult to compute $\sigma_{i+1}$. \[lemma:sigmai\] Let $w\in \Sigma^n$, and $1 \leq i \leq n$. Then 1. $\sigma_1(1) = 1$ and for $i>1$, $\sigma_1(i) = \sigma_w(i-1)+1$, and 2. $\sigma_{i+1} = (1, \sigma_i(i+1)) \cdot \sigma_i$. In particular, the standard permutation $\sigma_{i+1}$ is the result of applying a single transposition to $\sigma_i$. Part 1. follows by applying Lemma \[lemma:sigma2sigmai\] to $i=1$. For Part 2., first notice that for all $j\neq i,i+1$, $\sigma_i(j) = \sigma_{i+1}(j)$, by Lemma \[lemma:sigma2sigmai\], since $j$ is either smaller than both $i$ and $i+1$, or larger than both $i$ and $i+1$. We have $\sigma_i(i) = 1 = \sigma_{i+1}(i+1)$, and $\sigma_{i}(i+1) = \sigma_w(i) +1=\sigma_{i+1}(i)$, again by Lemma \[lemma:sigma2sigmai\]. As we show next, applying a transposition to a permutation has either the effect of splitting a cycle, or that of merging two cycles. Let $\pi = C_1 \cdots C_k$ be the cycle decomposition of the permutation $\pi$, $x\neq y$, and $\pi' = (\pi(x),\pi(y)) \cdot \pi$. 1. If $x$ and $y$ are in the same cycle $C_i$, then this cycle is split into two.
In particular, let $C_i = (c_1, c_2, \ldots, c_j, \ldots, c_m)$, with $c_m = x$ and $c_j = y$. Then $\pi' = (c_1,c_2,\ldots, c_{j-1},y)(c_{j+1}, \ldots, c_{m-1},x) \prod_{\ell\neq i} C_{\ell}$. 2. If $x$ and $y$ are in different cycles $C_i$ and $C_j$, then these two cycles are merged. In particular, let $C_i = (c_1, c_2, \ldots, c_m)$, with $c_m = x$, and $C_j = (c_1', c_2', \ldots, c_r')$, with $c_r'=y$; then $\pi' = (c_1, \ldots, c_{m-1},x, c_1', \ldots, c'_{r-1},y) \prod_{\ell\neq i,j} C_{\ell}$. Let $\tau = (\pi(x),\pi(y))$. First note that $\pi'(z) = \tau(\pi(z)) = \pi(z)$ for all $z\neq x,y$. Case [*1:*]{} $\pi'(x) = \tau(\pi(x)) = \pi(y) = c_{j+1}$ and $\pi'(y) = \tau(\pi(y)) = \pi(x) = c_1$, which proves the claim. Case [*2*]{} follows analogously. Let us look at an example. Changes from one permutation to the next are highlighted in red, and cyclic $\sigma_i$, i.e. nice positions $i$, are marked with a box. On the right, we note whether a merge or a split has taken place. High-level description of the algorithm --------------------------------------- The algorithm first computes the standard permutation $\sigma=\sigma_w$ of $w$ and initializes a counter $c$ with the number of cycles of $\sigma$. It then computes $\sigma_1$ according to Lemma \[lemma:sigmai\], part 1, and increments the counter $c$ by $1$, since $1$ is always a fixpoint of $\sigma_1$. Then the algorithm iteratively computes the new permutation $\sigma_{i+1}$, updating $c$ in each iteration. By Lemma \[lemma:cycles\], $c$ either increases or decreases by $1$ in every iteration: it increases if $i+1$ is in the same cycle as $i$, and it decreases if it is in a different cycle. Whenever $c$ equals $1$, the algorithm reports the current value $i$. See Algorithm 1 for the pseudocode.
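This incremental computation can be sketched directly in code. The following Python version (function name ours) is a naive rendering of the idea: it keeps $\sigma_i$ in an array and replaces the efficient same-cycle test of the splay-tree implementation with a direct walk along the cycle containing $i$, so it runs in ${\mathcal{O}}(n^2)$ worst-case time rather than ${\mathcal{O}}(n \log n)$:

```python
def nice_positions(w):
    """Naive version of the incremental algorithm (O(n^2) worst case)."""
    n = len(w)
    # standard permutation of w: sigma_w[j-1] = rank of position j
    # in the stable sort of the characters of w (1-based ranks)
    order = sorted(range(n), key=lambda k: w[k])        # stable sort
    sigma_w = [0] * n
    for rank, pos in enumerate(order, start=1):
        sigma_w[pos] = rank
    # count the cycles of sigma_w
    c, seen = 0, [False] * n
    for j in range(n):
        if not seen[j]:
            c += 1
            while not seen[j]:
                seen[j] = True
                j = sigma_w[j] - 1
    # sigma_1 via Lemma: sigma_1(1) = 1, sigma_1(j) = sigma_w(j-1) + 1;
    # position 1 becomes a fixpoint, so the cycle count grows by one
    sigma = [1] + [v + 1 for v in sigma_w]
    c += 1
    nice = []
    for i in range(1, n + 2):
        if c == 1:                    # sigma_i is cyclic: i is nice
            nice.append(i)
        if i == n + 1:
            break
        # naive same-cycle test: walk the cycle of i (it contains 1)
        same, j = False, sigma[i - 1]
        while j != i:
            if j == i + 1:
                same = True
            j = sigma[j - 1]
        c += 1 if same else -1        # split or merge
        # sigma_{i+1} = (1, sigma_i(i+1)) . sigma_i
        sigma[i - 1], sigma[i] = sigma[i], 1
    return nice
```

For example, `nice_positions("annnaa")` returns `[3, 7]`, matching the nice positions of this word discussed elsewhere in the paper.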
\[algo:findPos\] Algorithm 1: computing all nice positions of $w$.

 1: $n \leftarrow |w|$
 2: $\sigma \leftarrow$ standard permutation of $w$
 3: $c \leftarrow$ number of cycles of $\sigma$
 4: $\mathcal{I} \leftarrow \emptyset$
 5: for $i \leftarrow n+1$ downto $2$ do
 6:     $\sigma(i) \leftarrow \sigma(i-1)$
 7: $\sigma(1) \leftarrow 1$
 8: $c \gets c+1$
 9: for $i \leftarrow 1$ to $n$ do
10:     $C \leftarrow$ the cycle of $\sigma$ containing $i$
11:     if $i+1 \in C$ then
12:         $c \gets c+1$ (split)
13:     else
14:         $c \gets c-1$ (merge)
15:     transpose($\sigma, i$)
16:     if $c = 1$ then
17:         $\mathcal{I} \leftarrow \mathcal{I} \cup \{i+1\}$
18: return $\mathcal{I}$

procedure transpose($\sigma, i$): $\sigma(i) \leftarrow \sigma(i+1)$; $\sigma(i+1) \leftarrow 1$

Implementation with splay trees ------------------------------- For the algorithm’s implementation, we need an appropriate data structure for maintaining and updating the current permutation $\sigma_i$. Using an array to keep $\sigma_i$ would allow us to update it in constant time in each step, but would not allow us to decide efficiently whether $i+1\in C$ in line $11$. Thus we need a data structure to maintain the cycles of $\sigma_i$. The functionalities we seek are (a) decide whether two elements are in the same cycle, (b) split a cycle into two, (c) merge two cycles. The data structure we have chosen is a forest of splay trees [@SleatorTarjan81]. This data structure supports the above operations in amortized ${\mathcal{O}}(\log n)$ time. Splay trees are self-adjusting binary search trees. They are not necessarily balanced, but they have the property that at every access operation, the element $x$ accessed is moved to the root and the tree is adjusted in such a way as to move nodes on the path from the root to $x$ closer to the root, thus reducing the access time to these nodes for future operations. The basic operation, called [*splaying*]{}, consists of a series of the usual edge rotations in binary search trees. Which rotations are applied depends on the position of the node with respect to its parent and grandparent (the cases are referred to as [*zig*]{}, [*zig-zig*]{}, and [*zig-zag*]{}). Splay trees can implement the standard operations on binary search trees, such as [*access, insert, delete, join, split*]{}, in amortized logarithmic time in the total number of nodes involved.
We refer the reader to the original article [@SleatorTarjan81] for more details. We represent the current permutation $\sigma_i$ as a forest of splay trees, where each tree corresponds to a cycle of $\sigma_i$. Let $(c_1, c_2, \ldots, c_k)$ be an arbitrary rotation of a cycle in $\sigma_i$. We consider the cycle as a ranked list of the elements from $c_1$ to $c_k$ and assign to element $c_j$ its position $j$ as key. In this way, we can build the splay tree of the cycle, keying the elements by their position in the cycle: for a node $v$ of the tree, the elements of the list which come before $v$ are contained in the left subtree of $v$, and the elements which come after $v$ are contained in the right subtree of $v$. Note that during the course of the algorithm, we always have $1$ as left-most node and $i$ as right-most node in the first cycle of $\sigma_i$. We now explain how to update the data structure. If $i$ and $i+1$ are in distinct cycles of $\sigma_i$, then the transposition of $1=\sigma_i(i)$ and $\sigma_i(i+1)$ leads to the merge of their cycles (Lemma \[lemma:cycles\]). The implementation of the merge-step using splay trees is shown in Fig. \[fig:abstractMerge\]. Let the two cycles have the form $(1, A, i)$ and $(B, i+1, C)$, respectively, with $A,B,C$ sequences of numbers. First, with the access operations on $i$ and $i + 1$, these two elements are moved to the roots of their respective trees. This situation is displayed in (a). Now we have a [*split*]{} of the right subtree of node $i+1$, the result of which is shown in (b). Next, a [*join*]{} operation links the $C$ subtree to the node $i$ as its right child, and another [*join*]{} operation links node $i+1$, together with its left subtree $B$, as right child of the rightmost node of $C$. If $i$ and $i+1$ are in the same cycle of $\sigma_i$, then the transposition of $1=\sigma_i(i)$ and $\sigma_i(i+1)$ leads to a split of the cycle (Lemma \[lemma:cycles\]).
The implementation of this operation with splay trees is shown in Fig. \[fig:abstractSplit\]. Let the cycle have the form $(1, A, i+1, B, i)$. The corresponding splay tree after [*access*]{} $i+1$ is shown in (a). The [*split*]{} operation cuts the right subtree of $i+1$, producing the two new trees in (b). Analysis -------- We now show that Algorithm \[algo:findPos\] takes ${\mathcal{O}}(n \log n)$ time in the worst case. Computing the standard permutation of $w$ takes ${\mathcal{O}}(n)$ time (using a variant of Counting Sort [@Cormen], and noting that the alphabet of the string has cardinality at most $n$). The computation of $\sigma_1$ (lines $5$ to $7$) takes ${\mathcal{O}}(n)$ time. All steps in one iteration of the for-loop (lines $9$ to $17$) take constant time, except deciding whether $i+1 \in C$ (line $11$) and updating $\sigma$ (line $15$). For deciding whether $i+1 \in C$, we [*access*]{} $i+1$. If the answer is [yes]{}, we will have a split-step: this is a [*split*]{}-operation on the tree for $C$ (Fig. \[fig:abstractSplit\]). If the answer is [no]{}, then we [*access*]{} $i$, and merge the two trees (Fig. \[fig:abstractMerge\]); the implementation of this consists of one [*split*]{}- and two [*join*]{}-operations on the trees. Therefore, in one iteration of the for-loop, we either have one [*access*]{} and one [*split*]{} operation (for a split-step), or two [*access*]{}-, one [*split*]{}-, and two [*join*]{}-operations (merge-step); thus, in either case, at most five operations. There are $n$ iterations of the for-loop, so at most $5n$ operations. Together with the initial insertion of the $n+1$ nodes, we get a total of at most $6n+1$ operations. We restate the relevant theorem from [@SleatorTarjan81]: \[Balance Theorem with Updates, Thm.
6 in [@SleatorTarjan81]\] \[thm:SleatorTarjan81\] A sequence of $m$ arbitrary operations on a collection of initially empty splay trees takes $\mathcal{O}(m+ \sum^{m}_{j=1} \log n_j)$ time, where $n_j$ is the number of items in the tree or trees involved in operation $j$. For our algorithm, we have $m={\mathcal{O}}(n)$ operations altogether, each involving no more than $n+1$ nodes, thus Theorem \[thm:SleatorTarjan81\] guarantees that the total time spent on the splay trees is ${\mathcal{O}}(n + n \log n)$. Adding to this the computation of $\sigma_1$ and the initialization of the splay trees, each in ${\mathcal{O}}(n)$ time, as well as the constant-time operations within the for-loop, we get ${\mathcal{O}}(n \log n)$ time altogether. Memory usage is ${\mathcal{O}}(n)$, since the forest of splay trees consists of $n+1$ vertices in total. Summarizing, we have \[thm:algoanalysis\] Algorithm \[algo:findPos\] runs in ${\mathcal{O}}(n \log n)$ time and uses ${\mathcal{O}}(n)$ space, for an input string of length $n$. Characterization {#sec:characterization} ================ In this section we give a full characterization of nice positions. Our first result is that every BWT image has at least one nice position. \[thm:dol(BWT,c+1)\] Let $w\in {\textrm{BWT}}(\Sigma^n)$, and $c$ the number of cycles of $\sigma_w$. Then $c+1$ is nice. By Corollary \[coro:formofsigma\], $\sigma_w$ has the form $$\sigma_w = {(1, e_1, \ldots, e_m)(2, e_1+1, \ldots, e_m+1)\ldots(c, e_1+c-1, \ldots, e_m+c-1)}.$$ Note that each cycle has exactly one element which is smaller than $c+1$. So by Lemma \[lemma:sigma2sigmai\], the standard permutation $\sigma_{c+1}$ has the form $$\sigma_{c+1} = (1, e_1+1, \ldots, e_m+1, 2, e_1+2, \ldots, e_m+2, \ldots, c, e_1+c, \ldots, e_m+c, c+1),$$ and is thus cyclic. Let $w = {\textrm{BWT}}(v\$)$ such that $w_2 = \$$, i.e. \$ is in the second position of $w$. Then $v$ is Lyndon. Let $v' = v\$$.
The smallest rotation of $v'$ is $\$ v$, since $\$$ is smaller than all other characters; while $v\$$ is the second smallest rotation, since $w_2 = \$$. Therefore, every proper suffix $u$ of $v$ is lexicographically strictly larger than $v$, implying that $v$ is Lyndon. In order to see which positions are nice, we want to understand how cycles in $\sigma_i$ are created. Recall our earlier example $w = {\tt beaaecdcb}$ and $i=7$ (see Fig. \[fig:constraints15FIXPOINTS\]). Position $7$ is not nice, since $\sigma_7$ has two fixpoints. In general, if $j$ is a fixpoint in $\sigma_w$ and $i \leq j$, then $j+1$ will be a fixpoint in $\sigma_i$. Similarly, if $\sigma(j) = j-1$ and $j<i$, then $j$ is a fixpoint in $\sigma_i$. These two cases are illustrated in Fig. \[fig:constraints15FIXPOINTS\], where $7$ is a fixpoint in $\sigma_w$ and $\sigma_w(6)=5$, so the insertion of $\$$ in position $i=7$ leads to the two fixpoints $6$ and $8$ in $\sigma_7$. Therefore, position $7$ is not nice. Indeed, the previous observation can be generalized: if $S$ is a cycle in $\sigma_w$, then no position $i\leq \min S$ is nice. Similarly, if $S$ is such that $\sigma_w(S) = S-1 = \{ j-1 \mid j\in S\}$, then no position $i>\max S$ is nice; in both cases, the insertion of $\$$ in such a position would turn $S$ into a cycle. However, the situation can also be more complex, as is illustrated in Fig. \[fig:cedcbbabb\]. In Theorem \[thm:pseudocycle\] we will give a necessary and sufficient condition for creating a proper cycle by inserting a \$ in some position. First we need another definition. \[def:pseudo-cycle\] Given a permutation $\pi$ of $\{1,\ldots, n\}$, a [*pseudo-cycle*]{} w.r.t. $\pi$ is a non-empty subset $S \subseteq \{1,\ldots,n\}$ which can be partitioned into two subsets $S_{\links}$ and $S_{\rechts}$, possibly empty, such that $S_{\links} < S_{\rechts}$, and $\pi(S) = (S_{\links}-1) \cup S_{\rechts}$. Let $a = \max S_{\links}$, and $a = 0$ if $S_{\links}$ is empty. 
Further, let $b = \min S_{\rechts}$, and $b = n+1$ if $S_{\rechts}$ is empty. The [*critical interval*]{} $R \subseteq \{1,2,\ldots,n+1\}$ of the pseudo-cycle $S$ is defined as $R = [a+1,b]$. For example, in Fig. \[fig:cedcbbabb\], $S=\{3,5,8\}$ is a pseudo-cycle, with $S_{\links} = \{3,5\}$, $S_{\rechts} = \{8\}$, and $R = \{6,7,8\}$. Note that every cycle $C$ of a permutation is a pseudo-cycle, with $S_{\links} = \emptyset$ and $S_{\rechts} = C$. In Fig. \[fig:constraints15\], we highlighted two pseudo-cycles: $S_1 = \{6\}$, with critical interval $R_1 = \{7,8,9,10\}$, and $S_2 = \{7\}$, with $R_2 = \{1,2,3,4,5,6,7\}$. The elements of the critical interval are exactly those positions $i$ which turn $S$ into one or more cycles when the $\$$ is inserted in position $i$ (see Lemma \[lemma:psudocycle-cycle\]). In particular, with $S = S_{\rechts}=\{1,\ldots, n\}$, we get that $i=1$ is never nice. This is easy to see since, for every word $w$, $1$ is a fixpoint in the standard permutation $\sigma_1$ of $\$w$. \[def:shift\] Let $1\leq i \leq n+1$ and $S \subseteq \{1,2, \ldots , n\}$. We define $$\begin{aligned} {\textit{shift}}(S, i) &= \{ x \mid x \in S \text{ and } x < i \} \cup \{ x+1 \mid x \in S \text{ and } x \geq i \}, \; \text{and} \\ {\textit{unshift}}(S, i) &= \{ x \mid x \in S \text{ and } x < i \} \cup \{ x-1 \mid x \in S \text{ and } x > i\}.\end{aligned}$$ \[lemma:psudocycle-cycle\] Let $w\in \Sigma^n$ and $\sigma = \sigma_w$. Let $1 \leq i \leq n+1$, and $U \subseteq \{1,2, \ldots , n+1\} \setminus \{i\}$. Then $U$ is a cycle in the permutation $\sigma_i$ if and only if $S={\textit{unshift}}(U, i)$ is a pseudo-cycle w.r.t. $\sigma$, and $i$ belongs to the critical interval of $S$. Let $U_{1} = \{ x \in U \mid x< i\}$, $U_{2} = \{ x \in U \mid x> i\}$. Then $S = U_{1} \cup (U_{2} -1)$. We have to show that $U$ is a cycle if and only if $S$ is a pseudo-cycle, with $S_{\links} = U_1$ and $S_{\rechts} = U_2-1$.
Note that this implies that $i$ is contained in the critical interval of $S$. First let $S$ be a pseudo-cycle with $S_{\links} = U_1$ and $S_{\rechts} = U_2-1$, and let $x\in U$. We have to show that $x\in \sigma_i(U)$, which implies the claim. If $x\in U_1$, then $x\in S_{\links}$, and there is a $y\in S$ s.t. $\sigma(y) = x-1$. If $y\in S_{\links}$, then $y\in U_1$ and $\sigma_i(y) = x$ by Lemma \[lemma:sigma2sigmai\], thus $x\in \sigma_i(U)$. Else $y\in S_{\rechts}$, then $y+1 \in U_2$ and $\sigma_i(y+1) = x$, again by Lemma \[lemma:sigma2sigmai\], and thus $x\in \sigma_i(U)$. Now let $x\in U_2$. Then $x-1\in S_{\rechts}$ and there is a $y\in S$ s.t. $\sigma(y)=x-1$. If $y\in S_{\links}$, then $y\in U_1$ and $x=\sigma(y) +1=\sigma_i(y)$ by Lemma \[lemma:sigma2sigmai\], thus $x\in \sigma_i(U)$. Else $y\in S_{\rechts}$, then $y+1 \in U_2$ and $\sigma_i(y+1) = x$, again by Lemma \[lemma:sigma2sigmai\], and thus $x\in \sigma_i(U)$. Conversely, let $U$ be a cycle, set $S_{\links} = U_1$ and $S_{\rechts} = U_2 - 1$. Let $x\in S$. We will show that if $x\in S_{\links}$, then $x-1\in \sigma(S)$, and if $x\in S_{\rechts}$, then $x\in \sigma(S)$, proving that $S$ is a pseudo-cycle. The claim follows with analogous arguments as above and noting that $\sigma(j) = \sigma_i(j)-1$ if $j<i$, and $\sigma_i(j+1)-1$ if $j\geq i$. \[thm:pseudocycle\] Let $w$ be a word of length $n$ over $\Sigma$, and $1\leq i \leq n+1$. Then $i$ is nice if and only if there is no pseudo-cycle $S$ w.r.t. the standard permutation $\sigma = \sigma_w$ whose critical interval contains $i$. Let $S$ be a pseudo-cycle w.r.t. $\sigma$, $R$ its critical interval and $i \in R$. By Lemma \[lemma:psudocycle-cycle\], ${\textit{shift}}(S,i)$ is a cycle in $\sigma_i$ not containing $i$. Therefore, $\sigma_i$ has at least two cycles, implying that ${\textit{dol}}(w, i) \not \in {\textrm{BWT}}({\Sigma^*_{\$}})$. Now assume that $i$ is not nice. Then $\sigma_i$ contains a cycle $C\subseteq \{2,\ldots, n+1\}$. 
By Lemma \[lemma:psudocycle-cycle\], this implies that ${\textit{unshift}}(C, i)$ is a pseudo-cycle in $\sigma$, and its critical interval contains $i$. With Theorem \[thm:pseudocycle\], we can now prove the statements about our first example strings [banana]{} and [annnaa]{}. The word [banana]{} has the pseudo-cycles $S_1 = \{2\}$ with critical interval $R_1=\{3,4,5,6,7\}$; and $S_2 = \{3,5,6\}$ with $R_2 = \{1,2,3\}$. Therefore, every position is contained in some critical interval. For the word [annnaa]{}, we have $S_1=\{1\}$ with critical interval $R_1 = \{1\}$; $S_2 = \{2,3,4,5,6\}$ with $R_2 = \{1,2\}$; $S_3 = \{ 3,5\}$ with $R_3 = \{4,5\}$; $S_4 = \{4,6\}$ with $R_4 = \{5,6\}$; and all other pseudo-cycles are unions of these. The two positions $3$ and $7$ are not contained in any critical interval, and are therefore nice. In fact, [an\$nnaa]{} $= {\textrm{BWT}}$([ananna\$]{}), and [annnaa\$]{} $= {\textrm{BWT}}$([nanana\$]{}). Bounds on nice positions {#sec:parity} ======================== In this section, we study the number of nice positions of a given string $w$. Recall that $\sigma_i$ is the standard permutation of ${\textit{dol}}(w,i)$. For $w \in \Sigma^n$, let $h(w)$ denote the number of nice positions of $w$. We will first show that all nice positions of a word $w$ have the same parity. \[thm:parity\] Let $w$ be a word over $\Sigma$. Then either all nice positions are even, or all nice positions are odd. In particular, let $c$ be the number of cycles in the standard permutation $\sigma_w$; if $c$ is even, then all nice positions are odd, and if $c$ is odd, then all nice positions are even. Let us assume that $i<j$ are both nice, thus $\sigma_i$ and $\sigma_j$ are cyclic. This implies that ${\textit{sgn}}(\sigma_i) = (-1)^n = {\textit{sgn}}(\sigma_j)$, since both cycles consist of $n+1$ elements. By Lemma \[lemma:sigmai\], $\sigma_{i+1} = \tau_i \cdot \sigma_i$, where $\tau_i = (1, \sigma_i(i+1))$.
Thus $\sigma_j = \tau_{j-1}\cdots \tau_i\cdot \sigma_i$, so ${\textit{sgn}}(\sigma_j) = (-1)^{j-i} {\textit{sgn}}(\sigma_i)$, and therefore ${\textit{sgn}}(\sigma_j) = {\textit{sgn}}(\sigma_i)$ if and only if $j-i$ is even. Given a cycle $C=(x_1, \ldots, x_m)$, let $C' = (x_1+1, \ldots, x_m+1)$. Now let $\sigma_w = \prod_{j=1}^c C_j$ be the cycle decomposition of $\sigma_w$. By Lemma \[lemma:sigmai\], $\sigma_1 = (1)\prod_{j=1}^c C'_j$. Therefore, ${\textit{sgn}}(\sigma_1) = (-1)^{n+1 - (c+1)} = (-1)^{n-c}$. On the other hand, again by Lemma \[lemma:sigmai\], $\sigma_i = \tau_{i-1}\cdots \tau_1\cdot \sigma_1$, thus ${\textit{sgn}}(\sigma_i) = (-1)^{n-c+i-1}$. But this equals $(-1)^{n}$ if and only if $c$ and $i$ have different parity. Let $w\in \Sigma^n$. Then $h(w) \leq \lfloor \frac{n+1}{2} \rfloor$. Follows from Thm. \[thm:parity\] and the fact that position $1$ is never nice. Given the cycle decomposition of $\sigma_w = \prod_{j=1}^c C_j$, let $\ell_j$ denote the minimum element of $C_j$, and $L = \max_{j=1,\ldots, c} \ell_j$. \[prop:L\] If $i$ is nice, then $i\geq L+1$. In particular, $i\geq c+1$, where $c$ is the number of cycles of $\sigma_w$. Note that every cycle $C_j$ is a pseudo-cycle, where $S_{\links} = \emptyset$ and $S_{\rechts}=C_j$, with critical interval $[1,\ell_j]$. Therefore, by Thm. \[thm:pseudocycle\], no $i \leq L$ can be nice. The second claim follows since $L\geq \ell_c \geq c$. Let $w\in \Sigma^n$. Then $h(w) \leq \lceil \frac{n-L+1}{2} \rceil$. Follows from Thm. \[thm:parity\] and Prop. \[prop:L\]. We next derive some properties of nice positions from Algorithm \[algo:findPos\]. \[prop:c\_i\] Let $c_i$ be the number of cycles of $\sigma_i$. If $j$ is nice and $j>i$, then $j \geq i+c_i-1$. The permutation $\sigma_j$ is computed from $\sigma_i$ by $j-i$ iterations of the for-loop of Algorithm \[algo:findPos\] (lines 9-17), each of which either results in incrementing (split) or decrementing (merge) the number of cycles.
Therefore, at least $c_i-1$ steps are needed to arrive at a cyclic permutation. (Since $c_1 = c+1$, this implies in particular that for every nice position $j$, $j\geq c+1$, as already seen in Prop. \[prop:L\].) Let $C$ be a cycle of $\sigma_i$. We call $C$ a [*bad cycle w.r.t. $i$*]{} if $i\notin [\min C, \max C]$. The cycle $(5,9)$ is a bad cycle w.r.t. $4$, and $(3,8,5,4,9)$ is a bad cycle w.r.t. $10$. \[prop:badcycles\] If $C$ is a bad cycle w.r.t. $i$, then - if $i< \min C$, then no $j\leq i$ is nice, - if $i> \max C$, then no $j\geq i$ is nice. Let $i < \min C$. Then $i$ is not nice, since $\sigma_i$ has at least two cycles. Now let $j<i$, thus $\sigma_i = \tau_{i-1}\ldots \tau_j \cdot \sigma_j$, where $\tau_k = (1, \sigma_k(k+1))$. Since $j \leq i < \min C$, it follows that each $\tau_k$ is disjoint from $C$, and since $C$ is a cycle of $\sigma_i$, it is therefore also a cycle of $\sigma_j$. Since $[\min C, \max C] \neq \{1,\ldots, n+1\}$, this implies that $j$ is not nice. Analogously, if $i> \max C$, then all $\sigma_j$ for $j\geq i$ have $C$ as a cycle, implying that $j$ is not nice. Let $\sigma_w = \prod_{j=1}^c C_j$ be the cycle decomposition of $\sigma_w$ and $\ell_j = \min C_j$ for $j=1, \ldots, c$, where the cycles are in increasing order w.r.t. their minima, i.e. $\ell_1< \ldots < \ell_c$. We call a pair $(\ell_j, \ell_{j}+1)$ a [*bad pair*]{} if $j<c$ and $\ell_{j}+1\in C_j$. Given the permutation $(1)(2,6,3,7,9,4)(5,8,10)$ with $3$ cycles, the pair $(2,3)$ in the second cycle is a bad pair, and it is the only bad pair. \[prop:badpairs\] Let $b$ be the number of bad pairs in $\sigma_w$. If $i$ is a nice position of $w$, then $i \geq 2b+c$. We will count the number of iterations of the for-loop (lines 9-17) of Algorithm \[algo:findPos\] before arriving at a cyclic permutation. Let us refer to the iterations as either merge- or split-steps. As we saw before, we need at least $c$ merge-steps, since $\sigma_1$ has $c+1$ cycles.
It should also be clear that every additional split-step necessitates a further merge-step. Therefore it suffices to show that every bad pair results in a distinct split-step. Let $(\ell_j, \ell_j+1)$ be a bad pair. By Lemma \[lemma:sigma2sigmai\], $\sigma_1 = (1) \prod C_j'$, where $\min C_j' = \ell_j+1$. Therefore, $C_j'$ is a bad cycle w.r.t. $\ell_j$, and thus is present in all $\sigma_i$ for $i\leq \ell_j$. Since $\ell_j \notin C_j'$, step $\ell_j$ is a merge-step. Now $\ell_{j}+2$ is still in $C_j'$, so step $\ell_{j}+1$ is a split-step. The permutation $(1)(2,6,3,7,9,4)(5,8,10)$ is the standard permutation of the word [abbababbaa]{}. There are two nice positions, $6$ and $8$, both greater than or equal to $5 = 2b+c$. We summarize: \[thm:bounds\] Let $w$ be a word over $\Sigma$ and $\sigma_w = \prod_{j=1}^c C_j$ the cycle decomposition of its standard permutation $\sigma_w$. 1. If $i$ is nice, then $i \geq \max\{L+1, 2b+c\}$, where $L = \max_j \min C_j$, and $b$ is the number of bad pairs in $\sigma_w$. 2. Let $c_i$ be the number of cycles of $\sigma_i$. If $j$ is nice and $j>i$, then $j \geq i+c_i-1$. Moreover, if $\sigma_i$ has a bad cycle $C$ s.t. $i>\max C$, then no $j\geq i$ is nice. Part [*1.*]{} of Theorem \[thm:bounds\] can be used for heuristics to speed up the algorithm: Compute $i_0=\max\{L+1, 2b+c\}$, and start the algorithm with $\sigma_{i_0}$ instead of $\sigma_1$. Since $L,c,b$ can be computed in linear time by scanning $\sigma_w$ once, and $\sigma_{i_0}$ can be computed in linear time, the total running time of the algorithm is still ${\mathcal{O}}(n \log n)$ in the worst case, but could often be faster in practice. Part [*2.*]{} cannot immediately be turned into an algorithmic improvement, because our current implementation does not allow extracting minima and maxima from cycles. Experimental result {#sec:results} =================== In the following, we give some examples (Sec. \[sec:examples\]), followed by some statistics on the number of nice positions (Sec.
\[sec:stats\]). Examples {#sec:examples} -------- We list all words over $\{a,b\}$ of length $2$, $3$, $4$, and $5$ (Tables \[tab:sigmaSize2\_all\] to \[tab:sigmaSize5\_all\]). For each word $w$ (first column), we give the lexicographically smallest $v$ such that ${\textrm{BWT}}(v) = w$, if such a $v$ exists, dashes otherwise (second column); the standard permutation $\sigma=\sigma_w$ (third column); and the number $h(w)$ of nice positions for $w$ (fourth column). For each word $w$ with $h(w)>0$, we also list every ${\textit{dol}}(w,i)$ with $i$ nice, giving the analogous information, and specify $i$ in the last (fifth) column. In Table \[tab:sigmaSize10-13-15-18\_some\], the same information is shown for some longer strings over alphabets of size 2 and 3, ordered by string length ($n=10, 13, 15, 18$). We chose these strings in order to give examples of as many different cases as possible. There are words which are BWTs of primitive words (strings 1, 7, 8, 12, 13, 20, 21, 22), some of which have the maximum possible number of nice positions for their length (strings 7, 12, 20); two words have only one nice position (8, 13); string 1 shows that, once a position is nice, not necessarily all subsequent positions of the same parity are also nice. Similarly, string 21 shows that there are no further nice positions after position 12, due to a bad cycle with respect to 13 containing only elements strictly smaller than 13. There are BWTs of powers of primitive words (strings 2, 14, 15, 23, 24, 25), but only one of these has the maximum number of nice positions (string 2). The table also contains strings that are not the BWT of any word (strings 3, 4, 5, 6, 9, 10, 11, 16, 17, 18, 19). Three of these have no nice positions (strings 6, 11, 19). Sometimes the parity of the number of cycles equals the parity of the smallest element of the last cycle (strings 3, 4, 5, 16, 18, 19), but this does not always happen (strings 6, 17).
Statistics {#sec:stats} ---------- In Tables \[tab:tablePercentages2-short\] and \[tab:tablePercentages3\], we present statistics on the number of nice positions $h(w)$. Table \[tab:tablePercentages2-short\] contains the statistics for a binary alphabet and $n=15,16,17,18$. We give the statistics for all $n=3, \ldots, 18$ in the Appendix (Table \[tab:tablePercentages2\]). Table \[tab:tablePercentages3\] contains the same information for a ternary alphabet and $n=3, \ldots, 10$. For fixed $n$, we give the absolute number of strings of length $n$ with $k$ nice positions (column 3), as well as the corresponding percentage (column 4). Percentages have been rounded to the nearest integer, with ’$<0.5$’ for non-zero percentages below $0.5$. In columns 5 and 6, we give those strings with $k$ nice positions which are not BWT images, in absolute and percentage numbers; in columns 7 and 8, the same for BWT images. The last two columns contain a subdivision of column 7: the number of BWT images with $k$ nice positions which are BWTs of a primitive word (column 9) and of powers of primitive words (column 10). Conclusion {#sec:conclusion} ========== In this paper, we studied a combinatorial question on the Burrows-Wheeler transform, namely in which positions (called [*nice*]{} positions) the sentinel character can be inserted in order to turn a given word $w$ into a BWT image. We developed a combinatorial characterization of nice positions and presented an efficient algorithm to compute all nice positions of the word. We also showed that all nice positions have the same parity, and were able to give lower bounds on the values of nice positions, as well as an upper bound on the number of nice positions. These results are based on properties of the standard permutation of the original word. We also included in the paper a number of examples for short strings over alphabets of cardinality 2 and 3, as well as some statistics regarding the number of nice positions of a word.
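Statistics of this kind can be reproduced by brute force for small $n$. The following Python sketch (function names ours) checks each insertion position directly via the cyclicity criterion of Lemma \[lemma:iNice\] and tallies the distribution of $h(w)$ over all words of a given length:

```python
from itertools import product

def nice_count(w):
    """h(w): number of positions i where inserting '$' yields a BWT image."""
    n = len(w)
    h = 0
    for i in range(1, n + 2):
        s = w[:i - 1] + '$' + w[i - 1:]       # '$' < 'a'..'z' in ASCII
        # standard permutation of dol(w, i) via a stable sort
        order = sorted(range(n + 1), key=lambda k: s[k])
        sigma = [0] * (n + 1)
        for rank, pos in enumerate(order, start=1):
            sigma[pos] = rank
        # i is nice iff this permutation is a single cycle
        steps, j = 1, sigma[0]
        while j != 1:
            steps, j = steps + 1, sigma[j - 1]
        if steps == n + 1:
            h += 1
    return h

def h_distribution(n, alphabet='ab'):
    """Map each value k to the number of length-n words with h(w) = k."""
    dist = {}
    for tup in product(alphabet, repeat=n):
        h = nice_count(''.join(tup))
        dist[h] = dist.get(h, 0) + 1
    return dist
```

For instance, all four binary words of length $2$ turn out to have exactly one nice position; this exhaustive approach is only feasible for small $n$, since the number of words grows exponentially.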
Open problems include finding tighter bounds on nice positions, and finding an $o(n \log n)$ time algorithm for computing all nice positions. One possibility could be switching to a data structure that would allow exploiting all the bounds on nice positions from Sec. \[sec:parity\]. Bannai, H., Inenaga, S., Shinohara, A., Takeda, M.: Inferring strings from graphs and arrays. In: 28th International Symposium on Mathematical Foundations of Computer Science, (MFCS 2003). Lecture Notes in Computer Science, vol. 2747, pp. 208–217. Springer (2003) B[ó]{}na, M.: Combinatorics of Permutations. CRC Press (2012) Bonomo, S., Mantaci, S., Restivo, A., Rosone, G., Sciortino, M.: Suffixes, conjugates and [Lyndon]{} words. In: International Conference on Developments in Language Theory. pp. 131–142. Springer (2013) Burrows, M., Wheeler, D.J.: A block-sorting lossless data compression algorithm. Tech. rep., DIGITAL System Research Center (1994) Cazaux, B., Rivals, E.: Reverse engineering of compact suffix trees and links: [A]{} novel algorithm. J. Discrete Algorithms **28**, 9–22 (2014) Cl[é]{}ment, J., Crochemore, M., Rindone, G.: Reverse engineering prefix tables. In: 26th International Symposium on Theoretical Aspects of Computer Science, [STACS]{} 2009, February 26-28, 2009, Freiburg, Germany, Proceedings. pp. 289–300 (2009) Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms. [MIT]{} Press (2009) Cox, A.J., Jakobi, T., Rosone, G., Schulz[-]{}Trieglaff, O.: Comparing [DNA]{} sequence collections by direct comparison of compressed text indexes. In: Algorithms in Bioinformatics - 12th International Workshop, [WABI]{} 2012, Ljubljana, Slovenia, September 10-12, 2012. Proceedings. Lecture Notes in Computer Science, vol. 7534, pp. 214–224. Springer (2012) Crochemore, M., Grossi, R., K[ä]{}rkk[ä]{}inen, J., Landau, G.M.: Computing the [Burrows-Wheeler]{} transform in place and in small space. J.
Discrete Algorithms **32**, 44–52 (2015) Daykin, J.W., Franek, F., Holub, J., Islam, A.S.M.S., Smyth, W.F.: Reconstructing a string from its [Lyndon]{} arrays. Theor. Comput. Sci. **710**, 44–51 (2018) Daykin, J.W., Groult, R., Guesnet, Y., Lecroq, T., Lefebvre, A., L[é]{}onard, M., Prieur-Gaston, [É]{}.: A survey of string orderings and their application to the [Burrows–Wheeler]{} transform. Theoretical Computer Science **710**, 52–65 (2018) Ferragina, P., Giancarlo, R., Manzini, G., Sciortino, M.: Boosting textual compression in optimal linear time. J. [ACM]{} **52**(4), 688–713 (2005) Gagie, T., Manzini, G., Sir[é]{}n, J.: Wheeler graphs: [A]{} framework for [BWT]{}-based data structures. Theoretical computer science **698**, 67–78 (2017) Giancarlo, R., Restivo, A., Sciortino, M.: From first principles to the [Burrows and Wheeler]{} transform and beyond, via combinatorial optimization. Theor. Comput. Sci. **387**(3), 236–248 (2007) Giuliani, S., Lipt[á]{}k, [Zs]{}., Rizzi, R.: When a dollar makes a [BWT]{}. In: Proceedings of the 20th Italian Conference on Theoretical Computer Science, [ICTCS]{} 2019, Como, Italy, September 9-11, 2019. pp. 20–33 (2019) He, M., Munro, J.I., Rao, S.S.: A categorization theorem on suffix arrays with applications to space efficient text indexes. In: Proceedings of the Sixteenth Annual [ACM-SIAM]{} Symposium on Discrete Algorithms, [SODA]{} 2005, Vancouver, British Columbia, Canada, January 23-25, 2005. pp. 23–32 (2005) I, T., Inenaga, S., Bannai, H., Takeda, M.: Inferring strings from suffix trees and links on a binary alphabet. Discrete Applied Mathematics **163**, 316–325 (2014) Kaplan, H., Landau, S., Verbin, E.: A simpler analysis of [Burrows–Wheeler]{}-based compression. Theoretical Computer Science **387**(3), 220–235 (2007) Kaplan, H., Verbin, E.: Most [Burrows-Wheeler]{} based compressors are not optimal. In: Annual Symposium on Combinatorial Pattern Matching. pp. 107–118. 
Springer (2007) K[ä]{}rkk[ä]{}inen, J., Piatkowski, M., Puglisi, S.J.: String inference from [Longest-Common-Prefix Array]{}. In: 44th International Colloquium on Automata, Languages, and Programming, [ICALP]{} 2017, July 10-14, 2017, Warsaw, Poland. pp. 62:1–62:14 (2017) Kucherov, G., T[ó]{}thm[é]{}r[é]{}sz, L., Vialette, S.: On the combinatorics of suffix arrays. Inf. Process. Lett. **113**(22-24), 915–920 (2013) Lam, T.W., [Li]{}, R., [Tam]{}, A., [Wong]{}, S., [Wu]{}, E., [Yiu]{}, S.M.: High throughput short read alignment via bi-directional [BWT]{}. In: 2009 IEEE International Conference on Bioinformatics and Biomedicine. pp. 31–36 (2009) Langmead, B., Trapnell, C., Pop, M., Salzberg, S.L.: Ultrafast and memory-efficient alignment of short [DNA]{} sequences to the human genome. Genome biology **10**(3), R25 (2009) Li, H., Durbin, R.: Fast and accurate long-read alignment with [Burrows-Wheeler]{} transform. Bioinformatics **26**(5), 589–595 (2010) Likhomanov, K.M., Shur, A.M.: Two [Combinatorial Criteria]{} for [BWT]{} [Images]{}. In: Computer Science - Theory and Applications - 6th International Computer Science Symposium in Russia, [CSR]{} 2011, St. Petersburg, Russia, June 14-18, 2011. Proceedings. pp. 385–396 (2011) da Louza, F.A., Gagie, T., Telles, G.P.: [Burrows-Wheeler]{} transform and [LCP]{} array construction in constant space. J. Discrete Algorithms **42**, 14–22 (2017) Mantaci, S., Restivo, A., Rosone, G., Russo, F., Sciortino, M.: On [Fixed Points]{} of the [Burrows-Wheeler Transform]{}. Fundam. Inform. **154**(1-4), 277–288 (2017) Mantaci, S., Restivo, A., Rosone, G., Sciortino, M.: An extension of the [Burrows-Wheeler Transform]{}. Theor. Comput. Sci. **387**(3), 298–312 (2007) Mantaci, S., Restivo, A., Rosone, G., Sciortino, M.: A new combinatorial approach to sequence comparison. Theory of Computing Systems **42**(3), 411–429 (2008) Mantaci, S., Restivo, A., Sciortino, M.: [Burrows–Wheeler]{} transform and [Sturmian]{} words.
Information Processing Letters **86**(5), 241–246 (2003) Manzini, G.: An analysis of the [Burrows-Wheeler]{} transform. J. [ACM]{} **48**(3), 407–430 (2001) Policriti, A., Prezza, N.: [LZ77]{} computation based on the run-length encoded [BWT]{}. Algorithmica **80**(7), 1986–2011 (2018) Prezza, N., Pisanti, N., Sciortino, M., Rosone, G.: [SNPs]{} detection by [eBWT]{} positional clustering. Algorithms for Molecular Biology **14**(1), 3:1–3:13 (2019) Restivo, A., Rosone, G.: Balancing and clustering of words in the [Burrows–Wheeler]{} transform. Theoretical Computer Science **412**(27), 3019–3032 (2011) Rosone, G., Sciortino, M.: The [Burrows-Wheeler]{} transform between data compression and combinatorics on words. In: Conference on Computability in Europe. pp. 353–364. Springer (2013) Sch[ü]{}rmann, K., Stoye, J.: Counting suffix arrays and strings. Theor. Comput. Sci. **395**(2-3), 220–234 (2008) Sleator, D.D., Tarjan, R.E.: A [Data Structure]{} for [Dynamic Trees]{}. In: Proceedings of the Thirteenth Annual ACM Symposium on Theory of Computing. pp. 114–122. STOC ’81, ACM, New York, NY, USA (1981) Starikovskaya, T.A., Vildh[ø]{}j, H.W.: A suffix tree or not a suffix tree? J. Discrete Algorithms **32**, 14–23 (2015)

APPENDIX {#appendix .unnumbered}
========

In this appendix, we give the algorithm for computing the standard permutation (Algorithm 2), two further examples for the algorithm, and the full splay tree implementation for Example \[ex:sigmaPermutations1\] (Fig. \[fig:sigmaPermutations1\]). Finally, we include the full version of Table \[tab:tablePercentages2-short\]: Table \[tab:tablePercentages2\] contains statistics for $\sigma=2$ and $n=3, \ldots, 18$.

$n \gets |w|$\
$count \gets$ array of length $|\Sigma|$ of zeros

[^1]: This is an extended version of [@GLR19].
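The $count$ initialization above suggests a counting-sort implementation. A minimal Python sketch of the standard permutation under one common convention (rank each position $i$ of $w$ stably by the pair $(w[i], i)$; 0-based indexing here, which may differ from the paper's convention):

```python
def standard_permutation(w):
    """Rank every position i of w by (w[i], i) via a stable counting sort.

    This is one common definition of the standard permutation sigma_w:
    positions holding smaller symbols come first, and equal symbols
    keep their left-to-right order (0-based here).
    """
    alphabet = sorted(set(w))
    # count occurrences of each symbol
    count = {c: 0 for c in alphabet}
    for c in w:
        count[c] += 1
    # start[c] = number of positions holding a strictly smaller symbol
    start, total = {}, 0
    for c in alphabet:
        start[c] = total
        total += count[c]
    sigma = [0] * len(w)
    for i, c in enumerate(w):  # stability: ties broken left to right
        sigma[i] = start[c]
        start[c] += 1
    return sigma
```

For example, `standard_permutation("banana")` sends the three `a`'s (positions 1, 3, 5) to ranks 0, 1, 2, the single `b` to rank 3, and the two `n`'s to ranks 4 and 5.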
ITEP/TH-33/17 IITP/TH-20/17

**A.Morozov**

[*ITEP, Moscow 117218, Russia*]{} [*Institute for Information Transmission Problems, Moscow 127994, Russia*]{} [*National Research Nuclear University MEPhI, Moscow 115409, Russia*]{}

ABSTRACT

[The next step is reported in the program of extraction of Racah matrices from the differential expansion of HOMFLY polynomials for twist knots: from the double-column rectangular representations $R=[rr]$ to the triple-column and triple-hook $R=[333]$. The main new phenomenon is the deviation of the particular coefficient $f_{[332]}^{[21]}$ from the corresponding skew dimension, which opens a way to further generalizations. ]{}

Introduction
============

Calculation of Racah matrices is a long-standing, difficult and challenging problem in theoretical physics [@racahmatrices]. It is further obscured by the basis-dependence of the answer in the case of generic representations, but this “multiplicity problem” is absent in the case of rectangular representations. The modern way [@M3141]-[@MnonrectS] to evaluate the most important “exclusive” matrices $\bar S_{\mu\nu}^{R}$, $$\Big((R\otimes \bar R)\otimes R \longrightarrow R \Big) \ \stackrel{\bar S}{\longrightarrow} \ \Big(R\otimes (\bar R \otimes R) \longrightarrow R \Big)$$ is based on the combination of two very different expressions for $R$-colored HOMFLY polynomials [@knotpols] of the double-braid knots, one coming from the arborescent calculus of [@arbor] and the other from the differential expansion theory [@IMMMfe; @evo; @diffexpan] in the case of rectangular $R=[r^s]$ with $s$ columns of length $r$: \[HRdb1\]

[*\[Figure: the double-braid knot family, whose $R$-colored HOMFLY polynomial is expanded over sub-diagrams of $R$\]*]{}

Here the sums go over sub-diagrams of the Young diagram $R$, and $\chi^*(N)$ denote the corresponding dimensions for the algebra $sl_N$, i.e. the values of Schur functions $\chi\{p_k\}$ at the topological locus $p_k=p_k^*=\frac{\{A^k\}}{\{q^k\}}$ with $\{x\} = x-x^{-1}$ and $A=q^N$. The combinatorial factor $h_\lambda^2$ cancels the $N$-independent denominators in $\chi^*_\lambda(N+r)\chi^*_\lambda(N-s)$, converting it into a product of “differentials” $\{Aq^i\}$. The other ingredients of the formula come from the evolution method [@DMMSS; @evo] applied to the family of twist knots (double braids with $n=1$): as functions of the “evolution parameter” $m$, knot polynomials are then decomposed into sums over representations $\mu \in R\otimes\bar R$ (which for rectangular $R$ can be labeled by sub-diagrams of $R$ itself) with dimensions ${\cal D}_\mu$, and the $m$-dependence is then provided by the $m$-th power of the “eigenvalue” $\Lambda_\mu$ of the ${\cal R}$-matrix, a $q$-power of the Casimir or cut-and-join operator [@DMMSS].
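As a sanity check on these definitions, at $q\to 1$ (so that $A=q^N\to 1$) the locus values reduce to classical $sl_N$ dimensions: $p_1^*\to N$ and $\chi^*_{[2]}=\big((p_1^*)^2+p_2^*\big)/2 \to N(N+1)/2$. A small numeric verification in plain Python (the Schur polynomial $\chi_{[2]}\{p\}=(p_1^2+p_2)/2$ is a standard fact, not taken from the text):

```python
def curly(x):
    """The bracket {x} = x - 1/x used throughout."""
    return x - 1.0 / x

def p_star(k, q, N):
    """Time variable at the topological locus: p_k* = {A^k}/{q^k}, A = q^N."""
    A = q ** N
    return curly(A ** k) / curly(q ** k)

def chi_star_sym2(q, N):
    """chi_[2]{p} = (p_1^2 + p_2)/2 evaluated at the topological locus."""
    return (p_star(1, q, N) ** 2 + p_star(2, q, N)) / 2.0
```

Taking $q$ slightly away from 1 (say $q=1.0001$) and $N=5$ gives $p_1^*\approx 5$ and $\chi^*_{[2]}\approx 15 = 5\cdot 6/2$, as expected for the fundamental and symmetric-square representations of $sl_5$.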
In arborescent calculus the weights are made from the elements of the Racah matrix $\bar S$, while in the theory of differential expansions they are composed into amusing generating functions [@evo] $$F_\lambda^{(m)}(q,A) = \sum_\mu f_\lambda^\mu(q,A)\,\Lambda_\mu^m$$ with $f_\lambda^\emptyset=1$. Each term in the sum has a non-trivial denominator, however the full sum is a Laurent polynomial in $A$ and $q$ for all $m$. Moreover, it vanishes for $m=0$ (unknot), equals one for $m=-1$ (figure eight knot $4_1$) and is a monomial at $m=1$ (trefoil). According to [@Mfact] and [@KMtwist] the $F$-functions are best described in a peculiar hook parametrization of Young diagrams:

[*\[Figure: a Young diagram with the arm and leg lengths $a_k,b_k$ (equivalently $i_k,j_k$) of its diagonal hooks marked\]*]{}

In particular, $$\Lambda_\mu = \Lambda_{(i_1,j_1|i_2,j_2|\ldots)} = \prod_{k=1} \left(Aq^{i_k-j_k}\right)^{2(i_k+j_k+1)}$$ the overall coefficients $$c_\lambda = c_{(a_1,b_1|a_2,b_2|\ldots)} = \prod_{k=1} \left(Aq^{\cdots}\right)^{(a_k+b_k+1)}$$ and $$F_\lambda^{(-1)}=1, \ \ \ \ \ F_\lambda^{(0)}=\delta_{\lambda,\emptyset}, \ \ \ \ \ F_\lambda^{(1)} = (-)^{\sum_{k} (a_k+b_k+1)}\,c_\lambda^2 \ \ \ \ \ \[sumrules1\]$$ Clearly, $c_\lambda$ drops out of the r.h.s. of (\[HRdb1\]). The shape of the coefficients $f_\lambda^\mu$ strongly depends on the number of hooks in $\lambda$ and $\mu$. Currently they are fully known for $\lambda=(a_1,b_1|a_2,0)$, which is enough to get the Racah matrices $\bar S$ for the case $R=[r,r]$ (actually, for this purpose $b_1=0,1$ is sufficient).
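The hook coordinates $(a_k,b_k)$ of this parametrization (arm and leg lengths of the $k$-th diagonal hook) can be computed mechanically from a Young diagram. A minimal Python sketch, with the convention checked against the diagrams $[332]=(2,2|1,1)$ and $[333]=(2,2|1,1|0,0)$ used later in the paper:

```python
def hooks(diagram):
    """Hook (arm, leg) coordinates (a_k, b_k) of a Young diagram.

    The diagram is given as its row lengths, e.g. [3, 3, 2] for [332].
    For the k-th cell on the main diagonal (0-based k), the arm is
    rows[k]-k-1 and the leg is cols[k]-k-1, where cols is the transposed
    diagram (column heights).
    """
    rows = sorted(diagram, reverse=True)
    cols = [sum(1 for r in rows if r > j) for j in range(rows[0])] if rows else []
    result = []
    k = 0
    while k < len(rows) and rows[k] > k:  # cells on the main diagonal
        result.append((rows[k] - k - 1, cols[k] - k - 1))
        k += 1
    return result
```

For example, `hooks([3, 3, 2])` yields `[(2, 2), (1, 1)]`, i.e. $[332]=(2,2|1,1)$, and `hooks([3, 3, 3])` yields the three hooks $(2,2|1,1|0,0)$ of $[333]$.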
$\bullet$ As already mentioned, for the empty diagram $\mu$ one always has $$f_\lambda^{\emptyset}= 1$$ $\bullet$ For single-hook $\lambda$ and thus single-hook $\mu\subset\lambda$ the expressions are still relatively simple and fully factorized: $$f_{(a,b)}^{(i,j)} = g_{(a,b)}^{(i,j)}\, K_{(a,b)}^{(i,j)} \ \ \ \ \ \[f1vsgK1\]$$ with the sign factor $$g_{(a,b)}^{(i,j)} \sim (-)^{i+j+1} \ \ \ \ \ \[gfactors1\]$$ while the factor $K_\mu^\lambda(N)$ involves skew characters, defined by $$\sum_\mu \chi_{\lambda/\mu}\{p'_k\}\,\chi_{\mu}\{p''_k\} = \chi_\lambda\{p'_k+p''_k\} \ \ \ \ \ \[skewdef\]$$ and satisfying the sum rule $$\sum_\mu (-)^{|\mu|}\,\chi_{\lambda/\mu}\,\chi_{\mu^{tr}} = \delta_{\lambda,\emptyset} \ \ \ \ \ \[naivesumrule1\]$$ which follows from (\[skewdef\]) and the transposition law $\chi_\mu\{-p_k\} = (-)^{|\mu|} \chi_{\mu^{tr}}\{p_k\}$, and can be considered as a prototype of (\[sumrules1\]). The other notation in (\[f1vsgK1\]) and (\[gfactors1\]) is: $$D_a=\{Aq^a\}=\{q\}\cdot[N+a], \ \ \ \ \ \bar D_b=\{A/q^b\}=\{q\}\cdot[N-b]$$ where $[n]=\{q^n\}/\{q\}$ is the quantum number, and $$D_a!=\prod_{k=0}^a D_k = \{q\}^{a+1}\cdot[N][N+1]\cdots[N+a], \ \ \ \ \ \ \ \ \bar D_b! = \prod_{k=0}^b \bar D_k = \{q\}^{b+1}\cdot[N][N-1]\cdots[N-b]$$ (note that these products start from $k=0$ and include respectively $a+1$ and $b+1$ factors).
$\bullet$ For two-hook $\lambda=(a_1,b_1|a_2,b_2)$ the formulas are far more involved, and they are different for different number of hooks in $\mu$: f\_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1)]{} = f\^[(a\_1,b\_1)]{}\_[(i\_1,j\_1)]{} \_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1)]{} = g\_[(a\_1,b\_1)]{}\^[(i\_1,j\_1)]{}K\_[(a\_1,b\_1)]{}\^[(i\_1,j\_1)]{}() \_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1)]{} \[f12\] f\_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{} = \_[f\_[(a\_1,b\_1)]{}\^[(i\_1,j\_1)]{}f\_[(a\_2,b\_2)]{}\^[(i\_2,j\_2)]{}]{} \_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{} Non-trivial are the correction factors, $\ {\bf true \ for \ a_2\cdot b_2=0}$: \_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1)]{} = [K]{}\_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1)]{}()  + + \_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1)]{} ()   \[xi1\] and \_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{} = [K]{}\_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{} () \[xi2\] where $\ \delta_x = \left\{\begin{array}{ccc} 1 & {\rm for} & x = 0 \\ 0 & {\rm for} & x\neq 1 \end{array}\right. \ $ and \_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1)]{}() = \_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{}() = \[calK22\] Thus corrections involve a natural modification of $K$-factors and somewhat strange shifts of the argument $N$, i.e. multiplicative shift of $A$ by powers of $q$. These formulas were found in [@Mfact; @KMtwist] for the case when $a_2\cdot b_2=0$ (i.e. when either $b_2=0$ or $a_2=0$). Sufficient for all the simplest non-symmetric rectangular representations $R=[r,r]$ and $R=[2^r]$ are respectively $b_2=0$ and $a_2=0$. Note that underlined expression are the [*arguments*]{} of ${\cal K}$-functions – [*not*]{} additional algebraic factors. Boxes contain projectors on sectors with particular values of $i_1$ and $j_1$. Our goal in this paper is to make the first step towards lifting the restriction $a_2\cdot b_2=0$. 
Namely, we consider the case of the simplest 3-hook $R=[333]$, which has $20$ Young sub-diagrams, of which there are two, $\lambda = [332]=(2,2|1,1)$ and $\lambda=[333]=(2,2|1,1|0,0)$ with $a_2\cdot b_2\neq 0$. The new function $F_{(22|11)}^{(m)}=F_{[332]}^{(m)}$ ==================================================== The diagram $[332]=(22|11)$ is still two-hook, but both $a_2=b_2=1$ are non-vanishing. If we apply just the same formulas (\[f12\])-(\[calK22\]) in this case, the answer will be non-polynomial. However, one can introduce additional correction factors $\eta_\lambda^\mu$ for all the items in the sum over $\mu$ and adjust them to cancel all the singularities. Of $19$ factors non-trivial (different from unity) are just $8$ (we omit the subscript $\lambda=(22|11)$ to simplify the formulas): \^= \^[(00)]{}=\^[(01)]{} =\^[(10)]{}=\^[(02)]{}=\^[(20)]{}=\^[(22)]{}=1\ \ \^[(11)]{} = ,         \^[(12)]{} = ,     \^[(21)]{} =\ \^[(11|00)]{} = ,         \^[(12|00)]{} = \^[(12|01)]{}=,       \^[(21|00)]{}=\^[(21|10)]{}=\ \ \ \^[(22|00)]{}=\^[(22|01)]{}=\^[(22|10)]{} =\^[(22|11)]{}=1 and the resulting expression is A\^[-8]{}F\_[(22|11)]{}\^[(m)]{} = -\_[(00)]{}\^m +\ + \_[(01)]{}\^m +\ + \_[(10)]{}\^m -\ -\_[(02)]{}\^m -\_[(20)]{}\^m - \_[(11)]{}\^m +\ + \_[(12)]{}\^m +\ + \_[(21)]{}\^m -\ - \_[(22)]{}\^m + \_[(11|00)]{}\^m -\ -\_[(12|00)]{}\^m -\_[(21|00)]{}\^m +\ + \_[(12|01)]{}\^m + \_[(21|10)]{}\^m + \_[(22|00)]{}\^m -\ - \_[(22|01)]{}\^m - \_[(22|10)]{}\^m +\ + \_[(22|11)]{}\^m It nicely satisfies the sum rules (\[sumrules1\]). Extension to $F_{(a_1b_1|11)}$ ============================== We can now develop the success with $F_{(22|11)}$ and extend it to other 2-hook diagrams with $a_2\cdot b_2\neq 0$. We actually restrict our attention to the case of $a_2\cdot b_2=1$, i.e. $a_2=b_2=1$. 
In the next case of $F_{(33|11)}$ the correction factors are (again we write just $\eta^\mu$ instead of $\eta_{(33|11)}^\mu$): \^= \^[(00)]{}=\^[(01)]{} =\^[(10)]{}=\^[(02)]{}=\^[(20)]{}= \^[(30)]{} = \^[(03)]{} = \^[(22)]{} = \^[(32)]{}=\^[(23)]{} = \^[(33)]{} =1\ \ \^[(11)]{} = ,         \^[(12)]{} =     \^[(21)]{} =\ \^[(13)]{} =     \^[(31)]{} =\ \^[(11|00)]{} =          \ \^[(12|00)]{} = \^[(12|01)]{}=\^[(13|00)]{} = \^[(13|01)]{}=      \^[(21|00)]{}=\^[(21|10)]{}=\^[(31|00)]{}=\^[(31|10)]{}=\ \ \ \^[(22|00)]{}=\^[(22|01)]{}=\^[(22|10)]{}=\^[(22|11)]{} = \^[(23|00)]{}=\^[(23|01)]{}=\^[(23|10)]{}=\^[(23|11)]{} =\ \^[(32|00)]{}=\^[(32|01)]{}=\^[(32|10)]{}=\^[(32|11)]{}= \^[(33|00)]{}=\^[(33|01)]{}=\^[(33|10)]{}=\^[(33|11)]{}=1 This implies a simple extension of (\[xi1\]) and (\[xi2\]) to arbitrary diagrams $(a_1,b_1|1,1)$, i.e. ${\bf true\ for \ a_2\cdot b_2=0,1}$ are: \_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1)]{} = \_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1)]{}()  + + \_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1)]{} ()   () \^[(1-\_[a\_2b\_2]{})\_[i\_1j\_1-1]{}]{} \[xi1a\] with \[nonskew\] and \_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{} = [K]{}\_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{} () ()\^[\_[i\_1-1]{}(1-\_[a\_1b\_1]{})]{} ()\^[\_[j\_1-1]{}(1-\_[a\_1b\_1]{})]{} \[xi2a\] Formula (\[nonskew\]) means that the coefficient $f^{(11)}_\lambda$ is no longer proportional to the skew character $\chi^*_{\lambda/(11)}$. Interpretation of this deviation remains to be found. Note that for $a_2\cdot b_2=0$ we have just u\_[(a\_1,b\_1|a\_2,b\_2)]{} = [K]{}\_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(1,1)]{}     a\_2b\_2=0 instead of (\[nonskew\]) – as one more manifestation of discontinuity of the formulas, expressed in terms of hook variables. The new function $F_{[333]}^{(m)}=F_{(22|11|00)}^{(m)}$ ======================================================= This $F$-factor is the first, associated with the triple-hook diagram $\lambda$. 
To get an explicit formula we impose the polynomiality requirement on the correction factors $\eta_{(22|11|00)}^\mu$ to the naive analogue of (\[f12\])-(\[calK22\]) for 3-hook diagrams: f\_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1)]{} = f\^[(a\_1,b\_1)]{}\_[(i\_1,j\_1)]{} \_[(a\_1,b\_1|a\_2,b\_2)]{}\^[(i\_1,j\_1)]{} = g\_[(a\_1,b\_1)]{}\^[(i\_1,j\_1)]{}K\_[(a\_1,b\_1)]{}\^[(i\_1,j\_1)]{}(N) \_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1)]{} f\_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{} = \_[f\_[(a\_1,b\_1)]{}\^[(i\_1,j\_1)]{}f\_[(a\_2,b\_2)]{}\^[(i\_2,j\_2)]{}]{} \_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{} f\_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2|i\_3,j\_3)]{} = \_[f\_[(a\_1,b\_1)]{}\^[(i\_1,j\_1)]{}f\_[(a\_2,b\_2)]{}\^[(i\_2,j\_2)]{}f\_[(a\_3,b\_3)]{}\^[(i\_3,j\_3)]{}]{} \_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2|i\_3,j\_3)]{} In the first approximation the correction factors in the 3-hook case are (they are [**never literally true**]{}, before $\eta$-factors are introduced): \_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1)]{} = [K]{}\_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1)]{} ()    + + \_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1)]{}() \_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{} = [K]{}\_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{} () \[xi3prot\] (note that $a_2>0$ and $b_2>0$ for 3-hook diagrams thus the shifts like $N\longrightarrow \underline{N+(i_1+1)\delta_{b_2}-(j_1+1)\delta_{a_2})}$ do not matter) and \_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2|i\_3,j\_3)]{} = \_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2|i\_3,j\_3)]{} () with \_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1)]{}()=\ [K]{}\_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{}()=\ [K]{}\_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2|i\_3,j\_3)]{}()= Correction factors 
$\eta_{(22|11|00)}^\mu$ appear to be \^[(00)]{}=\^[(01)]{}=\^[(10)]{}=\^[(20)]{}=\^[(02)]{}=\^[(22)]{}=1\ \^[(11)]{} =       \^[(12)]{}= =     \^[(21)]{} = =\ \^[(11|00)]{} =\ \^[(12|00)]{} = =\ \^[(21|00)]{} = =\ \^[(12|01)]{} = =\ \^[(21|10)]{} = =\ \^[(22|00)]{}=\^[(22|01)]{}=\^[(22|10)]{}=\ \^[(22|11)]{}=\^[(22|11|00)]{}=1 and the answer for the $F$-function is A\^[-9]{} F\_[(22|11|00)]{}\^[(m)]{} =\ =(1 - \_[(22|11|00)]{}\^m) - (\_[(00)]{}\^m - \_[(22|11)]{}\^m) +\ + (\_[(01)]{}\^m - \_[(22|10)]{}\^m ) + (\_[(10)]{}\^m - \_[(22|01)]{}\^m ) -\ - (\_[(02)]{}\^m - \_[(21|10)]{}\^m ) - (\_[(20)]{}\^m - \_[(12|01)]{}\^m ) -\ - (\_[(11)]{}\^m - \_[(22|00)]{}\^m ) +\ + (\_[(12)]{}\^m - \_[(21|00)]{}\^m ) + (\_[(21)]{}\^m - \_[(12|00)]{}\^m ) -\ - (\_[(22)]{}\^m - \_[(11|00)]{}\^m ) This is actually a Laurent polynomial at all $m$, satisfying (\[sumrules1\]). Extension to $F_{(a_1b_1|11|00)}$ ================================= Again, we can easily extend this result to arbitrary $a_1$ and $b_1$: the substitute of (\[nonskew\]), $\ {\bf true\ for\ a_2\cdot b_2=1,\ a_3\cdot b_3=0}$, is \_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1)]{} =  [K]{}\_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1)]{} () ()\^[\_[i\_1j\_1-1]{}]{}    + + \_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1)]{}() \[xi1b\] \_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{} = \[xi2b\] [K]{}\_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2)]{} () ()\^[\_[i\_1-1]{}]{} ()\^[\_[j\_1-1]{}]{} ()\^[2\_[i\_1j\_1-1]{} + (1-\_[i\_1-1]{})(1-\_[ j\_1-1]{})) \_[i\_2j\_2]{}]{} \_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2|i\_3,j\_3)]{} = \[xi3b\] [K]{}\_[(a\_1,b\_1|a\_2,b\_2|a\_3,b\_3)]{}\^[(i\_1,j\_1|i\_2,j\_2|i\_3,j\_3)]{} () The shift $N \ \longrightarrow \ \underline{N +(i_1+i_2+i_3+3)\cdot \delta_{b_3} - (j_1+j_2+j_3+3)\cdot \delta_{a_3}}$ in the last line is not actually tested by these formulas, because the associated ${\cal K}_{(a_1b_1|11|00)}^{(i_1j_1|11|00)}$ 
do not depend on $A$. The quantity $u_{(a_1b_1|11|00)}$ is given by a literal analogue of (\[nonskew\]): \[nonskew3\]

Racah matrix $\bar S$ for representation $R=[333]$
==================================================

Coming back to the case of $R=[333]$, we can now use (\[HRdb1\]) to get the matrix elements $\bar S^{[333]}_{\mu\nu}$. For this purpose it is technically convenient to substitute the expansion in $\Lambda_\mu^m \Lambda_\nu^n$ by that in $\Lambda_\mu\bar\Lambda_\nu$ with independent $\Lambda$ and $\bar \Lambda$ instead of arbitrary $m$ and $n$. To get a $20\times 20$ matrix we need to enumerate the subdiagrams of $R=[333]$, which are also in one-to-one correspondence with the $20$ irreducible representations in $R\otimes \bar R= [333] \otimes \overline{[333]}$ (each sub-diagram is listed with its hook coordinates):

1: $\emptyset$, 2: $[1]=(00)$, 3: $[11]=(01)$, 4: $[111]=(02)$, 5: $[2]=(10)$, 6: $[21]=(11)$, 7: $[211]=(12)$, 8: $[22]=(11|00)$, 9: $[221]=(12|00)$, 10: $[222]=(12|01)$,

11: $[3]=(20)$, 12: $[31]=(21)$, 13: $[311]=(22)$, 14: $[32]=(21|00)$, 15: $[321]=(22|00)$, 16: $[322]=(22|01)$, 17: $[33]=(21|10)$, 18: $[331]=(22|10)$, 19: $[332]=(22|11)$, 20: $[333]=(22|11|00)$.

Dimensions ${\cal D}_\mu$ of these representations are obtained from the terms with $\nu=\emptyset$ in (\[HRdb1\]), because $\bar S_{\mu\emptyset} = \frac{\sqrt{{\cal D}_\mu}}{d_R}$: in obvious notation $${\cal D}_\mu = d_R^2\cdot \big(H_R,\ \Lambda_\mu\big|\bar\Lambda_\emptyset\big)$$ After that $$\bar S_{\mu\nu} = \frac{d_R}{\sqrt{{\cal D}_\mu {\cal D}_\nu}}\cdot \big(H_R,\ \Lambda_\mu\big|\bar\Lambda_\nu\big)$$ The simplest test of the result is that $\bar S$ is an orthogonal matrix, $$\sum_{\nu=1}^{20} \bar S_{\mu\nu}\, \bar S_{\mu'\nu} = \delta_{\mu\mu'}$$ It is also symmetric. The second exclusive matrix $S^{[333]}$ is then the diagonalizing matrix of $\bar T\bar S\bar T$ [@arbor]: $$\bar T\bar S \bar T= S\, T^{-1} S^{-1}$$ with the known diagonal $T$ and $\bar T$, made from the $q$-powers of the Casimir.
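The orthogonality test stated here is straightforward to implement once the $20\times 20$ matrix is available numerically; a minimal plain-Python sketch (the $2\times 2$ rotation matrix below is a toy stand-in for $\bar S$, used for illustration only):

```python
import math

def is_orthogonal(S, tol=1e-9):
    """Check the row-orthonormality sum_nu S[mu][nu]*S[mu'][nu] = delta_{mu mu'}."""
    n = len(S)
    for mu in range(n):
        for mup in range(n):
            dot = sum(S[mu][nu] * S[mup][nu] for nu in range(n))
            target = 1.0 if mu == mup else 0.0
            if abs(dot - target) > tol:
                return False
    return True

# toy stand-in for Sbar: a 2x2 rotation matrix, orthogonal by construction
c, s = math.cos(0.3), math.sin(0.3)
toy = [[c, s], [-s, c]]
```

Here `is_orthogonal(toy)` holds, while a non-orthogonal matrix such as `[[1, 1], [0, 1]]` fails the check.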
This is actually a linear equation for $S$, $$(\bar T\bar S \bar T)\, S = S\, T^{-1} \ \ \ \ \ \[SfrobS\]$$ which is practically solvable, though the explicit calculation is somewhat tedious. The resulting matrix $S^{[333]}_{\mu\nu}$ should be orthogonal, which fixes the normalization of the solution to (\[SfrobS\]). At variance with $\bar S$, this $S$ is not symmetric. A typical example of a matrix element is $$\bar S^{[333]}_{7,15} = \bar S^{[333]}_{[211],[321]} = -\frac{[5]\cdot \{q\}}{D_4D_2^2D_0D_{-2}D_{-4}}\cdot\sqrt{\frac{D_5D_3}{D_1D_{-1}}}\cdot$$ $$\cdot \Big(A^6q^{-2} -A^4(2q^8+3q^6+2q^4+q^2-3-5q^{-2}-2q^{-4}+2q^{-6}+2q^{-8}+3q^{-10}+q^{-12}) +$$ $$+A^2\left(q^{18}+3q^{16}+4q^{14}+4q^{12}-6q^8-9q^6-5q^4+2q^2+12 +13q^{-2}+5q^{-4}-4q^{-6}-9q^{-8} -7q^{-10}-q^{-12}+4q^{-14}+4q^{-16} +3q^{-18}+q^{-20}\right) -$$ $$-(q^2+q^{-2})(q^{22}+3q^{20}+3q^{18}-q^{16}-8q^{14}-9q^{12}+q^{10}+14q^8+19q^6+6q^4-13q^2-22 -13q^{-2}+6q^{-4}+19q^{-6}+14q^{-8}+q^{-10}-9q^{-12}-8q^{-14}-q^{-16}+3q^{-18}+3q^{-20}+q^{-22}) +$$ $$+A^{-2}(q^{-18}+3q^{-16}+4q^{-14}+4q^{-12}-6q^{-8}-9q^{-6}-5q^{-4}+2q^{-2} +12+13q^2+5q^4-4q^6-9q^8 -7q^{10}-q^{12}+4q^{14}+4q^{16}+3q^{18}+q^{20}) -$$ $$-A^{-4}(2q^{-8}+3q^{-6}+2q^{-4}+q^{-2}-3-5q^2-2q^4+2q^6+2q^8+3q^{10}+q^{12}) +A^{-6}q^2\Big)$$ The polynomial in brackets reduces to $D_0^6=\{A\}^6$ at $q=1$ and to $\ -\left([4][3][2]\right)^3\{q\}^6\ $ at $A=1$. A better quantity for practical calculations is the unnormalized $\bar \sigma_{\mu\nu} = \bar S_{\mu\nu}\cdot \sqrt{{\cal D}_\mu{\cal D}_\nu}$, which does not contain square roots.

Conclusion \[conc\]
===================

The main result of the present letter is an explicit expression for the two previously unknown $F$-functions $F_{(22|11)}^{(m)}$ and $F_{(22|11|00)}^{(m)}$. Most important is the deviation of the coefficient $f_{(22|11)}^{(11)}$ from the skew dimension, even shifted, which is expressed by eq.(\[nonskew\]), see also (\[nonskew3\]).
This new phenomenon explains the failure of previous naive attempts to write down an explicit general expression for $F$ in arbitrary representation: an adequate substitute of the skew characters and an appropriate generalization of the corresponding conjecture in [@KMtwist] are needed for this. The next step in this study should be further extension to $a_2\cdot b_2>1$. The two newly-found functions, if combined with the other $18$, associated with 0-, 1- and 2-hook diagrams $\lambda$ with the property $a_2\cdot b_2=0$, provide an explicit expression for the $[333]$-colored HOMFLY polynomial for all twist and double-braid knots. Moreover, from (\[HRdb1\]) one can read off all the elements of the Racah matrix $\bar S^{[333]}$, while $S^{[333]}$ is then found from (\[SfrobS\]). Thus this paper solves the long-standing problem of evaluating $\bar S^{[333]}$ and $S^{[333]}$. Explicit expressions for these Racah matrices as well as for the $[333]$-colored HOMFLY polynomials of the simplest twist and double-braid knots are available at [@knotebook]. It still remains to evaluate the twist-knot polynomials and Racah matrices for [*generic*]{} rectangular representations; the new step, made in the present paper, provides essential new knowledge about this problem which can help to overcome the existing deadlock.[^1] For additional peculiarities of the [*non-rectangular*]{} case see [@Mnonrect]. The main point there is that representations in $R\otimes \bar R$ are no longer in one-to-one correspondence with the sub-diagrams of non-rectangular $R$. Still, factorization of the coefficients in the differential expansion for double braids persists, and thus the Racah matrices $\bar S$ can still be extracted from knot polynomials, though the procedure becomes more tedious [@MnonrectS].

Acknowledgements {#acknowledgements .unnumbered}
================

This work was performed at the Institute for Information Transmission Problems with the support from the Russian Science Foundation, Grant No.14-50-00150.

G.
Racah, Phys.Rev. [**62**]{} (1942) 438-462\ E.P. Wigner, Manuscript, 1940, in: [*Quantum Theory of Angular Momentum*]{}, pp. 87–133, Acad.Press, 1965; [*Group Theory and Its Application to the Quantum Mechanics of Atomic Spectra*]{}, Acad.Press, 1959\ L.D. Landau and E.M. Lifshitz, [*Quantum Mechanics: Non-Relativistic Theory*]{}, Pergamon Press, 1977\ J. Scott Carter, D.E. Flath, M. Saito, [*The Classical and Quantum 6j-symbols*]{}, Princeton Univ.Press, 1995\ S. Nawata, P. Ramadevi and Zodinmawia, Lett.Math.Phys. [**103**]{} (2013) 1389-1398, arXiv:1302.5143\ A. Mironov, A. Morozov, A. Sleptsov, JHEP 07 (2015) 069, arXiv:1412.8432; Pis’ma v ZhETF, 106 (2017) 607, arXiv:1709.02290 A.Morozov, Nucl.Phys. B911 (2016) 582-605, arXiv:1605.09728 A.Morozov, JHEP 1609 (2016) 135, arXiv:1606.06015 v8 Ya.Kononov and A.Morozov, Theor.Math.Phys. 193 (2017) 1630-1646, arXiv:1609.00143 Ya.Kononov and A.Morozov, Mod.Phys.Lett. A Vol. 31, No. 38 (2016) 1650223, arXiv:1610.04778 A.Morozov, arXiv:1612.00422 A.Morozov, Phys.Lett. B 766 (2017) 291-300, arXiv:1701.00359 J.W.Alexander, Trans.Amer.Math.Soc. 30 (2) (1928) 275-306\ V.F.R.Jones, Invent.Math. 72 (1983) 1 Bull.AMS 12 (1985) 103 Ann.Math. 126 (1987) 335\ L.Kauffman, Topology 26 (1987) 395\ P.Freyd, D.Yetter, J.Hoste, W.B.R.Lickorish, K.Millet, A.Ocneanu, Bull. AMS. 12 (1985) 239\ J.H.Przytycki and K.P.Traczyk, Kobe J Math. 4 (1987) 115-139\ A.Morozov, Theor.Math.Phys. 187 (2016) 447-454, arXiv:1509.04928 A.Mironov, A.Morozov, An.Morozov, P.Ramadevi, V.K. Singh, JHEP [**1507**]{} (2015) 109, arXiv:1504.00371\ S.Nawata, P.Ramadevi and Vivek Kumar Singh, arXiv:1504.00364\ A.Mironov and A.Morozov, Phys.Lett. B755 (2016) 47-57, arXiv:1511.09077\ A. Mironov, A. Morozov, An. Morozov, P. Ramadevi, V.K. Singh and A. Sleptsov, J.Phys. A: Math.Theor. [**50**]{} (2017) 085201, arXiv:1601.04199 H. Itoyama, A. Mironov, A. Morozov and An. Morozov, JHEP 2012 (2012) 131, arXiv:1203.5978 A. Mironov, A. Morozov and An. Morozov, AIP Conf. Proc. 
1562 (2013) 123, arXiv:1306.3197 S.Arthamonov, A.Mironov, A.Morozov, Theor.Math.Phys. 179 (2014) 509-542, arXiv:1306.5682\ S.Arthamonov, A.Mironov, A.Morozov, An.Morozov, JHEP 04 (2014) 156, arXiv:1309.7984\ Ya.Kononov and A.Morozov, JETP Letters 101 (2015) 831-834, arXiv:1504.07146\ C. Bai, J. Jiang, J. Liang, A. Mironov, A. Morozov, An. Morozov, A. Sleptsov, arXiv:1709.09228 P.Dunin-Barkowski, A.Mironov, A.Morozov, A.Sleptsov, A.Smirnov, JHEP 03 (2013) 021, arXiv:1106.4305 http://knotebook.org M.Kameyama, S.Nawata, R.Tao, H.D.Zhang, arXiv:1902.02275 A.Morozov, arXiv:1902.04140 [^1]: Comment to version 3: This problem is now solved in [@KNTZ] and [@RectSbar].
--- author: - 'Venelin Kozhuharov[^1]' title: NA62 experiment at CERN SPS ---

Introduction {#na62vvv:intro}
============

The high-intensity approach of fixed-target experiments, as opposed to the highest-energy collisions, provides a unique opportunity to address the Standard Model through precision measurements. The phenomena in kaon physics allow one to probe both the low energy behaviour of the strong interactions and the high energy weak scale through loop processes. Special attention should be given to the rare kaon decays, since some of them could receive sizeable contributions in the presence of New Physics. In the Standard Model they are suppressed either by the necessity of flavour changing neutral current transitions or by helicity conservation.

The NA62 experiment {#na62vvv:na62}
===================

The NA62 experiment is located at the CERN North Area and uses a primary proton beam from the SPS for the production of a secondary kaon beam. Its first data taking took place in 2007-2008 with the NA48/2 setup and was devoted to the study of the ${K_{e 2}}$ decays. In 2009 the existing experimental apparatus was dismantled in order to allow the construction of the new setup [@bib:na62tdr] devoted to the study of the ${K^{+} \to \pi^{+} \nu \bar{\nu}}$ decay. In 2012 a technical run with beam was accomplished to study the performance of part of the NA62 subdetectors. At present the experiment is in its final construction and preparation phase and will have its pilot physics run in October 2014.

${K_{e 2}}$ data taking setup
-----------------------------

The kaon beam was formed by a primary 400 GeV/c proton beam extracted from the SPS hitting a 400 mm long beryllium target. The secondary particles were selected with a momentum of $(74 \pm 1.4)$ GeV/c, with the possibility of using simultaneous or single positive and negative beams. The fraction of kaons in the beam was about 6%, and they decayed in a 114 m long evacuated tank.
The decay products were registered by the NA48 detector [@bib:na48]. The momentum of the charged particles was measured with resolution $\sigma(p)/p = (0.48 \oplus 0.009\, p\,[GeV/c])\%$ by a spectrometer consisting of four drift chambers separated by a dipole magnet. Precise time information and the trigger condition were provided by a scintillator hodoscope with a time resolution of 150 ps, which was followed by a quasi-homogeneous liquid krypton electromagnetic calorimeter measuring photon and electron energies with resolution $\sigma(E)/E = 3.2\%/ \sqrt{E} \oplus 9\%/ E \oplus 0.42\%$ \[GeV\]. It was also able to provide particle identification based on the energy deposited by different particles with respect to their momentum. A lead bar of 9.2 radiation lengths was placed in front of the LKr during 55% of the data taking to study the muon misidentification probability. Data were collected with three different beam conditions: 65% with only a $K^+$ beam, 8% with only $K^-$, and the rest with simultaneous beams.

${K^{+} \to \pi^{+} \nu \bar{\nu}}$ experimental setup
------------------------------------------------------

The beam and the detector of the NA62 experiment for the ${K^{+} \to \pi^{+} \nu \bar{\nu}}$ data taking are dictated by the main goal - the study of the extremely rare decay ${K^{+} \to \pi^{+} \nu \bar{\nu}}$. The proton intensity from the SPS will be increased by 30% and the secondary positive beam will have a momentum of $75\ GeV/c \pm 1\%$. Its rate will be about 800 MHz and the decay volume is evacuated. The final beam line was tested during the technical run in 2012. ![NA62 experimental layout[]{data-label="fig:na62"}](na62layout-PV.eps){width="\textwidth"} The major detector components are shown in fig. \[fig:na62\] and are: [**KTAG:**]{} A hydrogen-filled threshold Cherenkov counter used for positive kaon identification in the beam at a rate of 45 MHz.
A time resolution of 100 ps was achieved, which is important for the suppression of accidental background. [**Gigatracker:**]{} Three stations of thin silicon pixel detectors for the measurement of the kaon momentum, flight direction and time. The expected resolutions will be $\sigma(p_K)/p_K \sim 0.2\%$ on the momentum, 16 $\mu rad$ on the angle, and 200 ps on the time per station. [**Chanti:**]{} Scintillating anticounters providing a veto against interactions of the beam particles. [**ANTI:**]{} Twelve rings of lead glass counters surrounding the decay region and acting as photon veto detectors (LAV) [@bib:na62-lav] for photon angles above 8.5 $mrad$ with respect to the kaon flight direction. [**Straw spectrometer:**]{} Four chambers of straw tubes separated by the MNP33 dipole magnet will be operated in vacuum in order to provide a momentum resolution of $\sigma(p)/p = (0.3 \oplus 0.008\,p\,[\mathrm{GeV}/c])\%$ with a minimal material budget. [**RICH:**]{} A ring imaging Cherenkov detector will measure the velocity of the charged particles, allowing pions to be separated from muons, and will provide a time resolution better than 100 ps [@bib:rich]. [**CHOD:**]{} A plastic scintillator charged hodoscope will be used in the trigger. [**IRC and SAC:**]{} Shashlyk type veto detectors covering photon angles down to zero (SAC) and also serving to veto photons converted in the upstream material (IRC). [**LKr:**]{} The NA48 liquid krypton calorimeter with renewed readout electronics will serve as a veto for photons with angles from 1.5 to 8.5 $mrad$, with an inefficiency of less than $10^{-5}$ for photons with energies above 10 GeV. [**MUV:**]{} Three muon veto stations based on an iron and scintillator sandwich will provide separation between pions and muons at a level better than $10^{-5}$. Both the KTAG and the Gigatracker are exposed to the full 800 MHz hadron beam, while the rate seen by the downstream detectors is at most 10 MHz.
Probing the lepton universality with $K^{\pm} \to l^{\pm}\nu$ decays ==================================================================== Within the Standard Model the leptonic decays of charged pseudoscalar mesons proceed as tree level processes through W exchange. However, helicity conservation leads to a strong suppression of the electron mode. The Standard Model (SM) expression for the ratio $R_K= \Gamma(Ke2) / \Gamma(K\mu 2)$ is a function of the masses of the participating particles and is given by $$R_K=\frac{m_e^2}{m_{\mu}^2} \left( \frac{m_K^2 - m_e^2}{m_K^2 - m_{\mu}^2} \right)^2 (1+\delta R_K),$$ where the term $ \delta R_K = -(3.79 \pm 0.04) \% $ represents the radiative corrections. In the ratio $R_K$ the theoretical uncertainties on the hadronic matrix element cancel, resulting in an extremely precise prediction $R_K = (2.477 \pm 0.001) \times 10^{-5} $ [@ke2-thnew]. Since the neutrino flavour cannot be distinguished experimentally, the measured ratio is sensitive to possible lepton flavour violation effects. In particular, various LFV extensions of the SM (MSSM, different two Higgs doublet models) predict constructive or destructive contributions to $R_K$ as large as 1% [@Masiero].
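As a quick numerical cross-check, the SM expression above can be evaluated directly; a minimal sketch in Python, taking the PDG masses and the quoted $\delta R_K$ as the only inputs (note that the phase-space ratio enters squared):

```python
# Numerical check of the SM expression for R_K = Gamma(Ke2)/Gamma(Kmu2).
# Inputs: PDG masses in GeV/c^2 and the radiative correction quoted above.
m_e, m_mu, m_K = 0.000511, 0.105658, 0.493677
delta_RK = -0.0379

# helicity-suppression factor times the squared phase-space ratio
RK_sm = (m_e**2 / m_mu**2) * ((m_K**2 - m_e**2) / (m_K**2 - m_mu**2))**2 \
        * (1 + delta_RK)
```

The result lands within a fraction of a percent of the quoted prediction $2.477 \times 10^{-5}$, the residual difference reflecting the precision of the input masses and correction used here.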
Experimentally, the ratio $R_K$ can be expressed as $$R_K = \frac{1}{D}\cdot \frac{N({K_{e 2}})-N_B({K_{e 2}})} {N({K_{\mu 2}}) - N_B({K_{\mu 2}})} \cdot \frac{A({K_{\mu 2}})\times\epsilon_{\mathrm{trig}}({K_{\mu 2}})\times f_\mu} {A({K_{e 2}})\times\epsilon_{\mathrm{trig}}({K_{e 2}})\times f_e} \cdot \frac{1}{f_{LKr}}, \label{RKexp}$$ where $N(K_{\ell 2})$, $\ell=e,\mu$, is the number of selected ${K_{e 2}}$ and ${K_{\mu 2}}$ candidates, $N_B(K_{\ell 2})$ is the number of expected background events, $f_\ell$ is the efficiency for particle identification, $A(K_{\ell 2})$ is the geometrical acceptance obtained from Monte Carlo simulation, $\epsilon_{\mathrm{trig}}$ is the trigger efficiency, $D=150$ is the downscaling factor for ${K_{\mu 2}}$ events and $f_{LKr}$ is the global efficiency of the LKr readout. Both $f_\ell$ and $\epsilon_{\mathrm{trig}}$ are higher than 99%. The analysis was performed in individual momentum bins for the four different data samples - $K^+(noPb)$, $K^+(Pb)$, $K^-(noPb)$, $K^-(Pb)$ - resulting in 40 independent values of $R_K$. The similarity between the two decays allowed systematic effects to cancel in the ratio through the use of common selection criteria. The events were required to have only one reconstructed charged track within the detector geometrical acceptance, with momentum in the interval $13 ~GeV/c < p < 65 ~GeV/c$ and consistent with a kaon decay. The background was additionally suppressed by vetoing events with clusters in the LKr with energy above 2 GeV not associated with the track. The particle identification was based on the $E/p$ variable, where $E$ is the energy deposited in the LKr and $p$ is the momentum measured by the spectrometer. It had to be close to one for electrons and less than 0.85 for muons. Under the assumption of the particle type, the missing mass squared was calculated as $M_{miss}^2 = (P_K - P_l)^2$, where $P_K$ ($P_l$) is the kaon (lepton) four-momentum.
A momentum dependent cut on $M_{miss}^2$ was used. The dominant background contribution in the ${K_{e 2}}$ sample was identified to come from ${K_{\mu 2}}$ events with muons leaving all their energy in the electromagnetic calorimeter. The two decays are well separated below 35 $GeV/c$ track momentum but overlap completely in kinematics for higher values. In order to select clean muon samples the data with the Pb wall were used. The probability of a muon faking an electron was studied separately in the different momentum bins. The estimated background from ${K_{\mu 2}}$ events was $(5.64 \pm 0.20)\%$, with the uncertainty dominated by the statistics used for the determination of $P_{\mu e}^{Pb}$. At low track momentum the most significant background source was identified to be the muon halo. The total background, after also taking into account the structure-dependent and interference parts of the $K^{\pm} \to e^{\pm} \nu \gamma $ decays, was found to be $(10.95 \pm 0.27)\%$. The missing mass distribution for the reconstructed ${K_{e 2}}$ data events, together with the simulation of the signal and backgrounds, is shown in fig. \[fig:ke2\] (a). A total of $145958$ ${K_{e 2}}$ and $4.28\times 10^{7}$ ${K_{\mu 2}}$ candidates were reconstructed. The values of $R_K$ in the individual momentum bins, integrated over the data samples, are shown in fig. \[fig:ke2\] (b). The final result was obtained by a fit to the 40 independent $R_K$ values and is [@bib:na62-ke2] $$R_K = (2.488 \pm 0.007_{stat} \pm 0.007_{syst})\times 10^{-5}.$$ It is consistent with the Standard Model prediction and with the present PDG value [@bib:ke2-pdg] and supersedes the previous [@bib:na62-ke2prel] NA62 result.
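The expression in eq. (\ref{RKexp}) is simple bookkeeping; the way its ingredients combine can be sketched as follows. Every number below is an invented placeholder chosen only for plausible magnitudes, not a value from the NA62 analysis:

```python
# Sketch of eq. (1) for R_K: combining counts, backgrounds, acceptances
# and efficiencies. All numerical inputs are invented placeholders.
def r_k(N_e, B_e, N_mu, B_mu, A_e, A_mu, eps_e, eps_mu,
        f_e, f_mu, D=150, f_lkr=1.0):
    """R_K from counts in one momentum bin (schematic)."""
    return (1.0 / D) * (N_e - B_e) / (N_mu - B_mu) \
           * (A_mu * eps_mu * f_mu) / (A_e * eps_e * f_e) / f_lkr

rk = r_k(N_e=3650, B_e=400, N_mu=1.07e6, B_mu=1.0e3,
         A_e=0.92, A_mu=0.95, eps_e=0.995, eps_mu=0.997,
         f_e=0.99, f_mu=0.998, f_lkr=0.998)
```

In the real analysis this combination is evaluated independently in each of the 40 momentum-bin/data-sample slices before the global fit.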
Tests of the ChPT with $K^{\pm}\to\pi^{\pm}\gamma\gamma$ ======================================================== Within the ChPT the lowest order terms contributing to the decay $K^{\pm}\to \pi^{\pm}\gamma\gamma$ are of order $O(p^4)$ [@bib:pigg-th1]. They comprise the pion and kaon loop amplitudes, which depend on a single unknown constant $\hat{c}$, and a pole amplitude contributing at the level of 5% to the final decay rate. Higher order corrections ($O(p^6)$) have been shown to change the decay spectrum significantly, leading to a non-vanishing differential decay rate at zero diphoton invariant mass [@bib:pigg-th2]. The predicted branching fraction for the $K^{\pm}\to \pi^{\pm}\gamma\gamma$ decay is $\sim 10^{-6}$. It has been studied by the BNL E787 experiment, which observed 31 decay candidates [@bib:pigg-bnl], and by the NA48/2 experiment, which identified 149 event candidates in a three day run with a minimum bias trigger in 2004 [@bib:na48-pigg]. The data used for the study of $K^{\pm}\to \pi^{\pm}\gamma\gamma$ were collected during the 2007 run with a trigger with an effective downscaling of about 20. The signal events were required to have $ z= (m_{\gamma\gamma} / m_K)^2 > 0.2$ in order to suppress the background from ${K^{\pm} \to \pi^{\pm}\pi^0~}$ decays. The reconstructed kaon invariant mass for the event candidates is shown in fig. \[fig:kpgg\] (a). A total of 232 event candidates were selected, with background contaminations of 7% from ${K^{\pm} \to \pi^0\pi^0\pi^{\pm}~}$ and ${K^{\pm} \to \pi^{\pm}\pi^0\gamma}$ with merged clusters. The $z$ spectrum for the NA62 event candidates is shown in fig. \[fig:kpgg\] (b), together with the NA48/2 data and the averaged values. The acceptance and the background contributions were also calculated separately in each $z$ interval.
This allowed a combined NA62 + NA48/2 model-independent branching ratio to be extracted [@bib:na62-pigg] $$Br(K^{\pm}\to \pi^{\pm}\gamma\gamma )_{MI,z>0.2} = (0.965 \pm 0.061_{stat} \pm 0.014_{syst} )\times 10^{-6}.$$ The extraction of $\hat{c}$ was based on a likelihood fit to the data for both the $O(p^4)$ and $O(p^6)$ parametrizations. The final results for $\hat{c}$, both for the NA62 data alone and combined with NA48/2, are shown in table \[tab:pigg-prel\].

  ChPT parametrization   NA62 results                                       Combined NA48/2 + NA62 results
  ---------------------- -------------------------------------------------- --------------------------------------------------
  $O(p^4)$               $\hat{c} = 1.93 \pm 0.26_{stat} \pm 0.08_{syst}$   $\hat{c} = 1.72 \pm 0.20_{stat} \pm 0.06_{syst}$
  $O(p^6)$               $\hat{c} = 2.10 \pm 0.28_{stat} \pm 0.18_{syst}$   $\hat{c} = 1.86 \pm 0.23_{stat} \pm 0.11_{syst}$

  : Results for $\hat{c}$ within the $O(p^4)$ and $O(p^6)$ ChPT parametrizations.[]{data-label="tab:pigg-prel"}

The obtained result does not allow discrimination between the $O(p^4)$ and $O(p^6)$ parametrizations. Using the $O(p^6)$ description and the combined NA48/2 + NA62 value for $\hat{c}$, the branching ratio in the full $z$ kinematic region was found to be $$Br(K^{\pm}\to \pi^{\pm}\gamma\gamma )_{O(p^6)} = (1.003 \pm 0.051_{stat} \pm 0.024_{syst} )\times 10^{-6}.$$ Measurement of $BR({K^{+} \to \pi^{+} \nu \bar{\nu}})$ with 10% precision ========================================================================= Among the rare kaon decays the transitions $K \rightarrow \pi \nu \bar{\nu}$ are extremely attractive. They proceed as FCNC processes and their branching fractions are theoretically very clean, since the hadronic matrix element can be obtained via the isospin symmetry of the strong interactions from the leading decay $K^+ \rightarrow \pi^0 e^+ \nu$ [@ISOSPIN_RELATION]. For the charged kaon mode the NNLO calculations give $Br({K^{+} \to \pi^{+} \nu \bar{\nu}}) = (7.81\pm0.80)\times10^{-11}$ [@pnn-th].
Presently seven ${K^{+} \to \pi^{+} \nu \bar{\nu}}$ events have been observed by the E787 and E949 collaborations in a stopped kaon experiment [@BNL_BR], leading to $Br({K^{+} \to \pi^{+} \nu \bar{\nu}}) = (1.73_{-1.05}^{+1.15})\times 10^{-10}$. This value is twice the SM prediction but still compatible with it due to the large uncertainty. The decay ${K^{+} \to \pi^{+} \nu \bar{\nu}}$ is very sensitive to New Physics models, for which the theoretical predictions vary over an order of magnitude. Thus measuring $Br(K^+ \rightarrow \pi^+ \nu \bar{\nu})$ could also help to distinguish between the different types of new physics [@PINN_BSM] once it is discovered. ![ [ Squared missing mass distribution for the ${K^{+} \to \pi^{+} \nu \bar{\nu}}$ and the main charged kaon decay modes. The signal is multiplied by a factor $10^{10}$ while the backgrounds are scaled according to their branching ratios. ]{} \[fig:kna62\] ](m2miss_log.eps){width="80.00000%"} The measurement which the NA62 collaboration aims to perform is based on the kaon decay in flight. In order to identify the signal events and suppress the background, three techniques will be exploited: kinematics, particle identification and vetoing. Since there is only one observable particle in the final state, the kinematic variable considered for the separation of the decay is the missing mass squared under the pion hypothesis for the charged track. It is defined as $$m_{miss}^2 \simeq m_K^2\left(1-\frac{|P_{\pi}|}{|P_{K}|}\right) + m_{\pi}^2\left(1-\frac{|P_{K}|}{|P_{\pi}|}\right) - |P_{K}||P_{\pi}|\theta_{\pi K}^2$$ With the planned Gigatracker and Straw spectrometer the expected resolution on the missing mass squared is $0.001 ~GeV^2/c^4$. The signal region is defined by the edges of the $m_{miss}^2$ distributions of the kinematically constrained kaon decays $K^+ \to \mu^+ \nu$, $K^+ \to \pi^+ \pi^0$ and $K^+\to \pi^+\pi^+\pi^-$, as shown in fig. \[fig:kna62\].
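The quality of the small-angle approximation above can be checked against the exact four-vector computation; a short sketch, with illustrative track values (75 GeV/c kaon, 40 GeV/c pion at 2 mrad) rather than NA62 data:

```python
import math

# Compare the exact missing mass squared (P_K - P_pi)^2 with the
# small-angle approximation quoted in the text.
m_K, m_pi = 0.493677, 0.139570   # GeV/c^2
p_K, p_pi = 75.0, 40.0           # GeV/c, kaon taken along z
theta = 0.002                    # pion-kaon opening angle, rad

E_K = math.hypot(p_K, m_K)       # sqrt(p^2 + m^2)
E_pi = math.hypot(p_pi, m_pi)
m2_exact = (E_K - E_pi)**2 \
    - (p_K - p_pi * math.cos(theta))**2 - (p_pi * math.sin(theta))**2

m2_approx = m_K**2 * (1 - p_pi / p_K) \
    + m_pi**2 * (1 - p_K / p_pi) - p_K * p_pi * theta**2
```

For these values the two expressions agree to a few times $10^{-6}~GeV^2/c^4$, far below the expected $0.001~GeV^2/c^4$ detector resolution, which is why the approximate form is adequate for the signal-region definition.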
The detector is hermetic for photons with angles up to 50 $mrad$ originating from a $\pi^0$ in the decay region, and the overall photon veto system, composed of the ANTI, LKr, SAC and IRC, provides an inefficiency of less than $10^{-8}$ for a $\pi^0$ coming from a $K^+\rightarrow \pi^+\pi^0$ decay. Decays with muons in the final state (like $K^+\rightarrow \mu^+ \nu$, $K^+\rightarrow \pi^+\pi^-\mu^+ \nu$) will be suppressed using the muon-pion identification based on the RICH and MUV, for which the total inefficiency should be less than $5 \times 10^{-6}$. The presented setup and analysis strategy will allow the NA62 experiment to collect O(100) events in two years of data taking. The construction is well advanced for the start of the experiment in October 2014. Rare and new physics processes with NA62 ======================================== The NA62 experiment will accumulate the largest available charged kaon statistics. Combined with the excellent veto efficiency, particle identification capabilities, and superb momentum and energy resolution, this turns the NA62 experiment into a multipurpose facility able to carry out a diverse physics program devoted to rare processes. Among them are lepton flavour violating kaon decays, searches for new particles (including dark photons and heavy neutrinos), and searches for forbidden kaon and pion decays. In addition, an extensive and high precision study of the $K^+ \to \pi^0\pi^0 l^+ \nu$ and $K^{\pm}\to\pi^{\pm}\gamma\gamma$ decays is also under consideration. Conclusions =========== Rare kaon decays continue to provide valuable input to high energy physics. The NA62 experiment, with its huge statistics, excellent resolution, particle identification, and hermeticity, is the future laboratory for charged kaon physics. So far, a four per mille measurement of $R_K$ and a new higher statistics study of $K^{\pm}\to \pi^{\pm}\gamma\gamma$ have been performed.
The next step is the study of the ${K^{+} \to \pi^{+} \nu \bar{\nu}}$ decay and the measurement of the CKM matrix element $V_{td}$ with 10% precision. The general purpose experimental setup could also allow the study of other rare decays, including lepton flavour and lepton number violating ones. [99]{} F. Hahn [*et al.*]{} \[NA62 Collaboration\], http://cds.cern.ch/record/1404985. \(2007) [433]{}. P. Massarotti [*et al.*]{}, PoS ICHEP [**2012**]{}, 504 (2013). , [205]{} (2010) , 231801 (2007). , 011701 (2006). C. Lazzeroni [*et al.*]{} \[NA62 Collaboration\], Phys. Lett. B [**719**]{}, 326 (2013). , [105]{} (2011). . , [665]{} (1988). , [403]{} (1996). , 4079 (1997). J. R. Batley [*et al.*]{} \[NA48/2 Collaboration\], Phys. Lett. B [**730**]{}, 141 (2014). C. Lazzeroni [*et al.*]{} \[NA62 Collaboration\], Phys. Lett. B [**732**]{}, 65 (2014). , 034017 (2007). 034030 (2011). , 092004 (2009). , arXiv:1012.3893 \[hep-ph\]. [^1]: \ Speaker, for the NA62 Collaboration: G. Aglieri Rinella, F. Ambrosino, B. Angelucci, A. Antonelli, G. Anzivino, R. Arcidiacono, I. Azhinenko, S. Balev, J. Bendotti, A. Biagioni, C. Biino, A. Bizzeti, T. Blazek, A. Blik, B. Bloch-Devaux, V. Bolotov, V. Bonaiuto, D. Britton, G. Britvich, N. Brook, F. Bucci, V. Buescher, F. Butin, E. Capitolo, C. Capoccia, T. Capussela, V. Carassiti, N. Cartiglia, A. Cassese, A. Catinaccio, A. Cecchetti, A. Ceccucci, P. Cenci, V. Cerny, C. Cerri, O. Chikilev, R. Ciaranfi, G. Collazuol, P. Cooke, P. Cooper, G. Corradi, E. Cortina Gil, F. Costantini, A. Cotta Ramusino, D. Coward, G. D’Agostini, J. Dainton, P. Dalpiaz, H. Danielsson, J. Degrange, N. De Simone, D. Di Filippo, L. Di Lella, N. Dixon, N. Doble, V. Duk, V. Elsha, J. Engelfried, V. Falaleev, R. Fantechi, L. Federici, M. Fiorini, J. Fry, A. Fucci, S. Gallorini, L. Gatignon, A. Gianoli, S. Giudici, L. Glonti, A. Goncalves Martins, F. Gonnella, E. Goudzovski, R. Guida, E. Gushchin, F. Hahn, B. Hallgren, H. Heath, F. Herman, E. Iacopini, O. Jamet, P. Jarron, K.
Kampf, J. Kaplon, V. Karjavin, V. Kekelidze, A. Khudyakov, Yu. Kiryushin, K. Kleinknecht, A. Kluge, M. Koval, V. Kozhuharov, M. Krivda, J. Kunze, G. Lamanna, C. Lazzeroni, R. Leitner, R. Lenci, M. Lenti, E. Leonardi, P. Lichard, R. Lietava, L. Litov, D. Lomidze, A. Lonardo, N. Lurkin, D. Madigozhin, G. Maire, A. Makarov, I. Mannelli, G. Mannocchi, A. Mapelli, F. Marchetto, P. Massarotti, K. Massri, P. Matak, G. Mazza, E. Menichetti, M. Mirra, M. Misheva, N. Molokanova, J. Morant, M. Morel, M. Moulson, S. Movchan, D. Munday, M. Napolitano, F. Newson, A. Norton, M. Noy, G. Nuessle, V. Obraztsov, S. Padolski, R. Page, V. Palladino, A. Pardons, E. Pedreschi, M. Pepe, F. Perez Gomez, F. Petrucci, R. Piandani, M. Piccini, J. Pinzino, M. Pivanti, I. Polenkevich, I. Popov, Yu. Potrebenikov, D. Protopopescu, F. Raffaelli, M. Raggi, P. Riedler, A. Romano, P. Rubin, G. Ruggiero, V. Russo, V. Ryjov, A. Salamon, G. Salina, V. Samsonov, E. Santovetti, G. Saracino, F. Sargeni, S. Schifano, V. Semenov, A. Sergi, M. Serra, S. Shkarovskiy, A. Sotnikov, V. Sougonyaev, M. Sozzi, T. Spadaro, F. Spinella, R. Staley, M. Statera, P. Sutcliffe, N. Szilasi, D. Tagnani, M. Valdata-Nappi, P. Valente, V. Vassilieva, B. Velghe, M. Veltri, S. Venditti, M. Vormstein, H. Wahl, R. Wanke, P. Wertelaers, A. Winhart, R. Winston, B. Wrona, O. Yushchenko, M. Zamkovsky, A. Zinchenko
--- abstract: 'In the past few years the field of hadron spectroscopy has seen renewed interest due to the publication, initially mostly from $B$-Factories, of evidence for states that do not match regular spectroscopy, but are rather candidates for bound states with additional quarks or gluons. A huge effort in understanding the nature of these new states and in building a new spectroscopy is ongoing. This report reviews the experimental and theoretical state of the art of heavy quarkonium exotic spectroscopy, with particular attention to the steps towards a global picture.' author: - 'N. Drenska\[ab\], R. Faccini\[ab\], F. Piccinini\[c\], A. Polosa\[b\], F. Renga\[ab\], C. Sabelli\[ab\]' bibliography: - 'SpecNC.bib' title: New Hadronic Spectroscopy --- *[**Acknowledgements**]{}*. We wish to thank C. Bignamini and B. Grinstein for fruitful collaboration, and T. Burns for comments and suggestions on the manuscript.
--- abstract: 'We present results obtained from Strömgren photometry of 13 young ($\sim$30-220 Myr) Magellanic Cloud (MC) clusters, most of them lacking direct metallicity measurements in the literature. We derived \[Fe/H\] values for them from a high-dispersion spectroscopy-based empirical calibration of the Strömgren metallicity-sensitive index $m_{\rm 1}$ for yellow and red supergiants (SGs). Particular care was taken in estimating their respective uncertainties. In order to obtain the mean cluster metallicities, we used the \[Fe/H\] values of selected SGs, which we required to be located within the cluster radii, to be placed in the expected SG region of the cluster colour-magnitude diagrams, and to have \[Fe/H\] values within the FWHM of the observed cluster metallicity distributions. The resulting metallicities for nearly 75 per cent of the cluster sample agree well with the most frequently used values of the mean MCs’ present-day metallicities. The remaining clusters have mean \[Fe/H\] values that fall near the edge of the MC present-day metallicity distributions. When comparing the cluster metallicities with their present positions, we found evidence that supports the claimed recent interaction of the MCs with the Milky Way, which could have scattered some clusters away from their birthplaces. Indeed, we show examples of clusters with metal contents typical of the inner galaxy regions that are now located outside them. Likewise, we found young clusters, at present located in the inner regions of both MCs, formed out of gas that has remained unmixed for several Gyr.' author: - | Andrés E.
Piatti$^{1,2}$[^1], Grzegorz Pietrzyński$^3$, Weronika Narloch$^{4,5}$, Marek Górski$^4$ and Dariusz Graczyk$^5$\ $^{1}$Consejo Nacional de Investigaciones Científicas y Técnicas, Godoy Cruz 2290, C1425FQB, Buenos Aires, Argentina\ $^{2}$Observatorio Astronómico de Córdoba, Laprida 854, 5000, Córdoba, Argentina\ $^3$Nicolaus Copernicus Astronomical Center, 00-716 Warsaw, Poland\ $^4$Departamento de Astronomía, Universidad de Concepción, Casilla 160-C, Chile\ $^5$Millennium Institute of Astrophysics, Santiago, Chile\ $^6$Centrum Astronomiczne im. Miko$\l$aja Kopernika, PAN, Rabiańska 8, 87 - 100 Toruń, Poland date: 'Accepted XXX. Received YYY; in original form ZZZ' title: Metallicity estimates of young clusters in the Magellanic Clouds from Strömgren photometry of supergiant stars --- \[firstpage\] galaxies: individual: Magellanic Clouds – galaxies: star clusters: general Introduction ============ Nearly 100-200 Myr ago the Milky Way experienced its first close passage to the Magellanic Clouds (MCs) [@beslaetal2012]. As a consequence of such an interaction, sudden cluster formation episodes have taken place throughout these galaxies [@bch05; @mk2011; @p18c]. Since clusters share the metallicities of their birthplaces, those younger objects can tell us about the efficiency of the gas mixing within the MCs, the metal enrichment due to the galaxy chemical evolution, the infall of gas from the MCs-Milky Way interaction, etc. Young clusters also describe the most recent structures of these galaxies, where active cluster formation regions can even exist. Young clusters are tracers of the galaxy present-day metallicity distributions. By analysing the broadness of such metallicity distributions and their relationship with the young cluster spatial distribution, we can obtain clues about the effectiveness of galaxy interactions in scattering clusters, and assess whether clusters have formed in an outside-in or inside-out scenario, among other issues.
The number of young clusters (age $\la$ 200 Myr) in the MCs with actual measurements of their metal content is negligible in the literature. Most of the catalogued young clusters have been studied photometrically, using their colour-magnitude diagrams (CMDs) to derive their ages under the assumption that they share the known MCs’ mean present-day metallicities [see, e.g. @getal10; @p17e]. Sometimes, a couple of different \[Fe/H\] values have been chosen to match theoretical isochrones to the cluster CMDs. More recently, bayesian and maximum-likelihood approaches have been implemented to fit thousands of isochrones to the CMDs in order to obtain the best-fitting cluster ages and metallicities [@detal14; @pvp15]. Nevertheless, none of these performs direct measurements of the chemical compositions of the cluster members. With the aim of mitigating the lack of metallicity measurements of young MC clusters, we used here Strömgren photometry of yellow and red supergiants (SGs) to provide, for the first time, accurate mean \[Fe/H\] values for 12 young MC clusters, and for the Small Magellanic Cloud cluster NGC330, whose previous spectroscopic iron abundance served as a reference for our metallicity scale. Details of the data sets obtained and of the careful processing of the images up to the standardised Strömgren photometry are described in Section 2. In Section 3 we deal with the cluster metallicities, how we derived them and thoroughly estimated their uncertainties. We analyse and discuss in Section 4 different implications of the resulting cluster \[Fe/H\] values, in the context of the MCs’ chemical evolution histories and their interaction with each other and with the Milky Way. Finally, Section 5 summarises the main conclusions of this work.
Strömgren photometry data set ============================== The photometric data sets analysed in this work were obtained during an observing campaign aimed at studying the chemical evolution of the MCs from star clusters and field stars (programme ID: SO2008B-0917, PI: Pietrzyński). The images are publicly available at the National Optical Astronomy Observatory (NOAO) Science Data Management (SDM) Archives.[^2] Two different observing runs were carried out (17-19 December 2008 and 16-18 January 2009) with the SOAR Optical Imager (SOI) attached to the 4.1m Southern Astrophysical Research (SOAR) telescope (FOV = 5.25$\arcmin$$\times$5.25$\arcmin$, scale=0.154$\arcsec$/px in binned mode). The images were of excellent quality (typical FWHM $\sim$ 0.6$\arcsec$) and were processed following the SOI’s pipeline guidance available at http://www.ctio.noao.edu/soar/content/soar-optical-imager-soi. In doing this, we used suitable zero and flat-field images obtained during each observing night. Table \[tab:table1\] lists the log of observations for the studied young MC clusters. Other subsamples of clusters were analysed previously to search for intrinsic metallicity spreads among Large Magellanic Cloud (LMC) old globular clusters [@pk2018] and in NGC1978 [@martocchiaetal2018b; @pb2018], and for hints of multiple populations among Small Magellanic Cloud (SMC) intermediate-age clusters [@niederhoferetal2017; @p18b]. We selected the standard stars HD64, HD3417, HD12756, HD22610, HD57568, HD58489, HD66020, TYC 7547-711-1, TYC 7548-698-1, TYC 7583-1011-1, TYC 7583-1622-1, TYC 7626-763-1, TYC 8033-906-1, TYC 8067-207-1, TYC 8104-856-1 and TYC 8104-969-1 [@hm1998; @p2005] to secure the transformation of the instrumental magnitudes to the standard system. Particular care was taken in observing these stars, obtaining images in all the $uby$ filters at small and large hour angles (airmass between 1.02 and 2.20).
Additionally, we observed each star twice at a given airmass, with the aim of placing them in each of the two CCDs used by SOI. As shown in @pb2018, there is an excellent agreement between the independent transformation coefficients from both CCDs. For this reason, we decided to use all the measured stars, regardless of their positions in SOI. The transformation equations fitted are as follows: $v = v_1 + V_{\rm std} + v_2\times X_v + v_3\times (b-y)_{\rm std} + v_4\times m_{\rm 1 std}$,\ $b = b_1 + V_{\rm std} + b_2\times X_b + b_3\times (b-y)_{\rm std}$,\ $y = y_1 + V_{\rm std} + y_2\times X_y + y_3\times (b-y)_{\rm std}$,\ where $v_i$, $b_i$ and $y_i$ are the i-th fitted coefficients, and $X$ represents the effective airmass. The resulting coefficients are listed in Table \[tab:table2\]. The instrumental magnitudes were derived from point-spread-function (PSF) photometry using the routine packages [daophot]{}, [allstar]{}, [daomatch]{} and [daomaster]{} in their stand-alone versions [@setal90]. The PSF of each image was created from a sample of nearly one hundred unsaturated, bright, isolated stars, interactively selected and distributed throughout the entire image. These PSF samples were previously cleaned of fainter neighbours using preliminary PSFs built with the nearly forty best PSF candidates. We adopted a quadratically spatially-varying PSF function for all the images. We applied the created PSFs to the identified stellar sources and took advantage of the subtracted images to identify new fainter stars, which were added to the previous list. The last steps were iterated three times, deriving instrumental magnitudes by simultaneously applying the respective PSF to the enlarged sample of stars. We computed aperture corrections in the range $-$0.04 to $-$0.07 mag. Finally, we inverted the fitted transformation equations to obtain magnitudes in the standard system.
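For a single star, inverting the three transformation equations amounts to solving a small linear system for $V_{\rm std}$, $(b-y)_{\rm std}$ and $m_{\rm 1 std}$. A minimal sketch follows; the coefficients and instrumental magnitudes below are invented placeholders with plausible magnitudes, not the fitted values of Table 2:

```python
import numpy as np

# Invert the three transformation equations for one star.
# Placeholder coefficients: zero point, extinction and colour terms.
v1, v2, v3, v4 = 2.10, 0.30, 0.02, 0.04
b1, b2, b3 = 1.90, 0.17, 0.05
y1, y2, y3 = 1.80, 0.10, 0.03

def standard_mags(v, b, y, X_v, X_b, X_y):
    """Solve for (V, (b-y), m1) in the standard system, given the
    instrumental v, b, y magnitudes and the airmasses of each exposure."""
    A = np.array([[1.0, v3, v4],    # v equation
                  [1.0, b3, 0.0],   # b equation
                  [1.0, y3, 0.0]])  # y equation
    rhs = np.array([v - v1 - v2 * X_v,
                    b - b1 - b2 * X_b,
                    y - y1 - y2 * X_y])
    return np.linalg.solve(A, rhs)  # V_std, (b-y)_std, m1_std

V, by, m1 = standard_mags(v=19.492, b=19.139, y=18.941,
                          X_v=1.2, X_b=1.2, X_y=1.2)
```

With these inputs the solver recovers $(V, (b-y), m_{\rm 1}) = (17.0, 0.70, 0.45)$, the values used to construct the placeholder instrumental magnitudes, which serves as a consistency check of the inversion.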
Errors were estimated from extensive artificial star tests, as previously performed for other subsets of MC clusters imaged during the same observing programme [see @pk2018; @p18b; @pb2018]. In brief, we used the stand-alone [addstar]{} program in the [daophot]{} package [@setal90] to add synthetic stars, generated bearing in mind the colour and magnitude distributions of the stars in the CMD as well as the cluster radial stellar density profile. We added a number of stars equivalent to $\sim$ 5$\%$ of the measured stars in order to avoid significantly more crowding in the synthetic images than in the original ones. We created a thousand different images for each original one. We used the option of entering the number of photons per ADU in order to properly add the Poisson noise to the star images. We then repeated the same steps to obtain the photometry of the synthetic images as described above, i.e., performing three passes with the [daophot/allstar]{} routines. The photometric errors were derived from the magnitude difference between the output and input data of the added synthetic stars using the [daomatch]{} and [daomaster]{} tasks. We found this difference to be typically equal to zero and in all cases smaller than 0.003 mag. The respective rms errors were adopted as the photometric errors. Strömgren metallicities ======================= @gr1992 recommended the following expression to estimate metallicities of SGs: $${\rm [Fe/H]} = \frac{(m_{\rm 1})_o + a_1 \times (b-y)_o + a_2}{a_3 \times (b-y)_o + a_4}$$ where $a_1$ = -1.240$\pm$0.006, $a_2$ = 0.294$\pm$0.030, $a_3$ = 0.472$\pm$0.040 and $a_4$ = -0.118$\pm$0.020, respectively. Notice that $m_{\rm 1}$ = ($v-b$) - ($b-y$). We used eq. (1) for cluster SGs that satisfy the following requirements: i) the SGs lie within the cluster radius [@betal08]. ii) They have intrinsic $(b-y)_o$ colours in the range 0.4 - 1.1 mag, for which eq. (1) is valid.
iii) They fall above the cluster main sequence turnoff in the $V$ versus $b-y$ CMD, where cluster SGs are expected to be distributed. iv) Their individual \[Fe/H\] values are within the FWHM of the metallicity distribution of all SGs complying with the above criteria. This latter requirement helped us to clean the sample of cluster SGs. Notice that field SGs are not homogeneously distributed throughout the observed fields, so that the frequently used procedure of choosing a field region of equal area far away from the cluster as a reference to clean the cluster CMD could be misleading. In addition, field SGs are distributed stochastically in the cluster CMD, so that it may not be straightforward to distinguish them from cluster SGs by considering only their positions in those CMDs. Fig. \[fig:fig1\] shows the CMDs for all the stars within the clusters’ radii with black dots, while selected stars above the cluster turnoffs and with metallicities within the FWHM of the metallicity distributions are drawn with big black and red filled circles, respectively. We extracted from the [*Gaia*]{} archive[^3] parallaxes ($\varpi$) and proper motions in Right Ascension (pmra) and Declination (pmdec) for stars located within 10 arcmin from the centres of our cluster sample, with the aim of adding a further criterion to the membership assessment of the cluster SG selection. To choose cluster stars we constrained our sample to those satisfying the following criteria: i) stars located at the MC distances, i.e. $|\varpi|$ $<$ 3$\sigma(\varpi)$ and $|\varpi|$ $<$ 4.0 mas. We rejected all stars with $\varpi$ not consistent with zero at more than the 3$\sigma$ level [see @vasiliev2018]; ii) stars located within the cluster radii [@betal08]. Unfortunately, we did not find stars with proper motion errors $\le$ 0.3 mas/yr, which correspond to $\sim$ 70 and 85 km/s if the mean Large and Small Magellanic Cloud (L/SMC) distances are used.
Therefore, without the necessary proper motion accuracy, it was not possible to conduct any membership probability analysis. We have highlighted the sample of selected cluster SGs with large red filled circles in the cluster CMDs of Fig. \[fig:fig1\]. We also show their placement in the $(m_{\rm 1})_o$ versus $(b-y)_o$ plane, which includes iso-abundance lines according to eq. (1). In order to estimate the individual metallicities, we first dereddened the measured $b-y$ and $m_{\rm 1}$ colour indices by using the expression given by @cm1976 and the largest $E(B-V)$ value of those retrieved from the @hetal11 [hereafter H11] MC extinction map and from the NASA/IPAC Extragalactic Database (NED). For the reader’s convenience, Table \[tab:table3\] lists both $E(B-V)$ colour excesses. The uncertainties in the \[Fe/H\] values were calculated by propagating every involved error, namely the photometric errors $\sigma (b-y)_o$ and $\sigma (m_{\rm 1})_o$ and the errors in the $a_i$ values ($i= 1,\dots,4$) of eq. (1), according to the expression: $$\sigma{\rm [Fe/H]} = \left[\left(\frac{(b-y)_o}{c}\sigma(a_1)\right)^2 + \left(\frac{1}{c}\sigma(a_2)\right)^2 + \left(\frac{(b-y)_o\,{\rm [Fe/H]}}{c}\sigma(a_3)\right)^2 + \left(\frac{{\rm [Fe/H]}}{c}\sigma(a_4)\right)^2 + \left(\frac{a_1 - a_3\,{\rm [Fe/H]}}{c}\sigma((b-y)_o)\right)^2 + \left(\frac{1}{c}\sigma((m_{\rm 1})_o)\right)^2\right]^{\frac{1}{2}},$$ where $c = a_3(b-y)_o + a_4$. Since $\sigma{\rm [Fe/H]}$ varies from one SG to another within a cluster, we used the well-known maximum likelihood approach described in, e.g., @pm1993 and @walker2006 to derive the mean cluster metallicities and the respective errors. The resulting \[Fe/H\] values are listed in the last column of Table \[tab:table3\].

Analysis and discussion
=======================

As far as we are aware, most of the studied clusters do not have direct estimates of their metallicities.
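The per-star calibration of eq. (1) and the quoted error propagation can be sketched as below; this is an illustrative reimplementation under the stated coefficient values, not the authors' code:

```python
import math

# Coefficients of eq. (1) and their errors, as quoted in the text.
A  = (-1.240, 0.294, 0.472, -0.118)   # a1, a2, a3, a4
SA = (0.006, 0.030, 0.040, 0.020)     # sigma(a1), ..., sigma(a4)

def feh(m1_0, by_0, a=A):
    """[Fe/H] from the dereddened (m1)_o and (b-y)_o indices (eq. 1).
    The calibration is only valid for 0.4 <= (b-y)_o <= 1.1 mag."""
    a1, a2, a3, a4 = a
    return (m1_0 + a1 * by_0 + a2) / (a3 * by_0 + a4)

def sigma_feh(m1_0, by_0, s_m1, s_by, a=A, sa=SA):
    """Quadrature sum of the six error terms written out in the text."""
    a1, a2, a3, a4 = a
    c = a3 * by_0 + a4
    f = feh(m1_0, by_0, a)
    terms = (
        (by_0 / c) * sa[0],           # error in coefficient a1
        (1.0 / c) * sa[1],            # error in coefficient a2
        (by_0 * f / c) * sa[2],       # error in coefficient a3
        (f / c) * sa[3],              # error in coefficient a4
        ((a1 - a3 * f) / c) * s_by,   # photometric error in (b-y)_o
        (1.0 / c) * s_m1,             # photometric error in (m1)_o
    )
    return math.sqrt(sum(t * t for t in terms))
```

For example, a supergiant with $(b-y)_o = 0.7$ and $(m_{\rm 1})_o = 0.3$ yields \[Fe/H\] $\approx$ -1.29 dex.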
From a careful search through the available literature, we realised that only NGC330 has been targeted for a spectroscopic metallicity analysis [see @gr1992 and references therein]. @gr1992 obtained a mean value of \[Fe/H\] = -1.26 dex, in excellent agreement with our present estimate. Note, however, that the recent work by @miloneetal2018 adopted a more metal-rich value (\[Fe/H\] = -0.9 dex). @dirschetal2000 estimated \[Fe/H\] = -0.57 dex for NGC1711, rather different from our derived value (-0.06$\pm$0.05 dex). Notice that they showed a comparison of their metallicities with those from high-dispersion spectroscopy that resulted in differences between 0.0 and 0.8 dex, their values being the more metal-poor ones. For the remaining clusters, previous photometric studies have adopted the accepted mean galaxy present-day metallicities, i.e. \[Fe/H\] = -0.4 and -0.7 dex for the LMC and SMC, respectively [see, e.g. @pg13]. A few photometric studies have tried a couple of different metallicity values while matching isochrones to the cluster CMDs or recovering their formation histories (NGC376, 1844, 1847 and 2136). The middle columns of Table \[tab:table3\] list the values of ages and metallicities we found while searching the literature. By comparing our resulting cluster metallicities with those previously used in the literature, we found some differences ($\Delta$(\[Fe/H\]) $\sim$ 0.3-0.4 dex) that lead us to conclude that the assumption that young MC clusters have metal contents similar to the mean galaxy present-day metallicities is justified only for statistical purposes. By contrast, when young clusters are studied to search for chemical abundance anomalies, light element abundance variations, binary fractions, or extended main sequence turnoffs, among others, knowledge of their actual metallicities could have an impact.
This could be the case, for instance, of NGC1844, NGC1847 and NGC330, for which @miloneetal13, @niederhoferetal15a and @miloneetal2018, respectively, adopted more metal-rich abundances to show evidence of multiple populations. In order to see whether a better tracking of the split main sequences can be achieved, it would be worth trying to match their CMDs with theoretical isochrones with metallicities similar to those derived in this work. In the case of NGC1850, a cluster with a large population of near-critically rotating stars, a slightly more metal-rich value has usually been adopted [@bastianetal2017]. The existence of a spread in metallicity within the younger stellar populations of both MCs is well known. @pg13, using a homogeneous age/metallicity compilation, showed that the FWHM of such a scatter is 0.51 dex for the LMC and 0.32 dex for the SMC [see, also @chetal16; @choudhuryetal2018]; the MC cluster populations also exhibit a noticeable scatter at their younger end [see, also @perrenetal2017]. In this context, most of the clusters studied here are within the expected metallicity range, while others fall at the edge of the known metallicity distributions. This is the case of NGC330, the most metal-poor young SMC cluster known so far (\[Fe/H\] = -1.15 dex). In the LMC, NGC1847 turned out to be the most metal-poor young cluster ever known (\[Fe/H\] = -0.91 dex), while NGC1711 lies at the metal-rich end of the LMC clusters’ metallicities (\[Fe/H\] = -0.06 dex). These relatively extreme metallicity values tell us that the gas out of which these young clusters formed was not well mixed. Note that there has been speculation about the role of infall of unenriched (or less enriched) gas into the MCs leading to an unexpectedly large spread in cluster abundances at a relatively constant age [see, e.g. @dh98].
We looked at the cluster positions in their respective host galaxies in order to search for any link between their derived metallicities and the chemical evolution histories of the MCs, particularly for those with more extreme values. Fig. \[fig:fig2\] depicts with black points the spatial distributions in both MCs of all the clusters catalogued by @betal08. The studied clusters are drawn with large filled circles. As a spatial reference, we have also included the areas defined by @hz09 in the LMC main body (the bar is traced with a light-blue line) and the ellipses proposed by @petal07d as a simple representation of the orientation, dimension and shape of the SMC main body. As can be seen, most of the studied LMC clusters are located along the bar, and a few others in the disc, while the studied SMC clusters are confined to the ellipse with a semi-major axis of $\sim$ 1 degree. In an outside-in galaxy formation scenario – which appears to be the case for both MCs [@meschin14; @rubeleetal2018; @p18c; @p18d] – the inner regions of a galaxy turn out to be more metal-rich than the outer ones. Indeed, from @pg13 we found that the metallicity level of field stars in the outer LMC disc ($\rho >4 \degr$, $<$\[Fe/H\]$>$ = -0.90$\pm$0.20 dex) is on average more metal-poor than that for inner disc field stars ($\rho <4 \degr$, $<$\[Fe/H\]$>$ =-0.50$\pm$0.20 dex). For the SMC, we got $<$\[Fe/H\]$>$ = -1.20$\pm$0.20 dex and -0.70$\pm$0.15 dex for regions with semi-major axes larger and smaller than $1 \degr$, respectively [@p12a]. Star clusters share the metallicities of their birthplaces. Nevertheless, with time they drift away from their birth locations. Interactions and other perturbations may produce additional velocity components. The derived metallicities of NGC330 (\[Fe/H\]= -1.15 dex, light-green circle in Fig. \[fig:fig2\]) and NGC1847 (\[Fe/H\]= -0.91 dex, orange circle in Fig.
\[fig:fig2\]) are typical of stellar populations located in the outer regions of the MCs, although both clusters are projected toward inner regions. Conversely, NGC1711 (red circle in Fig. \[fig:fig2\]), which is projected on to the LMC outer disc, has a metal content (\[Fe/H\]= -0.06 dex) typical of the LMC bar. Recently, @piattietal2018a showed that a recently discovered young cluster located in the outer disc of the LMC possibly reached its present position after being scattered from the innermost LMC regions where it might have been born. This possibility could apply to NGC1711, unless the cluster is the outcome of an episode of recent cluster formation triggered by the first passage of the LMC around the Milky Way, through the ram pressure of Milky Way halo gas [@piattietal2018b]. As for the birthplaces of NGC330 and NGC1847, we can only infer that they were formed from gas that has remained unmixed during the last $\sim$ 4 Gyr in the SMC and $\sim$ 9 Gyr in the LMC. In order to infer these ages, we used the age-metallicity relationships derived by @pg13 [see their figure 6]; we then entered them with the cluster metallicities and read off the corresponding ages. Finally, we searched the literature for radial velocity (RV) measurements. As far as we are aware, RVs are available for NGC330 [149.0$\pm$8.0 km/sec @fb1980], NGC1850 [251.4$\pm$2.0 km/sec @fischeretal1993] and NGC2136 [271.4$\pm$0.4 km/sec @mucciarellietal2012]. Radial velocities are not available for the two anomalous LMC clusters NGC1711 and NGC1847. One of the diagnostic diagrams most frequently used to assess whether a cluster belongs to the LMC disc is that showing the relationship between position angles (PAs) and RVs [@s92; @getal06; @shetal10; @vdmareletal2002; @vdmk14] for a disc-like rotation geometry. We here followed the recipe used by @s92, who converted the observed heliocentric cluster RVs to Galactocentric RVs through eq. (4) in @fw79.
We computed cluster PAs by adopting the LMC disc central coordinates obtained by @vdmk14 from $HST$ average proper motion measurements for stars in 22 fields. We obtained PAs of 308$\degr$.0 and 44$\degr$.0 and Galactocentric RVs of 33.0 km/sec and 95.0 km/sec for NGC1850 and NGC2136, respectively. These values are fully consistent with both clusters belonging to the LMC disc (see figure 7 in @piattietal2018a). As for the SMC, we used the high-resolution HI data from the Australian Square Kilometre Array Pathfinder (ASKAP) obtained by @diteodoroetal2019. We compared NGC330’s RV with that from the ASKAP velocity map (see their figure 1) for the cluster position and found very good agreement.

Conclusions
===========

We obtained Strömgren photometry of selected young MC clusters in order to provide direct estimates of their metal contents, which are noticeably lacking in the literature. The observations of 13 young MC clusters, namely NGC330, 376, 1711, 1844, 1847, 1850, 1863, 1903, 1986, 2065, 2136, IC1611 and Lindsay35, were performed with the SOI attached to the SOAR telescope during two observing runs in December 2008 and January 2009, respectively, as part of an observational programme aimed at studying the chemical evolution of these galaxies from their star clusters and field star populations. In deriving the metallicities of the measured yellow and red SGs we made use of an empirical calibration recommended by @gr1992, based on the Strömgren metallicity-sensitive index $m_{\rm 1}$. We paid particular attention to estimating the metallicity uncertainties, which were calculated by propagating all the involved errors added in quadrature, i.e., those coming from the obtained Strömgren photometry and those published for the employed metallicity calibration.
After a careful selection of yellow and red cluster SGs, on the basis of their positions along the clusters’ lines of sight, their locations in the respective cluster CMDs and their relative placement in the cluster metallicity distribution functions, we estimated mean cluster metallicities by applying a maximum likelihood approach. The derived uncertainties are between 0.04 and 0.15 dex, with an average of 0.08 dex. We found null intrinsic \[Fe/H\] spreads within the studied clusters, with upper limits between 0.05 and 0.24 dex and an average of 0.10 dex. As far as we are aware, only NGC330 has previous metallicity estimates. In particular, the most recent \[Fe/H\] value obtained by @gr1992 as well as those from high-dispersion spectroscopy [@spiteetal1986] are in excellent agreement with that obtained in this work. For the remaining studied clusters, the \[Fe/H\] values derived here are the first metallicity estimates provided so far. In general, the resulting metal abundances agree well with the known mean galaxy present-day metallicities, as expected given the youth of the studied clusters. Nevertheless, there are some clusters whose derived mean \[Fe/H\] values fall toward the edge of the present-day metallicity distribution function. We found that NGC330 and NGC1847 are at present the most metal-poor young clusters in the SMC and LMC, respectively, whereas NGC1711 is one of the most metal-rich in the LMC. When comparing the cluster metallicities with the clusters’ present positions in the galaxies, we found evidence that supports the outside-in formation scenario in both MCs. At the same time, we found that interactions between the MCs, and of the MCs with the Milky Way, could have scattered some clusters away from their birthplaces. Indeed, we show examples of LMC clusters with metal contents typical of the innermost galaxy regions that are now placed in the galaxy outer disc.
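The maximum-likelihood averaging used above can be sketched as follows; this is a minimal illustration in the spirit of Pryor & Meylan (1993), with a brute-force grid search standing in for the usual iterative solution, and is not the authors' actual code:

```python
import math
import random

def ml_mean_spread(x, e):
    """Maximum-likelihood mean and intrinsic spread for measurements x
    with individual errors e: each star contributes a Gaussian of width
    sqrt(e_i^2 + s^2) about the common mean mu."""
    lo, hi = min(x), max(x)
    mu_grid = [lo + (hi - lo) * i / 200 for i in range(201)]
    s_grid = [0.005 * i for i in range(61)]  # intrinsic spread, 0-0.3 dex
    best = None
    for mu in mu_grid:
        for s in s_grid:
            # Negative log-likelihood (constants dropped).
            nll = sum(0.5 * math.log(ei * ei + s * s)
                      + 0.5 * (xi - mu) ** 2 / (ei * ei + s * s)
                      for xi, ei in zip(x, e))
            if best is None or nll < best[0]:
                best = (nll, mu, s)
    return best[1], best[2]

# Toy data: 30 SGs drawn about [Fe/H] = -0.50 dex with 0.10 dex errors
# and zero intrinsic spread, mimicking the null spreads reported here.
random.seed(42)
errors = [0.10] * 30
fehs = [-0.50 + random.gauss(0.0, 0.10) for _ in errors]
mu_ml, s_ml = ml_mean_spread(fehs, errors)
```

Fed with the per-star \[Fe/H\] values and their propagated errors, a recovered spread consistent with zero corresponds to the null intrinsic dispersions quoted above.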
Likewise, we found young clusters, at present located in the inner regions of both MCs, formed out of gas that has remained unmixed since several Gyr ago. Acknowledgements {#acknowledgements .unnumbered} ================ We thank the referee for the thorough reading of the manuscript and timely suggestions to improve it. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This work has made use of data from the European Space Agency (ESA) mission [*Gaia*]{} (<https://www.cosmos.esa.int/gaia>), processed by the [*Gaia*]{} Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the [*Gaia*]{} Multilateral Agreement. We also thank support from the IdP II 2015 0002 64 grant of the Polish Ministry of Science and Higher Education. ![image](fig1a){width="\columnwidth"} ![image](fig1b){width="\columnwidth"} ![image](fig1c){width="\columnwidth"} ![image](fig1d){width="\columnwidth"} \[fig:fig1\] ![image](fig1e){width="\columnwidth"} ![image](fig1f){width="\columnwidth"} ![image](fig1g){width="\columnwidth"} ![image](fig2){width="\textwidth"} \[fig:fig2\] ----------- -------------- ----- ----- ----- ------ ------ ------ Cluster Date $v$ $b$ $y$ $v$ $b$ $y$ NGC330 17 Dec. 2008 400 200 120 1.50 1.49 1.49 19 Dec. 2008 400 160 100 1.48 1.48 1.47 NGC376 18 Dec. 2008 500 300 180 1.53 1.52 1.52 19 Dec. 2008 350 140 90 1.53 1.52 1.52 NGC1711 16 Jan. 2009 350 180 100 1.37 1.38 1.38 NGC1844 17 Jan. 2009 400 140 100 1.27 1.27 1.27 NGC1847 17 Jan. 2009 350 160 100 1.29 1.30 1.30 NGC1850 18 Jan. 2009 400 160 100 1.34 1.34 1.34 NGC1863 18 Jan. 2009 400 180 100 1.32 1.32 1.32 NGC1903 17 Dec. 2008 300 100 60 1.36 1.36 1.36 NGC1986 18 Jan. 
2009 350 180 100 1.53 1.52 1.51 NGC2065 18 Jan. 2009 400 180 90 2.07 2.05 2.04 NGC2136 17 Jan. 2009 450 180 110 1.99 1.97 1.96 IC1611 18 Dec. 2008 500 200 120 1.57 1.56 1.56 Lindsay35 18 Dec. 2008 500 200 120 1.83 1.81 1.81 ----------- -------------- ----- ----- ----- ------ ------ ------ : Log of observations.[]{data-label="tab:table1"} Date Filter coef$_1$ coef$_2$ coef$_3$ coef$_4$ rms -------------- -------- ------------ ------------ ------------ ------------ ------- 17 Dec. 2008 $y$ 0.946 0.118 -0.008 0.010 $\pm$0.015 $\pm$0.009 $\pm$0.015 $b$ 0.959 0.163 0.942 0.002 $\pm$0.003 $\pm$0.002 $\pm$0.003 $v$ 1.137 0.301 2.008 1.028 0.017 $\pm$0.027 $\pm$0.016 $\pm$0.058 $\pm$0.068 18 Dec. 2008 $y$ 0.932 0.122 -0.005 0.010 $\pm$0.015 $\pm$0.009 $\pm$0.016 $b$ 0.942 0.177 0.946 0.008 $\pm$0.014 $\pm$0.009 $\pm$0.014 $v$ 1.122 0.295 1.995 1.026 0.002 $\pm$0.007 $\pm$0.005 $\pm$0.048 $\pm$0.061 19 Dec. 2008 $y$ 0.939 0.107 0.018 0.016 $\pm$0.019 $\pm$0.010 $\pm$0.015 $b$ 0.916 0.169 0.999 0.010 $\pm$0.013 $\pm$0.007 $\pm$0.011 $v$ 1.096 0.286 2.004 1.117 0.010 $\pm$0.015 $\pm$0.009 $\pm$0.030 $\pm$0.038 16 Jan. 2009 $y$ 1.005 0.120 -0.046 0.007 $\pm$0.004 $\pm$0.010 $\pm$0.011 $b$ 1.014 0.170 0.939 0.011 $\pm$0.007 $\pm$0.003 $\pm$0.018 $v$ 1.196 0.290 2.034 0.914 0.007 $\pm$0.005 $\pm$0.010 $\pm$0.032 $\pm$0.028 17 Jan. 2008 $y$ 0.940 0.155 0.012 0.017 $\pm$0.019 $\pm$0.023 $\pm$0.090 $b$ 0.957 0.201 0.931 0.013 $\pm$0.015 $\pm$0.014 $\pm$0.058 $v$ 1.194 0.295 2.025 0.950 0.010 $\pm$0.010 $\pm$0.007 $\pm$0.049 $\pm$0.058 18 Jan. 2008 $y$ 1.003 0.132 -0.035 0.013 $\pm$0.013 $\pm$0.007 $\pm$0.023 $b$ 1.013 0.184 0.916 0.008 $\pm$0.008 $\pm$0.004 $\pm$0.014 $v$ 1.194 0.300 2.018 0.987 0.012 $\pm$0.032 $\pm$0.016 $\pm$0.097 $\pm$0.092 [@lcccccc]{}Cluster & & Age (Myr) & \[Fe/H\] (dex) & Ref. 
& \[Fe/H\] (dex)\ & H11 & NED & & & &\ \ NGC330 &— & 0.03 & 40 & -0.90 & 1 & -1.15$\pm$0.06\ NGC376 & 0.03& 0.03 & 28 & -0.60 & 2 & -0.55$\pm$0.09\ IC1611 & 0.05& 0.03 & 100 & -0.70 & 6 & -0.80$\pm$0.09\ Lindsay35 & 0.04& 0.03 & 220 & -0.70 &11 & -0.85$\pm$0.15\ \ NGC1711 & 0.07& 0.06 & 50 & -0.57 & 12 & -0.06$\pm$0.05\ NGC1844 & 0.04& 0.06 & 150 & -0.20 & 4 & -0.50$\pm$0.11\ NGC1847 & 0.05& 0.06 & 50 & -0.40 & 3 & -0.91$\pm$0.09\ NGC1850 & 0.06& 0.06 & 80 & -0.40 & 5 & -0.53$\pm$0.04\ NGC1863 & 0.05& 0.06 & 40 & -0.40 & 8 & -0.53$\pm$0.09\ NGC1903 & 0.07& 0.06 & 100 & -0.40 & 9 & -0.60$\pm$0.05\ NGC1986 & 0.05& 0.06 & 70 & — &10 & -0.46$\pm$0.06\ NGC2065 & 0.10& 0.06 & 100 & — & 7 & -0.40$\pm$0.06\ NGC2136 & 0.07& 0.06 & 124 & -0.50 & 3 & -0.51$\pm$0.08\ Ref.: (1) @miloneetal2018; (2) @sabbietal2011; (3) @niederhoferetal15a; (4) @miloneetal13; (5) @bastianetal2017; (6) @petal07d; (7) @asadetal2016; (8) @petal03; (9) @petal15b; (10) @ef1985; (11) @pietal08; (12) @dirschetal2000. \[lastpage\] [^1]: E-mail: andres@oac.unc.edu.ar [^2]: http://www.noao.edu/sdm/archives.php [^3]: http://gea.esac.esa.int/archive/
--- abstract: 'In this paper the authors produce a projective indecomposable module for the Frobenius kernel of a simple algebraic group in characteristic $p$ that is not the restriction of an indecomposable tilting module. This yields a counterexample to Donkin’s longstanding Tilting Module Conjecture. The authors also produce a Weyl module that does not admit a $p$-Weyl filtration. This answers an old question of Jantzen, and also provides a counterexample to the $(p,r)$-Filtration Conjecture.' address: - | Department of Mathematics, Statistics and Computer Science\ University of Wisconsin-Stout\ Menomonie\ WI 54751, USA - | Department of Mathematics\ University of Georgia\ Athens\ GA 30602, USA - | Department of Mathematics and Statistics\ University of South Alabama\ Mobile\ AL 36688, USA - | Department of Mathematical Sciences\ Georgia Southern University\ Statesboro, GA 30458, USA author: - 'Christopher P. Bendel' - 'Daniel K. Nakano' - Cornelius Pillen - Paul Sobaje date: - - title: 'Counterexamples to the Tilting and $(p,r)$-Filtration Conjectures' --- [^1] [^2] [^3] Introduction ============ Let $G$ be a semisimple, simply connected algebraic group over an algebraically closed field of characteristic $p>0$ and ${\mathfrak g}$ be its Lie algebra. Restricted representations for the Lie algebra ${\mathfrak g}$ are equivalent to representations for the first Frobenius kernel $G_{1}$. In the 1960s Curtis showed that the simple $G_{1}$-modules lift to simple modules for $G$. Later, Humphreys and Verma investigated the projective indecomposable modules for $G_{1}$ and asked whether these modules have a compatible $G$-structure. This statement was verified for $p\geq 2h-2$ (where $h$ is the Coxeter number) by work of Ballard [@B] and Jantzen [@J]. For over 50 years, it has been anticipated that the Humphreys-Verma Conjecture would hold for all $p$. In 1990, Donkin presented a series of conjectures at MSRI. 
One of the conjectures, known as the Tilting Module Conjecture, states that a projective indecomposable module for $G_r$ can be realized as an indecomposable tilting $G$-module (see Conjecture \[tilting\]). Like the Humphreys-Verma Conjecture, the Tilting Module Conjecture holds for $p\geq 2h-2$ and is hoped to be valid for all $p$. Recently, the Tilting Module Conjecture has been shown to be related to another one of Donkin’s conjectures involving good $(p,r)$-filtrations. A more detailed exposition of the connections is presented in Section \[S:conjectures\]. The Tilting Module Conjecture has taken on additional importance following work by Achar, Makisumi, Riche, and Williamson [@AMRW], who have shown that when $p > h$, the characters of indecomposable tilting modules can be given via $p$-Kazhdan-Lusztig polynomials, confirming a conjecture by Riche and Williamson [@RW]. When $p \ge 2h-2$, the Tilting Module Conjecture then allows one to deduce the characters of simple $G$-modules. The authors of [@AMRW] credit Andersen with this observation. The goal of this paper is to present counterexamples to the conjectures and questions stated in Section \[S:conjectures\]. In this subsection, let $G$ be a simple algebraic group whose root system is of type $G_2$ and $p=2$. In particular, we

- present a counterexample to the Tilting Module Conjecture - see Theorem \[tilt:no\];

- construct a counterexample to one direction of Donkin’s Good $(p,r)$-Filtration Conjecture (i.e., Conjecture \[donkinconj\]($\Leftarrow$)) - see Theorem \[T:no2good\] and Section \[S:moduleM\];

- give an example of a costandard/induced module $\nabla({\lambda})$ that does not admit a good $(p,r)$-filtration - see Theorem \[T:no2good\].

Specifically, we demonstrate that there does not exist a good $2$-filtration for the induced module $\nabla(2,1)$.[^4] This gives a negative answer to an open question of Jantzen [@J], and this module is also a counterexample for (1.2.2).
As a consequence of these results, we prove that the indecomposable tilting module $T(2,2)$ is decomposable over the first Frobenius kernel of $G$. We present a formal proof of this fact using information about extensions of simple $G$-modules of small highest weights. [^5] Acknowledgements ---------------- The authors would like to thank Henning H. Andersen and Jens C. Jantzen for useful comments and suggestions on an earlier version of this manuscript. Preliminaries ============= Notation. --------- The notation will follow the conventions in [@BNPS Section 2.1], most of which follow those in [@rags] (though our notation for induced and Weyl modules follows the costandard and standard module conventions in highest weight category literature). Let $G$ be a connected, semisimple algebraic group scheme defined over ${\mathbb F}_{p}$ and $G_{r}$ be its $r$th Frobenius kernel. Let $X_{+}$ denote the dominant weights for $G$, and $X_{r}$ be the $p^{r}$-restricted weights. For $\lambda\in X_{+}$, there are four fundamental classes of $G$-modules (each having highest weight $\lambda$): $L(\lambda)$ (simple), $\nabla(\lambda)$ (costandard/induced), $\Delta(\lambda)$ (standard/Weyl), and $T(\lambda)$ (indecomposable tilting). A $G$-module $M$ has a [*good filtration*]{} (resp. [*Weyl filtration*]{}) if and only if $M$ has a filtration with factors of the form $\nabla(\mu)$ (resp. $\Delta(\mu)$) for suitable $\mu\in X_+$. For $\lambda\in X_+$ with unique decomposition $\lambda = \lambda_0 + p^r\lambda_1$ with $\lambda_0\in X_r$ and $\lambda_1\in X_+$, define $\nabla^{(p,r)}(\lambda) = L(\lambda_0)\otimes \nabla(\lambda_1)^{(r)}$ where $(r)$ denotes the twisting of the module action by the $r$th Frobenius morphism. Similarly, set $\Delta^{(p,r)}(\lambda) = L(\lambda_0)\otimes \Delta(\lambda_1)^{(r)}$. A $G$-module $M$ has a [*good $(p,r)$-filtration*]{} (resp. 
[*Weyl $(p,r)$-filtration*]{}) if and only if $M$ has a filtration with factors of the form $\nabla^{(p,r)}(\mu)$ (resp. $\Delta^{(p,r)}(\mu)$) for suitable $\mu\in X_+$. In the case when $r=1$, we often refer to good $(p,1)$-filtrations as good $p$-filtrations. Let $\rho$ be the sum of the fundamental weights and $\text{St}_r = L((p^r-1)\rho)$ (which is also isomorphic to $\nabla((p^r-1)\rho)$ and $\Delta((p^r-1)\rho)$) be the $r$th Steinberg module. For $\lambda\in X_{r}$, let $Q_{r}(\lambda)$ denote the projective cover (equivalently, injective hull) of $L(\lambda)$ as a $G_{r}$-module. If $\lambda\in X_{r}$, set $\hat{\lambda}=2(p^{r}-1)\rho+w_{0}\lambda$ where $w_{0}$ is the long element in the Weyl group $W$. Let $M$ be a finite-dimensional $G$-module, and let $$M\supseteq \text{rad}_{G} M \supseteq \text{rad}^{2}_{G} M \supseteq \dots \supseteq \{0\}$$ be the radical series of $M$. Moreover, let $$\{0\} \subseteq \text{soc}_{G} M \subseteq \text{soc}^{2}_{G} M \subseteq \dots \subseteq M$$ be the socle series for $M$. One can similarly define such filtrations for $G_{r}$-modules. The Conjectures. {#S:conjectures} ---------------- In the early 1970s Humphreys and Verma presented the following conjecture on the lifting of $G$-structures on the projective modules for $G_{r}$. \[lifting\] For $\lambda\in X_{r}$, the $G_{r}$-module structure on $Q_{r}(\lambda)$ can be lifted to $G$. The conjecture was first verified by Ballard for $p\geq 3h-3$ [@B] and then by Jantzen for $p\geq 2h-2$ [@J], who further showed under this improved bound that the $G$-structure was unique up to isomorphism. Later, at a conference at MSRI in 1990, Donkin presented the following conjecture, predicting that a $G$-module structure on $Q_{r}(\lambda)$ arises from a specific tilting module which must be *the* $G$-module structure whenever uniqueness of $G$-structure holds. \[tilting\] For all $\lambda\in X_{r}$, $T(2(p^{r}-1)\rho+w_{0}\lambda)|_{G_{r}}=Q_{r}(\lambda)$. 
Conjecture \[tilting\] holds for $p\geq 2h-2$ and the proof under this bound entails locating one particular $G$-summand of $\text{St}_{r}\otimes L(\lambda)$. At the same conference at MSRI, another conjecture was introduced by Donkin that interrelates good filtrations with good $(p,r)$-filtrations via the Steinberg module. \[donkinconj\] Let $M$ be a finite-dimensional $G$-module. Then $M$ has a good $(p,r)$-filtration if and only if $\operatorname{St}_r\otimes M$ has a good filtration. We denote the two directions of the statement as follows:

- Conjecture \[donkinconj\]($\Rightarrow$): If $M$ has a good $(p,r)$-filtration, then $\operatorname{St}_r\otimes M$ has a good filtration.

- Conjecture \[donkinconj\]($\Leftarrow$): If $\operatorname{St}_r\otimes M$ has a good filtration, then $M$ has a good $(p,r)$-filtration.

Conjecture \[donkinconj\]($\Rightarrow$) is equivalent to $\text{St}_{r}\otimes L(\lambda)$ being a tilting module for all $\lambda\in X_{r}$. Andersen [@And] and later Kildetoft and Nakano [@KN] verified Conjecture \[donkinconj\]($\Rightarrow$) when $p\geq 2h-2$. In a recent paper, the authors lowered the bound to $p \geq 2h-4$ (cf. [@BNPS]). For rank 2 groups (including $G_{2}$), Conjecture \[donkinconj\]($\Rightarrow$) was proved for all $p$ in [@KN] and [@BNPS]. There are also strong relationships, established by Kildetoft and Nakano [@KN] and also by Sobaje [@So], between these conjectures, given by the following hierarchy of implications:  \[donkinconj\]   $\Rightarrow$     \[tilting\]   $\Rightarrow$    \[donkinconj\]($\Rightarrow$). While we will provide counterexamples to Conjecture \[tilting\] and the full Conjecture \[donkinconj\], we remark that Conjecture \[donkinconj\]($\Rightarrow$) may still hold for all $p$. A special case of Conjecture \[donkinconj\]($\Leftarrow$) was earlier posed by Jantzen [@J]. \[Jantzen-nabla\] For $\lambda\in X_{+}$, does $\nabla(\lambda)$ admit a good $(p,r)$-filtration?
Parshall and Scott affirmatively answered the aforementioned question if $p \ge 2h-2$ and the Lusztig Conjecture holds for the given prime and group [@PS]. Recently, Andersen [@And2] has shown this for $p \geq (h-2)h$. Weyl modules and good $(p,r)$-filtrations for $G_{2}$ {#S:Weylandgood} ===================================================== Simple and Projective Modules ----------------------------- Assume throughout this section (and most of the remainder of the paper) that the root system of $G$ is of type $G_2$ and that the prime $p=2$. We follow the Bourbaki ordering of the simple roots: $\alpha_1$ is the short root and $\alpha_2$ is the long root. For $a,b \in \mathbb{Z}$, we denote by $(a,b)$ the weight $a\varpi_1+b\varpi_2$, where $\varpi_1$ and $\varpi_2$ are the fundamental dominant weights. The set of restricted weights is $$X_1 = \{(0,0), (1,0), (0,1), (1,1)\}.$$ Let ${\operatorname{St}}=\text{St}_{1}$ denote the first Steinberg module $L(1,1)$. The module $L(0,1) \cong \nabla(0,1) \cong \Delta(0,1)$ is the $14$-dimensional adjoint representation. Among the four costandard $G$-modules of restricted highest weight, only $\nabla(1,0)$ is not simple, and we have that $\nabla(1,0)/L(1,0) \cong k$. Every simple $G$-module is self-dual, and the weight lattice and root lattice coincide. Since the characters of the simple $G$-modules of restricted highest weight are known here, it is possible to compute directly the dimensions of the projective indecomposable $G_1$-modules. We recall in Table \[table:1\] some of the information provided by Humphreys in [@Hu 18.4, Table 4], originally due to Mertens [@M]. 
${\lambda}$ $\dim L({\lambda})$ $\dim Q_1({\lambda})$ ------------- --------------------- ----------------------- $(0,0)$ $1$ $36\cdot 64$ $(1,0)$ $6$ $12\cdot 64$ $(0,1)$ $14$ $6\cdot 64$ $(1,1)$ $64$ $64$ : Dimensions of simple and projective $G_1$-modules[]{data-label="table:1"} $\text{Ext}^{1}$-calculations {#S:Ext1} ----------------------------- In our analysis of the structure of the Weyl modules we will need the following $\text{Ext}^{1}$-calculations that appear in Dowd and Sin [@DS Lemma 3.3], part (c) of which dates back to work of Jantzen [@J91]. \[DS-Ext\] One has the following isomorphisms as $G$-modules: - ${\operatorname{Ext}}_{G_1}^1(L(1,0),L(0,1))= 0$ - ${\operatorname{Ext}}_{G_1}^1(L(0,1),L(0,1)) = 0 $ - ${\operatorname{Ext}}_{G_1}^1(k,L(0,1)) \cong \nabla(1,0)^{(1)}$. Decomposition of ${\operatorname{St}}\otimes L(\lambda)$, $\lambda\in X_{1}$ ---------------------------------------------------------------------------- Recall that ${\operatorname{St}}$ is projective over the first Frobenius kernel $G_1$. Hence, for ${\lambda}\in X_1$, ${\operatorname{St}}\otimes L({\lambda})$ is also projective over $G_1$. As the highest weight of ${\operatorname{St}}\otimes L({\lambda})$ is $\rho + {\lambda}= 2\rho - (\rho - {\lambda})$, which is the same as that of $Q_1(\rho - {\lambda})$, the module $Q_1(\rho - {\lambda})$ is necessarily a $G_1$-summand of ${\operatorname{St}}\otimes L({\lambda})$. The following proposition gives a precise decomposition of ${\operatorname{St}}\otimes L({\lambda})$ for each ${\lambda}\in X_1$. \[St:tensor\] We have the following decompositions into projective indecomposable modules over $G_1$: - ${\operatorname{St}}\otimes k \cong {\operatorname{St}}$ - ${\operatorname{St}}\otimes L(1,0) \cong Q_1(0,1)$ - ${\operatorname{St}}\otimes L(0,1) \cong Q_1(1,0) \oplus {\operatorname{St}}^{\oplus 2}$ - ${\operatorname{St}}\otimes {\operatorname{St}}\cong Q_1(0,0) \oplus Q_1(0,1)^{\oplus 2} \oplus {\operatorname{St}}^{\oplus 16}$. 
The first isomorphism is immediate, and the second follows by the module dimensions given in Table \[table:1\]. To get the other two, we use the fact that for any $G$-module $M$, $${\ensuremath{\operatorname{Hom}}}_{G_1}({\operatorname{St}}, {\operatorname{St}}\otimes M) \cong {\ensuremath{\operatorname{Hom}}}_{G_1}({\operatorname{St}}\otimes {\operatorname{St}}, M) \cong M^{T_1},$$ where $T_1$ is the Frobenius kernel of the maximal torus $T$. Now the weight $0$ appears twice in $L(0,1)$, so that ${\operatorname{St}}^{\oplus 2} \subseteq {\operatorname{St}}\otimes L(0,1)$. There is also an embedding of $L(1,0)$ into ${\operatorname{St}}\otimes L(0,1)$. The dimensions in Table \[table:1\] then imply that (c) holds. Finally, the $G_1$-socle of ${\operatorname{St}}\otimes {\operatorname{St}}$ is determined by all $L({\lambda})^{T_1}$ for ${\lambda}\in X_1$. Using a table of weights for $G$-modules (see for example [@L]) and the fact that ${\operatorname{St}}\otimes {\operatorname{St}}$ is a tilting module, one finds that $${\operatorname{soc}}_{G_1} ({\operatorname{St}}\otimes {\operatorname{St}}) \cong k \oplus L(0,1)^{\oplus 2} \oplus ({\operatorname{St}}\otimes T(1,0)^{(1)})^{\oplus 2},$$ when viewed as a $G$-module. Note that ${\operatorname{St}}\otimes T(1,0)^{(1)} \cong {\operatorname{St}}^{\oplus 8}$ as a $G_1$-module, proving (d). For $\lambda\in X_{1}$, we know that ${\operatorname{St}}\otimes L(\lambda)$ is a tilting module [@KN] of highest weight $\rho + {\lambda}$. Hence, the indecomposable tilting module $T(\rho + {\lambda})$ embeds in ${\operatorname{St}}\otimes L(\lambda)$. Furthermore, the $G_1$-Steinberg block component of any $G$-module splits off as a summand over $G$. Thus we conclude from Proposition \[St:tensor\]: \[T:Q\] Over $G_1$ there are isomorphisms - $T(1,1)\cong {\operatorname{St}}$ - $T(2,1) \cong Q_1(0,1)$ - $T(1,2) \cong Q_1(1,0)$. 
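For the reader's convenience, the decompositions in Proposition \[St:tensor\] can be checked term by term against the dimensions in Table \[table:1\] ($\dim \operatorname{St} = 64$, $\dim L(1,0) = 6$, $\dim L(0,1) = 14$); this count is added here as a consistency check and is not part of the original argument:

```latex
\begin{align*}
\dim\bigl(\operatorname{St}\otimes L(1,0)\bigr) &= 64\cdot 6 = 384
  = 6\cdot 64 = \dim Q_1(0,1),\\
\dim\bigl(\operatorname{St}\otimes L(0,1)\bigr) &= 64\cdot 14 = 896
  = 12\cdot 64 + 2\cdot 64 = \dim Q_1(1,0) + 2\dim \operatorname{St},\\
\dim\bigl(\operatorname{St}\otimes \operatorname{St}\bigr) &= 64\cdot 64 = 4096
  = 36\cdot 64 + 2\cdot(6\cdot 64) + 16\cdot 64
  = \dim Q_1(0,0) + 2\dim Q_1(0,1) + 16\dim \operatorname{St}.
\end{align*}
```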
One can show that these are the unique $G$-structures on these modules, by showing that any $G$-structure on $Q_1(1,0)$ or on $Q_1(0,1)$ must admit a good filtration (a more detailed explanation of this will be provided in a forthcoming paper). There exists a surjective homomorphism of $G$-modules $$T(2,1) \twoheadrightarrow \nabla(2,1).$$ Since $T(2,1) \cong Q_1(0,1)$, $L(0,1)$ is its unique semisimple quotient over $G_1$, and therefore the same holds over $G$ since every simple $G$-module is semisimple over $G_1$. These facts are then true of its homomorphic image $\nabla(2,1)$. That is, $$\textup{rad}_{G_1} \nabla(2,1) = \textup{rad}_G \nabla(2,1)$$ and $$\nabla(2,1)/\textup{rad}_G \nabla(2,1) \cong L(0,1).$$ Since $T(2,1) \cong Q_1(0,1)$ as a $G_1$-module, the $G_1$-socle of $T(2,1)$ is $L(0,1)$. We now want to compute the second layer of the radical series of $\nabla(2,1)$. This will be accomplished by calculating the second socle layer of $T(2,1)$ using the ${\operatorname{Ext}}^1$-results of Proposition \[DS-Ext\].

\[P:socle-radical\] There exist the following isomorphisms of $G$-modules:

- ${\operatorname{soc}}_{G_1}^2 T(2,1)/ {\operatorname{soc}}_{G_1} T(2,1) \cong \nabla(1,0)^{(1)}$

- ${\operatorname{soc}}_{G}^2 T(2,1)/{\operatorname{soc}}_{G}T(2,1)\cong L(1,0)^{(1)}$

- $\textup{rad}_G \nabla(2,1)/\textup{rad}_G^2 \nabla(2,1) \cong L(1,0)^{(1)}$.

\(a) and (b): For ${\lambda}\in X_{1}$, one has isomorphisms $$\begin{aligned} {\ensuremath{\operatorname{Hom}}}_{G_1}(L({\lambda}),T(2,1)/L(0,1)) & \cong {\ensuremath{\operatorname{Hom}}}_{G_1}(L({\lambda}),Q_1(0,1)/L(0,1))\\ & \cong {\operatorname{Ext}}^1_{G_1}(L({\lambda}),L(0,1)),\end{aligned}$$ where the first isomorphism holds since $T(2,1) \cong Q_1(0,1)$, and the second comes from degree shifting in cohomology. Proposition \[DS-Ext\] then establishes that $${\operatorname{soc}}_{G_1}^2 T(2,1)/{\operatorname{soc}}_{G_1} T(2,1)$$ is $7$-dimensional and is trivial as a $G_1$-module.
As a $G$-module, then, its only possible composition factors are $k$ and $L(1,0)^{(1)}$. Since $k$ does not extend $L(0,1)$ nontrivially over $G$, we conclude that $${\operatorname{soc}}_{G}^2 T(2,1)/{\operatorname{soc}}_{G} T(2,1) \cong L(1,0)^{(1)},$$ and that $${\operatorname{soc}}_{G_1}^2 T(2,1)/{\operatorname{soc}}_{G_1} T(2,1) \cong \nabla(1,0)^{(1)}$$ (which agrees with the $G$-module structure given in Proposition \[DS-Ext\]; we include this extended argument to be precise about how the $G$-module structure is inferred).

(c): Every tilting $G$-module and every simple $G$-module is self-dual, and $\Delta(2,1)^* \cong \nabla(2,1)$, so we will work in the dual situation. We have that $\Delta(2,1) \subseteq T(2,1)$, therefore $${\operatorname{soc}}_G^2 \Delta(2,1)/{\operatorname{soc}}_G \Delta(2,1) \subseteq {\operatorname{soc}}_G^2 T(2,1)/{\operatorname{soc}}_G T(2,1) \cong L(1,0)^{(1)}.$$ But ${\operatorname{soc}}_G^2 \Delta(2,1)/{\operatorname{soc}}_{G} \Delta(2,1)\ne 0$, therefore ${\operatorname{soc}}_G^2 \Delta(2,1)/{\operatorname{soc}}_{G} \Delta(2,1) \cong L(1,0)^{(1)}$. Finally, one has $$\textup{rad}_G \nabla(2,1)/\textup{rad}_G^2 \nabla(2,1) \cong ({\operatorname{soc}}_G^2 \Delta(2,1)/{\operatorname{soc}}_{G} \Delta(2,1))^* \cong L(1,0)^{(1)}.$$

The following example answers Question \[Jantzen-nabla\] in the negative, and it is also a counterexample to Conjecture \[donkinconj\]($\Leftarrow$), since ${\operatorname{St}}\otimes\nabla(2,1)$ has a good filtration.

\[T:no2good\] The module $\nabla(2,1)$ for the group of type $G_2$ does not have a good $2$-filtration.

Suppose that $$0 = F_0 \subseteq F_1 \subseteq \cdots \subseteq F_n = \nabla(2,1)$$ is a good $2$-filtration. In view of the structure of the radical series of $\nabla(2,1)$, $$F_n/F_{n-1} \cong L(0,1) \quad \text{and} \quad F_{n-1}/F_{n-2} \cong \nabla(\mu)^{(1)},$$ with $L(1,0)$ being the $G$-head of $\nabla(\mu)$.
Since $2\mu \le (2,1)$ under the usual partial ordering of weights, we have $$2\langle \mu, \alpha_0^{\vee} \rangle \le \langle (2,1), \alpha_0^{\vee} \rangle = 7,$$ where $\alpha_0$ denotes the maximal short root. Therefore, $$\langle \mu, \alpha_0^{\vee} \rangle \le 3,$$ implying that $\mu \in \{ (0,0), (1,0), (0,1) \}$. But $L(1,0)$ is not in the head of $\nabla(\mu)$ for any of these choices of $\mu$, therefore no such filtration on $\nabla(2,1)$ is possible.

H.H. Andersen has pointed out to us that the module $\nabla(0,2)$ is uniserial, and that its top two layers are the same as those of $\nabla(2,1)$, so that this module also fails to have a good $2$-filtration.

{#S:moduleM}

The lack of a good $2$-filtration leads to other interesting phenomena which will factor into our proof that the Tilting Module Conjecture does not hold.

\[P:nogood\] For the group $G_2$ with $p = 2$, the module ${\operatorname{St}}\otimes \textup{rad}_G \nabla(2,1)$ does not have a good filtration.

It suffices to show that the Steinberg block component of this module does not admit a good filtration. Any composition factor of ${\operatorname{St}}\otimes \textup{rad}_G \nabla(2,1)$ that lies within the Steinberg block has the form ${\operatorname{St}}\otimes L(\mu)^{(1)}$. Further, for any such composition factor, we have $2\mu \le (2,1)$, and as in the previous proof one has $\mu \in \{(0,0), (1,0), (0,1)\}$. Since $L(1,0)^{(1)}$ is the head of $\textup{rad}_G \nabla(2,1)$, ${\operatorname{St}}\otimes L(1,0)^{(1)}$ must appear in the head of (the Steinberg block of) ${\operatorname{St}}\otimes \textup{rad}_G \nabla(2,1)$. But we again reason as in the proof above. If the Steinberg block of ${\operatorname{St}}\otimes \textup{rad}_G \nabla(2,1)$ has a good filtration, then there is some $\nabla(\mu)$ such that $L(1,0)$ is the head of $\nabla(\mu)$ and ${\operatorname{St}}\otimes \nabla(\mu)^{(1)}$ is a subquotient of ${\operatorname{St}}\otimes \textup{rad}_G \nabla(2,1)$.
But no such subquotient is possible with the limitations on $\mu$.

Conjecture \[donkinconj\]($\Leftarrow$): Minimal Counterexample
---------------------------------------------------------------

The module ${\operatorname{St}}\otimes \nabla(2,1)$ has a good filtration, and none of its $\nabla$-quotients map onto $L(3,1) \cong {\operatorname{St}}\otimes L(1,0)^{(1)}$. It was observed earlier that two copies of ${\operatorname{St}}$ are contained in ${\operatorname{St}}\otimes L(0,1)$. It follows that one of these copies nontrivially extends the composition factor ${\operatorname{St}}\otimes L(1,0)^{(1)}$ in ${\operatorname{St}}\otimes \textup{rad}_G \nabla(2,1)$ that comes from $${\operatorname{St}}\otimes [{\operatorname{rad}}_G \nabla(2,1)/{\operatorname{rad}}_G^2\nabla(2,1)].$$ Now define the $G$-module $M$ via the short exact sequence $$\label{M} 0 \to {\operatorname{rad}}_G^2T(2,1) \to T(2,1) \to M \to 0.$$ Then the non-split sequences $$0 \to {\operatorname{rad}}_G^2\nabla(2,1) \to \nabla(2,1) \to M \to 0$$ and $$0 \to L(1,0)^{(1)} \to M \to L(0,1) \to 0$$ are immediate consequences of Proposition \[P:socle-radical\]. From weight considerations and Theorem \[T:Q\], it follows that ${\operatorname{St}}\otimes M \cong T(1,2) \oplus S$, where $S$ is the summand containing all composition factors in the $G_1$-Steinberg block of ${\operatorname{St}}\otimes M$. We know that $S$ contains ${\operatorname{St}}\otimes L(1,0)^{(1)}$ once as a composition factor and the Steinberg module twice. No other composition factors occur, and as a consequence of the previous discussion, one of the Steinberg factors must sit on top of ${\operatorname{St}}\otimes L(1,0)^{(1)}$. In conclusion, $$\begin{aligned} {\operatorname{St}}\otimes M & \cong T(1,2) \oplus ({\operatorname{St}}\otimes \nabla(1,0)^{(1)}) \oplus {\operatorname{St}}\\ & \cong T(1,2) \oplus \nabla(3,1) \oplus {\operatorname{St}},\end{aligned}$$ which has a good filtration.
This then proves the following:

\[M:good\] Let $M$ be the module defined in (\[M\]).

- ${\operatorname{St}}\otimes M$ has a good filtration.

- ${\ensuremath{\operatorname{Hom}}}_G( {\operatorname{St}}, {\operatorname{St}}\otimes M) =k.$

The module $M$ has composition factors $L(0,1)$ and $L(1,0)^{(1)}$. Since $L(1,0)^{(1)} \not\cong \nabla(1,0)^{(1)}$, we see that $M$ does not have a good $2$-filtration, even though ${\operatorname{St}}\otimes M$ has a good filtration. One could then consider $M$ as a minimal counterexample to Conjecture \[donkinconj\]($\Leftarrow$), as it has only two composition factors. Indeed, in the general context of a semisimple $G$ and an arbitrary prime $p$, a counterexample with only one composition factor is not possible. For example, if for some ${\lambda}={\lambda}_0+p{\lambda}_1$, with ${\lambda}_0 \in X_1$ and ${\lambda}_1 \in X_+$, the module $${\operatorname{St}}\otimes L({\lambda}_0) \otimes L({\lambda}_1)^{(1)}$$ has a good filtration, then it must be tilting. But then $${\operatorname{St}}\otimes L({\lambda}_0) \otimes T((p-1)\rho-{\lambda}_0) \otimes L({\lambda}_1)^{(1)}$$ is tilting, and since ${\operatorname{St}}$ is a summand of $L({\lambda}_0) \otimes T((p-1)\rho-{\lambda}_0)$, we have that ${\operatorname{St}}\otimes {\operatorname{St}}\otimes L({\lambda}_1)^{(1)}$ is also tilting, and then that ${\operatorname{St}}^{\otimes 3} \otimes L({\lambda}_1)^{(1)}$ is tilting. But ${\operatorname{St}}$ is a summand of ${\operatorname{St}}^{\otimes 3}$, so that ${\operatorname{St}}\otimes L({\lambda}_1)^{(1)}$ is tilting, and we conclude that $L({\lambda}_1) \cong \nabla({\lambda}_1) \cong T({\lambda}_1)$. Consequently, $L({\lambda}_0) \otimes L({\lambda}_1)^{(1)}$ is a good $p$-filtration module.

On The Tilting Module Conjecture {#section-4}
================================

We return to the assumption that $G$ has a root system of type $G_2$ and the prime $p = 2$.
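Several of the arguments above, and the proof of the main theorem below, rest on the bound $\langle (2,1), \alpha_0^{\vee} \rangle = 7$ for the highest short root $\alpha_0$ of $G_2$. As a sanity check, the following plain-Python sketch (the normalisation $|\alpha_1|^2 = 2$ for the short simple root is a convention of the sketch, not of the paper) recovers $\alpha_0 = 2\alpha_1 + \alpha_2$, the value $7$, and the list of admissible $\mu$ with $2\langle \mu, \alpha_0^{\vee} \rangle \le 7$.

```python
from itertools import product

# G_2 root data: simple roots a1 (short), a2 (long); Gram matrix of
# inner products (a_i, a_j) in the normalisation |a1|^2 = 2.
gram = [[2, -3], [-3, 6]]

def ip(u, v):  # inner product of vectors written in the basis a1, a2
    return sum(u[i] * gram[i][j] * v[j] for i in range(2) for j in range(2))

# positive roots of G_2 in simple-root coordinates
pos_roots = [(1, 0), (0, 1), (1, 1), (2, 1), (3, 1), (3, 2)]
short = [r for r in pos_roots if ip(r, r) == 2]
a0 = max(short, key=lambda r: r[0] + r[1])  # highest short root
assert a0 == (2, 1)                          # a0 = 2 a1 + a2

# pairing <lam, a^vee> for lam = x w1 + y w2, using (w_i, a_j) = d_ij |a_j|^2 / 2
def pair(lam, a):
    (x, y), (c1, c2) = lam, a
    return 2 * (x * c1 * gram[0][0] / 2 + y * c2 * gram[1][1] / 2) / ip(a, a)

assert pair((2, 1), a0) == 7
# 2 <mu, a0^vee> <= 7  forces  <mu, a0^vee> <= 3
admissible = [mu for mu in product(range(4), repeat=2) if 2 * pair(mu, a0) <= 7]
assert admissible == [(0, 0), (0, 1), (1, 0)]
```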
The fact that ${\operatorname{St}}\otimes {\operatorname{rad}}_G \nabla(2,1)$ does not have a good filtration guarantees that the Tilting Module Conjecture does not hold in this case. This essentially follows from [@So Theorem 5.1.1], but here we will give a simple self-contained proof of this fact using the results already established in this paper.

\[tilt:no\] The Tilting Module Conjecture does not hold for $G_2$ and $p=2$.

Assume that the Tilting Module Conjecture holds, so that $T(2,2)|_{G_1} \cong Q_1(0,0)$. From the $G$-module structure of the $G_1$-socle of ${\operatorname{St}}\otimes {\operatorname{St}}$, as observed in the proof of Proposition \[St:tensor\] part (d), and Theorem \[T:Q\], one then concludes that (as $G$-modules) $$\label{E:StSt} {\operatorname{St}}\otimes{\operatorname{St}}\cong T(2,2)\oplus T(2,1)^{\oplus 2}\oplus T(3,1)^{\oplus 2}.$$ In particular, the tilting module $T(2,1)$ appears twice in the tensor product ${\operatorname{St}}\otimes {\operatorname{St}}$. Let $M$ be the quotient of $T(2,1)$ from Proposition \[M:good\]. Then we have that $$2 \leq \dim {\ensuremath{\operatorname{Hom}}}_G({\operatorname{St}}\otimes {\operatorname{St}}, M) = \dim {\ensuremath{\operatorname{Hom}}}_G({\operatorname{St}}, M \otimes {\operatorname{St}}),$$ a contradiction to part (b) of Proposition \[M:good\].

The socle of $T(2,2)$
---------------------

There are two copies of $L(0,1)$ in the $G$-socle of ${\operatorname{St}}\otimes {\operatorname{St}}$, but we have now established that $T(2,1)$ occurs as a summand of ${\operatorname{St}}\otimes {\operatorname{St}}$ at most once (i.e., the decomposition (\[E:StSt\]) cannot hold). Looking again at Theorem \[T:Q\], it follows that $L(0,1)$ must appear as a submodule of $T(2,2)$.
This fact has been independently confirmed by Doty’s program [@Doty; @GAP], which has computed more precisely that $$k \oplus L(0,1) \cong {\operatorname{soc}}_G \Delta(2,2) \subseteq T(2,2).$$ We note that, whenever $T(\hat{\lambda})=Q_{1}(\lambda)$ as a $G_{1}$-module for $\lambda\in X_{1}$, then ${\operatorname{soc}}_G \Delta(\hat{\lambda})$ must be simple and isomorphic to $L(\lambda)$.

The Humphreys-Verma Conjecture
------------------------------

Although $T(2,2)$ is not a lift of $Q_1(0,0)$, it is still possible that $Q_1(0,0)$ has some other $G$-module structure, so the Humphreys-Verma Conjecture remains open for now. Nevertheless, it is significant that even if there is some $G$-structure, it will not occur as a $G$-submodule of ${\operatorname{St}}\otimes {\operatorname{St}}$ (though it could appear as a subquotient). This defies the long-held expectation, going back to early work by Humphreys and Verma, that a $G$-structure should occur in precisely this way.

Pramod Achar, Shotaro Makisumi, Simon Riche, Geordie Williamson, Koszul duality for Kac-Moody groups and characters of tilting modules, [*J. Amer. Math. Soc.*]{}, [32]{}, (2019), 261-310.

Henning Haahr Andersen, $p$-filtrations and the Steinberg module, [*J. Algebra*]{}, [244]{}, (2001), 664-683.

Henning Haahr Andersen, $p$-filtrations of dual Weyl modules, preprint, arXiv:1810.0405, 2018.

John W. Ballard, Injective modules for restricted enveloping algebras, [*Math. Z.*]{}, [163]{}, (1978), 57-63.

Christopher P. Bendel, Daniel K. Nakano, Cornelius Pillen, Paul Sobaje, On tensoring with the Steinberg representation, [*Transformation Groups*]{}, to appear.

Steven Doty, WeylModules – a GAP package, Version 1.1, 2009, `(http://doty.math.luc.edu/weylmodules)`.

Michael F. Dowd, Peter Sin, On representations of algebraic groups in characteristic $2$, [*Comm. Algebra*]{}, [24]{}, (1996), no. 8, 2597-2686.
The GAP Group, [*GAP – Groups, Algorithms, and Programming, Version 4.8.10*]{}; 2018, `(https://www.gap-system.org)`.

James E. Humphreys, [*Modular Representations of Finite Groups of Lie Type*]{}, London Mathematical Society Lecture Notes Series, Vol. 326, Cambridge University Press, Cambridge, 2006.

Jens Carsten Jantzen, Darstellungen halbeinfacher Gruppen und ihrer Frobenius-Kerne, [*J. reine angew. Math.*]{}, [317]{}, (1980), 157-199.

Jens Carsten Jantzen, First cohomology groups for classical Lie algebras, [*Progress in Mathematics*]{}, [95]{}, Birkhäuser, 1991, 289-315.

Jens Carsten Jantzen, [*Representations of Algebraic Groups*]{}, Second Edition, Mathematical Surveys and Monographs, Vol. 107, American Mathematical Society, Providence RI, 2003.

Frank Lübeck, [*Tables of Weight Multiplicities*]{}, http://www.math.rwth-aachen.de/\~Frank.Luebeck/chev/WMSmall/index.html.

Tobias Kildetoft, Daniel K. Nakano, On good $(p,r)$ filtrations for rational $G$-modules, [*J. Algebra*]{}, [423]{}, (2015), 702-725.

D. Mertens, Zur Darstellungstheorie der endlichen Chevalley-Gruppen vom Typ $G_2$, Diplomarbeit, Univ. Bonn, 1985.

Brian J. Parshall, Leonard L. Scott, On $p$-filtrations of Weyl modules, [*J. Lond. Math. Soc. (2)*]{}, [91]{}, (2015), no. 1, 127-158.

Simon Riche, Geordie Williamson, Tilting modules and the $p$-canonical basis, [*Astérisque*]{}, 2018, no. 397, ix+184 pp.

Paul Sobaje, On $(p,r)$-filtrations and tilting modules, [*Proc. Amer. Math. Soc.*]{}, [146]{}, (2018), no. 5, 1951-1961.
[^1]: Research of the first author was supported in part by Simons Foundation Collaboration Grant 317062

[^2]: Research of the second author was supported in part by NSF grant DMS-1701768

[^3]: Research of the third author was supported in part by Simons Foundation Collaboration Grant 245236

[^4]: A major step in this process was a computation of a filtration of $\Delta(2,1)$, obtained using Stephen Doty’s WeylModule package for the software GAP [@Doty; @GAP], that, when dualized, indicated that $\nabla(2,1)$ could not have a good $2$-filtration.

[^5]: This fact was verified in another way by running Doty’s GAP program to compute that the socle of $\Delta(2,2)$ is isomorphic to $k \oplus L(0,1)$. As $\Delta(2,2)$ is a submodule of $T(2,2)$, one concludes that the socle of $T(2,2)$ has at least two factors over $G_1$, so that $T(2,2)$ splits into at least two projective summands over $G_1$.
---
abstract: 'We use the optimal fluctuation method for a new ballistic $\sigma$-model to study the long time dispersion of the conductance $G(t)$ of a mesoscopic sample. In the long time limit the conductance of a $d$-dimensional sample decays as $ \exp \left( -A \ln^d t \right) $. At shorter times the new results match those in our previous paper [@MK94]. It is found that at very long times diffraction effects become important and the ballistic treatment is no longer valid. We also suggest a physical picture of trapping.'
address: |
    Cavendish Laboratory, University of Cambridge, Cambridge CB3 0HE, UK\
    and L.D.Landau Institute for Theoretical Physics, Moscow, Russia
author:
- 'B.A.Muzykantskii and D.E.Khmelnitskii'
date: '*December 27, 1995*'
title: |
    Nearly Localised States in Weakly Disordered Conductors.\
    II. Beyond Diffusion Approximation.
---

Introduction
============

This article is a continuation of our paper [@MK94], where we suggested an optimal fluctuation method to study exponentially rare fluctuations in weakly disordered conductors. The quantity of interest was expressed using the super-matrix nonlinear $\sigma$-model [@Efetov], and the resulting functional integral was evaluated by the saddle point method. In this way we obtained an intermediate asymptote of the conductance time dispersion in one- and two-dimensional metals. It was, however, pointed out that this method has only a limited scope of validity; e.g., it fails to describe three-dimensional conductors satisfactorily. The failure occurs when the super-matrix corresponding to the optimal fluctuation changes rapidly over the mean free path $l$. Then the diffusion approximation breaks down and the use of the nonlinear $\sigma$-model [@Efetov] can no longer be justified. In this article we treat the problem using the recently proposed ballistic $\sigma$-model [@MK95]: a generalisation of the standard one that correctly accounts for large gradients.
To be specific, we consider the long time asymptote of conductance dispersion. If a voltage $V(t)$ is applied to a sample, the current through it is given by the Ohm law $$\label{respon} I(t) = \int_{-\infty}^t G(t-t') V(t') dt',$$ and we are interested in the behaviour of the conductance $G(t)$ in the long time limit. We first discuss this problem under conditions when the diffusion approximation is valid. The super-matrix theory is defined by the partition function (see [@Efetov] and [@VWZ] for review): $$Z=\int {\cal D} Q e^{-F},\; F= \frac{ \pi \nu}{8} \mathop{\rm str}\nolimits \int d{\bbox r} \{ D ({\bbox \nabla} Q)^2 + 2 i\omega \Lambda Q\}, \label{Efet}$$ where the functional integral over super-matrices $Q$ is subjected to the constraint $$Q^2 = 1. \label{ConstraintQ}$$ The expression for the averaged conductance $G(t)$ is as follows: $$G(t) = G_0 e^{- t/\tau} +\int\frac{d\omega}{2 \pi} e^{-i\omega t} \int\limits_{ Q^2 =1} {\cal D} Q P\{Q\} e^{-F} \label{G}$$ In Eqs. (\[Efet\]), (\[G\]) $\nu$ is the density of states, $D$ is the diffusion coefficient and $\tau$ is the mean free time. The strategy suggested in Ref. [@MK94] consists in studying the condition of the extremum of the free energy $F$ in Eq. (\[Efet\]) $$2 D \nabla (Q \nabla Q) + i \omega [\Lambda, Q]=0 \label{Us}$$ together with the condition at the boundary $\Gamma$ between the mesoscopic sample and a bulk electrode $$Q |_{\Gamma} = \Lambda, \label{Bound}$$ and a self-consistency condition $$\frac{4 t\Delta}{\pi \hbar} = - \int \frac{d {\bbox r}}{V} \mathop{\rm str}\nolimits \{ \Lambda Q \}; \quad \Delta = \frac{1}{\nu V}. \label{self-cons-diffusive}$$ that arises after integrating out $\omega$ in Eq. (\[G\]).
After solving equation (\[Us\]) with boundary condition (\[Bound\]), expressing the frequency $\omega$ through the time $t$ via the self-consistency condition (\[self-cons-diffusive\]), and substituting the solution $Q(r)$ into the free energy (\[Efet\]), we obtain the conductance $G(t)$ with exponential accuracy. Long time retardation in the electric response $G(t)$ is caused by relatively improbable quasi-localised states that are weakly coupled to the bulk electrodes and have a life time of the order of $t$. Since the mean square of the wave function $|\Psi(r)|^2$ is connected to the super-matrix $Q(r)$ via $$\label{psi-and-Q} |\Psi(r)|^2 \sim - \mathop{\rm str}\nolimits \{ \Lambda Q \},$$ the self-consistency condition (\[self-cons-diffusive\]) can be regarded as a relation between the life-time of a quasi-stationary state and its wave function. The super-matrix $Q$ is fixed at the boundary of the sample (see Eq. (\[Bound\])), so, to satisfy the self-consistency condition (\[self-cons-diffusive\]), $\mathop{\rm str}\nolimits \{ \Lambda Q \}$ must grow towards the middle of the sample. The gradients of the $Q$-matrix increase simultaneously with the delay time $t$. Since theory (\[Efet\]) correctly accounts only for the lowest term in $\nabla Q$, it cannot be used for sufficiently long times $t$. The importance of high gradients for long time asymptotes and tails of distribution functions was first announced by Altshuler, Kravtsov and Lerner (AKL) [@AKL], who found the growth of the corresponding invariant charges under the renormalization group flow. The optimal fluctuation method has been used recently by Falko and Efetov [@EfFal] and by Mirlin [@Mirlin1], who studied the influence of the nearly localised states on the distribution of wave function amplitudes [@EfFal] and the local density of states [@Mirlin1]. These authors found that the gradients of $Q$ become large near the centre of the sample and, therefore, the $\sigma$-model description breaks down.
To treat this problem consistently, along with the others where the ballistic motion of electrons is essential, we suggested a new version of the nonlinear $\sigma$-model [@MK95]. This theory operates with a super-matrix distribution function $g_{\bbox n} ({\bbox r})$, where ${\bbox n}$ is the unit vector of the electron momentum direction (${\bbox p} = {\bbox n} p_F$). It effectively accounts for the infinite series in $l \nabla Q$, of which only the leading term is kept in the action (\[Efet\]), and, therefore, correctly describes fluctuations of the super-matrix $Q$ with wave vectors $q \sim 1/l$. The theory is still restricted, however, to the region of validity of the semi-classical approximation. The condition of extremum for the new action is related to the kinetic equation in the same manner as Eq. (\[Us\]) is related to the diffusion equation. We solve the kinetic equation for one-, two- and three-dimensional geometries and obtain very long time asymptotes of the conductance. The results are summarised in table \[table-summary\]. The exponential decay law for times $ t_D \ll t \ll \hbar/\Delta $ can be obtained by analysing the time dependence of weak localisation corrections. The “ballistic” part of the long time asymptote at $t > t_b$ was first studied by AKL, who used the frequency representation and expanded the conductance $G(\omega)$ in powers of $\omega$. They estimated the growth rate of the coefficients in this expansion and pointed out the importance of high gradients. Unfortunately, the Fourier transformation to the $t$-representation in two dimensions was carried out with insufficient precision (see [@Mirlin2] for discussion) and the factor $g$ was lost under the sign of $\ln^2$. The AKL approach also failed to predict the existence of the intermediate asymptote at $\hbar \Delta^{-1} \ll t \ll t_b$, which was discovered by the authors [@MK94] using the optimal fluctuation method for the diffusive $\sigma$-model.
We found the region of validity for this intermediate asymptote and obtained some estimates for even longer times. These estimates were recently improved by Mirlin [@Mirlin2], who imposed somewhat arbitrary effective boundary conditions on the super-matrix $Q$ at the point where the diffusion approximation breaks down. The value of the action turned out to be rather insensitive to the exact form of these boundary conditions, which enabled Mirlin to rederive the AKL result in 2d and obtain the $\exp (- A \ln^3 t)$ asymptote in 3d, although without the value of the coefficient $A$ in the exponent. The full ballistic treatment presented in this article gives the coefficient in the exponent for 3d; confirms the AKL result in 2d; and discovers a new regime in the case of a thick wire. We have also found a restriction on the validity of the ballistic treatment. It turns out that at ultra-long delay times the super-matrix distribution function depends strongly on the direction ${\bbox n}$ of the momentum, developing sharp features with characteristic width $ \delta \phi$. At times $t \sim t_Q$ this width becomes comparable with the diffraction angle $\delta \phi_q \sim \lambda/a$, where $a$ is a typical size. At longer times the diffraction effects become important and the ballistic treatment is no longer valid. The values of the times $t_Q$ are presented in Table \[table-summary\]. Apart from the derivation of these results from first principles, the paper describes the optimal fluctuation of the random potential in a strictly one-dimensional wire that traps an electron for time $t$. We believe that the same mechanism is responsible for long time delays in higher dimensions. The material is organised as follows. In section II we present the physical motivations behind the ballistic $\sigma$-model, find its extremum condition, and show how the ballistic description transforms into the diffusive one when the gradients are small.
We also discuss the geometry of the super-matrix distribution function $g_{\bbox n}(r)$ and introduce its convenient parametrisation. The optimal fluctuation method is described in section III. In section IV the solution of the kinetic saddle-point equation is found and the long time asymptote of the conductance is evaluated in two and three dimensions. The one-dimensional case is discussed in section V. The physical picture of trapping is presented in section VI. Finally, in section VII we discuss the results and the limits of their validity.

Effective Action for Quantum Ballistics
=======================================

Outline of Derivation
---------------------

In this section we present a generalised non-linear super-matrix $\sigma$-model [@MK95], which is valid in the ballistic regime. If the quantum effects are neglected, the ballistic regime is described by the Boltzmann kinetic equation for the distribution function $f_{\bbox n} ({\bbox r}) $ of coordinate $\bf r$ and momentum ${\bf p} = {\bbox n} p_F.$ The quantum description operates with the density matrix $\hat g_{\bbox n} ({\bbox r})$. To enable averaging over disorder, $\hat g_{\bbox n} ({\bbox r})$ should be a super-matrix [@MK95], analogous to the matrix $\hat Q$ in the standard $\sigma$-model (\[Efet\]). The quantum generalisation of the kinetic equation has the form [@Eilenberger] $$\label{Eilen} 2 v {\bbox n} \frac{\partial g_{\bbox n} ({\bbox r})}{\partial {\bbox r}} = \left[ \left( i \omega \Lambda - \frac{\langle g ({\bbox r})\rangle}{\tau} \right),g_{\bbox n} ({\bbox r}) \right]$$ with the additional constraint $$g_{\bbox n}^2 = 1, \label{Norm}$$ which is similar to the one imposed on $Q$. Equation (\[Eilen\]) is the required ballistic generalisation of the saddle-point equation (\[Us\]) and serves as an extremum condition for the ballistic action we are constructing. To generate the first derivative in Eq.
(\[Eilen\]), the action has to have a Wess-Zumino type term $$\begin{aligned} &&{\cal W}\{g_{\bbox n}\} = \int \int_0^1 du \mathop{\rm str}\nolimits \left \langle \tilde g_{\bbox n} ({\bbox r},u) \left[\frac{\partial \tilde g_{\bbox n}}{\partial u}, {\bbox n} \frac{\partial \tilde g_{\bbox n}} {\partial {\bbox r}} \right] \right \rangle d {\bbox r} \label{W} \\ &&\tilde g_{\bf n} ({\bf r},u=0) = \Lambda;\; \qquad \tilde g_{\bf n}({\bf r},u=1) = g_{\bf n}({\bf r}), \label{Vic}\end{aligned}$$ where an arbitrary smooth interpolation $\tilde {g}_{\bbox n} ({\bbox r},u)$ can be chosen [@Fradkin], and the angular brackets denote averaging over directions of ${\bbox n}$. The functional derivative $\delta {\cal W} / \delta g_{\bbox n} ({\bbox r})$ is taken with the constraint (\[Norm\]), which guarantees that $ g_{\bbox n} \delta g_{\bbox n} + \delta g_{\bbox n} g_{\bbox n} =0 $ and that an arbitrary variation $\delta g_{\bbox n}$ has the form $\delta g_{\bbox n} = \left[ g_{\bbox n}, a_{\bbox n} \right]$. As a result of the variation $$\label{deltaW} \delta {\cal W} = 4 \int d {\bbox r} \mathop{\rm str}\nolimits \left \langle {\bbox n} \frac{\partial g_{\bbox n}}{\partial {\bbox r}} a_{\bbox n} \right \rangle$$ the first derivative appears. There is another way of writing the functional $\cal W$ which employs the decomposition $ g_{\bbox n} = U \Lambda U^{-1}$: $$\label{W-vs-U} {\cal W}\{g_{\bbox n}\} = 4 \int d {\bbox r} \mathop{\rm str}\nolimits \langle \Lambda U^{-1} {\bbox n} \frac{\partial U}{\partial {\bbox r}} \rangle.$$ This representation can be verified by comparing the variation of $\cal W$ with Eq. (\[deltaW\]). The quantum ballistic partition function $Z$ can be presented as an integral over distribution functions $g_{\bbox n} ({\bbox r})$ with effective action $F$: \[BigVic\] $$\begin{aligned} Z&=&\int\limits_{g_{\bbox n}^2=1} \!\!\!
{\cal D} g_{\bbox n} ({\bbox r}) e^{-F}, \label{BigVic1} \\ F &=& \frac{\pi \nu}{4} \int d {\bbox r} \mathop{\rm str}\nolimits \left\{ i\omega \Lambda \langle g ({\bbox r}) \rangle - \frac{1}{2 \tau} \langle g({\bbox r})\rangle^2 \right\} - \nonumber \\ && - \frac{\pi \nu v_F}{8} {\cal W} \{ g_{\bbox n} \}. \label{BigVic2}\end{aligned}$$ The details of the derivation can be found in Ref. [@MK95]. Field theory (\[BigVic\]) enables us to study any chaotic problem for which the semi-classical approach is valid, irrespective of the validity of the diffusion approximation. If the space gradients are small ($ l \nabla g \ll g $), the standard treatment recovers (see [@MK95]) the $Q$-matrix theory (\[Efet\]). To show this we expand the matrix $g_{\bbox n}$ into a sum over angular harmonics, keeping only the zeroth and first terms: $$g_{\bbox n} = Q({\bbox r}) \left( 1 - \frac{Q {\bbox J}^2}{2}\langle {\bbox n}^2 \rangle \right) + {\bbox J} ({\bbox r}) \cdot {\bbox n} \label{subst}$$ The constraint $g^2 = 1$ now reads $$Q^2 =1, \qquad Q {\bbox J} + {\bbox J} Q = 0. \label{constr7}$$ Substituting Eq. (\[subst\]) into Eqs. (\[BigVic\]) and using conditions (\[constr7\]), we obtain the partition function in the form $$\begin{aligned} Z&=& \int {\cal D} Q \int {\cal D} {\bbox J} e^{-F_{Q, {\bbox J}}}, \nonumber \\ F_{Q, {\bbox J}} &=& \frac{\pi \nu}{4} \int d {\bbox r} \mathop{\rm str}\nolimits \{ i\omega \Lambda Q + \frac{{\bbox J}^2}{6\tau } - \frac{v_F}{3} ({\bbox \nabla} Q) Q {\bbox J} \} \label{poldorogi}\end{aligned}$$ The Gaussian integral over ${\bbox J}$ in Eq. (\[poldorogi\]) is dominated by the vicinity of $$\label{current} {\bbox J} = l ({\bbox \nabla} Q ) Q$$ and leads finally to Eq. (\[Efet\]).
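To make the last step explicit (a sketch; it uses only $Q {\bbox \nabla} Q = -({\bbox \nabla} Q) Q$, which follows from $Q^2 = 1$, and $l = v_F \tau$): substituting the saddle-point value (\[current\]) into the integrand of Eq. (\[poldorogi\]) gives $$\frac{{\bbox J}^2}{6\tau } - \frac{v_F}{3} ({\bbox \nabla} Q) Q {\bbox J} = -\frac{l^2}{6\tau} ({\bbox \nabla} Q)^2 + \frac{v_F l}{3} ({\bbox \nabla} Q)^2 = \frac{v_F l}{6} ({\bbox \nabla} Q)^2,$$ since $\left[ ({\bbox \nabla} Q) Q \right]^2 = -({\bbox \nabla} Q)^2$. The prefactor $\pi\nu/4$ in Eq. (\[poldorogi\]) then reproduces the $\frac{\pi \nu}{8} D ({\bbox \nabla} Q)^2$ term of Eq. (\[Efet\]) with the diffusion coefficient $D = v_F l/3$, the familiar value in three dimensions, where $\langle n_i n_j \rangle = \delta_{ij}/3$.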
Symmetries of $g$-matrices
--------------------------

Both $Q$ and $\Lambda$ are $ 8 \times 8$ matrices which act on the $8$-component super-vectors $\Psi$ with the basis [@Efetov]: $$\Psi^\top = (\chi_1, \chi_1^*, S_1, S_1^*, \chi_2, \chi_2^*, S_2, S_2^*), \label{basis-efetov}$$ where the $\chi$ are fermionic and the $S$ are bosonic variables; the indices $1,2$ correspond to the retarded and advanced Green functions with energies $E_f \pm \omega/2$. For a super-matrix $\hat{M}$ the super-trace is defined as follows: $$\begin{aligned} \mathop{\rm str}\nolimits \hat{M} &=& M_{11} + M_{22} -M_{33} -M_{44} + \\ & & M_{55} + M_{66} -M_{77} -M_{88} \end{aligned}$$ In this basis the super-matrix $\Lambda$ has the form $$\Lambda_{ij} = \lambda_i \delta_{ij}, \quad \lambda_i = \left \{ \begin{array}{rr} 1,\quad & i=1 \ldots 4 \\ -1,\quad & i=5 \ldots 8 \end{array} \right.$$ For our purposes it is more convenient to use another basis, in which each variable stands next to its charge conjugate partner and the fermionic variables are separated from the bosonic ones: $$\Psi^\top = (\chi_1, \chi_2^*,\chi_2, \chi_1^*, S_1, S_2^*, S_2, S_1^*). \label{Abasis}$$ In this new basis, which will be used everywhere in this paper, the matrix $\Lambda$ has the form: $$\label{Lambda} \Lambda =\left( \begin{array}{cc} \sigma_3 \otimes \tau_3 & 0\\ 0& \sigma_3 \otimes \tau_3 \end{array} \right),$$ where the blocks in Eq. (\[Lambda\]) correspond to the fermionic and bosonic sectors. The matrices ${\bbox \tau}$ act inside the $2\times 2$ blocks, while the matrices ${\bbox \sigma}$ mix these blocks inside the $4\times 4$ sectors. We first discuss the symmetry of the $Q$-matrix from the standard $\sigma$-model (\[Efet\]), focusing on the properties of the boson-boson ($B$) and fermion-fermion ($F$) sectors only. We consider only the unitary ensemble, for which the additional condition $$\label{unitary} \left[ Q^{B,F}, 1 \otimes \tau_3 \right] = 0$$ is fulfilled.
Together with constraint (\[ConstraintQ\]), requirement (\[unitary\]) permits us to parametrise the matrices $Q^{B,F}$ with four complex 3-vectors ${\bbox l}^{B,F}, {\bbox m}^{B,F}$ subject to the constraint ${\bbox l}^2={\bbox m}^2=1$: $$\label{param-Q0} Q^{B,F}= \frac{1 + \tau_3}{2} \otimes {\bbox l} {\bbox \sigma} + \frac{1 - \tau_3}{2} \otimes {\bbox m} {\bbox \sigma}.$$ Parametrisation (\[param-Q0\]) is still too general because the matrix $Q$ has additional symmetries: charge neutrality \[symm\] $$\bar Q \equiv CQ^\top C^\top = Q, \label{symm-charge}$$ with the charge conjugation matrix $$C=\left( \begin{array} {cc} i \sigma_2 \otimes \tau_1 & 0 \\ 0 & -\sigma_2 \otimes \tau_2 \end{array} \right),$$ and pseudo-hermiticity $$Q^{\dagger} \equiv K Q^+ K = Q, \quad K=\left( \begin{array} {cc} 1 & 0 \\ 0 & \sigma_3 \otimes \tau_3 \end{array} \right). \label{symm-hermit}$$ Requirements (\[symm\]) are satisfied when \[l-and-m\] $$\begin{aligned} && {\bbox l}^B = - {\bbox m}^B, \qquad {\bbox l}^F = - {\bbox m}^F, \\ \hbox{and} \nonumber \\ && {\bbox l}^B=\hat \mu ({\bbox l}^B)^*,\; {\bbox l}^F= ({\bbox l}^F)^*, \; \hat \mu = \left(\begin{array}[c]{ccc} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array}\right).\end{aligned}$$ Thus, the fermionic sector is parametrised by a real vector ${\bbox l}^F$ subject to the constraint $({\bbox l}^F)^2=1$, i.e. a sphere ${\cal S}^2$ $${\cal S}^2 = \left\{ {\bbox l}^F,\quad l_1^2 + l_2^2 + l_3^2 =1,\quad \mbox{Im } {\bbox l} =0\right\} \label{sphere}$$ while the bosonic sector is represented by the vector ${\bbox l}^B$ with two imaginary components $l_1^B$ and $l_2^B$ and one real component $l_3^B$.
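The claim that the constraints ${\bbox l}^2={\bbox m}^2=1$ alone are sufficient for $Q^2=1$ in parametrisation (\[param-Q0\]) is easy to check numerically; the complex test vectors below are arbitrary and carry no physical meaning:

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
tau3 = np.diag([1.0, -1.0])
P_plus, P_minus = (np.eye(2) + tau3) / 2, (np.eye(2) - tau3) / 2

def unit_bilinear(v):
    # normalise so that the *bilinear* square v.v (no conjugation) is 1,
    # as required by the constraint l^2 = m^2 = 1
    return v / np.sqrt(v @ v)

l = unit_bilinear(np.array([1 + 2j, -0.5 + 0.3j, 2 - 1j]))
m = unit_bilinear(np.array([0.2 - 1j, 1 + 1j, -0.7 + 0.1j]))

def dot_sigma(v):
    return sum(vi * si for vi, si in zip(v, sig))

Q = np.kron(P_plus, dot_sigma(l)) + np.kron(P_minus, dot_sigma(m))
# l.l = m.m = 1 alone guarantees Q^2 = 1 for the parametrisation (param-Q0):
# the tau_3-projectors are orthogonal and (l.sigma)^2 = (l.l) * 1
assert np.allclose(Q @ Q, np.eye(4))
```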
Due to the constraint $({\bbox l}^B)^2=1$ the bosonic sector is represented by a hyperboloid ${\cal H}_2^2$: $$\label{hyperb} {\cal H}_2^2 = \left\{ {\bbox l}^B, \quad -|l_1|^2 - |l_2|^2 + l_3^2 =1,\quad {\bbox l} = \hat \mu {\bbox l}^* \right\}.$$ The matrix $g_{\bbox n}$ of the ballistic $\sigma$-model (\[BigVic\]) obeys the constraint $g_{\bbox n}^2 =1$ and, therefore, can be parametrised with Eq. (\[param-Q0\]). The condition of charge neutrality [@Comment] (\[symm-charge\]) must be replaced by the generalised version $$\label{symm-g} \bar g_{\bbox n} \equiv Cg_{\bbox n}^\top C^\top = g_{-{\bbox n}}$$ because the charge conjugation changes the sign of the $\cal W$-term in action (\[BigVic\]). The pseudo-hermitian transformation $g_{\bbox n} \to K g_{\bbox n}^+ K$ not only changes the sign of the $\cal W$-term but also replaces $\omega$ by $-\omega^*$. In the long-time asymptote calculation the frequency $\omega$ is purely imaginary and the generalisation of Eq. (\[symm-hermit\]) reads: $$K g_{\bbox n}^+ K = g_{-{\bbox n}} \label{herm-g}$$ Symmetries (\[symm-g\]) and (\[herm-g\]) impose a new restriction on the vectors ${\bbox l}$ and ${\bbox m}$ (compare with Eqs. (\[l-and-m\])): $$\label{l-and-m-g} {\bbox l}_{\bbox n}^B = - {\bbox m}_{-{\bbox n}}^B, \; {\bbox l}_{\bbox n}^F = - {\bbox m}_{-{\bbox n}}^F, \; {\bbox l}_{-{\bbox n}}^B = \hat \mu ({\bbox l}_{\bbox n}^B)^*, \; {\bbox l}_{-{\bbox n}}^F = ({\bbox l}_{\bbox n}^F)^*.$$ Since conditions (\[l-and-m-g\]) are imposed on the components of two vectors ${\bbox l}$ and ${\bbox m}$, they are less restrictive than Eqs. (\[symm\]).
Both $g_{\bbox n}^B$ and $g_{\bbox n}^F$ can be parametrised with a complex unit vector ${\bbox l}_{\bbox n}= {\bbox \xi}_{\bbox n} + i{\bbox \eta}_{\bbox n}$: $$\label{param-g} g_{\bbox n}^{B,F}= \frac{1 + \tau_3}{2} \otimes {\bbox \sigma} {\bbox l}_{\bbox n} + \frac{1 - \tau_3}{2} \otimes {\bbox \sigma} {\bbox l}_{-{\bbox n}} $$ with the constraint $${\bbox l}^2= {\bbox \xi}^2 -{\bbox \eta}^2 + 2 i ({\bbox \xi} {\bbox \eta}) =1. \label{xieta}$$ The geometric meaning of Eq. (\[xieta\]) can be described as follows. The vector ${\bbox \xi} = {\bbox \nu} \xi$ is characterised by its absolute value $\xi$ and a unit vector ${\bbox \nu}$, which corresponds to a point on a sphere ${\cal S}^2$. Due to the condition $0={\bbox \xi} {\bbox \eta} = \xi {\bbox \nu} {\bbox \eta}$, the vector ${\bbox \eta}$ belongs to the plane tangential to the sphere $ {\cal S}^2$ at the point ${\bbox \nu}$. The condition $\xi^2 - {\bbox \eta}^2 = 1$ means that over every point of the sphere there sits the upper sheet of the two-sheet hyperboloid ${\cal H}_2^2$, with its axis along the radius of the sphere. In other words, the vector ${\bbox l}$, subject to constraint (\[xieta\]), belongs to the fibre bundle with the base $ {\cal S}^2$ and the fibre ${\cal H}_2^2$. The $Q$-matrices in this geometric picture are represented by a sub-manifold of this fibre bundle: $Q^F$ belongs to the base and $Q^B$ lies on the fibre over the North Pole of ${\cal S}^2$ (see Eq. (\[sphere\])). The $\Lambda$-matrix corresponds to the bottom of ${\cal H}_2^2$ in the fibre over the North Pole. steepest descent procedure for quantum ballistics ================================================= The long time asymptote of the conductance $G(t)$ is found following the procedure outlined in the introduction. The generalised version of Eq.
(\[G\]) has the form $$G(t) = G_0 e^{-t/\tau} +\int\frac{d\omega}{2\pi}e^{-i\omega t} \int_{g^2=1} {\cal D} g_{\bbox n} P\{g_{\bbox n}\} e^{-F}, \label{Gt2}$$ where $F$ is the ballistic action (\[BigVic2\]), and the explicit form of the functional $P$ is irrelevant within exponential accuracy. The functional integral (\[Gt2\]) over $g_{\bbox n}$ is evaluated using the steepest descent method, which consists in: 1. finding a solution $g_{\bbox n}({\bbox r})$ of the saddle point equation (\[Eilen\]); 2. expressing the frequency $\omega$ through the time $t$ using the self-consistency condition $$\label{self-cons-ballisitcs} \frac{4 t\Delta}{\pi \hbar} = - \int \frac{d {\bbox r}}{V} \mathop{\rm str}\nolimits \{ \Lambda \langle g \rangle \},$$ which arises as a result of integration over $\omega$ in Eq. (\[Gt2\]); 3. substituting $g_{\bbox n}({\bbox r})$ in Eq. (\[Gt2\]) and obtaining the result with exponential accuracy. We do not specify boundary conditions for Eq. (\[Eilen\]), bearing in mind that far from the centre of the sample the space gradients become small and $g_{\bbox n}({\bbox r})$ approaches a solution of the diffusion equation (\[Us\]). It was shown in Ref. [@MK94] that for long times $ t \gg \hbar /\Delta$ the solutions of the diffusion equation (\[Us\]) can be written in the form: $$\label{Q-solution} Q = U \left( \begin{array}{cc} \sigma_3 \otimes \tau_3 & 0 \\ 0 & (\sigma_3 \cosh \theta + i \sigma_2 \sinh \theta ) \otimes \tau_3 \end{array} \right) U^{-1},$$ where the real angle $\theta$ obeys the equation $$\label{theta} D \nabla^2 \theta + i \omega \sinh \theta =0; \quad \theta|_{\mbox{lead}}=0$$ and the constant super-matrices $U$ commute with $\Lambda$. They do not enter the action and thus will be omitted. As can be seen from Eq. (\[Q-solution\]), only the bosonic block $Q^B$ has a non-trivial dependence on the coordinates. The same holds true in the ballistic region, and from now on we consider only the bosonic sector of the theory.
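That the parametrisation (\[Q-solution\]) respects $Q^2=1$ for any real $\theta$ can be checked directly on its only non-trivial $2\times 2$ block (a sketch, assuming SymPy):

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
sigma2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
sigma3 = sp.Matrix([[1, 0], [0, -1]])

# non-trivial 2x2 part of the bosonic block in Eq. (Q-solution)
M = sigma3 * sp.cosh(theta) + sp.I * sigma2 * sp.sinh(theta)

assert sp.simplify(M * M - sp.eye(2)) == sp.zeros(2, 2)  # stays on Q^2 = 1
assert M.subs(theta, 0) == sigma3                        # reduces to Lambda at theta = 0
```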
Combining the diffusion asymptote (\[Q-solution\]) with Eqs. (\[subst\],\[current\], \[param-g\]), we find that the vector ${\bbox l}_{\bbox n}$ that parametrises $g_{\bbox n}$ approaches the limit $$\label{l-diffusive-limit} l_1= -l {\bbox n} {\bbox \nabla} \theta, \quad l_2 = i \sinh \theta, \quad l_3 = \cosh \theta$$ far from the centre of the sample. One can see from Eq. (\[l-diffusive-limit\]) that $$\label{imaginary-and-real} \mbox{Im } l_1 = \mbox{Re } l_2 = \mbox{Im } l_3 = 0$$ It turns out that condition (\[imaginary-and-real\]) remains valid even in the ballistic regime. Therefore, for the solution of kinetic equation (\[Eilen\]) the complex vector ${\bbox l}_{\bbox n}({\bbox r})$ is restricted to the real three-dimensional subspace (\[imaginary-and-real\]). The additional constraint ${\bbox l}^2=1$ thus acquires the form $$\label{1-sheet-hyper} l_1^2 -|l_2|^2 + l_3^2 =1$$ and defines a one-sheet hyperboloid. Convenient coordinates on the subspace (\[imaginary-and-real\]) are associated with the cone $ l_1^2 -|l_2|^2 + l_3^2 =0$ and are described in appendix \[appendix-conica\]. Due to the symmetry (\[l-and-m-g\]), $l_1$ is an odd and $l_{2,3}$ are even functions of ${\bbox n}$. Averaging Eq. (\[Eilen\]) over directions of ${\bbox n}$ and using the parametrisation (\[param-g\]), we arrive at the continuity condition $$2 v_F \hbox{div}\langle {\bbox n} l_1 \rangle + \omega \langle l_2 \rangle= 0 \label{discontin}$$ which reduces to the conservation law for the current: $${\bbox J} \equiv \langle {\bbox n} l_1 \rangle, \qquad \mbox{div} {\bbox J} =0, \label{conserv1}$$ when the frequency is small and $\gamma \equiv i \omega \tau \rightarrow 0$.
Solution of Kinetic Equation and Long time Asymptote of conductance {#section-23} =================================================================== In Ref. [@MK94] we considered the solution of the diffusion equation (\[theta\]) either for a disordered wire (1D), or for a disc (2D), or for a droplet (3D) of radius $R$ (see Fig. \[fig-samples-geom\]). In analysing the kinetic equation (\[Eilen\]) we stick to the same geometry. In this section we find the solutions of Eq. (\[Eilen\]) in space dimensions 2 and 3, postponing the discussion of the one-dimensional case to the next section. Long times correspond to small frequencies $\omega$ ($\gamma \ll 1$). This means that the terms containing $\gamma$ in the kinetic equation can be neglected everywhere, except in the central part of the sample. It can be seen from Eq. (\[conserv1\]) that the total flux of the current ${\bbox J}$ is conserved in the outer area. Since the current ${\bbox J}$ is directed along the radius, flux conservation means that in two- and three-dimensional samples the current density decays towards the outer boundary together with the space gradients of all relevant quantities, and the solution of the kinetic equation approaches the diffusion asymptote. The situation is, however, different in the one-dimensional case, when the current and gradients do not decay and the diffusion regime is reached only outside the sample. Qualitative description ----------------------- In space dimensions 2 and 3 the qualitative behaviour of the solutions is the same for both the diffusion and the kinetic equation. We illustrate it with the diffusion equation (\[theta\]), which can be regarded as describing a chemical reaction where $\theta$ is the concentration of a reacting agent. The term $ D {\bbox \nabla}^2 \theta$ describes the propagation of the agent in a porous medium, and the rate of the agent reproduction depends on its concentration as $\gamma \sinh \theta$.
Since $\gamma \ll 1$, generation takes place only in the central part of the sample, where the value of $\theta$ is high. We refer to this central zone as the “zone of reaction”. In the outer part of the sample the agent diffuses freely (the “run-out zone”), i.e. $\theta$ is a solution of the Laplace equation ${\bbox \nabla}^2 \theta =0$. An azimuthally symmetric solution $\theta(r)$ obeying the boundary condition $\theta(R)=0$ decays like \[theta-at-large-r\] $$\begin{aligned} \theta = C \ln \frac{R}{r} &\quad& (\hbox{2D}) \\ \theta = C \left( \frac{1}{r} - \frac{1}{R} \right) &\quad& (\hbox{3D}) \end{aligned}$$ as $ r \to R$. The separation into two zones is also valid for solutions of the kinetic equation. It turns out that in the run-out zone the space gradients are small and the diffusion asymptote (\[l-diffusive-limit\]) with $\theta$ given by (\[theta-at-large-r\]) is reached. In the centre of the sample ${\bbox l}_{\bbox n}$ does not depend on ${\bbox n}$, because the solution is azimuthally symmetric. Taking into account that $l_1$ is an odd function of ${\bbox n}$ and using the constraint (\[1-sheet-hyper\]), we can specify the value of ${\bbox l}_{\bbox n}$ at $r=0$ with a single parameter $\theta_0$: $$\label{l-at-r=0} l_1(0)=0, \quad l_2(0)= i \sinh \theta_0, \quad l_3(0) = \cosh \theta_0.$$ In the next subsection we find the exact solution of the kinetic equation for $\theta_0= \ln(1/\gamma)$. Although it does not match the asymptote (\[l-diffusive-limit\]), (\[theta-at-large-r\]) at large $r$, it plays an important role. Any solution of the kinetic equation that starts at $\theta_0 < \ln(1/\gamma)$ eventually approaches the diffusion asymptote, while one that starts at $\theta_0 > \ln(1/\gamma)$ does not. Therefore this exact solution is a separatrix. Now we are ready to describe the shape of the solution that obeys the boundary conditions (\[l-at-r=0\]) and reaches the diffusion asymptote at large $r$.
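Both run-out-zone profiles (\[theta-at-large-r\]) can be verified to solve the radial Laplace equation with the boundary condition $\theta(R)=0$ (a sketch, assuming SymPy):

```python
import sympy as sp

r, R, C = sp.symbols('r R C', positive=True)

theta_2d = C * sp.log(R / r)          # run-out zone, 2D
theta_3d = C * (1 / r - 1 / R)        # run-out zone, 3D

# radial Laplacians: (1/r) d/dr (r d/dr) in 2D, (1/r^2) d/dr (r^2 d/dr) in 3D
lap_2d = sp.diff(r * sp.diff(theta_2d, r), r) / r
lap_3d = sp.diff(r**2 * sp.diff(theta_3d, r), r) / r**2

assert sp.simplify(lap_2d) == 0 and sp.simplify(lap_3d) == 0
assert theta_2d.subs(r, R) == 0 and theta_3d.subs(r, R) == 0   # theta(R) = 0
```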
If $\gamma \ll 1$, the solution starts at a value $\theta_0$ only slightly smaller than $ \ln (1/\gamma)$ and, therefore, runs close to the separatrix up to a large radius $r_* \gg l$. For even larger $r$ the deviation becomes significant and our solution crosses over to the diffusion asymptote (\[l-diffusive-limit\]) with $\theta$ given by (\[theta-at-large-r\]). The width of the cross-over region is of the order of the mean free path $l$ and is much smaller than both the reaction and run-out zones. We match the separatrix in the reaction zone with the diffusion asymptote in the run-out zone, keeping the mean value $\langle {\bbox l} \rangle$ continuous at the point $r_*$ (see Fig. \[fig-separatrix\]), thus finding both the position of the cross-over $r_*$ and the coefficient $C$ in Eq. (\[theta-at-large-r\]). Solutions --------- An azimuthally symmetric solution of the kinetic equation depends only on the radius $r$ and the angle $\phi$ between the radius and the vector ${\bbox n}$. Therefore, the space derivative in the kinetic equation has the form $$\label{nnabla-vs-r-theta} {\bbox n} {\bbox \nabla} = \cos \phi \frac{ \partial }{\partial r} - \frac{\sin \phi}{r} \frac{\partial}{\partial \phi} = \left( \frac{ \partial }{\partial s} \right)_\rho,$$ where the impact parameter $ \rho = r \sin \phi$ and the distance along a straight line trajectory $s = r \cos \phi$ are introduced (see Fig. \[fig-samples-geom\]).
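The change of variables in Eq. (\[nnabla-vs-r-theta\]) amounts to the identities $\partial r/\partial s = \cos\phi$ and $\partial \phi/\partial s = -\sin\phi/r$ at fixed $\rho$, which can be checked symbolically (an illustrative sketch, restricted to the first quadrant for simplicity):

```python
import sympy as sp

s, rho = sp.symbols('s rho', positive=True)
r = sp.sqrt(s**2 + rho**2)          # radius
phi = sp.atan(rho / s)              # angle between radius and trajectory

# d/ds at fixed impact parameter rho reproduces Eq. (nnabla-vs-r-theta):
# dr/ds = cos(phi),  dphi/ds = -sin(phi)/r
assert sp.simplify(sp.diff(r, s) - sp.cos(phi)) == 0
assert sp.simplify(sp.diff(phi, s) + sp.sin(phi) / r) == 0
```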
If the parameterisation (\[param-g\]) for the matrix $g_{\bbox n}$ is used together with the conic basis for the vector ${\bbox l}_{\bbox n}$ (see appendix \[appendix-conica\]): $$\label{l-vs-k} l_1 = k_1, \quad l_3 - i l_2 = k_+ + \gamma, \quad l_3 + i l_2 = k_- + \gamma,$$ the kinetic equation is simplified to the form (in this section we measure all distances in units of the mean free path $l$): \[Eilen-k\] $$\begin{aligned} && \frac{\partial}{\partial s} k_+ = - \langle k_+ \rangle k_1, \\ && \frac{\partial}{\partial s} k_- = \langle k_- \rangle k_1, \label{k-}\\ && k_1^2 + (k_+ + \gamma) (k_-+ \gamma)=1. \label{k-constraint}\end{aligned}$$ The condition at the origin (\[l-at-r=0\]) has the form: $$\label{k-at-r=0} k_1(0)=0, \quad k_+ (0) = e^{\theta_0} - \gamma, \quad k_- (0) = e^{-\theta_0} - \gamma.$$ The separatrix solution starts at $\theta_0= \ln ( 1/\gamma ) $ and therefore $k_-(0)=0$. Equation (\[k-\]) for $k_-$ now gives $k_-=0$ for all $r$. Taking into account the strong inequality $\gamma \ll 1$, we simplify the system (\[Eilen-k\]): $$\begin{aligned} && \frac{\partial k_+}{\partial s} = - \langle k_+ \rangle k_1, \\ && k_1^2 + \gamma k_+ =1 \label{inden2}\end{aligned}$$ and obtain a closed integro-differential equation for $k_+$: $$\label{k} \frac{\partial k_+}{\partial s} = - \langle k_+ \rangle \sqrt{1 - \gamma k_+}$$ The solution of this equation is given in appendix \[appendix-solution\]; at distances $r \ge 1$ it has the form: \[separatrix-solution\] $$\begin{aligned} \label{separatrix-solution-k_+} \gamma k_+ &&= 1 - \frac{a^2}{4} \ln^2 \cot \frac{\phi}{2} + \frac{ab}{\gamma r} \frac{(\frac{\pi}{2} - |\phi|)\ln \cot \frac{\phi}{2}}{|\sin\phi|} \\ \label{separatrix-solution-a} a &=& 2 \left \langle \ln^2 \cot \frac{\phi}{2} \right\rangle ^{-1/2} = \frac{4}{\pi} \left\{ \begin{array}{ll} 1 & \quad (2D) \\ \sqrt{3} & \quad (3D) \end{array} \right.
\\ b&=&\left\langle \frac{(\frac{\pi}{2} -|\phi|) \ln \cot \frac{\phi}{2}} {|\sin\phi|} \right\rangle^{-1} = \left\{ \begin{array}{cl} 4 \ln^2 r &\quad (2D)\\ \approx .95 &\quad (3D) \end{array} \right.\end{aligned}$$ The divergence of $k_+$ at $\phi = 0$ is cut off at angles $\phi \sim 1/r^2$. Since all integrals with $k_+$ converge, the exact form of this cut-off is relevant only for the criterion of validity of the ballistic treatment (see section VII). The first term in (\[separatrix-solution-k\_+\]) has zero average and does not depend on $r$, while the mean value $\langle k_+ \rangle = a/(\gamma r) $ decays when $r \to \infty$. Using Eq. (\[inden2\]), we find the expressions for the component $k_1$ and the current $J = \langle \cos \phi k_1 \rangle$ $$\begin{aligned} \label{separatrix-solution-k1} k_1 &=& - \frac{a}{2} \ln |\cot \frac{\phi}{2}| \\ \label{separatrix-solution-J} J &=&\frac{a}{2} \left\langle \cos \phi \ln |\cot \frac{\phi}{2}| \right\rangle = \frac{1}{\pi} \left\{ \begin{array}{cl} 2 &\quad (2D)\\ \sqrt{3} &\quad (3D) \end{array} \right.\end{aligned}$$ Note that the current $J$ does not depend on the coordinates. We are looking for a solution of Eq. (\[Eilen-k\]) that crosses over to the diffusion regime at a certain radius $r_* \gg 1$. The transition region has a width of the order of unity, and, therefore, both the mean value $\langle k_+ \rangle $ and the current $J$ change negligibly across the cross-over region. Combining Eqs. (\[l-diffusive-limit\]), (\[theta-at-large-r\]) and (\[l-vs-k\]) we obtain in the diffusion region: \[solution-at-large-r\] $$\begin{aligned} \label{k-at-large-r} \langle k_+ \rangle = \left(\frac{R}{r}\right)^{C}, \quad J = \frac{Cl }{2r} &\quad& (2D) \\ \langle k_+ \rangle = \exp \frac{C}{r}, \quad J = \frac{Cl}{r^2} &\quad& (3D).
\end{aligned}$$ Using continuity of $J$ and $\langle k_+ \rangle$ at the point $r_*$, we find the values of the parameters $ C, r_*$: $$\label{23d-parameters} \begin{array}{lll} r_* = \frac{\pi l}{4} \frac{\ln \frac{1}{\gamma}}{\ln \frac{R}{l}},& \quad C = \frac{\ln \frac{1}{\gamma}}{\ln \frac{R}{l}}& \quad (2D),\\ r_* = \frac{\pi l}{3\sqrt{3}} \ln \frac{1}{\gamma}, &\quad C =\frac{\pi l}{3\sqrt{3}} \ln^2 \frac{1}{\gamma} & \quad (3D) \end{array}$$ A sketch of the solution is given in Fig. \[fig-separatrix\]. Long-time Asymptote of Conductance ---------------------------------- We begin with the self-consistency condition (\[self-cons-ballisitcs\]). In the conic coordinates (\[l-vs-k\]) it looks as follows: $$\label{self-cons-conica} \frac{\Delta t}{ \pi \hbar} = \int \frac{d {\bbox r}}{V} \frac{ \langle k_+ - k_- \rangle +2\gamma}{2}\approx \int \frac{d {\bbox r}}{V} \frac{\langle k_+ \rangle}{2},$$ where it is taken into account that for solution (\[separatrix-solution\]) the inequality $k_- \ll k_+$ holds. The last integral can be calculated using the continuity condition (\[discontin\]), where $-i l_2= (k_+ - k_-)/2 $ can be replaced by $k_+/2$. Integrating Eq. (\[discontin\]), we express the integral (\[self-cons-conica\]) in terms of the total current $J$ through the outer boundary of the sample: $$\label{self-cons=total-flux} \int_{r=R} {\bbox J}(r) dS = \frac{\gamma}{2l} \int d {\bbox r} \langle k_+ \rangle.$$ Although the integral (\[self-cons-conica\]) contains comparable contributions from both the reaction and run-out zones, with the help of Eq. (\[self-cons=total-flux\]) it can be expressed through the current in the run-out zone only.
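The numerical constants in Eqs. (\[separatrix-solution-a\]) and (\[separatrix-solution-J\]) follow from elementary angular averages; the sketch below (assuming SciPy, and taking $\langle \cdot \rangle$ as the average over the circle in 2D and over the solid angle in 3D) reproduces them:

```python
import numpy as np
from scipy.integrate import quad

def avg2(f):   # 2D angular average; integrands here are symmetric
    return quad(f, 0, np.pi, limit=200)[0] / np.pi   # under phi -> 2 pi - phi

def avg3(f):   # average over the solid angle in 3D
    return quad(lambda p: f(p) * np.sin(p) / 2, 0, np.pi, limit=200)[0]

lncot = lambda p: np.log(np.abs(1.0 / np.tan(p / 2)))  # ln|cot(phi/2)|

a2 = 2 / np.sqrt(avg2(lambda p: lncot(p)**2))
a3 = 2 / np.sqrt(avg3(lambda p: lncot(p)**2))
J2 = a2 / 2 * avg2(lambda p: np.cos(p) * lncot(p))
J3 = a3 / 2 * avg3(lambda p: np.cos(p) * lncot(p))

assert np.isclose(a2, 4 / np.pi, rtol=1e-4)               # (separatrix-solution-a)
assert np.isclose(a3, 4 * np.sqrt(3) / np.pi, rtol=1e-4)
assert np.isclose(J2, 2 / np.pi, rtol=1e-4)               # (separatrix-solution-J)
assert np.isclose(J3, np.sqrt(3) / np.pi, rtol=1e-4)
```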
Using the asymptote (\[solution-at-large-r\]) we finally arrive at the expression for $\gamma$: \[gamma-vs-t\] $$\begin{aligned} \label{gamma-vs-t-2d} \gamma = \frac{\pi}{2} p_F l \frac{ \tau}{t} \ln \left(\frac{t}{p_F l \tau}\right), \quad (2D),\\ \label{gamma-vs-t-3d} \gamma = \frac{2\pi}{\sqrt{3}} (p_F l)^2 \frac{\tau}{t} \ln^2 \left(\frac{t}{\tau (p_F l)^2} \right), \quad (3D)\end{aligned}$$ The long time asymptote of the conductance $G(t)$ is determined by the value of the action $F$ from Eq. (\[BigVic2\]) for solution (\[separatrix-solution\])-(\[23d-parameters\]). This action is a sum of two contributions: from the run-out zone ($F_D$) and from the reaction zone ($F_b$). The action is dominated by $F_D$: $$\begin{aligned} \label{action-D} F_D &=& \frac{\pi \nu D}{2} \int_{r_*}^R d {\bbox r} \left(\frac{d \theta}{d r} \right)^2 = \nonumber \\ && \pi^2 \nu D C^2 \left\{ \begin{array}[c]{lcl} \ln ( R / r_* ) & \quad & (2D) \\ 2 / r_* & \quad & (3D) \end{array} \right.\end{aligned}$$ Substituting the values of the parameters (\[23d-parameters\]) we obtain: $$\begin{aligned} \label{final-action-2D} F_D = \frac{\pi g}{2} \frac{ \ln^2 t/(\tau g) }{ \ln R/l} &\quad & (2D)\\ \label{final-action-3D} F_D = \frac{ \pi}{9 \sqrt{3}} ( p_F l)^2 \ln^3 \frac{t}{\tau g}& \quad & (3D),\end{aligned}$$ where $g= 2\pi \hbar\nu D$ is the dimensionless conductance. The contribution from the reaction zone $F_b$ is calculated, using the ballistic action (\[BigVic2\]), in appendix \[appendix-action-1d\]: \[action-b\] $$\begin{aligned} F_b \sim g \frac{\ln (t / g \tau)}{\ln(R/l)} &\quad & (2D) \\ F_b \sim (p_F l)^2 \ln^2 \left( \frac{t}{(p_F l)^2 \tau} \right) &\quad& (3D)\end{aligned}$$ Comparing Eqs. (\[final-action-2D\]), (\[final-action-3D\]) and (\[action-b\]) we see that the action is dominated by the contribution from the diffusion zone. In two dimensions it comes from the whole diffusion zone $r_* \ll r \ll R$ because the integral in Eq. (\[action-D\]) diverges logarithmically.
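The radial integrals behind both lines of Eq. (\[action-D\]) are elementary and can be re-derived symbolically (a sketch, assuming SymPy; the measure $2\pi r\,dr$ in 2D and $4\pi r^2\,dr$ in 3D is an assumption consistent with the disc and droplet geometries):

```python
import sympy as sp

r, R, rs, C, nu, D = sp.symbols('r R r_* C nu D', positive=True)

theta_2d = C * sp.log(R / r)
theta_3d = C * (1 / r - 1 / R)

# F_D = (pi nu D / 2) * integral over the run-out zone of (d theta/d r)^2
F2 = sp.pi * nu * D / 2 * sp.integrate(sp.diff(theta_2d, r)**2 * 2 * sp.pi * r,
                                       (r, rs, R))
F3 = sp.pi * nu * D / 2 * sp.integrate(sp.diff(theta_3d, r)**2 * 4 * sp.pi * r**2,
                                       (r, rs, R))

assert sp.simplify(F2 - sp.pi**2 * nu * D * C**2 * (sp.log(R) - sp.log(rs))) == 0
# for r_* << R the 3D result reduces to pi^2 nu D C^2 * 2/r_*
assert sp.simplify(F3 - 2 * sp.pi**2 * nu * D * C**2 * (1 / rs - 1 / R)) == 0
```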
In three dimensions only the region near $r=r_*$, of width of the order of $r_*$, is important. Since $r_* \gg l$, the contribution from the crossover region, which has a width of the order of $l$, can be neglected. The long time conductance asymptote is given by $$\label{answer-23d} G(t)= \exp \left( -F_D \right)$$ with the action $F_D$ defined in Eqs. (\[final-action-2D\]) and (\[final-action-3D\]). Long-time asymptote for Conductance in Disordered Wire ====================================================== In this section we consider a thick 1D wire of length $L$ and cross-section $w$ ( $ w \ll l \ll L$) with specular boundary conditions and assume that the distribution function $g_{\bbox n}$ is uniform across the wire. For not very long times the diffusion equation (\[theta\]) is valid and has the solution $$\label{theta-at-large-r-1d} \theta= \theta_0 - |\frac{x}{\xi}|, \quad \xi= L/\ln (t \Delta).$$ The space gradient $\nabla \theta = 1/\xi $ does not depend on the coordinate $x$ and is smaller than $1/l$ as long as the time is shorter than $\hbar \Delta^{-1} \exp (L/l)$. For longer times the diffusion regime breaks down simultaneously in the whole wire. The separation into reaction and run-out zones is still valid, and the term in $\omega$ in the kinetic equation is important in the reaction zone near the centre of the wire and can be neglected elsewhere. Unlike in the 2D and 3D cases, however, the space gradients in the run-out zone do not decay towards the outer ends of the wire. We now present a solution of the kinetic equation in the run-out zone which is valid for arbitrary gradients. The distribution function $g_{\bbox n}$ depends on the coordinate $x$ along the wire and the angle $\phi$ between the axis of the wire and the direction ${\bbox n}$ of the momentum (see Fig. \[fig-samples-geom\]).
In these coordinates the kinetic equation (\[Eilen\]) takes the form: $$\label{Eilen-1d} 2 l \cos \phi \frac{\partial g}{\partial x} = \left[ \left( \gamma \Lambda - \langle g \rangle \right),g \right]$$ In the run-out zone the term in $\gamma$ is negligible and, using the conic coordinates from appendix \[appendix-conica\] in the limit $\gamma \rightarrow 0$, we rewrite Eq. (\[Eilen-1d\]): \[Eilen-conica-1d\] $$\begin{aligned} \label{Eilen-conica-1d-k} l \cos \phi \frac{\partial}{\partial x} k_\pm = \mp \langle k_\pm \rangle k_1, \\ \label{Eilen-conica-1d-const} k_1^2 + k_+ k_-=1.\end{aligned}$$ Dividing both sides of Eq. (\[Eilen-conica-1d-k\]) by $\cos \phi$ and averaging over $\phi$, we find a closed equation for $ \langle k_\pm \rangle$ with the solutions: $$\label{1d-k-pm} \langle k_\pm \rangle = q \exp ( \mp \theta),$$ where $q$ is a constant and $\theta$ obeys the equation: $$\label{1d-theta} l \frac{d \theta}{dx}= \langle \frac{k_1}{\cos \phi} \rangle.$$ In one dimension the current conservation law (\[conserv1\]) reduces to $J \equiv \langle k_1 \cos \phi \rangle = \mbox{const}$ and suggests that $k_1$ does not depend on $x$. Therefore, Eq. (\[1d-theta\]) gives $$\label{1d-theta-2} l \frac{d \theta}{dx}= \hbox{const} \equiv \theta'$$ and, substituting this back into Eq. (\[Eilen-conica-1d-k\]), we obtain the solution in the factorised form: $$\label{1d-factor} k_\pm = \beta(\phi) \langle k_\pm \rangle, \quad k_1 = \theta' \beta(\phi) \cos \phi,$$ where the function $\beta(\phi)$ is determined from Eq. (\[Eilen-conica-1d-const\]) $$\label{1d-beta} \quad \beta = [q^2 + (\theta')^2 \cos^2 \phi]^{-1/2}$$ Finally, the constants $\theta'$ and $q$ are related by the condition $$\label{1d-j0-and-q} 1 = \langle \beta \rangle = \frac{1}{2\pi} \int_0^{2\pi} \frac{d \phi}{\sqrt{q^2 + (\theta')^2 \cos^2 \phi}}$$ Using the asymptotic values of the elliptic integral in Eq.
(\[1d-j0-and-q\]), we find $$\label{1d-theta-q } q = \left\{ \begin{array}[c]{cl} 1 -\frac{\theta'^2}{4}, &\qquad \theta' \ll 1,\\ 4 \theta' \exp \Big( -\frac{\pi |\theta'|}{2} \Big) &\qquad \theta' \gg 1 \end{array} \right.$$ and obtain expressions for the current $J= \langle k_1 \cos \phi \rangle$: $$\label{1d-J} J = \left\{ \begin{array}[c]{cl} \frac{\theta'}{2}, & \quad \theta' \ll 1,\\ \frac{2}{\pi} \left\{ 1 - 4 \pi|\theta'| \exp \Big( -\pi |\theta'| \Big)\right \} & \quad \theta' \gg 1 \end{array} \right.$$ The above solution is not valid in the reaction zone in the vicinity of $x=0$, whose contribution to both the action and the self-consistency condition is negligible. The boundary condition for the ballistic problem is determined by the requirement that the distribution function matches the one in the bulk electrodes, where $g_{\bbox n}=\Lambda$. For small gradients ($\theta' \ll 1$) the angular dependence of the distribution function in the wire is weak and the above condition is fulfilled provided $\theta(\pm L/2)=0$. On the other hand, in the ballistic regime ($\theta' \gg 1$) the distribution function in the wire strongly depends on the angle $\phi$ (see Eq. (\[1d-beta\])). Thus, there is a cross-over region near the outer ends of the wire where the solution of the kinetic equation deviates from Eqs. (\[1d-k-pm\]), (\[1d-factor\]) and (\[1d-beta\]). We consider wires that are long enough that the change of $\theta$ in the cross-over region can be neglected. Therefore, the boundary condition $\theta(\pm L/2)=0$ is still valid in the ballistic regime, and the function $\theta(x)$ can be presented in the form (\[theta-at-large-r-1d\]) with $\xi=l/\theta'$ and $\theta_0= \theta' L/(2l)$.
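The asymptotes for $q$ and $J$ above can be checked against a direct numerical solution of the condition $\langle \beta \rangle = 1$ (a sketch, assuming SciPy; tolerances are loose because the analytic expressions are only leading-order):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def mean_beta(q, tp):
    # <beta> over the circle, Eq. (1d-j0-and-q); cos^2 has period pi
    f = lambda p: 1.0 / np.sqrt(q**2 + tp**2 * np.cos(p)**2)
    return quad(f, 0, np.pi, limit=200)[0] / np.pi

def q_of(tp):
    # root of <beta> = 1 fixes q for a given gradient theta' = tp
    return brentq(lambda q: mean_beta(q, tp) - 1, 1e-6, 2.0)

def current(tp):
    # J = <k_1 cos(phi)> = theta' <beta cos^2(phi)>
    q = q_of(tp)
    f = lambda p: np.cos(p)**2 / np.sqrt(q**2 + tp**2 * np.cos(p)**2)
    return tp * quad(f, 0, np.pi, limit=200)[0] / np.pi

# small gradients: q ~ 1 - theta'^2/4, J ~ theta'/2
tp = 0.1
assert abs(q_of(tp) - (1 - tp**2 / 4)) < 1e-4
assert abs(current(tp) - tp / 2) < 2e-3

# large gradients: q ~ 4 theta' exp(-pi theta'/2), J ~ 2/pi
tp = 8.0
assert abs(q_of(tp) / (4 * tp * np.exp(-np.pi * tp / 2)) - 1) < 0.1
assert abs(current(tp) - 2 / np.pi) < 1e-3
```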
Action and self-consistency condition ------------------------------------- The self-consistency condition (\[self-cons-conica\]) gives $$\frac{t\Delta}{\pi \hbar} \frac{L}{l} =\frac{1}{|\theta'|} \exp \left(\frac{ |\theta'| L}{2l}\right) \label{1d-self-cons}$$ and should be compared with the continuity equation (\[self-cons=total-flux\]) which in the 1D case reads: $$J = \frac{\gamma}{2|\theta'|} \exp \left(- \frac{ |\theta'| L}{2l}\right). \label{1d-contin}$$ Using Eq. (\[1d-J\]) for the current and Eqs. (\[1d-self-cons\]) and (\[1d-contin\]), we express $\gamma$ and $\theta'$ through $t$: \[1d-gamma-theta\] $$\begin{aligned} \label{1d-theta'} |\theta'|&=&\frac{2l}{L} \cdot \Big(\ln\frac{t\Delta}{\pi\hbar} + \ln \ln \frac{t\Delta}{\pi\hbar} \Big),\\ \label{1d-gamma} \gamma &=& \frac{2 \pi \hbar}{t\Delta} \frac{l}{L} J = \frac{\tau}{t} \left\{ \begin{array}{cl} 2 g \ln \frac{t\Delta}{\pi\hbar} & \quad \frac{t\Delta}{\pi\hbar} \ll \exp\Big(\frac{L}{l}\Big),\\ 2 N &\quad \frac{t\Delta}{\pi\hbar} \gg \exp\Big(\frac{L}{l}\Big) \end{array} \right.\end{aligned}$$ where $g= 2\pi \hbar \nu D w/ L$ is the dimensionless conductance and $N = w p_F/\pi \hbar$ is the number of transverse channels, which equals the ballistic conductance. The calculation of the ${\cal W}$-term in action (\[BigVic2\]) for the solution (\[1d-factor\]) is performed in appendix \[appendix-action-1d\] and gives: $$\label{action-W-1d} {\cal W}\{g_{\bbox n}\} = -8 L w \frac{\theta' J}{l}$$ Evaluating the other terms in the action, we get: $$F = \frac{\pi}{2 \Delta \tau}(2J \theta'+ q^2 -1). \label{action-1d}$$ Thus, the long time asymptote of the conductance is given by: $$\label{1d-answer} \begin{array}[c]{lc} G(t) \sim \exp\left\{ -g \ln^2 \left( \frac{t\Delta}{2\pi \hbar} \right) \right\} & 1 \ll \frac{t\Delta}{2\pi \hbar} \ll e^{L/l} \\ G(t) \sim \left( \frac{t \Delta}{2\pi \hbar} \right)^{-2N} & e^{L/l}\ll \frac{t\Delta}{2\pi \hbar} \ll N e^{L/l} \end{array}$$ The last inequality in Eq.
(\[1d-answer\]) ensures the validity of the semiclassical approximation (see section VII). physical picture of trapping ============================ In this section we consider the time dispersion of the conductance $G(t)$ in a purely one-dimensional wire. This problem was solved by Altshuler and Prigodin [@Altshuler-Prigodin], who obtained the long time asymptote in the form $$\label{pure-1d-conductance} G(t) \sim \exp \left( - \frac{l}{L} \ln^2 t \Delta \right),$$ which can be treated as a limiting case of the multi-channel formula for a thick wire (see table \[table-summary\]), assuming that in the one-channel case $g$ is given by $l/L$. Formula (\[pure-1d-conductance\]) can be understood as the probability of an optimal potential fluctuation that traps an electron of Fermi energy $E_f$ for a time $t$. In a weak potential $ U(x) \ll E_f$ the wave function can be presented in the form $$\label{psi-1d} \Psi(x) = \phi_+(x) e^{i p_F x} + \phi_-(x) e^{-i p_F x}$$ with the amplitudes $\phi_{\pm}(x)$ changing slowly: $\nabla \phi_\pm \ll p_F \phi$. Let us consider a quasi-stationary state obeying the open boundary conditions $$\label{psi-boundary} \phi_+(0)=\phi_-(L)=0,$$ which correspond to the outward flow of current through the ends of the wire. The lifetime of such a state is inversely proportional to the outward current $$\label{psi-lifetime} t= \frac{\int dx |\Psi|^2 }{ v_F \left( |\phi_-(0)|^2 + |\phi_+(L)|^2 \right)}.$$ The maximum delay time is achieved when the currents through both ends are equal ($|\phi_-(0)|= |\phi_+(L)|$). Fixing the normalisation by $$\label{psi-norm} |\phi_-(0)|= |\phi_+(L)| =1$$ we reduce Eq. (\[psi-lifetime\]) to the form $$\label{pis-lifetime-2} \frac{t \Delta}{\pi \hbar} = \int \frac{dx}{L} |\Psi|^2$$ which resembles the self-consistency condition (\[self-cons-diffusive\]) obtained for arbitrary dimensions.
To achieve lifetimes $t \gg \hbar/\Delta$ the wave function must grow towards the middle of the wire; assuming the growth is exponential, $ \Psi \sim \exp[(L/2 - |x|)/\xi]$, we obtain for the localisation length of the quasi-stationary state $$\label{psi-xi} \xi = \frac{L}{ \ln t \Delta}.$$ A typical random potential $\tilde{U}$ causes one-dimensional wave functions to be localised with $\xi \sim l$. A shorter localisation length $ \xi \ll l$ corresponds to lifetimes longer than $\hbar \Delta^{-1} \exp(L/l)$ and can be achieved in the potential $$\label{psi-U} U(x) = \tilde{U} + U_0 \cos (2 p_F x)$$ with the additional $2 p_F$-Fourier component having the amplitude $$\label{psi-amplitude} U_0 = \frac{2 \hbar v_F }{\xi}.$$ The probability of this potential realisation is given by the Gaussian distribution: $$\label{psi-probability} \exp \left( - \frac{\pi \nu \tau}{2} \int U(x)^2 dx \right) \sim \exp \left( - \frac{l}{L} \ln^2 t \Delta \right)$$ and coincides with (\[pure-1d-conductance\]) with the correct numerical factor in the exponent. We therefore conclude that the states with long lifetimes are locked by Bragg reflection and can be found with a probability proportional to that of a potential fluctuation with a Bragg mirror of appropriate strength. We believe that the same mechanism is responsible for nearly localised states in multi-channel wires and in samples of higher dimensions. In the multi-channel case, however, adding a single $2p_F$-Fourier harmonic cannot localise the wave function, because the random part $\tilde U$ mixes different directions of the momentum.
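The arithmetic leading from Eqs. (\[psi-xi\])-(\[psi-amplitude\]) to the exponent of (\[psi-probability\]) can be traced symbolically; the sketch below sets $\hbar=1$ and assumes the 1D relations $\nu = 1/(\pi v_F)$ and $l = v_F\tau$, which are not spelled out in the text:

```python
import sympy as sp

# Arithmetic behind Eq. (psi-probability), in units hbar = 1, with the
# assumed 1D density of states nu = 1/(pi v_F) and mean free path l = v_F tau.
vF, tau, Lw, t, Delta = sp.symbols('v_F tau L t Delta', positive=True)
nu = 1 / (sp.pi * vF)
l = vF * tau

lam = sp.log(t * Delta)
xi = Lw / lam                        # Eq. (psi-xi)
U0 = 2 * vF / xi                     # Eq. (psi-amplitude), hbar = 1

# int_0^L U0^2 cos^2(2 p_F x) dx -> U0^2 L/2 for p_F L >> 1
exponent = sp.pi * nu * tau / 2 * U0**2 * Lw / 2

assert sp.simplify(exponent - (l / Lw) * lam**2) == 0   # matches (pure-1d-conductance)
```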
To localise the wave function in a two- or three-dimensional sample, the potential fluctuation should be effective for all directions of the momentum; so we expect it to have the form $$\label{psi-multi-u} U(x) = \int d \Omega_{\bbox n} U_{\bbox n}({\bbox r}) \cos ( 2 p_F {\bbox n} {\bbox r} )$$ with the amplitude $U_{\bbox n}({\bbox r})$ depending slowly on ${\bbox n}$ and ${\bbox r}$. discussion ========== This paper continues our study, begun in Ref. [@MK94], of the long-time asymptote of the conductance $G(t)$. We use the steepest-descent approach, which enables us to obtain $g(t)$ for different ranges of time. The purpose of this section is twofold: we want to analyse the restrictions of our treatment and to compare our results with those in the literature. Let us first discuss the conductance $G(t)$ of a thick wire made from a two-dimensional strip of length $L \gg l$ and width $w < l \ll L$. The solution $g(x,\phi)$ of the kinetic equation has typical gradients (see Eq. (\[action-W-1d\])): $$\label{discussion-gradients} \frac{\partial g}{\partial x} \sim \frac{\theta'}{l} \sim \frac{\ln (t \Delta/\hbar)}{L}.$$ At long times the distribution function changes rapidly with the angle $\phi$, acquiring a sharp maximum at $\phi= \pi/2$ with the width $ \delta \phi \sim \exp (- \pi |\theta'|/4)$ (see Eqs. (\[1d-beta\])-(\[1d-J\])). The semi-classical approximation remains valid if this width is much larger than the diffraction angle $\delta \phi_Q \sim \hbar/(p_F w)$. This imposes an upper limit on the gradients: $$\label{discussion-theta'} \theta' \ll \ln (p_F w /\hbar).$$ Thus, the ballistic asymptote is valid only for times smaller than $$\label{discussion-time-Q} t_Q \sim \Delta^{-1} p_F w e^{L/l}.$$ Since the width of the wire is limited by the condition [@wide-wire] $w <l$, the gradient $\theta'$ at times $t \sim t_Q$ is still much smaller than $p_F$.
Thus, an intermediate interval of gradients arises $$\label{discussion-theta-tQ} \ln (l p_F ) \ll \theta' \ll l p_F,$$ which corresponds to the interval of delay times: $$\label{discussion-tQ} \frac{L}{l} \ln \frac{l}{\lambda} \ll \ln (t \Delta) \ll \frac{L}{\lambda}.$$ We assume that the time dispersion of the conductance in this region can be recovered by some kind of ballistic treatment with diffraction properly accounted for. A similar phenomenon restricts the range of applicability of the ballistic asymptotes in two- and three-dimensional samples. Analogously to the one-dimensional case, the distribution function $g(\phi,r)$ in the reaction zone has a sharp feature at $\phi=0$ with the width $\delta\phi=l/r$. Diffraction can be neglected when $\delta\phi \gg \lambda/l $. Therefore, the reaction-zone radius $r_*$ must obey the constraint: $$\label{discusion-2d} r_* \ll l^2/\lambda.$$ Substituting the radius of the reaction zone from Eq. (\[23d-parameters\]), we obtain the upper time limit $t$ for which the semi-classical approach is still valid: $$\label{discusion-t-2d} t \ll t_Q \sim \hbar \Delta^{-1} \exp \left( \frac{p_F l}{\hbar} \right) \left\{ \begin{array}{cl} l/R & \quad (2D) \\ (l/R)^3 &\quad (3D) \end{array} \right.$$ Similarly to the one-dimensional case, the gradients of $g$ at the time $t_Q$ are still much smaller than $p_F$, being only of the order of $1/l$. The limiting times $t_Q$ for the different geometries are summarised in table \[table-summary\]. As mentioned in the introduction, the whole field was pioneered by AKL. They added high powers of frequency and gradients of $Q$ to the diffusive $\sigma$-model and analysed the renormalization flow of the corresponding coupling constants in two dimensions. This gave the growth rate of the coefficients $C_n$ in the expansion $$\label{discussion-G(w)} G(\omega)= \sum_{n=0}^\infty C_n \omega^n.$$ The Fourier transform of Eq.
(\[discussion-G(w)\]) with the AKL asymptote for $C_n$ leads to the result presented in Table \[table-summary\]. AKL also put forward a general conjecture that the logarithmically normal asymptote is valid for all dimensions. As one can see from table \[table-summary\], there is a variety of different regimes in the time dispersion of the conductance. The AKL procedure predicts one of them and fails to describe the others. It is also difficult, using this procedure, to establish the criteria of validity of each result. Another attack on the problem has recently been carried out by Mirlin [@Mirlin2]. He used our optimal-fluctuation method for the diffusive $\sigma$-model in dimensions two and three and supplemented Eq. (\[Us\]) with somewhat arbitrary conditions near the origin. In this way he obtained the asymptotes of the conductance for 2D and 3D samples. The reason for his success lies in the fact, established in this paper, that the action for dimensions $d \ge 2$ is dominated by the contribution from the run-out zone. The solution of the ballistic problem confirms the result of Ref. [@Mirlin2] for the two-dimensional conductance. In the three-dimensional case we obtain the numerical coefficient $A$ in the expression $G(t) \sim \exp(-A \ln^3 t)$. Although the form of the density-matrix fluctuation suggested by Mirlin deviates strongly in the reaction zone from the correct one, the method of his work could be useful for qualitative estimates. It should be noted, however, that the ballistic treatment is needed to find the upper limit $t_Q$ on the delay-time interval where these results are valid. We regard the occurrence of the new range of times $t \ge t_Q$, where diffraction effects are important, as one of the most interesting results of this paper. Acknowledgement =============== We thank A. D. Mirlin for sending us his paper [@Mirlin2] prior to publication and B. I. Shklovskii for an inspiring discussion.
conic coordinates {#appendix-conica} ================= Throughout this paper we use a convenient parametrisation of the bosonic sector of the distribution function $g_{\bbox n}$. To obtain it, we introduce a basis in the space of traceless $2\times 2$ real matrices: $$\hat e_1 = \sigma_1, \qquad \hat e_{\pm}=\frac{\sigma_3 \pm i \sigma_2}{2}. \label{e}$$ The new unit vectors have the following properties: \[e-ort\] $$\begin{aligned} &&\hat e_1^2 = 1, \; \hat e_{\pm}^2 = 0, \; \left[ \hat e_{\pm} , \hat e_1 \right] = \pm 2 \hat e_{\pm}, \label{e-ort-1} \\ &&[\hat e_+ , \hat e_- ] = - \hat e_1 ,\; \{\hat e_+ , \hat e_- \}= 1. \label{e-ort-2}\end{aligned}$$ Now, instead of parametrisation (\[param-g\]) for the $g^B_{\bbox n}$, we use: $$\begin{aligned} \label{conica-param-g} && g_{\bbox n}^B = \frac{1 + \tau_3}{2} \otimes \left( l_1({\bbox n}) \hat e_1 + l_+({\bbox n}) \hat e_+ + l_-({\bbox n}) \hat e_- \right) + \nonumber \\ && + \frac{1 - \tau_3}{2} \otimes \left( l_1(-{\bbox n}) \hat e_1 + l_+(-{\bbox n}) \hat e_+ + l_-(-{\bbox n}) \hat e_- \right),\end{aligned}$$ where, due to requirement (\[imaginary-and-real\]), all components $l_1$ and $l_{\pm}$ are real. The kinetic equation can be further simplified by introducing the functions $$\label{condica-k} k_1 = l_1; \quad k_+ = l_+ - \gamma; \quad k_- = l_- - \gamma$$ The constraint $g^2=1$ now can be written as: $$\label{conica-constraint} k_1^2 + (k_+ +\gamma)(k_- + \gamma)=1,$$ and the kinetic equation (\[Eilen\]) in coordinates (\[nnabla-vs-r-theta\]) has the form: \[conica-Eilen\] $$\begin{aligned} && \frac{\partial}{\partial s} k_+ =- \langle k_+ \rangle k_1, \\ && \frac{\partial}{\partial s} k_- = \langle k_- \rangle k_1, \\ && \frac{\partial k_1}{\partial s} = -k_+ \langle k_- \rangle + \langle k_+ \rangle k_- +\gamma \langle k_+ - k_-\rangle. \label{conica-k_1}\end{aligned}$$ It is more convenient to use the constraint (\[conica-constraint\]) instead of the last equation in this set (see Eq. 
(\[Eilen-k\]) in the text). integro-differential equation for separatrix {#appendix-solution} ============================================ Solving Eq. (\[k\]) with respect to $k_+$ and considering its average value $\langle k_+ \rangle$ as a given function of radius $r$, we obtain: $$1- \gamma k_+ = \frac{\gamma^2}{4} \left( \int_0^s \langle k_+ \rangle ds'\right)^2. \label{sol}$$ By taking the average of both the right- and left-hand sides, this expression is reduced to an integral equation for $\lambda(r) = \gamma \langle k_+ \rangle$: $$\label{Int} 1 -\lambda (r) = \frac{1}{4}\left\langle \left( \int_0^{r\cos \phi} ds \lambda\left(\sqrt{r^2 \sin^2 \phi + s^2 }\right) \right)^2\right\rangle_{\phi},$$ where $$\begin{array}{cl} \langle \rangle_{\phi} = \int_0^{2\pi} \frac{d\phi}{2\pi} &\quad (2D) \\ \langle \rangle_{\phi} = \int_0^{\pi} \frac{\sin \phi d\phi}{2} &\quad (3D) \end{array}$$ The solution of Eq. (\[Int\]) has the following asymptotes: $$\label{Int2} \lambda (0) = 1; \qquad \lambda(r) = \frac{a}{r} + \frac{b}{r^2} ,\; \; \; r \gg 1,$$ where the constants $a$ and $b$ are given by Eqs. (\[separatrix-solution\]) in the text. If $r\sin \phi \gg 1$, the asymptote (\[Int2\]) can be substituted into the kernel of Eq. (\[sol\]), giving the result $$\label{B4} \gamma k_+ = 1 - \frac{a^2}{4} \ln^2 \left(\cot\frac{\phi}{2} \right) + \frac{ab}{2r}\frac{\ln \left((\frac{\pi}{2}-|\phi|) |\cot \frac{\phi}{2}|\right)}{|\sin \phi|}.$$ Expression (\[B4\]) is valid only if $r|\sin\phi|\gg 1$ and is free of singularities in this region. Calculation of Ballistic Action in the reaction zone {#appendix-action-2d} ==================================================== The contribution from the reaction zone to the action is given by Eq.
(\[BigVic2\]) and contains two terms: $$\label{appendix-2,3action} F_1 = - \frac{\pi \nu v_F}{8} {\cal W}, \quad F_2 = - \frac{\pi\nu}{8 \tau} \int \langle g \rangle^2;$$ the term with $\omega$ disappears after integration over $\omega$ in Eq. (\[Gt2\]). To calculate the ${\cal W}$ term we use Eq. (\[W-vs-U\]) and present the solution of the kinetic equation in the form $g_{\bbox n} = U \Lambda U^{-1}$. Using parametrisation (\[param-g\]) we find for the bosonic block of the $g$-matrix: $$\label{appendix-g-2,3d} g^B = \left( \begin{array}{cc} U(\phi) \sigma_z U^{-1}(\phi)& 0 \\ 0 & - U(\pi -\phi) \sigma_z U^{-1}(\pi -\phi) \end{array} \right),$$ where the $2\times2$ matrix $U$ is given by: $$\label{appendix-u-2,3d} U= \exp \left(- \frac{k_1}{\gamma} \hat e_+ \right) \exp \left( \frac{\ln \gamma}{2} \sigma_1 \right).$$ Using $$\frac{ \partial}{\partial r} U = -\frac{1}{\gamma}\frac{ \partial k_1}{\partial r} \hat e_+ U$$ we obtain from Eq. (\[W-vs-U\]): $$\label{appendix-W-2,3} {\cal W} = 8 \int d {\bbox r} \langle {\bbox n} \frac{ \partial k_1}{\partial r} \rangle = 8 \int d S {\bbox J},$$ where the integral in the last expression is taken over the boundary of the reaction zone. This gives $$\label{appendix-F1-2,3} F_1 \sim \left\{ \begin{array}{cl} g \frac{\ln (t / (g \tau))}{\ln(R/l)} &\quad (2D) \\ (p_F l)^2 \ln^2 \left( \frac{t}{(p_F l)^2 \tau} \right) &\quad (3D) \end{array} \right.$$ which is one power of the logarithm smaller than the contribution from the run-out zone. The calculation of the term $F_2$ is straightforward: $$\label{appendix-F2-2,3} F_2 = - \frac{\pi\nu}{8 \tau} a^2 \int d {\bbox r} \frac{l^2}{r^2} \sim \left\{ \begin{array}{cl} g \ln \left( \frac{\ln (t / (g \tau))}{\ln(R/l)} \right) &\; (2D) \\ (p_F l)^2 \ln \left( \frac{t}{(p_F l)^2 \tau} \right) &\; (3D) \end{array} \right.$$ Calculation of $\cal W$ term for one-dimensional solution.
{#appendix-action-1d} ========================================================== To employ the expression (\[W-vs-U\]) for the $\cal W$-term we should find a decomposition $g_{\bbox n} = U \Lambda U^{-1}$. Combining Eqs. (\[1d-k-pm\]) and (\[1d-factor\]) with the parametrisation (\[param-g\]) and the formulae from appendix \[appendix-conica\], we write the bosonic block of $g_{\bbox n}$ in the form of (\[appendix-g-2,3d\]), where the $2\times2$ matrix $U$ is given by: $$\label{appendix-U} U= \exp \left(\frac{\theta}{2} \sigma_x \right) \exp \left(- i \frac{\chi}{2} \sigma_y \right),$$ where $\chi = \mbox{ arcsin } \left( \theta' \beta(\phi) \cos \phi \right)$ does not depend on $x$. The straightforward application of Eq. (\[W-vs-U\]) gives: $$\label{appendix-W-1d} {\cal W} = -8 L w \frac{|\theta'| J}{l}$$ where the minus sign comes from the super-trace of the bosonic block. The result of averaging over ${\bbox n}$ is expressed through the current $J= \langle \cos \phi k_1 \rangle$. [11]{} B. A. Muzykantskii and D. E. Khmelnitskii, Phys. Rev. B [**51**]{}, 5480 (1995). K. B. Efetov, Advances in Physics [**32**]{}, 53 (1983). B. A. Muzykantskii and D. E. Khmelnitskii, JETP Letters [**62**]{}, 76 (1995) \[Pis'ma Zh. Eksp. Teor. Fiz. [**62**]{}, 68 (1995)\]. J. J. M. Verbaarschot, H. A. Weidenmuller, and M. R. Zirnbauer, Physics Reports [**129**]{}, 367 (1985). B. L. Altshuler, V. E. Kravtsov and I. V. Lerner, [*JETP Letters*]{} [**45**]{}, 199 (1987); [*JETP*]{} [**67**]{}, 695 (1988); in “Mesoscopic Phenomena in Solids”, edited by B. L. Altshuler, P. A. Lee and R. A. Webb, p. 449, North Holland (1991). V. I. Falko and K. B. Efetov (unpublished). A. D. Mirlin, cond-mat (unpublished). A. D. Mirlin, JETP Lett. [**62**]{} (1995), to be published; cond-mat 9508093 (unpublished). This equation has the same form as the Eilenberger equation, first introduced in the theory of superconductivity (see G. Eilenberger [*Z.
Phys.*]{} [**214**]{}, 195 (1968); K. D. Usadel, [*Phys. Rev. Lett.*]{} [**25**]{}, 507 (1970); A. I. Larkin and Yu. N. Ovchinnikov, [*JETP*]{} [**46**]{}, 155 (1977); A. Schmid, in [*Non-equilibrium Superconductivity, Phonons and Kapitza Boundaries*]{}, edited by K. E. Gray, p. 423, Plenum Press, NY (1981)). The difference is that our distribution function $g_{\bbox n} ({\bbox r})$ is a super-matrix with both commuting and anti-commuting elements. The value of the $\cal W$-term does not depend on the interpolating function; for a detailed discussion see E. Fradkin, [*Field Theory for Condensed Matter Systems*]{}, Addison-Wesley (1991). The symmetry of the solutions of the kinetic equation (\[Eilen\]) differs from the symmetry of the $g$-matrices in the functional integral (\[BigVic1\]), i.e. the integration contour should be deformed to reach the saddle point. An analogous situation occurs in the diffusive $\sigma$-model. In the text we describe the symmetries of the solutions of the kinetic equation. B. Altshuler and V. Prigodin, JETP Letters [**47**]{}, 43 (1988) \[Pis’ma Zh. Eksp. Teor. Fiz. [**47**]{}, 36 (1988)\].
If the wire is wider than $l$, the distribution function depends on the transverse coordinate and the problem should be treated as a two-dimensional one. $$\begin{array}{|c|c|c|c|c|c|} \hline & t_D \ll t \ll \hbar/\Delta & \hbar / \Delta \ll t \ll t_b & t_b \Delta & t_b \ll t \ll t_Q & t_Q \Delta \\ \hline \mbox{1D}\tablenote{strip of width $w$ and length $L$; $N= w p_F/(\pi \hbar)$ is the number of ballistic channels} & t/t_D & g \ln^2(t \Delta) & \exp (L/l) & 2 N \ln t \Delta & N \exp(L/l) \\ \mbox{2D}\tablenote{disk of radius $R$} & \pi t /t_D & 4 g \ln (t\Delta) & (L/l)^2 & \pi g \cdot \frac{\ln^2(t/g\tau)}{2 \ln (R/l)} & \exp (p_F l/\hbar) \frac{l}{R} \\ \mbox{3D}\tablenote{ball of radius $R$} & \pi t /t_D & \mbox{ none } & 1 & \frac{ \pi }{9 \sqrt{3}} (p_F l)^2 \ln^3 \left[ \frac{t}{\tau (p_F l)^2 }\right] & \exp (p_F l/\hbar) \left(\frac{l}{R}\right)^3 \\ \hline \end{array}$$
--- author: - 'I. Heywood, A. Avison$^{2}$' - 'C. J. Williams$^{3}$' title: The ALMA Observation Support Tool --- Overview ======== The Atacama Large Millimetre/submillimetre Array (ALMA)[^1] is an interferometer consisting of 66 dishes currently under construction on the Chajnantor plateau of northern Chile. Operating between $\sim$90 GHz and $\sim$1 THz, it will be the most sensitive instrument in the world at these frequencies when completed in 2012. Observations with a 16-element ALMA will begin in 2011. The Observation Support Tool (OST) provides a new method for simulating ALMA images. It is designed to be easily accessible to users who may not be interferometry experts, and is operated solely via a standard web browser. No additional software needs to be installed by the user, and in contrast to other web-based tools such as the ALMA Sensitivity Calculator[^2] no client-side processing takes place. Instead, simulation jobs are defined via a standard web form and submitted to a remote server. The server runs a custom-built script which processes the submitted jobs sequentially, making use of the CASA[^3] toolkit to perform a full simulation via the generation and imaging of a visibility set. When the simulation is complete the user receives an email containing a URL which points to a web page containing the results of the simulation and several downloadable image products. The front end makes use of an open-source Javascript form-checking library called LiveValidation[^4] which checks the simulation parameters in real time. Basic errors can thus be trapped before jobs are submitted to the server. A second level of server-side error checking is employed to test for more complex issues (e.g. sources remaining below the horizon at all times), and these error-checking routines also replicate the LiveValidation checks for users who choose to disable Javascript in their browsers.
This article provides a brief overview of the functionality of the OST by describing the web front-end and the results page. The OST is currently hosted by the UK ALMA Regional Centre at the University of Manchester Jodrell Bank Centre for Astrophysics, and can be accessed by directing a web browser to [http://almaost.jb.man.ac.uk]{}. The web interface ================= The interface is a standard HTML web form with various components of the simulation defined either by text boxes or drop-down menus as shown in Figure \[fig:front\_end\]. It is divided into five main sections, as defined by the leftmost column. The column on the right-hand side provides brief notes as to the purpose and usage of each item, however (evolving) documentation is also available from a hyperlink on the web page. The red and green markers show the LiveValidation library in action, with erroneous parameters highlighted in red. A brief summary of each section on the web interface follows. Array ----- The instrument is defined here using a single drop-down menu. The available options are two Cycle-0 configurations (compact and extended), the 12-element Atacama Compact Array (ACA) and the full 50-element ALMA. Although the full ALMA array will have numerous potential configurations there is only one option for it in this menu. The reasons for this are explained in Section \[sec:obs\_setup\]. Sky Setup {#sec:sky_setup} --------- This section determines the nature of the ‘ideal’ sky that will be used as the input model for the simulation. The OST contains a library of example sky models which may be used, and flexibility as a simulation tool is provided in this section as the user is able to upload an arbitrary sky model in FITS format. The only header parameter in the FITS file that the user must ensure is accurate is the spatial pixel scale. 
Everything else is ignored by the OST and is instead overwritten by the values defined elsewhere in the web form; however, if the brightness unit keyword is Jy/pixel or Jy/beam then this is also taken into account. Right Ascension is conspicuous by its absence next to the Declination field; it was forgone in favour of an hour-angle parameter (see Section \[sec:obs\_setup\]). The final option in this section can be used to re-scale the pixel values of the sky model by setting the brightness of the peak pixel and scaling all the other pixel values in the image array in relation to it. Observation Setup {#sec:obs_setup} ----------------- The spectral and temporal properties of the observation are defined here. The OST at present performs only single-channel simulations; however, the increased sensitivity offered by large-bandwidth pseudo-continuum observations can be simulated by simply entering a large bandwidth together with the central frequency. For spectral-line simulations the user must at present submit each channel as a separate simulation job; however, selecting a few representative channels across the model spectral-line cube is suitable for a crude line-detection test. Frequency cubes can be uploaded, but the OST will only make use of the central channel. The user will be notified on the results page if this action is taken. The required resolution parameter is only taken into account by the simulation algorithm if the user selects ‘ALMA’ in the Array section. The required resolution is used in conjunction with the central frequency to select a suitable ALMA configuration from the 28 ‘out’ layouts which are bundled with CASA. If the resolution demand falls outside the range offered by the most compact or extended arrays then one of the extreme configurations is selected and the user is notified. A major observing mode for ALMA will be mosaicking.
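A hexagonal pointing grid of the kind used for mosaics (half-beam spacing along a row, alternate rows shifted horizontally by a quarter beam and separated vertically by the beam width times $\sin 60^{\circ}$) can be sketched as follows. The function and its row/column parametrisation are illustrative, not the OST's internal code.

```python
import math

def hex_mosaic(n_rows, n_cols, pb_width):
    """Pointing centres on a hexagonal grid: half-beam spacing along a
    row, alternate rows shifted horizontally by a quarter beam and
    separated vertically by pb_width * sin(60 deg)."""
    dx = pb_width / 2.0
    dy = pb_width * math.sin(math.radians(60.0))
    points = []
    for j in range(n_rows):
        x0 = (j % 2) * pb_width / 4.0  # alternate-row horizontal offset
        for i in range(n_cols):
            points.append((x0 + i * dx, j * dy))
    return points

points = hex_mosaic(n_rows=3, n_cols=3, pb_width=1.0)  # 9 pointing centres
```

Scaling `pb_width` with frequency, as in the primary-beam formula below, then sets the physical spacing.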
In mosaic mode, the OST takes the sky area demands of the model into account, and calculates a standard hexagonal mosaic pattern[^5]. The OST will calculate the number of pointings required to cover the requested area, but it also makes the assumption that the specified on-source time is per pointing, and not divided amongst them. The other option in this section is to simulate a single pointing, whereby a pixel-wise attenuation is applied to the sky model pre-simulation, crudely simulating the primary beam response of the array. The attenuation function is a normalized Gaussian, the centre of which is placed at the central pixel of the sky model. Its full-width at half-maximum is an angular value, equal in radians to $$\Theta_{PB}~=~1.22\frac{c}{\nu D}$$ where $c$ is the speed of light in metres per second, $\nu$ is the central observing frequency in Hertz and $D$ is the dish diameter in metres. As mentioned previously, the Right Ascension parameter for the model sky is not present, and instead the user must specify a starting hour-angle value. Thus the simulation is defined in terms of when the source is observed in relation to its transit. The pointing duration is subsequently specified, and there is also the option to specify a number of visits. This allows the user to simulate cases whereby a large amount of observing time is required but the hour-angle ranges are stringent. Finally the number of polarizations can be selected. The OST does not yet perform simulations in full polarization, and this parameter merely affects the noise in the final image. As with spectral lines, however, individual polarization maps can be uploaded. This prospect also partially motivated the allowance of negative brightness values in the re-scaling option, although if a negative value is entered here then the user is warned on the results page in case it was unintentional.
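The single-pointing attenuation step can be sketched directly from the FWHM formula above. This is an illustrative implementation; the function names and the default 12 m dish diameter are assumptions, not OST internals.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def primary_beam_fwhm(freq_hz, dish_m):
    """Primary-beam FWHM in radians: Theta_PB = 1.22 c / (nu D)."""
    return 1.22 * C / (freq_hz * dish_m)

def attenuate(model, pixel_scale_rad, freq_hz, dish_m=12.0):
    """Multiply a square sky-model array by a normalised Gaussian centred
    on the central pixel, mimicking the single-pointing mode."""
    fwhm = primary_beam_fwhm(freq_hz, dish_m)
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> Gaussian sigma
    n = model.shape[0]
    y, x = np.indices(model.shape)
    r2 = ((x - n // 2) ** 2 + (y - n // 2) ** 2) * pixel_scale_rad ** 2
    return model * np.exp(-r2 / (2.0 * sigma ** 2))
```

At 100 GHz with a 12 m dish, `primary_beam_fwhm` gives about $3.05\times10^{-4}$ rad, roughly 63 arcseconds.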
Corruption {#sec:corruption} ---------- Artifacts in an interferometric image derived from a real observation arise from a variety of effects, including calibration errors, atmospheric effects and the thermal conditions of the receivers. Even in the case of a perfect observation, the latter is something that cannot be mitigated. The thermal noise determines the absolute noise ‘floor’ in an interferometric map below which sources cannot be detected. The RMS of the noise perturbation to the visibility, that is, the single complex number formed by the per-polarization, per-channel correlation of a pair of antennas, is given in units of Janskys by: $$\label{eq:rms} \sigma = \frac{2k_{B}T_{sys}}{\eta A \sqrt{\Delta \nu \Delta t}}$$ where $k_{B}$ is the Boltzmann constant, $T_{sys} = T_{rec} + T_{sky} + T_{cmb}$ is the system temperature in K, $\eta$ is the combined product of a series of efficiency terms, $A$ is the effective area of a single antenna in m$^{2}$, $\Delta \nu$ is the channel bandwidth in Hz and $\Delta t$ is the integration time per visibility in seconds. The receiver temperatures have a unique value per ALMA band and $T_{cmb}$ is set to 2.73 K. The sky temperature is derived from a model of the atmospheric transparency at the ALMA site via: $$\label{eq:tsky} T_{sky} = T_{atmos} \left(1 - \gamma\right)$$ where $T_{atmos}$ is the atmospheric temperature (assumed to be 260 K) and $\gamma$ is the transmission fraction. The single menu option in the ‘Corruption’ section relates to three levels of precipitable water vapour (PWV): 0.5, 1.5 and 2.5 mm. The atmospheric transmission fraction as a function of frequency for these three levels is shown in Figure \[fig:trans\]. The values in this plot have been derived from the transmission calculator on the Atacama Pathfinder Experiment (APEX) web site[^6].
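Equation \[eq:rms\] translates directly into code. The sketch below assembles $T_{sys}$ from its components and evaluates the per-visibility noise; the receiver temperature, sky temperature and efficiency values are placeholders for illustration, not the OST's internal numbers.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
JY = 1.0e-26        # 1 Jansky in W m^-2 Hz^-1

def visibility_rms(t_sys_k, eta, area_m2, bw_hz, t_int_s):
    """Per-visibility noise RMS in Jy: sigma = 2 k_B T_sys / (eta A sqrt(dnu dt))."""
    return 2.0 * K_B * t_sys_k / (eta * area_m2 * math.sqrt(bw_hz * t_int_s)) / JY

# T_sys = T_rec + T_sky + T_cmb; the receiver and sky temperatures below
# are placeholder values.
t_sys = 50.0 + 10.0 + 2.73
area = math.pi * (12.0 / 2.0) ** 2  # geometric area of a 12 m dish, m^2
sigma_jy = visibility_rms(t_sys, eta=0.7, area_m2=area, bw_hz=2.0e9, t_int_s=10.0)
```

With these placeholder numbers the per-visibility noise comes out at the tens-of-mJy level, which then averages down over the many visibilities in a full observation.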
Interpolative functions are fitted to these plots so that the transparency can be derived for arbitrary frequencies and the corresponding sky temperature can be calculated via Equation \[eq:tsky\] for the selected level of PWV. To account for the variation of $T_{sky}$ within large bandwidths, the values are calculated at ten points across the band and an average is taken. This value is then added to $T_{rec}$ (together with $T_{cmb}$) to form $T_{sys}$ for use in Equation \[eq:rms\]. Imaging ------- The parameters in this section affect how the simulated visibilities are Fourier transformed into a sky image, which then undergoes optional deconvolution. The imaging process for an interferometric data set does not necessarily follow a single unique path, so complete automation of this process is a challenge. How the gridded visibilities are weighted can affect the final map, with weighting schemes offering a trade-off between resolution and sensitivity. The OST offers three weighting options: natural weighting, pure uniform weighting and an intermediate Briggs (1995) weighting scheme. When imaging a genuine observation, deconvolution is often carried out interactively, allowing the user to adaptively define regions to be targeted by the CLEAN algorithm and make an educated call as to when deconvolution should be terminated. If the OST user wishes to deconvolve the simulated map then the OST will set a termination threshold according to the theoretical noise in the map. When the peak of the residual image reaches this value then deconvolution ceases. If the deconvolution process fails to converge or otherwise fails to reach the threshold then it will be terminated by means of a clean-component limit, which is presently set to 10,000. The final output image format option in this section simply determines whether the OST returns the downloadable data products in FITS format or the CASA image format.
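The band-averaging procedure just described can be sketched as follows, using simple linear interpolation in place of the fitted functions. The sample transmission values are invented for illustration and are not the APEX-derived data.

```python
import numpy as np

T_ATMOS = 260.0  # assumed atmospheric temperature, K

# Illustrative transmission-vs-frequency samples for one PWV level;
# these numbers are placeholders, NOT the APEX model values.
freqs_ghz = np.array([80.0, 100.0, 150.0, 230.0, 345.0])
transmission = np.array([0.98, 0.97, 0.93, 0.90, 0.75])

def band_averaged_t_sky(centre_ghz, bw_ghz, npts=10):
    """T_sky = T_atmos * (1 - gamma), averaged over npts points across the band."""
    nu = np.linspace(centre_ghz - bw_ghz / 2.0, centre_ghz + bw_ghz / 2.0, npts)
    gamma = np.interp(nu, freqs_ghz, transmission)  # linear stand-in for the fits
    return float(np.mean(T_ATMOS * (1.0 - gamma)))
```

At the higher, less transparent frequencies the band-averaged $T_{sky}$ rises sharply, which is what drives the corresponding increase in $T_{sys}$.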
![An example of the results page.[]{data-label="fig:results"}](Heywood3_Fig2.eps){width="\columnwidth"} The results page ================ Once the simulation has been processed the OST will send the user an email containing a link to a web page similar to that shown in Figure \[fig:results\]. The content of this page is influenced by user feedback and is subject to change at the time of writing. The results page is also divided into four sections. Sometimes an extra section will appear at the head of the page containing warning messages. These generally occur when the OST has encountered a non-fatal problem with the simulation and has had to take liberties with the simulation parameters. One example of this would be the requested bandwidth causing the frequency range to spill over a band edge, in which case the OST will truncate the frequency coverage and notify the user. Images are rendered in PNG format and are presented with both a linear pixel intensity scale (on the left) and with histogram equalization[^7] (on the right). Overview -------- A few useful parameters about the simulation are presented here, some of which are repeats of what is entered into the webform and some of which are derived (e.g. maximum elevation, resolution of final map, number of pointings). If the simulation used a non-point-source sky model then this is rendered and displayed here. CASA simdata ------------ This section provides three downloads which are designed to allow the user to easily transfer their OST simulation into the CASA simdata task. The processed sky model is offered as a download and the simdata.last file will set up simdata with the parameters used for the OST simulation by means of the CASA tget command. The pointing file contains a list of the directions for each pointing in the mosaic. Note however that as mentioned earlier the OST uses an hour angle parameter instead of Right Ascension, thus the RA of every OST simulation is forced to zero hours. 
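The histogram-equalization rendering described above can be sketched with a cumulative-distribution remapping; this is a generic implementation, not the OST's actual rendering code.

```python
import numpy as np

def histogram_equalize(img, nbins=256):
    """Remap pixel intensities so their distribution is approximately flat,
    enhancing faint structure in high-dynamic-range maps."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]  # normalised cumulative distribution, in [0, 1]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.interp(img.ravel(), centers, cdf).reshape(img.shape)

rng = np.random.default_rng(1)
img = rng.lognormal(sigma=2.0, size=(64, 64))  # a few very bright pixels
flat = histogram_equalize(img)
```

The mapping is monotonic, so the ordering of pixel brightnesses is preserved while the dynamic range is compressed.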
If the ‘single’ option is chosen for the pointing strategy then in addition to the processed sky model the sky model with primary beam attenuation applied is also returned in the chosen image format. Such features may be useful to users who wish to ignore the simulation components and simply use the OST as an online FITS file re-processing service. Data products ------------- The data products section contains solely graphical output. The most useful image here is probably the rendering of the final simulated map. For mosaics of less than 30 pointings the pointing directions are overlaid onto this image. The link to download the simulated map as either a FITS or CASA image is also here. The dirty beam (or point-spread function) of the observation is also presented. The $uv$ coverage of the simulation is displayed here. This is generated by Fourier transforming the PSF rather than opening the visibility set and extracting the $uvw$ coordinates of each measurement. The advantage of this approach is that it is much faster, and the colour scale in the $uv$ plot gives some indication as to the density of samples in a particular region of the $uv$ plane. The frequency set up is distilled into the fourth row of plots, showing the frequency range of the simulation in the context of ALMA bands 3–10, and the band in which it lies. The red atmospheric transmission curve also reflects the PWV level that was selected. All three possible transmission curves are shown in Figure \[fig:trans\] of this document. Finally this section also presents a plot of elevation against time. Scans are flagged when an antenna drops below an elevation angle of 10 degrees. Such scans are displayed in a lighter tone on the elevation plot so the user can easily see the fraction of their observation which is affected. A message will appear on the results window notifying the user of any elevation issues, and noise values calculated from the on-source time are also scaled accordingly. 
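The shortcut of recovering the $uv$ sampling density from the PSF amounts to a single Fourier transform. A minimal sketch with a toy PSF follows; the fringe frequencies are invented for illustration, not real ALMA baselines.

```python
import numpy as np

def uv_coverage_from_psf(psf):
    """Approximate the uv sampling density as the magnitude of the
    Fourier transform of the point-spread function."""
    return np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf))))

# Toy PSF: a sum of cosine fringes, as a few ideal baselines would produce.
n = 128
y, x = np.indices((n, n)) - n // 2
psf = sum(np.cos(2.0 * np.pi * (u * x + v * y) / n)
          for u, v in [(5, 0), (0, 7), (3, 4)])
density = uv_coverage_from_psf(psf)  # peaks at (u, v) = (5,0), (0,7), (3,4)
```

Each fringe in the PSF produces a conjugate pair of peaks in the transform, and the peak amplitudes indicate the density of $uv$ samples at those spatial frequencies.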
Diagnostics ----------- This section contains only a single result, which is the number of seconds that elapse between the simulation job being selected from the queue and the job being completed. Turn-around times are very favourable, although the back-end is being modified to exploit the multiple CPU cores of the server, allowing jobs in the queue to be processed in parallel rather than sequentially, as in the current implementation. Summary ======= The Observation Support Tool is a flexible imaging simulator for ALMA which is highly accessible, the software requirements it places upon its users consisting of nothing more than a standard web browser (e.g. Figure \[fig:ipod\]). The server-side software which processes the simulations makes use of the CASA toolkit, and although the OST is not a web interface to the CASA simdata task, both systems are based on this toolkit and further good agreement is ensured via careful matching of assumed parameters. The OST itself also serves as an example of the potential of using remote computing applications for radio astronomical data processing. The new generation of radio interferometers (e.g. EVLA, e-MERLIN, LOFAR and eventually MeerKAT, ASKAP and the Square Kilometre Array) requires high-end computers to perform data reduction, and the use of remotely-accessed HPC facilities is likely to become increasingly important, an approach which has already been adopted for the processing of LOFAR data, which takes place on a central cluster facility. The system is at present in a state of constant refinement as its performance is checked and user feedback is received. The system is completely open for general use and has to date processed over 1,500 simulations. The authors would like to thank the users of the OST, especially the testers who volunteered to try out an early version of the system and whose feedback and bug detections proved invaluable, in particular Eduardo Ibar, Robert Laing, François Levrier, Tom Muxlow, and Anita Richards.
We are also very grateful to Remy Indebetouw and the CASA development teams at NRAO and ESO. We thank STFC for financial support via the ALMA ARClet grant. [^1]: [http://www.almatelescope.org]{} [^2]: [http://almascience.eso.org/document-and-tools]{} [^3]: [http://casa.nrao.edu]{} [^4]: [http://www.livevalidation.com]{} [^5]: Pointings in any given row are offset by half the primary beam width, and adjacent rows are offset horizontally from one another by one-quarter of the primary beam width, and vertically by the primary beam width multiplied by $\sin(60^{\circ})$ [^6]: [http://www.apex-telescope.org/sites/chajnantor/atmosphere]{} [^7]: Histogram equalization is an image processing technique which adjusts the pixel intensity histogram of an image such that it has a flat distribution. This is particularly useful for enhancing low level structure in images where the dynamic range is governed by the presence of a few very bright pixels.
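The histogram equalization technique described in the final footnote can be sketched in a few lines of NumPy. This is an illustrative sketch only (the function name is ours), not the OST's actual implementation:

```python
import numpy as np

def histogram_equalize(image, nbins=256):
    """Remap pixel intensities through their empirical CDF so that the
    output histogram is approximately flat, enhancing faint structure."""
    flat = image.ravel()
    hist, edges = np.histogram(flat, bins=nbins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                # normalise CDF to [0, 1]
    centers = 0.5 * (edges[:-1] + edges[1:])      # bin centres for interpolation
    return np.interp(flat, centers, cdf).reshape(image.shape)
```

Because each pixel is mapped through the cumulative distribution of the image's own intensities, a map whose dynamic range is dominated by a few bright pixels has its low-level structure stretched across the display range.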
--- abstract: 'The conditions of quantum-classical correspondence for a system of two interacting spins are investigated. Differences between quantum expectation values and classical Liouville averages are examined for both regular and chaotic dynamics well beyond the short-time regime of narrow states. We find that quantum-classical differences initially grow exponentially with a characteristic exponent consistently larger than the largest Lyapunov exponent. We provide numerical evidence that the time of the break between the quantum and classical predictions scales as log(${\mathcal J}/ \hbar$), where ${\mathcal J}$ is a characteristic system action. However, this log break-time rule applies only while the quantum-classical deviations are smaller than ${\mathcal O}(\hbar)$. We find that the quantum observables remain well approximated by classical Liouville averages over long times even for the chaotic motions of a few degree-of-freedom system. To obtain this correspondence it is not necessary to introduce the decoherence effects of a many degree-of-freedom environment.' address: 'Physics Department, Simon Fraser University, Burnaby, British Columbia, Canada V5A 1S6' author: - 'J. Emerson and L.E. Ballentine' title: 'Characteristics of Quantum-Classical Correspondence for Two Interacting Spins' --- Introduction {#sec1} ============ There is considerable interest in the interface between quantum and classical mechanics and the conditions that lead to the emergence of classical behaviour. In order to characterize these conditions, it is important to differentiate two distinct regimes of quantum-classical correspondence [@Ball94]: (i) Ehrenfest correspondence, in which the centroid of the wave packet approximately follows a classical trajectory. (ii) Liouville correspondence, in which the quantum probability distributions are in approximate agreement with those of an appropriately constructed classical ensemble satisfying Liouville’s equation. 
Regime (i) is relevant only when the width of the quantum state is small compared to the dimensions of the system; if the initial state is not narrow, this regime may be absent. Regime (ii), which generally includes (i), applies to a much broader class of states, and this regime of correspondence may persist well after the Ehrenfest correspondence has broken down. The distinction between regimes (i) and (ii) has not always been made clear in the literature, though the conditions that delimit these two regimes, and in particular their scaling with system parameters, may be quite different. The theoretical study of quantum chaos has raised the question of whether the quantum-classical break occurs differently in chaotic states, in states of regular motion, and in mixed phase-space systems. This is well understood only in the case of regime (i). There it is well-known [@BZ78; @Haake87; @Chi88] that the time for a minimum-uncertainty wave packet to expand beyond the Ehrenfest regime scales as $\log({\mathcal J} / \hbar )$ for chaotic states, and as a power of ${\mathcal J} / \hbar $ for regular states, where ${\mathcal J}$ denotes a characteristic system action. The breakdown of quantum-classical correspondence, in the case of regime (ii), is less well understood, though it has been argued that this regime may also be delimited by a $\log({\mathcal J}/ \hbar)$ break-time in classically chaotic states [@ZP94; @Zurek98a]. Some numerical evidence in support of this conjecture has been reported in a study of the kicked rotor in the [*anomalous diffusion*]{} regime [@RBWG95]. (On the other hand, in the regime of [*quantum localization*]{}, the break-time for the kicked rotor seems to scale as $({\mathcal J}/ \hbar)^2$ [@Haake91].) Since the $\log({\mathcal J}/ \hbar)$ time scale is rather short, it has been suggested that certain macroscopic objects would be predicted to exhibit non-classical behaviour on observable time scales [@ZP95a; @Zurek98b].
These results highlight the importance of investigating the characteristics of quantum-classical correspondence in more detail. In this paper we study the classical and quantum dynamics of two interacting spins. This model is convenient because the Hilbert space of the quantum system is finite-dimensional, and hence tractable for computations. Spin models have been useful in the past for exploring classical and quantum chaos [@Haake87; @FP83; @B91a; @B91b; @B93; @RR98] and our model belongs to a class of spin models which show promise of experimental realization in the near future [@Mil99]. The classical limit is approached by taking the magnitude of both spins to be very large relative to $\hbar$, while keeping their ratio fixed. For our model a characteristic system action is given by ${\mathcal J} \simeq \hbar l$, where $l$ is a quantum number, and the classical limit is simply the limit of large quantum numbers, [*i.e.*]{} the limit $l \rightarrow \infty$. In the case of the chaotic dynamics for our model, we first show that the widths of both the quantum and classical states grow exponentially at a rate given approximately by the largest Lyapunov exponent (until saturation at the system dimension). We then show that the initially small quantum-classical differences also grow at an exponential rate, with an exponent $\lambda_{qc}$ that is independent of the quantum numbers and at least twice as large as the largest Lyapunov exponent. We demonstrate how this exponential growth of differences leads to a log break-time rule, $t_b \simeq \lambda_{qc}^{-1} \ln( l p / \hbar )$, delimiting the regime of Liouville correspondence. The factor $p$, measured in units of $\hbar$, is some preset tolerance that defines a [*break*]{} between the quantum and classical expectation values. However, we also show that this logarithmic rule holds [*only if*]{} the tolerance $p$ for quantum-classical differences is chosen extremely small, in particular $p < {\mathcal O}(\hbar)$. 
For larger values of the tolerance, the break-time does not occur on this log time-scale and may not occur until the recurrence time. In this sense, log break-time rules describing Liouville correspondence are not robust. These results demonstrate that, for chaotic states in the classical limit, quantum observables are described approximately by Liouville ensemble averages well beyond the Ehrenfest time-scale, after which both quantum and classical states have relaxed towards equilibrium distributions. This demonstration of correspondence is obtained for a few degree-of-freedom quantum system of coupled spins that is described by a pure state and subject only to unitary evolution. This paper is organised as follows. In section II we describe the quantum and classical versions of our model. Since the model is novel we examine the behaviours of the classical dynamics in some detail. In section III we define the initial quantum states, which are SU(2) coherent states, and then define a corresponding classical density on the 2-sphere which is a good analog for these states. We show in the Appendix that a perfect match is impossible: no distribution on ${\mathcal S}^2$ can reproduce the moments of the SU(2) coherent states exactly. In section IV we describe our numerical techniques. In section V we examine the quantum dynamics in regimes of classically chaotic and regular behaviour and demonstrate the close quantitative correspondence with the Liouville dynamics that persists well after the Ehrenfest break-time. In section VI we characterize the growth of quantum-classical differences in the time-domain. In section VII we characterize the scaling of the break-time for small quantum-classical differences and also examine the scaling of the maximum quantum-classical differences in the classical limit. 
The Model {#sec2} ========= We consider the quantum and classical dynamics generated by a non-integrable model of two interacting spins, $$H = a (S_z + L_z) + c S_x L_x \sum_{n = -\infty}^{\infty}\delta(t-n) \label{eqn:ham}$$ where ${\bf S} = (S_x,S_y,S_z)$ and ${\bf L} = (L_x,L_y,L_z)$. The first two terms in (\[eqn:ham\]) correspond to simple rotation of both spins about the $z$-axis. The sum over coupling terms describes an infinite sequence of $\delta$-function interactions at times $t=n$ for integer $n$. Each interaction term corresponds to an impulsive rotation of each spin about the $x$-axis by an angle proportional to the $x$-component of the other spin. The Quantum Dynamics -------------------- To obtain the quantum dynamics we interpret the Cartesian components of the spins as operators satisfying the usual angular momentum commutation relations, $$\begin{aligned} & [ S_i,S_j ] = & i \epsilon_{ijk} S_k \\ & [ L_i,L_j ] = & i \epsilon_{ijk} L_k \\ & [ J_i,J_j ] = & i \epsilon_{ijk} J_k .\end{aligned}$$ In the above we have set $\hbar =1$ and introduced the total angular momentum vector ${\bf J} = {\bf S} + {\bf L}$. The Hamiltonian (\[eqn:ham\]) possesses kinematic constants of the motion, $ [ {\bf S}^2,H] = 0$ and $[ {\bf L}^2,H] =0 $, and the total state vector $|\psi\rangle$ can be represented in a finite Hilbert space of dimension $ (2s+1) \times (2l+1)$. This space is spanned by the orthonormal vectors $|s,m_s \rangle \otimes |l,m_l \rangle$ where $m_s \in \{ s, s-1, \dots, -s \}$ and $m_l \in \{ l, l-1, \dots, -l \}$.
These are the joint eigenvectors of the four spin operators $$\begin{aligned} \label{eqn:basis} {\bf S}^2 |s, l, m_s,m_l \rangle & =& s(s+1) | s,l,m_s,m_l \rangle \nonumber \\ S_z | s, l,m_s,m_l \rangle & =& m_s | s,l,m_s,m_l \rangle \\ {\bf L}^2 | s,l,m_s,m_l \rangle &= & l(l+1) | s,l,m_s,m_l \rangle \nonumber \\ L_z | s,l,m_s,m_l \rangle &= & m_l | s,l,m_s,m_l \rangle \nonumber .\end{aligned}$$ The periodic sequence of interactions introduced by the $\delta$-function produces a quantum mapping. The time-evolution for a single iteration, from just before a kick to just before the next, is produced by the unitary transformation, $$\label{eqn:qmmap} | \psi(n+1) \rangle = F \; | \psi(n) \rangle,$$ where $F$ is the single-step Floquet operator, $$\label{eqn:floquet} F = \exp \left[ - i a (S_z + L_z) \right] \exp \left[ - i c S_x L_x \right].$$ Since $a$ is a rotation angle, its range is $2\pi$ radians. The quantum dynamics are thus specified by two parameters, $a$ and $c$, and two quantum numbers, $s$ and $l$. An explicit representation of the single-step Floquet operator can be obtained in the basis (\[eqn:basis\]) by first re-expressing the interaction operator in (\[eqn:floquet\]) in terms of rotation operators, $$\begin{aligned} \exp \left[ - i c S_x \otimes L_x \right] & = & [R^{(s)}(\theta,\phi) \otimes R^{(l)}(\theta,\phi)] \; \exp \left[ - i c S_z \otimes L_z \right] \nonumber \\ & & \times [R^{(s)}(\theta,\phi) \otimes R^{(l)}(\theta,\phi)]^{-1}, \end{aligned}$$ using polar angle $\theta=\pi/2$ and azimuthal angle $\phi=0$. Then the only non-diagonal terms arise in the expressions for the rotation matrices, which take the form, $$\langle j,m'| R^{(j)}(\theta,\phi) | j,m \rangle = \exp(-i m' \phi ) d^{(j)}_{m',m}(\theta).$$ The matrix elements, $$d^{(j)}_{m',m}(\theta) = \langle j,m'| \exp( - i \theta J_y ) | j,m \rangle$$ are given explicitly by Wigner’s formula [@sakurai].
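For small quantum numbers the Floquet operator (\[eqn:floquet\]) can also be built by brute force, diagonalizing each Hermitian generator directly rather than using the rotation-operator decomposition. The NumPy sketch below is our own illustration (with $\hbar = 1$), not the recursion-based method the authors employ:

```python
import numpy as np

def spin_ops(j):
    """Jx and Jz in the |j,m> basis with m = j, j-1, ..., -j (hbar = 1)."""
    m = np.arange(j, -j - 1, -1)
    Jz = np.diag(m)
    # <j,m+1|J+|j,m> = sqrt(j(j+1) - m(m+1)); basis is ordered by decreasing m
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    Jx = 0.5 * (jp + jp.conj().T)
    return Jx, Jz

def expmi(H, t):
    """exp(-i t H) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

def floquet(a, c, s, l):
    """Single-step Floquet operator F = exp[-ia(Sz+Lz)] exp[-ic Sx Lx]."""
    Sx, Sz = spin_ops(s)
    Lx, Lz = spin_ops(l)
    Is, Il = np.eye(Sx.shape[0]), np.eye(Lx.shape[0])
    rot = expmi(np.kron(Sz, Il) + np.kron(Is, Lz), a)
    kick = expmi(np.kron(Sx, Lx), c)
    return rot @ kick
```

The full eigendecomposition costs ${\mathcal O}\big((2s+1)^3(2l+1)^3\big)$ operations, which is why a construction built from rotation matrices is preferable at the large quantum numbers studied here.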
We are interested in studying the different time-domain characteristics of quantum observables when the corresponding classical system exhibits either regular or chaotic dynamics. In order to compare quantum systems with different quantum numbers it is convenient to normalize subsystem observables by the subsystem magnitude $\sqrt{\langle {\bf L}^2\rangle} = \sqrt{l(l+1)}$. We denote such normalized observables with a tilde, where $$\langle \tilde{L}_z (n) \rangle = { \langle \psi(n)| L_z |\psi(n) \rangle \over \sqrt{l(l+1)} }$$ and the normalized variance at time $n$ is defined as, $$\Delta {\tilde {\bf L} }^2 (n) = {\langle {\bf L}^2 \rangle - \langle {\bf L }(n) \rangle^2 \over l(l+1) } .$$ We are also interested in evaluating the properties of the quantum probability distributions. The probability distribution corresponding to the observable $L_z$ is given by the trace, $$P_z(m_l) = {\mathrm Tr} \left[ \rho^{(l)}(n) | l, m_l \rangle \langle l, m_l | \right] = \langle l,m_l | \rho^{(l)}(n) | l,m_l\rangle ,$$ where $\rho^{(l)}(n)= {\mathrm Tr}^{(s)} \left[ \; | \psi(n) \rangle \langle \psi(n) | \; \right]$ is the reduced state operator for the spin ${\bf L}$ at time $n$ and ${\mathrm Tr}^{(s)}$ denotes a partial trace over the factor space corresponding to the spin ${\bf S}$. Classical Map ------------- For the Hamiltonian (\[eqn:ham\]) the corresponding classical equations of motion are obtained by interpreting the angular momentum components as dynamical variables satisfying, $$\begin{aligned} & \{ S_i,S_j \} = & \epsilon_{ijk} S_k \\ & \{ L_i,L_j \} = & \epsilon_{ijk} L_k \\ & \{ J_i,J_j \} = & \epsilon_{ijk} J_k ,\end{aligned}$$ with $\{ \cdot,\cdot \}$ denoting the Poisson bracket.
The periodic $\delta$-function in the coupling term can be used to define surfaces at $t=n$, for integer $n$, on which the time-evolution reduces to a stroboscopic mapping, $$\begin{aligned} \label{eqn:map} \tilde{S}_x^{n+1} & = & \tilde{S}_x^n \cos( a) - \left[ \tilde{S}_y^n \cos( \gamma r \tilde{L}_x^n ) - \tilde{S}_z^n \sin( \gamma r \tilde{L}_x^n)\right] \sin( a), \nonumber \\ \tilde{S}_y^{n+1} & = & \left[ \tilde{S}_y^n \cos( \gamma r \tilde{L}_x^n) - \tilde{S}_z^n \sin( \gamma r \tilde{L}_x^n ) \right] \cos( a) + \tilde{S}_x^n \sin( a), \nonumber \\ \tilde{S}_z^{n+1} & = & \tilde{S}_z^n \cos( \gamma r \tilde{L}_x^n) + \tilde{S}_y^n \sin( \gamma r \tilde{L}_x^n), \\ \tilde{L}_x^{n+1} & = & \tilde{L}_x^n \cos( a) - \left[\tilde{L}_y^n\cos(\gamma \tilde{S}_x^n) - \tilde{L}_z^n \sin(\gamma\tilde{S}_x^n )\right]\sin(a), \nonumber \\ \tilde{L}_y^{n+1} & = & \left[ \tilde{L}_y^n \cos( \gamma \tilde{S}_x^n) - \tilde{L}_z^n \sin( \gamma \tilde{S}_x^n) \right] \cos( a) + \tilde{L}_x^n \sin( a), \nonumber \\ \tilde{L}_z^{n+1} & = & \tilde{L}_z^n \cos( \gamma \tilde{S}_x^n )+ \tilde{L}_y^n \sin( \gamma \tilde{S}_x^n), \nonumber \end{aligned}$$ where ${\tilde {\bf L}} = {\bf L} / |{\bf L}|$ , ${\tilde {\bf S}} = {\bf S} / |{\bf S}|$ and we have introduced the parameters $ \gamma = c |{\bf S}| $ and $ r = | {\bf L}| / | {\bf S}| $. The mapping equations (\[eqn:map\]) describe the time-evolution of (\[eqn:ham\]) from just before one kick to just before the next. Since the magnitudes of both spins are conserved, $ \{ {\bf S}^2,H \} = \{ {\bf L}^2,H \} =0$, the motion is actually confined to the four-dimensional manifold ${\mathcal P} ={\mathcal S}^2 \times {\mathcal S}^2$, which corresponds to the surfaces of two spheres. This is manifest when the mapping (\[eqn:map\]) is expressed in terms of the four [*canonical*]{} coordinates ${\bf x} = (S_z, \phi_s, L_z , \phi_l )$, where $\phi_s = \tan^{-1} (S_y /S_x)$ and $\phi_l = \tan^{-1}(L_y/L_x)$.
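One iteration of the map (\[eqn:map\]) is just an $x$-axis rotation of each unit spin by an angle set by the other spin's $x$-component, followed by a common $z$-axis rotation by $a$. A minimal Python sketch (our own notation):

```python
import numpy as np

def rot_x(v, angle):
    """Rotate the 3-vector v about the x-axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([v[0], c * v[1] - s * v[2], s * v[1] + c * v[2]])

def rot_z(v, angle):
    """Rotate the 3-vector v about the z-axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1], v[2]])

def classical_step(S, L, a, gamma, r):
    """One kick-plus-rotation step for the unit spins S and L.
    Both kicks use the pre-kick x-components, as in the map."""
    S1 = rot_x(S, gamma * r * L[0])   # kick on S driven by L_x
    L1 = rot_x(L, gamma * S[0])       # kick on L driven by S_x
    return rot_z(S1, a), rot_z(L1, a)
```

Since every step is a composition of rotations, both spin magnitudes are conserved to machine precision, reflecting the confinement of the motion to ${\mathcal S}^2 \times {\mathcal S}^2$.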
We will refer to the mapping (\[eqn:map\]) in canonical form using the shorthand notation ${\bf x}^{n+1} = {\bf F}({\bf x}^n)$. It is also useful to introduce a complete set of spherical coordinates $ \vec{\theta} = (\theta_s,\phi_s,\theta_l,\phi_l) $ where $\theta_s = \cos^{-1} (S_z / |{\bf S}|) $ and $\theta_l = \cos^{-1} (L_z / |{\bf L}|) $. The classical flow (\[eqn:map\]) on the reduced surface ${\mathcal P}$ still has a rather large parameter space; the dynamics are determined from three independent dimensionless parameters: $a \in [0,2\pi)$, $\gamma \in (-\infty,\infty)$, and $r \ge 1$. The first of these, $a$, controls the angle of free-field rotation about the $z$-axis. The parameter $\gamma= c |{ \bf S }|$ is a dimensionless coupling strength and $r = |{\bf L}|/ |{ \bf S }| $ corresponds to the relative magnitude of the two spins. We are particularly interested in the effect of increasing the coupling strength $\gamma$ for different fixed values of $r$. In Fig. \[regimes\] we plot the dependence of the classical behaviour on these two parameters for the case $a=5$, which produces typical results. The data in this figure was generated by randomly sampling initial conditions on ${\mathcal P}$, using the canonical measure, $$\label{eqn:measure} d \mu( {\bf x} ) = d \tilde{S}_z d \phi_s d \tilde{L}_z d \phi_l ,$$ and then calculating the largest Lyapunov exponent associated with each trajectory. Open circles correspond to regimes where at least $99\%$ of the initial conditions were found to exhibit regular behaviour and crosses correspond to regimes where at least $99\%$ of these randomly sampled initial conditions were found to exhibit chaotic behaviour. Circles with crosses through them (the superposition of both symbols) correspond to regimes with a mixed phase space. 
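The classification in Fig. \[regimes\] rests on estimating the largest Lyapunov exponent of sampled trajectories. A generic two-trajectory (Benettin-style) estimator for a map can be sketched as follows; this is illustrative code of ours, not the authors' exact procedure:

```python
import numpy as np

def lyapunov_estimate(step, x0, d0=1e-8, n_steps=5000, seed=0):
    """Estimate the largest Lyapunov exponent of the map `step` by following
    a fiducial trajectory and a perturbed copy, renormalising the separation
    back to d0 after every iteration and averaging the log stretch factors."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    pert = rng.normal(size=x.shape)
    y = x + d0 * pert / np.linalg.norm(pert)
    log_sum = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = np.linalg.norm(y - x)
        log_sum += np.log(d / d0)
        y = x + d0 * (y - x) / d      # rescale separation back to d0
    return log_sum / n_steps
```

Applied, for instance, to the fully chaotic logistic map $x \mapsto 4x(1-x)$, the average converges to the known exponent $\ln 2$.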
For the case $a=5$ and with $r$ held constant, the scaled coupling strength $\gamma$ plays the role of a perturbation parameter: the classical behaviour varies from regular, to mixed, to predominantly chaotic as $|\gamma|$ is increased from zero. The fixed points of the classical map (\[eqn:map\]) provide useful information about the parameter dependence of the classical behaviour and, more importantly, in the case of mixed regimes, help locate the zones of regular behaviour in the 4-dimensional phase space. We find it sufficient to consider only the four trivial (parameter-independent) fixed points which lie at the poles along the $z$-axis: two of these points correspond to parallel spins, $(S_z,L_z) = \pm (|{\bf S}|,|{\bf L}|)$, and the remaining two points correspond to anti-parallel spins, $(S_z,L_z) = (\pm |{\bf S}|, \mp|{\bf L}|)$. The stability around these fixed points can be determined from the eigenvalues of the tangent map matrix, ${\bf M} = \partial {\bf F} / \partial {\bf x}$, where all derivatives are evaluated at the fixed point of interest. (It is easiest to derive $M$ using the six [*non-canonical*]{} mapping equations (\[eqn:map\]) since the tangent map for the [*canonical*]{} mapping equations exhibits a coordinate system singularity at these fixed points.) The eigenvalues corresponding to the four trivial fixed points are obtained from the characteristic equation, $$[ \xi^2 - 2 \xi \cos a + 1 ]^2 \pm \xi^2 \gamma^2 r \sin^2 a = 0,$$ with the minus (plus) sign corresponding to the parallel (anti-parallel) cases and we have suppressed the trivial factor $(1 -\xi)^2$ which arises since the six equations (\[eqn:map\]) are not independent. 
For the parallel fixed points we have the four eigenvalues, $$\begin{aligned} \label{eqn:parallel} \xi_{1,2}^P & = & \cos a \pm {1 \over 2} \sqrt{r \gamma^2 \sin^2 a} + {1 \over 2} \sqrt{ \pm 4 \cos a \sqrt{ \gamma^2 r \sin^2 a} - \sin^2 a (4 - \gamma^2 r) }, \nonumber \\ \xi_{3,4}^P & = & \cos a \pm {1 \over 2} \sqrt{r \gamma^2 \sin^2 a} - {1 \over 2} \sqrt{ \pm 4 \cos a \sqrt{ \gamma^2 r \sin^2 a} - \sin^2 a (4 - \gamma^2 r) }, \end{aligned}$$ and the eigenvalues for the anti-parallel cases, $\xi^{AP}$, are obtained from (\[eqn:parallel\]) through the substitution $r \rightarrow -r $. A fixed point becomes unstable if and only if $|\xi| > 1$ for at least one of the four eigenvalues. ### Mixed Phase Space: $\gamma = 1.215$ We are particularly interested in the behaviour of this model when the two spins are comparable in magnitude. Choosing the value $r=1.1$ (with $a=5$ as before), we determined by numerical evaluation that the anti-parallel fixed points are unstable for $|\gamma| > 0$. In the case of the parallel fixed points, all four eigenvalues remain on the unit circle, $|\xi^{P}| = 1$, for $|\gamma| < 1.42$. This stability condition guarantees the presence of regular islands about the parallel fixed points [@LL]. In Fig. \[r1.1.ric2\] we plot the trajectory corresponding to the parameters $a=5$, $r=1.1$, $\gamma=1.215$ and with initial condition $\vec{\theta}(0) =(5^o,5^o,5^o,5^o)$ which locates the trajectory near a stable fixed point of a mixed phase space (see Fig. \[regimes\].) This trajectory clearly exhibits a periodic pattern which we have confirmed to be regular by computing the associated Lyapunov exponent ($\lambda_L=0$). In contrast, the trajectory plotted in Fig. \[r1.1.cic2\] is launched with the same parameters but with initial condition $\vec{\theta}(0) = (20^o,40^o,160^o,130^o)$, which is close to one of the unstable anti-parallel fixed points. 
This trajectory explores a much larger portion of the surface of the two spheres in a seemingly random manner. As expected, a computation of the largest associated Lyapunov exponent yields a positive number ($\lambda_L = 0.04$). ### Global Chaos: $\gamma = 2.835$ If we increase the coupling strength to the value $\gamma =2.835 $, with $a=5$ and $r=1.1$ as before, then all four trivial fixed points become unstable. By randomly sampling ${\mathcal P}$ with $3 \times 10^4$ initial conditions we find that less than $0.1$% of the kinematically accessible surface ${\mathcal P}$ is covered with regular islands (see Fig. \[regimes\]). This set of parameters produces a connected chaotic zone with largest Lyapunov exponent $\lambda_L=0.45$. We will refer to this type of regime as one of ‘global chaos’ although the reader should note that our usage of this expression differs slightly from that in [@LL]. ### The Limit $r \gg 1$ Another interesting limit of our model arises when one of the spins is much larger than the other, $r \gg 1$. We expect that in this limit the larger spin (${\bf L}$) will act as a source of essentially external ‘driving’ for the smaller spin (${\bf S}$). Referring to the coupling terms in the mapping (\[eqn:map\]), the ‘driving’ strength, or perturbation upon ${\bf S}$ from ${\bf L}$, is determined from the product $\gamma r = c |{\bf L}|$, which can be quite large, whereas the ‘back-reaction’ strength, or perturbation upon ${\bf L}$ from ${\bf S}$, is governed only by the scaled coupling strength $\gamma = c|{\bf S}|$, which can be quite small. It is interesting to examine whether a dynamical regime exists where the larger system might approach regular behaviour while the smaller ‘driven’ system is still subject to chaotic motion. In Fig. 
\[r100.cic3\] we plot a chaotic trajectory for $r=100$ with initial condition $\vec{\theta}(0) = (27^o,27^o,27^o,27^o)$ which is located in a chaotic zone ($\lambda_L= 0.026$) of a mixed phase space (with $a=5$ and $\gamma=0.06$). Although the small spin wanders chaotically over a large portion of its kinematically accessible shell ${\mathcal S}^2$, the motion of the large spin remains confined to a ‘narrow’ band. Although the band is narrow relative to the large spin’s length, it is not small relative to the smaller spin’s length. The trajectories are both plotted on the unit sphere, so the effective area explored by the large spin (relative to the effective area covered by the small spin) scales in proportion to $r^2$. The Liouville Dynamics ---------------------- We are interested in comparing the quantum dynamics generated by (\[eqn:qmmap\]) with the corresponding Liouville dynamics of a classical distribution. The time-evolution of a Liouville density is generated by the partial differential equation, $$\label{eqn:liouville} { \partial \rho_c({\bf x},t) \over \partial t } = - \{ \rho_c , H \},$$ where $H$ stands for the Hamiltonian (\[eqn:ham\]) and ${\bf x} = (S_z, \phi_s,L_z,\phi_l)$. The solution to (\[eqn:liouville\]) can be expressed in the compact form, $$\label{eqn:soln} \rho_c({\bf x},t) = \int_{\mathcal P} {\mathrm d} \mu({\bf y}) \; \delta({\bf x} - {\bf x}(t,{\bf y})) \; \rho_c({\bf y},0),$$ with measure $d \mu({\bf y})$ given by (\[eqn:measure\]) and each time-dependent function ${\bf x}(t,{\bf y}) \in {\mathcal P}$ is a solution of the equations of motion for (\[eqn:ham\]) with initial condition ${\bf y} \in {\mathcal P}$. This integral solution (\[eqn:soln\]) simply expresses that Liouville’s equation (\[eqn:liouville\]) describes the dynamics of a classical density $\rho_c({\bf x},t)$ of points evolving in phase space under the Hamiltonian flow.
We exploit this fact to numerically solve (\[eqn:liouville\]) by randomly generating initial conditions consistent with an initial phase space distribution $\rho_c({\bf x},0)$ and then time-evolving each of these initial conditions using the equations of motion (\[eqn:map\]). We then calculate the ensemble averages of dynamical variables, $$\langle \tilde{L}_z(n) \rangle_c = \int_{\mathcal P} {\mathrm d} \mu({\bf x}) { L_z \over |{\bf L}| } \rho_c({\bf x},n). \label{eqn:Lave}$$ by summing over this distribution of trajectories at each time step. Correspondence Between Quantum and Classical Models --------------------------------------------------- For a quantum system specified by the four numbers $\{a,c,s,l\}$, the corresponding classical parameters $\{a,\gamma,r\}$ are determined if we associate the magnitudes of the classical angular momenta with the quantum spin magnitudes, $$\begin{aligned} |{\bf S}|_c & = & \sqrt{s(s+1)} \nonumber \\ | {\bf L}|_c & = & \sqrt{l(l+1)}.\end{aligned}$$ This prescription produces the classical parameters, $$\begin{aligned} r & = & \sqrt{l(l+1) \over s(s+1)} \nonumber \\ \gamma & = & c \sqrt{s(s+1)}, \end{aligned}$$ with $a$ the same number for both models. We are interested in determining the behaviour of the quantum dynamics in the limit $s \rightarrow \infty$ and $l \rightarrow \infty$. This is accomplished by studying sequences of quantum models with $s$ and $l$ increasing though chosen such that the classical $r$ and $\gamma$ are held fixed. Since $s$ and $l$ are restricted to integer (or half-integer) values, the corresponding classical $r$ will actually vary slightly for each member of this sequence (although $\gamma$ can be matched exactly by varying the quantum parameter $c$). In the limit $s \rightarrow \infty$ and $l \rightarrow \infty$ this variation becomes increasingly small since $r= \sqrt{l(l+1)/s(s+1)} \rightarrow l/s$. 
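The prescription above reduces to a small helper mapping the quantum parameters $\{a, c, s, l\}$ onto the classical $\{a, \gamma, r\}$ (a sketch; the function name is ours):

```python
import numpy as np

def classical_params(a, c, s, l):
    """Classical (a, gamma, r) for quantum parameters {a, c, s, l},
    using |S|_c = sqrt(s(s+1)) and |L|_c = sqrt(l(l+1))."""
    S_mag = np.sqrt(s * (s + 1))
    L_mag = np.sqrt(l * (l + 1))
    return a, c * S_mag, L_mag / S_mag
```

As the text notes, $r$ drifts slightly along a sequence of quantum models with fixed $l/s$, but the drift decays as the quantum numbers grow.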
For convenience, the classical $r$ corresponding to each member of the sequence of quantum models is identified by its value in this limit. We have examined the effect of the small variations in the value of $r$ on the classical behaviour and found the variation to be negligible. Initial States ============== Initial Quantum State --------------------- We consider [*initial*]{} quantum states which are pure and separable, $$| \psi(0) \rangle = | \psi_s(0) \rangle \otimes |\psi_l(0) \rangle.$$ For the initial state of each subsystem we use one of the directed angular momentum states, $$\label{eqn:cs} | \psi_j(0) \rangle = | \theta,\phi \rangle = R^{(j)}(\theta,\phi) | j,j \rangle,$$ which correspond to states of maximum polarization in the direction $(\theta,\phi)$. These states have the properties: $$\begin{aligned} \langle \theta, \phi | J_z | \theta, \phi \rangle & = & j \cos \theta \nonumber \\ \langle \theta, \phi |J_x \pm i J_y | \theta, \phi \rangle & = & j e^{\pm i\phi} \sin \theta,\end{aligned}$$ where $j$ in this section refers to either $l$ or $s$. The states (\[eqn:cs\]) are the SU(2) coherent states, which, like their counterparts in the Euclidean phase space, are minimum uncertainty states [@cs]; the normalized variance of the quadratic operator, $$\Delta {\bf \tilde J}^2 = { \langle \theta, \phi | {\bf J}^2 | \theta, \phi \rangle - \langle \theta, \phi | {\bf J} | \theta, \phi \rangle^2 \over j(j+1) } = {1 \over (j+1)},$$ is minimised for given $j$ and vanishes in the limit $j \rightarrow \infty$. The coherent states $| j,j\rangle $ and $| j,-j\rangle $ also saturate the inequality of the uncertainty relation, $$\langle J_x^2 \rangle \langle J_y^2 \rangle \ge { \langle J_z \rangle^2 \over 4 },$$ although this inequality is not saturated for coherent states polarized along other axes.
Initial Classical State and Correspondence in the Macroscopic Limit ------------------------------------------------------------------- We compare the quantum dynamics with that of a classical Liouville density which is chosen to match the initial probability distributions of the quantum coherent state. For quantum systems with a Euclidean phase space it is always possible to construct a classical density with marginal probability distributions that match exactly the corresponding moments of the quantum coherent state. This follows from the fact that the marginal distributions for a coherent state are positive definite Gaussians, and therefore all of the moments can be matched [*exactly*]{} by choosing a Gaussian classical density. For the SU(2) coherent state, however, we show in the Appendix that no classical density has marginal distributions that can reproduce even the low order moments of the quantum probability distributions (except in the limit of infinite $j$). Thus from the outset it is clear that any choice of initial classical state will exhibit residual discrepancy in matching some of the initial quantum moments. We have examined the initial state and dynamical quantum-classical correspondence using several different classical distributions. These included the vector model distribution described in the Appendix and the Gaussian distribution used by Fox and Elston in correspondence studies of the kicked top [@Fox94b]. 
For a state polarized along the $z$-axis we chose the density, $$\label{eqn:rho} \rho_c (\theta,\phi) \; \sin \theta \, d \theta \, d \phi = C \exp \left[ - { 2 \sin^2({\theta\over 2}) \over \sigma^2}\right] \sin \theta \, d \theta \, d\phi,$$ with $ C = \left[ 2 \pi \sigma^2 \left( 1 - \exp( -2 \sigma^{-2}) \right) \right]^{-1} $, instead of those previously considered, because it is periodic under $2\pi$ rotation. An initial state directed along $(\theta_o,\phi_o)$ is then produced by a rigid body rotation of (\[eqn:rho\]) by an angle $\theta_o$ about the $y$-axis followed by rotation with angle $\phi_o$ about the $z$-axis. The variance $\sigma^2$ and the magnitude $|{\bf J}|_c$ are free parameters of the classical distribution that should be chosen to fit the quantum probabilities as well as possible. It is shown in the Appendix that no classical density has marginal distributions which can match all of the quantum moments, so we concentrate only on matching the lowest order moments. Since the magnitude of the spin is a kinematic constant both classically and quantum mechanically, we choose the squared length of the classical spin to have the correct quantum value, $$\label{eqn:mag} |{\bf J}|_c^2 = \langle J_x^2 \rangle + \langle J_y^2 \rangle + \langle J_z^2 \rangle= j(j+1).$$ For a state polarized along the $z$-axis, we have $\langle J_x \rangle = \langle J_y \rangle = 0 $ and $\langle J_y^2 \rangle = \langle J_x^2 \rangle $ for both distributions as a consequence of the axial symmetry.
Furthermore, as a consequence of (\[eqn:mag\]), we will automatically satisfy the condition, $$2 \langle J_x^2 \rangle_c + \langle J_z^2 \rangle_c = j(j+1).$$ Therefore we only need to consider the classical moments, $$\begin{aligned} \label{eqn:cm} \langle J_z \rangle_c & = & |{\bf J}| \; G(\sigma^2), \\ \langle J_x^2 \rangle_c & = & |{\bf J}|^2 \sigma^2 \; G(\sigma^2), \end{aligned}$$ calculated from the density (\[eqn:rho\]) in terms of the remaining free parameter, $\sigma^2$, where $$G(\sigma^2) = \left[ {1 + \exp(-2 \sigma^{-2}) \over 1 - \exp (-2 \sigma^{-2}) }\right] - \sigma^2.$$ We would like to match both of these classical moments with the corresponding quantum values, $$\begin{aligned} \label{eqn:qmz} \langle J_z \rangle & = & j, \\ \langle J_x^2 \rangle & = & j/2, \label{eqn:qmx2} \end{aligned}$$ calculated for the coherent state (\[eqn:cs\]). However, no choice of $\sigma^2$ will satisfy both constraints. If we choose $\sigma^2$ to satisfy (\[eqn:qmz\]) exactly then we obtain, $$\sigma^2 = \frac{1}{2j} - \frac{3}{8j^2}+{\mathcal O}(j^{-3}).$$ If we choose $\sigma^2$ to satisfy (\[eqn:qmx2\]) exactly then we obtain, $$\sigma^2 = \frac{1}{2j} + \frac{1}{4j^2}+{\mathcal O}(j^{-3}).$$ (These expansions are most easily derived from the approximation $G(\sigma^2) \simeq 1 - \sigma^2$, which has an exponentially small error for large $j$.) We have chosen to compromise between these values by fixing $\sigma^2$ so that the ratio $\langle J_z \rangle_c / \langle J_x^2 \rangle_c $ has the correct quantum value. This leads to the choice, $$\label{eqn:sigma} \sigma^2 = \frac{1}{2 \sqrt{j(j+1)}} = \frac{1}{2j} - \frac{1}{4j^2}+{\mathcal O}(j^{-3}).$$ These unavoidable initial differences between the classical and quantum moments will vanish in the “classical” limit.
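Since $G(\sigma^2)$ cancels in the ratio, $\langle J_z \rangle_c / \langle J_x^2 \rangle_c = 1/(|{\bf J}|\sigma^2)$, which equals the quantum ratio $j/(j/2) = 2$ exactly for the choice (\[eqn:sigma\]). A quick numerical check (a sketch, not code from the paper):

```python
import numpy as np

def G(sigma2):
    # G(sigma^2) as defined in the text
    e = np.exp(-2.0 / sigma2)
    return (1.0 + e) / (1.0 - e) - sigma2

j = 154
J = np.sqrt(j * (j + 1))            # |J|_c from (eqn:mag)
sigma2 = 1.0 / (2.0 * J)            # the compromise choice (eqn:sigma)

Jz_c = J * G(sigma2)                # <J_z>_c,   from (eqn:cm)
Jx2_c = J**2 * sigma2 * G(sigma2)   # <J_x^2>_c, from (eqn:cm)

print(Jz_c / Jx2_c)   # exactly 2, the quantum ratio j/(j/2)
print(Jz_c - j)       # small residual mismatch in <J_z> itself
```

The residual mismatch in $\langle J_z \rangle_c$ alone is ${\mathcal O}(1/j)$ and vanishes in the classical limit, as stated.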
To see this explicitly it is convenient to introduce a measure of the quantum-classical differences, $$\delta J_z(n) = | \langle J_z(n) \rangle - \langle J_z(n) \rangle_c|,$$ defined at time $n$. For an initial state polarised in direction $(\theta,\phi)$, the choice (\[eqn:sigma\]) produces the initial difference, $$\label{eqn:initdiff} \delta J_z(0) = { \cos \theta \over 8j } + {\mathcal O}(j^{-2}),$$ which vanishes as $j \rightarrow \infty$. Numerical Methods {#sect:nummethods} ================= We have chosen to study the time-periodic spin Hamiltonian (\[eqn:ham\]) because the time-dependence is then reduced to a simple mapping and the quantum state vector is confined to a finite-dimensional Hilbert space. Consequently we can solve the exact time-evolution equations (\[eqn:qmmap\]) numerically without introducing any artificial truncation of the Hilbert space. The principal source of numerical inaccuracy arises from the numerical evaluation of the matrix elements of the rotation operator $ \langle j, m' | R(\theta,\phi) | j, m \rangle = \exp( - i \phi m' ) d_{m'm}^{(j)} (\theta)$. The rotation operator is required both for calculation of the initial quantum coherent state, $ |\theta, \phi \rangle = R(\theta,\phi) | j,m=j \rangle $, and for evaluation of the unitary Floquet operator. In order to maximise the precision of our results we calculated the matrix elements $d_{m'm}^{(j)}(\theta) = \langle j, m' | \exp(-i \theta J_y) | j,m \rangle$ using the recursion algorithm of Ref. [@Haake96] and then tested the accuracy of our results by introducing controlled numerical errors. For small quantum numbers ($j<50$) we are able to confirm the correctness of our coded algorithm by comparing these results with those obtained by direct evaluation of Wigner’s formula for the matrix elements $d_{m'm}^{(j)}(\theta)$.
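As an illustrative cross-check (not the recursion algorithm of Ref. [@Haake96]), for modest $j$ the matrix $d^{(j)}(\theta) = \exp(-i\theta J_y)$ can be built by direct exponentiation, since $-i\theta J_y = -(\theta/2)(J_+ - J_-)$ is a real matrix in the $|j,m\rangle$ basis. A sketch:

```python
import numpy as np
from scipy.linalg import expm

def wigner_d(j, theta):
    """d^{(j)}_{m'm}(theta) = <j,m'| exp(-i theta J_y) |j,m>, basis m = j ... -j.
    Since -i*theta*J_y = -(theta/2)(J_+ - J_-) is real, so is the result.
    Adequate only for modest j; large j calls for a stable recursion."""
    dim = int(round(2 * j + 1))
    m = j - np.arange(dim)
    Jp = np.zeros((dim, dim))
    # <j, m+1 | J_+ | j, m> = sqrt(j(j+1) - m(m+1)), on the superdiagonal
    Jp[np.arange(dim - 1), np.arange(1, dim)] = np.sqrt(
        j * (j + 1) - m[1:] * (m[1:] + 1))
    return expm(-0.5 * theta * (Jp - Jp.T))

# Check against Wigner's closed form for j = 1/2
theta = 0.7
d = wigner_d(0.5, theta)
exact = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                  [np.sin(theta / 2),  np.cos(theta / 2)]])
print(np.abs(d - exact).max())   # agreement at machine precision
```

Orthogonality of the returned matrix provides a second internal consistency check, in the spirit of the controlled-error tests described above.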
The time evolution of the Liouville density was simulated by numerically evaluating between $10^8$ and $10^9$ classical trajectories with randomly selected initial conditions weighted according to the initial distribution (\[eqn:rho\]). Such a large number of trajectories was required in order to keep Monte Carlo errors small enough to resolve the initial normalized quantum-classical differences, which scale as $1/8j^2$, over the range of $j$ values we have examined. We identified initial conditions of the classical map as chaotic by numerically calculating the largest Lyapunov exponent, $\lambda_L$, using the formula, $$\label{eqn:liap} \lambda_L = { 1 \over N } \sum_{n=1}^N \ln d(n)$$ where $d(n) = \sum_i | \delta x_i(n) | $ is the norm of the difference vector after the $n$th step, the vector being rescaled to unit norm after each step so that $d(0) = 1$. The differential ${\bf \delta x}(n)$ is a difference vector between adjacent trajectories and thus evolves under the action of the tangent map $ {\bf \delta x}(n+1) = {\bf M} \cdot {\bf \delta x}(n)$, where ${\bf M}$ is evaluated along some fiducial trajectory [@LL]. Since we are interested in studying quantum states, and corresponding classical distributions which have non-zero support on the sphere, it is also important to get an idea of the size of these regular and chaotic zones. By comparing the size of a given regular or chaotic zone to the variance of an initial state located within it, we can determine whether most of the state is contained within this zone. However, we cannot perform this comparison by direct visual inspection since the relevant phase space is 4-dimensional. One strategy we used to overcome this difficulty was to calculate the Lyapunov exponent for a large number of randomly sampled initial conditions and then project only those points which are regular (or chaotic) onto the plane spanned by $\tilde{S}_z = \cos \theta_s$ and $\tilde{L}_z = \cos \theta_l$.
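The two-spin map and its tangent map are given by (\[eqn:map\]) outside this excerpt, so as a stand-in we sketch the same renormalized-tangent-vector algorithm (\[eqn:liap\]) on the Chirikov standard map, whose largest exponent for strong kicking is approximated by Chirikov's estimate $\ln(K/2)$:

```python
import numpy as np

def lyapunov_standard_map(x, p, K, N=20_000):
    """Largest Lyapunov exponent via (eqn:liap): propagate a difference
    vector with the tangent map M, accumulate ln d(n), and rescale to
    unit L1 norm after each kick so that d(0) = 1."""
    v = np.array([1.0, 0.0])
    total = 0.0
    for _ in range(N):
        M = np.array([[1.0 + K * np.cos(x), 1.0],   # tangent map at the
                      [K * np.cos(x),       1.0]])  # current fiducial point
        v = M @ v
        p = p + K * np.sin(x)        # advance the fiducial trajectory
        x = (x + p) % (2 * np.pi)
        d = np.abs(v).sum()          # d(n) = sum_i |delta x_i(n)|
        total += np.log(d)
        v /= d                       # renormalize
    return total / N

lam = lyapunov_standard_map(0.6, 0.2, K=5.0)
print(lam)   # near the Chirikov estimate ln(K/2) ~ 0.92 for K = 5
```

The per-step rescaling prevents floating-point overflow of the difference vector, which would otherwise grow like $e^{\lambda_L n}$.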
If the variance of the initial quantum state is located within, and several times smaller than, the dimensions of a zone devoid of any of these points, then the state in question can be safely identified as chaotic (or regular). Characteristics of the Quantum and Liouville Dynamics {#sec:qcc} ===================================================== Mixed Phase Space ----------------- We consider the time-development of initial quantum coherent states (\[eqn:cs\]) evolved according to the mapping (\[eqn:qmmap\]) using quantum numbers $s=140$ and $l=154$ and associated classical parameters $\gamma=1.215$, $r \simeq 1.1$, and $a=5$, which produce a mixed phase space (see Fig. \[regimes\]). The classical results are generated by evolving the initial ensemble (\[eqn:rho\]) using the mapping (\[eqn:map\]). In Fig. \[vargrowth.1.215.ric2\] we compare the time-dependence of the normalized quantum variance, $\Delta {\tilde {\bf L}}^2 = [\langle {\bf L}^2 \rangle - \langle {\bf L} \rangle^2 ] / l(l+1) $, with its classical counterpart, $\Delta {\tilde {\bf L}}^2_c = [\langle {\bf L}^2 \rangle_c - \langle {\bf L} \rangle_c^2 ] / |{\bf L}|^2 $. Squares (diamonds) correspond to the dynamics of an initial quantum (classical) state centered at $\vec{\theta}(0) = (20^o,40^o,160^o,130^o)$, which is located in the connected chaotic zone near one of the unstable fixed points of the classical map. Crosses (plus signs) correspond to an initial quantum (classical) state centered on the initial condition $\vec{\theta}(0) =(5^o,5^o,5^o,5^o)$, which is located in the regular zone near one of the stable fixed points. For both initial conditions the quantum and classical results are nearly indistinguishable on the scale of the figure. In the case of the regular initial condition, the quantum variance remains narrow over long times and, like its classical counterpart, exhibits a regular oscillation.
In the case of the chaotic initial condition the quantum variance also exhibits a periodic oscillation, but this oscillation is superposed on a very rapid, approximately exponential, growth. This exponential growth persists until the variance approaches the system size, that is, until $\Delta {\tilde {\bf L}}^2 \simeq 1$. The initial exponential growth of the quantum variance in classically chaotic regimes has been observed previously in several models and appears to be a generic feature of the quantum dynamics; this behaviour of the quantum variance is mimicked very accurately by the variance of an initially well-matched classical distribution [@Ball98; @Fox94b; @Fox94a]. In the classical case, the exponential growth of the variance of a well-localized distribution in a chaotic zone is clearly related to the exponential divergence of the underlying trajectories, the property which characterizes classical chaos. To examine this connection we compare the observed exponential growth rate of the widths of the classical (and quantum) state with the exponential rate predicted from the classical Lyapunov exponent. For the coherent states the initial variance can be calculated exactly, $\Delta {\tilde {\bf L}}^2(0) = 1/(l+1)$. Then, assuming exponential growth of this initial variance we get, $$\label{eqn:expvar} \Delta {\tilde {\bf L}}^2(n) \simeq { 1 \over l } \exp( 2 \lambda_w n) \;\;\;\;\;\;\;\; \mathrm{for} \;\;\; n < t_{sat},$$ where a factor of $2$ is included in the exponent since $\Delta {\tilde {\bf L}}^2 $ corresponds to a squared length. The dotted line in Fig. \[vargrowth.1.215.ric2\] corresponds to the prediction (\[eqn:expvar\]) with $\lambda_w = \lambda_L = 0.04$, the value of the largest classical Lyapunov exponent. As can be seen from the figure, the actual growth rate of the classical (and quantum) variance of the chaotic initial state is significantly larger than that predicted using the largest Lyapunov exponent.
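In practice a rate such as the $\lambda_w$ used for the solid line in Fig. \[vargrowth.1.215.ric2\] is extracted by a straight-line fit to the log-variance during the growth phase. A sketch with synthetic data standing in for the measured $\Delta {\tilde {\bf L}}^2(n)$ (the noise model and numbers are our assumptions, for illustration):

```python
import numpy as np

# Synthetic stand-in for the measured variance: the form (eqn:expvar),
# (1/l) exp(2 lambda_w n), with multiplicative noise (our assumption)
rng = np.random.default_rng(0)
l, lam_true = 154, 0.13
n = np.arange(1, 15)
var = (1.0 / l) * np.exp(2 * lam_true * n) \
      * np.exp(0.05 * rng.standard_normal(n.size))

# The slope of log-variance vs n is 2*lambda_w
slope, intercept = np.polyfit(n, np.log(var), 1)
lam_w = slope / 2
print(lam_w)   # recovers ~0.13
```

Restricting the fit window to $n < t_{sat}$ matters, since the growth saturates once the variance reaches system size.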
For comparison purposes we also plot a solid line in Fig. \[vargrowth.1.215.ric2\] corresponding to (\[eqn:expvar\]) with $\lambda_w = 0.13 $, which provides a much closer approximation to the actual growth rate. We find, for a variety of initial conditions in the chaotic zone of this mixed regime, that the actual classical (and quantum) variance growth rate is consistently larger than the simple prediction (\[eqn:expvar\]) using $\lambda_L$ for the growth rate. This systematic bias requires some explanation. As pointed out in [@Fox94b], some discrepancy is to be expected because the Lyapunov exponent is defined as a geometric mean of the tangent map eigenvalues sampled over the entire connected chaotic zone (corresponding to the infinite time limit $n\rightarrow \infty$), whereas the [*actual*]{} growth rate of a given distribution over a small number of time-steps will be determined largely by a few eigenvalues of the local tangent map. In mixed regimes these local eigenvalues will vary considerably over the phase space manifold, and the product of a few of these eigenvalues can be quite different from the geometric mean over the entire connected zone. However, we find that the actual growth rate is consistently [*larger*]{} than the Lyapunov exponent prediction. It is well known that in mixed regimes the remnant KAM tori can be ‘sticky’; these sticky regions can significantly reduce the computed value of the Lyapunov exponent. In order to identify an initial condition as chaotic, we specifically choose initial states that are concentrated away from these KAM surfaces (regular islands). Such initial states will then be exposed mainly to the larger local expansion rates found away from these surfaces. This explanation is supported by our observation that, when we choose initial conditions closer to these remnant tori, the growth rate of the variance is significantly reduced.
These variance growth rates are still slightly larger than the Lyapunov rate, but this is not surprising: our initial distributions are concentrated over a significant fraction of the phase space, and the growth of the distribution is probably more sensitive to contributions from trajectories subject to the large eigenvalues away from the KAM boundary than from those stuck near it. These explanations are further supported by the results of the following section, where we examine a phase space regime that is nearly devoid of regular islands. In that regime we find that the Lyapunov exponent serves as a much better approximation to the variance growth rate. Regime of Global Chaos ---------------------- If we increase the dimensionless coupling strength to $\gamma=2.835$, with $a=5$ and $r \simeq 1.1$ as before, then the classical flow is predominantly chaotic on the surface ${\mathcal P}$ (see Fig. \[regimes\]). Under these conditions we expect that generic initial classical distributions (with non-zero support) will spread to cover the full surface ${\mathcal P}$ and then quickly relax close to microcanonical equilibrium. We find that the initially localised quantum states also exhibit these generic features when the quantum map is governed by parameters which produce these conditions classically. For the non-autonomous Hamiltonian system (\[eqn:map\]) the total energy is not conserved, but the two invariants of motion ${\bf L}^2$ and ${\bf S}^2$ confine the dynamics to the 4-dimensional manifold ${\mathcal P} = {\mathcal S}^2 \times {\mathcal S}^2 $, the product of two spheres. The corresponding microcanonical distribution is a constant on this surface, with measure (\[eqn:measure\]), and zero elsewhere.
From this distribution we can calculate microcanonical equilibrium values for the low order moments: for example, $\{ L_z \} = ( 4 \pi)^{-2} \int_{\mathcal P} L_z \, d \mu = 0$ and $\{ \Delta {\bf L}^2 \} = \{ {\bf L}^2 \}-\{ {\bf L} \}^2 = |{\bf L}|^2$, where the symbols $\{ \cdot \}$ denote a microcanonical average. To give a sense of the accuracy of the correspondence between the classical ensemble and the quantum dynamics, in Fig. \[qmlmnm\] we show a direct comparison of the dynamics of the quantum expectation value $ \langle \tilde{L}_z \rangle $ with $l=154$ and the classical distribution average $ \langle \tilde{L}_z \rangle_c $ for an initial coherent state and corresponding classical distribution centered at $\vec{\theta} = (45^o,70^o,135^o,70^o)$. To guide the eye in this figure we have drawn lines connecting the stroboscopic points of the mapping equations. The quantum expectation value exhibits essentially the same dynamics as the classical Liouville average, not only at early times, that is, in the initial Ehrenfest regime [@Ball94; @HB94], but for times well into the equilibrium regime where the classical moment $ \langle L_z \rangle$ has relaxed close to the microcanonical equilibrium value $\{ L_z \} = 0 $. We have also provided results for a single trajectory launched from the same initial condition in order to emphasize the qualitatively distinct behaviour it exhibits. In Fig. \[vargrowth.2.835.140\] we show the exponential growth of the normalized quantum and classical variances on a semilog plot for the same set of parameters and quantum numbers. Numerical data for (a) correspond to the initial condition $\vec{\theta}(0) = (20^o,40^o,160^o,130^o)$ and those for (b) correspond to $\vec{\theta}(0) = (45^o,70^o,135^o,70^o)$.
As in the mixed regime case, the quantum-classical differences are nearly imperceptible on the scale of the figure, and the differences between the quantum and classical variance growth rates are many orders of magnitude smaller than the small differences in the growth rate arising from the different initial conditions. In contrast with the mixed regime case, in this regime of global chaos the prediction (\[eqn:expvar\]) with $\lambda_w= \lambda_L=0.45$ now serves as a much better approximation of the exponential growth rate of the quantum variance, and of the associated relaxation rate of the quantum and classical states. In this regime the exponent $\lambda_w$ is also much larger than in the mixed regime case, due to the stronger degree of classical chaos. As a result, the initially localised quantum and classical distributions saturate at system size much sooner. It is useful to apply (\[eqn:expvar\]) to estimate the time-scale at which the quantum (and classical) distributions saturate at system size. From the condition $\Delta {\tilde {\bf L}}^2(t_{sat}) \simeq 1$ and using (\[eqn:expvar\]) we obtain, $$\label{eqn:nsat} t_{sat} \simeq (2 \lambda_w)^{-1} \ln(l),$$ which serves as an estimate of this characteristic time-scale. In the regimes for which the full surface ${\mathcal P}$ is predominantly chaotic, we find that the actual exponential growth rate of the width of the quantum state, $\lambda_w$, is well approximated by the largest Lyapunov exponent $\lambda_L$. For $a=5$ and $r=1.1$, the approximation $\lambda_w \simeq \lambda_L$ holds for coupling strengths $\gamma > 2$, for which more than 99% of the surface ${\mathcal P}$ is covered by one connected chaotic zone (see Fig. \[regimes\]). By comparing the quantum probability distribution to its classical counterpart, we can learn much more about the relaxation properties of the quantum dynamics.
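The estimate (\[eqn:nsat\]) is easy to evaluate; for $l = 154$ it reproduces the saturation times seen in the two regimes (a sketch):

```python
import numpy as np

def t_sat(lam_w, l):
    """Saturation-time estimate (eqn:nsat):
    Delta L^2 ~ (1/l) exp(2 lam_w n) reaches 1 at n = ln(l)/(2 lam_w)."""
    return np.log(l) / (2.0 * lam_w)

l = 154
print(t_sat(0.13, l))   # ~19 kicks: mixed regime, lambda_w = 0.13
print(t_sat(0.45, l))   # ~5.6 kicks: global chaos, lambda_w = 0.45
```

The logarithmic dependence on $l$ means that even an order-of-magnitude change in the quantum number shifts the saturation time by only a few kicks.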
In order to compare each $m_l$ value of the quantum distribution, $P_z(m_l)$, with a corresponding piece of the continuous classical marginal probability distribution, $$P_c(L_z) = \int \! \! \int \! \! \int \! d \tilde{S}_z d \phi_s d \phi_l \; \rho_c(\theta_s,\phi_s, \theta_l, \phi_l),$$ we discretize the latter into $2l+1$ bins of width $\hbar=1$. This procedure produces a discrete classical probability distribution $P_z^c(m_l)$ which prescribes the probability of finding the spin component $L_z$ in the interval $[m_l-1/2,m_l+1/2]$ along the $z$-axis. To illustrate the time-development of these distributions we compare the quantum and classical probability distributions for three successive values of the kick number $n$, using the same quantum numbers and initial condition as in Fig. \[qmlmnm\]. In Fig. \[probdist0\] the initial quantum and classical states are both well-localised and nearly indistinguishable on the scale of the figure. At time $n=6 \simeq t_{sat}$, shown in Fig. \[probdist6\], both distributions have grown to fill the accessible phase space. It is at this time that the most significant quantum-classical discrepancies appear. For times greater than $t_{sat}$, however, these emergent quantum-classical discrepancies do not continue to grow, since both distributions begin relaxing towards equilibrium distributions. Since the dynamics are confined to a [*compact*]{} phase space, and in this parameter regime the remnant KAM tori fill a negligibly small fraction of the kinematically accessible phase space, we might expect the classical equilibrium distribution to be very close to the microcanonical distribution. Indeed such relaxation close to microcanonical equilibrium is apparent for both the quantum and the classical distribution at very early times, as demonstrated in Fig. \[probdist15\], corresponding to $n=15$.
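The discretization of the classical marginal is a simple histogram with unit-width bins centered on the integers $m_l$. A sketch, using a uniform-in-$\cos\theta$ ensemble as a stand-in for the relaxed classical distribution (our assumption, for illustration):

```python
import numpy as np

def classical_Pz(Lz_samples, l):
    """Bin classical L_z samples into intervals [m_l - 1/2, m_l + 1/2],
    m_l = -l ... l, i.e. bins of width hbar = 1, for direct comparison
    with the quantum distribution P_z(m_l)."""
    edges = np.arange(-l - 0.5, l + 1.5)          # 2l + 2 edges -> 2l + 1 bins
    counts, _ = np.histogram(Lz_samples, bins=edges)
    return counts / len(Lz_samples)

# Stand-in ensemble: near microcanonical equilibrium, L_z = |L| cos(theta)
# with cos(theta) uniform on [-1, 1]
l = 154
rng = np.random.default_rng(2)
Lz = np.sqrt(l * (l + 1)) * rng.uniform(-1.0, 1.0, 500_000)
P = classical_Pz(Lz, l)
print(P.size, P.sum())   # 309 bins, total probability ~1
```

In the relaxed regime each of the $2l+1$ bins then carries probability close to $1/(2l+1)$, the flat profile against which the residual quantum fluctuations are measured.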
Thus the signature of a classically hyperbolic flow, namely, the exponential relaxation of an arbitrary distribution (with non-zero measure) to microcanonical equilibrium [@Dorfman], holds to good approximation in this model in a regime of global chaos. More surprisingly, this classical signature is also manifest in the dynamics of the quantum distribution. In the quantum case, however, as can be seen in Fig. \[probdist15\], the probability distribution is subject to small irreducible time-dependent fluctuations about the classical equilibrium. We examine these quantum fluctuations in detail elsewhere [@EB00b]. Time-Domain Characteristics of Quantum-Classical Differences ============================================================ We consider the time dependence of quantum-classical differences defined along the $z$-axis of the spin ${\bf L}$, $$\label{eqn:diff} \delta L_z(n) = | \langle L_z(n) \rangle - \langle L_z(n) \rangle_c |,$$ at the stroboscopic times $t=n$. In Fig. \[delta.1.215.140.n200\] we compare the time-dependence of $\delta L_z(n)$ on a semi-log plot for a chaotic state (filled circles), with $\vec{\theta}(0) = (20^o,40^o,160^o,130^o)$, and a regular state (open circles), with $\vec{\theta}(0) = (5^o,5^o,5^o,5^o)$, evolved using the same mixed regime parameters ($\gamma=1.215$ and $r\simeq1.1$) and quantum numbers ($l=154$) as in Fig. \[vargrowth.1.215.ric2\]. We are interested in the behaviour of the upper envelope of the data in Fig. \[delta.1.215.140.n200\]. For the regular case, the upper envelope of the quantum-classical differences grows very slowly, as some polynomial function of time. For the chaotic case, on the other hand, at early times the difference measure (\[eqn:diff\]) grows exponentially until saturation around $n=15$, which is well before reaching the system dimension, $|{\bf L}| \simeq l = 154$.
After this time, which we denote $t^*$, the quantum-classical differences exhibit no definite growth, and fluctuate about the equilibrium value $\delta L_z \sim 1 \ll |{\bf L}|$. In Fig. \[delta.1.215.140.n200\] we also include data for the time-dependence of the Ehrenfest difference $| \langle L_z \rangle - L_z| $, which is defined as the difference between the quantum expectation value and the dynamical variable of a single trajectory initially centered on the quantum state. In contrast to $\delta L_z$, the rapid growth of the Ehrenfest difference continues until saturation at the system dimension. In Fig. \[delta.1.215.dvsl\] we compare the time-dependence of the quantum-classical differences in the case of the chaotic initial condition $\vec{\theta}(0) = (20^o,40^o,160^o,130^o)$ for quantum numbers $l=22$ (filled circles) and $l=220$ (open circles), using the same parameters as in Fig. \[delta.1.215.140.n200\]. This demonstrates the remarkable fact that the exponential growth terminates when the difference measure reaches an essentially fixed magnitude ($\delta L_z \sim 1$ as for the case $l=154$), although the system dimension differs by an order of magnitude in the two cases. In Fig. \[delta.2.835.140\] we consider the growth of the quantum-classical difference measure $\delta L_z(n)$ in a regime of global chaos, for $l=154$, and using the same set of parameters as those examined in Fig. \[vargrowth.2.835.140\] ($\gamma=2.835$ and $r\simeq1.1$). Again the upper envelope of the difference measure $\delta L_z(n)$ exhibits exponential growth at early times, though in this regime of global chaos the exponential growth persists only for a very short duration before saturation at $t^* \simeq 6$. 
The initial condition $\vec{\theta}(0) = (20^o,40^o,160^o,130^o)$ is a typical case (filled circles), where, as seen for the mixed regime parameters, the magnitude of the difference at the end of the exponential growth phase saturates at the value $\delta L_z(t^*) \simeq 1$, which does not scale with the system dimension (see Fig. \[deltamax.vs.l\]). The initial condition $\vec{\theta}(0) = (45^o,70^o,135^o,70^o)$ (open circles) leads to an anomalously large deviation at the end of the exponential growth phase, $\delta L_z(t^*) \simeq 10$, though still small relative to the system dimension $|{\bf L}| \simeq 154$. This deviation is transient, however, and at later times the magnitude of the quantum-classical differences fluctuates about the equilibrium value $\delta L_z \sim 1$. The quantum-classical differences are a factor of $1/l$ smaller than typical differences between the quantum expectation value and the single trajectory, which are of order the system dimension (see Fig. \[qmlmnm\]), as in the mixed regime case. In all cases where the initial quantum and classical states are launched from a chaotic zone we find that the initial time-dependence of quantum-classical differences compares favorably with the exponential growth ansatz, $$\label{eqn:expansatz} \delta L_z(n) \simeq { 1 \over 8 l } \exp ( \lambda_{qc} n ) \;\;\;\;\;\;\;\; \mathrm{for} \;\;\; n < t^*,$$ where the exponent $\lambda_{qc}$ is a new exponent subject to numerical measurement [@Ball98]. The prefactor $1 / 8 l $ is obtained from (\[eqn:initdiff\]), though we have dropped the $\cos \theta$ factor that specifies the [*exact*]{} initial difference for $L_z$. Since contributions from the initial differences in other mismatched moments will generally mix under the dynamical flow, it is appropriate to consider an effective initial difference for the prefactor in (\[eqn:expansatz\]).
More precisely, the estimate $1/8l$ follows from accounting for the initial contributions of the three Cartesian components, $ [ \delta^2 L_x(0) + \delta^2 L_y(0) + \delta^2 L_z(0) ]^{1/2} = 1/8l $. We are interested in whether the Lyapunov exponent $\lambda_L$ is a good approximation to $\lambda_{qc}$. In Fig. \[delta.1.215.dvsl\] we plot (\[eqn:expansatz\]) with $\lambda_{qc} = \lambda_L = 0.04$ (dotted line) for $l=220$. Clearly the largest Lyapunov exponent severely underestimates the exponential growth rate of the quantum-classical differences, in this case by more than an order of magnitude. The growth rate of the state width, $\lambda_w = 0.13$, is also several times smaller than the initial growth rate of the quantum-classical differences. In the case of Fig. \[delta.2.835.140\], corresponding to a regime of global chaos with a much larger Lyapunov exponent, we plot (\[eqn:expansatz\]) with $\lambda_{qc} = \lambda_L = 0.45$ (dotted line), demonstrating that in this regime too the largest Lyapunov exponent underestimates the initial growth rate of the quantum-classical difference measure $\delta L_z(n)$. We also find, from inspection of our results, that the time $t^*$ at which the exponential growth (\[eqn:expansatz\]) terminates can be estimated from $t_{sat}$, the time-scale on which the distributions saturate at or near system size (\[eqn:nsat\]). In the case of the chaotic initial condition of Fig. \[vargrowth.1.215.ric2\], for which $\gamma=1.215$, visual inspection of the figure suggests that $t_{sat} \simeq 18$. This should be compared with Fig. \[delta.1.215.140.n200\], where the exponential growth of $\delta L_z(n)$ ends rather abruptly at $t^* \simeq 15$. In Fig. \[vargrowth.2.835.140\], corresponding to a regime of global chaos ($\gamma=2.835$), the variance growth saturates much earlier, around $t_{sat} \simeq 6$ for both initial conditions. From Fig.
\[delta.2.835.140\] we can estimate that the initial exponential growth of the quantum-classical differences for these two initial conditions also ends around $n \simeq 6$. As we increase $\gamma$ further, we find that the exponential growth phase of the quantum-classical differences $\delta L_z(n)$ is shortened, lasting only until the corresponding quantum and classical distributions saturate at system size. For $\gamma \simeq 12$, with $\lambda_L \simeq 1.65$, the chaos is sufficiently strong that the initial coherent state for $l=154$ spreads to cover ${\mathcal P}$ within a single time-step. Similarly the initial difference measure $\delta L_z(0) \simeq 0.001$ grows to the magnitude $\delta L_z(1) \simeq 1$ within a single time-step and subsequently fluctuates about that equilibrium value. We have also inspected the variation of $t^*$ with the quantum numbers and found it to be consistent with the logarithmic dependence of $t_{sat}$ in (\[eqn:nsat\]). Correspondence Scaling in the Classical Limit {#sect:scaling} ============================================= We have assumed in (\[eqn:expansatz\]) that the exponent $\lambda_{qc}$ is independent of the quantum numbers. A convenient way of confirming this, and also estimating the numerical value of $\lambda_{qc}$, is by means of a break-time measure. The break-time is the time $t_b(l,p)$ at which quantum-classical differences exceed some fixed tolerance $p$, with the classical parameters and initial condition held fixed. Setting $\delta L_z(t_b) = p$ in (\[eqn:expansatz\]), we obtain $t_b$ in terms of $p$, $l$ and $\lambda_{qc}$, $$\label{eqn:tb} t_b \simeq \lambda_{qc}^{-1} \ln ( 8 \; p \; l ) \;\;\;\;\;\; \mathrm{provided} \;\;\; p < {\mathcal O}(1).$$ The restriction $p < {\mathcal O}(1)$, which plays a crucial role in limiting the robustness of the break-time measure (\[eqn:tb\]), is explained and motivated further below.
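The logarithmic rule (\[eqn:tb\]) is trivial to evaluate; with the mixed-regime fit value $\lambda_{qc} = 0.43$ reported below and tolerance $p = 0.1$, it gives roughly $11$ kicks for $l = 154$ (a sketch):

```python
import numpy as np

def break_time(l, p, lam_qc):
    """Log break-time estimate (eqn:tb); valid only for tolerances p < O(1)."""
    return np.log(8.0 * p * l) / lam_qc

# Mixed-regime fit value lambda_qc = 0.43, tolerance p = 0.1
for l in (11, 154, 220):
    print(l, break_time(l, p=0.1, lam_qc=0.43))   # l = 154 gives ~11.2
```

Note how weakly $t_b$ grows with $l$: a twenty-fold increase in the quantum number adds only a handful of kicks.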
The explicit form we have obtained for the argument of the logarithm in (\[eqn:tb\]) is a direct result of our estimate that the initial quantum-classical differences arising from the Cartesian components of the spin provide the dominant contribution to the prefactor of the exponential growth ansatz (\[eqn:expansatz\]). Differences in the mismatched higher order moments, as well as intrinsic differences between the quantum dynamics and classical dynamics, may also contribute to this effective prefactor. We have checked that the initial value $\delta L_z(0) \simeq 1 /8l$ is an adequate estimate by comparing the intercept of the quantum-classical data on a semilog plot with the prefactor of (\[eqn:expansatz\]) for a variety of $l$ values (see [*e.g.*]{} Fig. \[delta.1.215.dvsl\]). In Fig. \[break\_time\_p0.1\] we examine the scaling of the break-time for $l$ values ranging from $11$ to $220$ and with fixed tolerance $p=0.1$. The break-time can assume only the integer values $t=n$ and thus the data exhibit step-wise behaviour. For the mixed regime parameters, $\gamma=1.215$ and $r\simeq 1.1$ (filled circles), with initial condition $\vec{\theta}(0) = (20^o,40^o,160^o,130^o)$, a non-linear least squares fit to (\[eqn:tb\]) gives $\lambda_{qc} = 0.43$. This fit result is plotted in the figure as a solid line. The close agreement between the data and the fit provides good evidence that the quantum-classical exponent $\lambda_{qc}$ is independent of the quantum numbers. To check this result against the time-dependent $\delta L_z(n)$ data, we have plotted the exponential curve (\[eqn:expansatz\]) with $\lambda_{qc} = 0.43$ in Fig. \[delta.1.215.140.n200\] using a solid line, and in Fig. \[delta.1.215.dvsl\] using a solid line for $l=22$ and a dotted line for $l=220$. The exponent obtained from fitting (\[eqn:tb\]) serves as an excellent approximation to the initial exponential growth (\[eqn:expansatz\]) of the quantum-classical differences in each case. In Fig.
\[break\_time\_p0.1\] we also plot break-time results for the global chaos case $\gamma=2.835$ and $r\simeq 1.1$ (open circles) with initial condition $\vec{\theta}(0) = (45^o,70^o,135^o,70^o)$. In this regime the quantum-classical differences grow much more rapidly and, consequently, the break-time is very short and remains nearly constant over this range of computationally accessible quantum numbers. Due to this limited variation, in this regime we cannot confirm (\[eqn:tb\]), although the data are consistent with the predicted logarithmic dependence on $l$. Moreover, the break-time results provide an effective method for estimating $\lambda_{qc}$ if we assume that (\[eqn:tb\]) holds. The same fit procedure as detailed above yields the quantum-classical exponent $\lambda_{qc} = 1.1$. This fit result is plotted in Fig. \[break\_time\_p0.1\] as a solid line. More importantly, the exponential curve (\[eqn:expansatz\]), plotted with the fit result $\lambda_{qc} = 1.1$, can be seen to provide very good agreement with the initial growth rate of Fig. \[delta.2.835.140\] for either initial condition, as expected. In the mixed regime ($\gamma=1.215$), the quantum-classical exponent $\lambda_{qc} = 0.43$ is an order of magnitude greater than the largest Lyapunov exponent $\lambda_L=0.04$ and about three times larger than the growth rate of the width, $\lambda_w =0.13$. In the regime of global chaos ($\gamma=2.835$) the quantum-classical exponent $\lambda_{qc} = 1.1$ is a little more than twice as large as the largest Lyapunov exponent $\lambda_L=0.45$. The condition $p< {\mathcal O}(1)$ is a very restrictive limitation on the domain of application of the log break-time (\[eqn:tb\]) and it is worthwhile to explain the significance of this restriction. In the mixed regime case of Fig. \[delta.1.215.140.n200\], with $l=154$, we have plotted the tolerance values $p=0.1$ (dotted line) and $p=15.4$ (sparse dotted line).
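The fit procedure can be sketched as follows, with synthetic step-wise break-time data standing in for the measured values (the data-generation step is our assumption; the model is (\[eqn:tb\])):

```python
import numpy as np
from scipy.optimize import curve_fit

p = 0.1
l_vals = np.arange(11, 221, dtype=float)
lam_true = 0.43
# Synthetic stand-in for the measured break-times: the model (eqn:tb),
# rounded up to whole kicks, mimicking the step-wise data of the figure
t_b_data = np.floor(np.log(8 * p * l_vals) / lam_true) + 1

# Non-linear least squares fit of (eqn:tb) with lambda_qc as the only parameter
model = lambda l, lam: np.log(8 * p * l) / lam
(lam_fit,), _ = curve_fit(model, l_vals, t_b_data, p0=[0.5])
print(lam_fit)   # recovers a value near lam_true despite the integer rounding
```

Because the data are quantized to integer kicks, the recovered exponent carries a small rounding bias, which shrinks as the range of $l$ grows.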
The tolerance $p=0.1$ is exceeded at $t=11$, while the quantum-classical differences are still growing exponentially, leading to a log break-time for this tolerance value. For the tolerance $p=15.4 \ll |{\bf L}|$, on the other hand, the break-time does not occur on a measurable time-scale, whereas according to the logarithmic rule (\[eqn:tb\]), with $l = 154$ and $\lambda_{qc} = 0.43$, we should expect a rather short break-time $t_b \simeq 23$. Consequently the break-time (\[eqn:tb\]), applied to delimiting the end of the Liouville regime, is not a robust measure of quantum-classical correspondence. Our definition of the break-time (\[eqn:tb\]) requires holding the tolerance $p$ fixed in absolute terms (and not as a fraction of the system dimension as in [@Haake87]) when comparing systems with different quantum numbers. Had we chosen to compare systems using a fixed relative tolerance, $f$, then the break-time would be of the form $t_b \simeq \lambda_{qc}^{-1} \ln ( 8 \; f \; l^2 )$ and subject to the restriction $f < {\mathcal O}(1/l)$. Since $f \rightarrow 0$ in the classical limit, this form emphasizes that the log break-time applies only to differences that are a vanishing fraction of the system dimension in that limit. Although we have provided numerical evidence (in Fig. \[delta.1.215.dvsl\]) of one mixed regime case in which the largest quantum-classical differences occurring at the end of the exponential growth period remain essentially constant for varying quantum numbers, $\delta L_z (t^*) \sim {\mathcal O}(1)$, we find that this behaviour represents the typical case for all parameters and initial conditions which produce chaos classically. To demonstrate this behaviour we consider the scaling (with increasing quantum numbers) of the maximum values attained by $\delta L_z(n)$ over the first 200 kicks, $\delta L_z^{max}$. Since $t^* \ll 200$ over the range of $l$ values examined, the quantity $\delta L_z^{max}$ is a rigorous upper bound for $\delta L_z(t^*)$.
In Fig. \[deltamax.vs.l\] we compare $\delta L_z^{max}$ for the two initial conditions of Fig. \[delta.2.835.140\], using the global chaos parameters ($\gamma =2.835$, $r \simeq 1.1$). The filled circles in Fig. \[deltamax.vs.l\] correspond to the initial condition $\vec{\theta}(0) = (20^o,40^o,160^o,130^o)$. As in the mixed regime, the maximum deviations exhibit little or no scaling with increasing quantum number. This is the typical behaviour that we have observed for a variety of different initial conditions and parameter values. These results motivate the generic rule, $$\delta \tilde {L}_z(t^*) \le \delta \tilde{L}_z^{max} \sim {\mathcal O}(1/l).$$ Thus the magnitude of quantum-classical differences reached at the end of the exponential growth regime, expressed as a fraction of the system dimension, approaches zero in the classical limit. However, for a few combinations of parameters and initial conditions we do observe a ‘transient’ discrepancy peak occurring at $t \simeq t^*$ that exceeds ${\mathcal O}(1)$. This peak is quickly smoothed away by the subsequent relaxation of the quantum and classical distributions. The most conspicuous case that we have identified is apparent in Fig. \[delta.2.835.140\] (open circles), and is also visible as a small deviation in the normalized data of Fig. \[qmlmnm\]. The scaling of the magnitude of this peak with increasing $l$ is plotted with open circles in Fig. \[deltamax.vs.l\]. The magnitude of the peak initially increases rapidly but appears to become asymptotically independent of $l$. The other case that we have observed occurs for the classical parameters $\gamma =2.025$, with $r\simeq 1.1$ and $a=5$, and with initial condition $\vec{\theta}(0) = (20^o,40^o,160^o,130^o)$. We do not understand the mechanism leading to such transient peaks, although they are of considerable interest since they provide the most prominent examples of quantum-classical discrepancy that we have observed.
Discussion {#sec:discussion} ========== In this study of a non-integrable model of two interacting spins we have characterized the correspondence between quantum expectation values and classical ensemble averages for initially localised states. We have demonstrated that in chaotic states the quantum-classical differences initially grow exponentially with an exponent $\lambda_{qc}$ that is consistently larger than the largest Lyapunov exponent. In a study of the moments of the Hénon–Heiles system, Ballentine and McRae [@Ball98; @Ball00] have also shown that quantum-classical differences in chaotic states grow at an exponential rate with an exponent larger than the largest Lyapunov exponent. This exponential behaviour appears to be a generic feature of the short-time dynamics of quantum-classical differences in chaotic states. Since we have studied a spin system, we have been able to solve the quantum problem without truncation of the Hilbert space, subject only to numerical roundoff, and thus we are able to observe the dynamics of the quantum-classical differences well beyond the Ehrenfest regime. We have shown that the exponential growth phase of the quantum-classical differences terminates well before these differences have reached the system dimension. We find that the time-scale at which this occurs can be estimated from the time-scale at which the distribution widths approach the system dimension, $t_{sat} \simeq (2 \lambda_w)^{-1} \ln (l)$ for initial minimum uncertainty states. Due to the close correspondence in the growth rates of the quantum and classical distributions, this time-scale can be estimated from the classical physics alone. This is useful because the computational complexity of the problem does not grow with the system action in the classical case. Moreover, we find that the exponent $\lambda_w$ can be approximated by the largest Lyapunov exponent when the kinematic surface is predominantly chaotic.
We have demonstrated that the exponent $\lambda_{qc}$ governing the initial growth rate of quantum-classical differences is independent of the quantum numbers, and that the effective prefactor to this exponential growth decreases as $1/l$. These results imply that a log break-time rule (\[eqn:tb\]) delimits the dynamical regime of Liouville correspondence. However, the exponential growth of quantum-classical differences persists only for short times and small differences, and thus this log break-time rule applies only in a similarly restricted domain. In particular, we have found that the magnitude of the differences occurring at the end of the initial exponential growth phase does not scale with the system dimension. A typical magnitude for these differences, relative to the system dimension, is ${\mathcal O}(1/l)$. Therefore, $\log(l)$ break-time rules characterizing the end of the Liouville regime are not robust, since they apply to quantum-classical differences only in a restricted domain, [*i.e.*]{}, to relative differences that are smaller than ${\mathcal O}(1/l)$. This restricted domain effect does not arise for the better known log break-time rules describing the end of the Ehrenfest regime [@Ball94; @BZ78; @Haake87]. The Ehrenfest log break-time remains robust for arbitrarily large tolerances since the corresponding differences grow roughly exponentially until saturation at the system dimension [@Fox94b; @Fox94a]. Consequently, a $\log(l)$ break-time indeed implies a [*breakdown*]{} of Ehrenfest correspondence. However, the logarithmic break-time rule characterizing the end of the Liouville regime does not imply a breakdown of Liouville correspondence because it does not apply to the observation of quantum-classical discrepancies larger than ${\mathcal O}(1/l)$.
The appearance of residual ${\mathcal O}(1/l)$ quantum-classical discrepancies in the description of a macroscopic body is, of course, consistent with quantum mechanics having a proper classical limit. We have found, however, that for certain exceptional combinations of parameters and initial conditions there are relative quantum-classical differences occurring at the end of the exponential growth phase that can be larger than ${\mathcal O}(1/l)$, though still much smaller than the system dimension. In absolute terms, these transient peaks seem to grow with the system dimension for small quantum numbers but become asymptotically independent of the system dimension for larger quantum numbers. Therefore, even in these least favorable cases, the [*fractional*]{} differences between quantum and classical dynamics approach zero in the limit $l \rightarrow \infty$. This vanishing of fractional differences is sufficient to ensure a classical limit for our model. Finally, contrary to the results found in the present model, it has been suggested that a log break-time delimiting the Liouville regime implies that certain isolated macroscopic bodies in chaotic motion should exhibit non-classical behaviour on observable time scales. However, since such non-classical behaviour is not observed in the chaotic motion of macroscopic bodies, it is argued that the observed classical behaviour emerges from quantum mechanics only when the quantum description is expanded to include interactions with the many degrees of freedom of the ubiquitous environment [@ZP95a; @Zurek98b]. (This effect, called decoherence, rapidly evolves a pure system state into a mixture that is essentially devoid of non-classical properties.) However, in our model classical behaviour emerges in the macroscopic limit of a simple few-degree-of-freedom quantum system that is described by a pure state and subject only to unitary evolution.
Quantum-classical correspondence at both early and late times arises in spite of the log break-time because this break-time rule applies only when the quantum-classical difference threshold is chosen smaller than ${\mathcal O}(\hbar)$. In this sense we find that the decoherence effects of the environment are not necessary for correspondence in the macroscopic limit. Of course the effect of decoherence may be experimentally significant in the quantum and mesoscopic domains, but it is not required [*as a matter of principle*]{} to ensure a classical limit. Acknowledgements ================ We wish to thank F. Haake and J. Weber for drawing our attention to the recursion algorithm for the rotation matrix elements published in [@Haake96]. J. E. would like to thank K. Kallio for stimulating discussions. Appendix ======== Ideally we would like to construct an initial classical density that reproduces all of the moments of the initial quantum coherent states. This is possible in a Euclidean phase space, in which case all Weyl-ordered moments of the coherent state can be matched exactly by the moments of a Gaussian classical distribution. However, below we prove that no classical density $\rho_c(\theta,\phi)$ that describes an ensemble of spins of fixed length $|{\bf J}|$ can be constructed with marginal distributions that match those of the SU(2) coherent states (\[eqn:cs\]). Specifically, we consider the set of distributions on ${\mathcal S}^2$ with continuous independent variables $\theta \in [ 0 , \pi ]$ and $ \phi \in [ 0,2\pi) $, measure $ d \mu = \sin \theta d \theta d \phi$, and subject to the usual normalization, $$\int_{{\mathcal S}^2} d \mu \; \rho_c(\theta,\phi) = 1.$$ For convenience we choose the coherent state to be polarized along the positive $z$-axis, $\rho = | j,j \rangle \langle j,j |$. This state is axially symmetric: rotations about the $z$-axis by an arbitrary angle $\phi$ leave the state operator invariant. 
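The moment computation carried out in the remainder of this appendix can be verified symbolically. The SymPy sketch below assumes, for simplicity, the explicit vector-model distribution (\[eqn:vm\]) (which the proof itself does not require), so that the polar integral is trivial and only the azimuthal average remains:

```python
import sympy as sp

j, phi = sp.symbols('j phi', positive=True)
# Vector-model ensemble: spins of fixed length |J| = sqrt(j(j+1)) on a cone
# with cos(theta_0) = j/|J|, so that <J_z^n>_c = j^n, matching |j,j>.
Jlen = sp.sqrt(j * (j + 1))
sin_t0 = sp.sqrt(1 - (j / Jlen)**2)
Jx = Jlen * sin_t0 * sp.cos(phi)                 # J_x on the cone
avg = lambda expr: sp.integrate(expr, (phi, 0, 2*sp.pi)) / (2*sp.pi)

Jx2_c = sp.simplify(avg(Jx**2))                  # classical <J_x^2>: j/2
Jx4_c = sp.simplify(avg(Jx**4))                  # classical <J_x^4>: 3*j**2/8
mismatch = sp.simplify((3*j**2/4 - j/4) - Jx4_c) # quantum minus classical
```

The second moment matches the quantum value $j/2$, while the fourth moments disagree by $3j^2/8 - j/4$, in accordance with the proof below.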
Consequently, we require axial symmetry of the corresponding classical distribution, $$\label{eqn:cazisymm} \rho_c(\theta,\phi)= \rho_c(\theta).$$ We use the expectation of the quadratic operator, $ \langle {\bf J}^2 \rangle = j(j+1) $, to fix the length of the classical spins, $$\label{eqn:length} |{\bf J}| = \sqrt{\langle J^2 \rangle_c} = \sqrt{j(j+1)}.$$ Furthermore, the coherent state $|j,j\rangle$ is an eigenstate of $J_z$ with moments along the $z$-axis given by $\langle J_z^n \rangle = j^n$ for integer $n$. Therefore we require that the classical distribution produces the moments, $$\label{eqn:cjz} \langle J_z^n \rangle_c = j^n.$$ These requirements are satisfied by the $\delta$-function distribution, $$\label{eqn:vm} \rho_v(\theta) = { \delta(\theta - \theta_o) \over 2 \pi \sin \theta_o },$$ where $\cos \theta_o = j/|{\bf J}|$ defines $\theta_o$. This distribution is the familiar vector model of the old quantum theory corresponding to the intersection of a cone with the surface of the sphere. However, in order to derive an inconsistency between the quantum and classical moments we do not need to assume that the classical distribution is given explicitly by (\[eqn:vm\]); we only need to make use of the azimuthal invariance condition (\[eqn:cazisymm\]), the length condition (\[eqn:length\]), and the first two even moments of (\[eqn:cjz\]). First we calculate some of the quantum coherent state moments along the $x$-axis (or any axis orthogonal to $z$), $$\begin{aligned} \label{eqn:xmoments} \langle J_x^m \rangle & = & 0 \; \; \; \mathrm{for\ odd}\ m \nonumber \\ \langle J_x^2 \rangle & = & j/2 \nonumber \\ \langle J_x^4 \rangle & = & 3j^2/4 -j/4 \nonumber.
\end{aligned}$$ In the classical case, these moments are of the form, $$\label{eqn:jxm} \langle J_x^m \rangle_c = \int d J_z \int d \phi \rho_c(\theta) |{\bf J}|^m \cos^m(\phi) \sin^m(\theta).$$ For $m$ odd the integral over $\phi$ vanishes, as required for correspondence with the odd quantum moments. For $m$ even we can evaluate (\[eqn:jxm\]) by expressing the r.h.s. as a linear combination of the $z$-axis moments (\[eqn:cjz\]) of equal and lower order. For $m=2$ this requires substituting $\sin^2(\theta) = 1 - \cos^2(\theta) $ into (\[eqn:jxm\]) and then integrating over $\phi$ to obtain $$\begin{aligned} \langle J_x^2 \rangle_c & = & \pi \int d J_z \rho_c(\theta) |{\bf J}|^2 - \pi \int d J_z \rho_c(\theta) |{\bf J}|^2 \cos^2(\theta) \nonumber \\ & = & |{\bf J}|^2 /2 - \langle J_z^2 \rangle /2. \nonumber \end{aligned}$$ Since $\langle J_z^2 \rangle$ is determined by (\[eqn:cjz\]) and the length is fixed from (\[eqn:length\]) we can deduce the classical value without knowing $\rho_c(\theta)$, $$\langle J_x^2 \rangle_c = j/2.$$ This agrees with the value of the corresponding quantum moment. For $m=4$, however, by a similar procedure we deduce $$\langle J_x^4 \rangle_c = 3 j^2/8,$$ which differs from the quantum moment $ \langle J_x^4 \rangle $ by the amount, $$\delta J_x^4 = | \langle J_x^4 \rangle - \langle J_x^4 \rangle_c | = | 3 j^2/8 - j/4 |,$$ concluding our proof that no classical distribution on ${\mathcal S}^2$ can reproduce the quantum moments. L.E. Ballentine, Y. Yang, and J.P. Zibin, Phys. Rev. A [**50**]{}, 2854 (1994). G.P. Berman and G.M. Zaslavsky, Physica [**91A**]{}, 450 (1978). F. Haake, M. Kus and R. Scharf, Z. Phys. B [**65**]{}, 361 (1987). B.V. Chirikov, F.M. Israilev, and D.L. Shepelyansky, Physica D [**33**]{}, 77 (1988). W.H. Zurek and J.P. Paz, Phys. Rev. Lett. [**72**]{}, 2508 (1994). S. Habib, K. Shizume and W.H. Zurek, Phys. Rev. Lett. [**80**]{}, 4361 (1998). R. Roncaglia, L. Bonci, B.J. West, and P. Grigolini, Phys. Rev.
E [**51**]{}, 5524 (1995). F. Haake, [*Quantum Signatures of Chaos*]{} (Springer-Verlag, New York, 1991). W.H. Zurek and J.P. Paz, Phys. Rev. Lett. [**75**]{}, 351 (1995). W.H. Zurek, Physica Scripta [**T76**]{}, 186 (1998). M. Feingold and A. Peres, Physica [**9D**]{}, 433 (1983). L. E. Ballentine, Phys. Rev. A [**44**]{}, 4126 (1991). L. E. Ballentine, Phys. Rev. A [**44**]{}, 4133 (1991). L. E. Ballentine, Phys. Rev. A [**47**]{}, 2592 (1993). D. T. Robb and L. E. Reichl, Phys. Rev. E [**57**]{}, 2458 (1998). G.J. Milburn, quant-ph/9908037 (1999). L.E. Ballentine and S.M. McRae, Phys. Rev. A [**58**]{}, 1799 (1998). L.E. Ballentine, Phys. Rev. A [**63**]{}, 024101 (2001). J.J. Sakurai, [*Modern Quantum Mechanics*]{} (Benjamin-Cummings, Menlo Park, Calif., 1985). A.J. Lichtenberg and M.A. Lieberman, [*Regular and Chaotic Motion*]{} (Springer-Verlag, New York, 1992). A. Perelomov, [*Generalized Coherent States and Their Applications*]{} (Springer-Verlag, New York, 1986). R.F. Fox and T.C. Elston, Phys. Rev. E [**50**]{}, 2553 (1994). A. Braun, P. Gerwinski, F. Haake, H. Schomerus, Z. Phys. B [**100**]{}, 115 (1996). R.F. Fox and T.C. Elston, Phys. Rev. E [**49**]{}, 3683 (1994). B.S. Helmkamp and D.A. Browne, Phys. Rev. E [**49**]{}, 1831 (1994). J.R. Dorfman, [*An Introduction to Chaos in Nonequilibrium Statistical Mechanics*]{} (Cambridge University Press, Cambridge, 1999). J. Emerson and L.E. Ballentine, submitted to Phys. Rev. E, quant-ph/0103050 (2001).
--- abstract: 'We study the exchange constants of MnV$_{2}$O$_{4}$ using the magnetic force theorem and the local spin density approximation of density functional theory supplemented with a correction due to the on–site Hubbard interaction $U$. We obtain the exchanges for three different orbital orderings of the vanadium atoms of the spinel, two sizes of trigonal distortion, and several values of the Coulomb parameter $U$. We then map the exchange constants to a Heisenberg model with single–ion anisotropy and solve for the spin–wave excitations in the non–collinear, low temperature phase of the spinel. The single–ion anisotropy parameters are obtained from an atomic multiplet exact–diagonalization program, taking into account the crystal–field splitting and the spin–orbit coupling. We find good agreement between the spin waves of one of our orbital ordered setups and previously reported experimental spin waves as determined by neutron scattering. We can therefore determine the correct orbital order from among the various proposals that exist in the literature.' author: - 'R. Nanguneri, S. Y. Savrasov' title: 'Exchange constants and spin waves of the orbital ordered, non–collinear spinel MnV$_{2}$O$_{4}$' --- Introduction ============ Transition metal oxides (TMO) are a class of solid–state materials that exhibit a rich variety of physical phenomena[@tokura00]. Among them, magnetic cubic spinels AV$_{2}$O$_{4}$ have recently attracted much attention due to the geometrically frustrated corner–sharing tetrahedral network formed by the V atoms (also known as a pyrochlore lattice)[@plum87]. An interesting example is MnV$_{2}$O$_{4}$, a spinel with additional magnetic Mn ions.
It exhibits an orbital ordering (OO) that occurs at finite $T$ as a thermal phase transition: At room temperature, crystalline MnV$_{2}$O$_{4}$ is a cubic paramagnet (PM) where Mn sites occupy the centers of oxygen tetrahedra (MnO$_{4}$ units), while V sites occupy the centers of oxygen octahedra (VO$_{6}$ units) which exhibit slight trigonal distortions consistent with the $Fd\overline{3}m$ cubic symmetry. As $T$ is lowered there occur two phase transitions: \[1\] A magnetic transition at $T_{F}=56$ K from the high–$T$ PM phase to a cubic ferrimagnetic (FEM) phase, with the Mn and V moments anti–aligned; \[2\] followed by a second transition at $T_{S}=53$ K to a tetragonal, non–collinear FEM with orbital ordering of the $V^{3+}$ $3d^{2}$ electrons[@gar08]. The orbital ordered phase is accompanied by a reduction of the V magnetic moments due to the formation of an electron orbital moment (finite orbital angular momentum). The orbital moment, $m_{o}\approx 0.34$, is anti–aligned with the spin moment, $m_{s}\approx 1.65$, giving the total moment of $m\approx 1.31$ [@gar08]. The reduced value of the V moment has been reproduced by an earlier first–principles work in Ref. , and is explained by the spin–orbit coupling (SOC) on the V $3d^{2}$ shell which generally favors anti–alignment of spin and orbital angular momenta for $T$ below the energy scale of SOC [@plum87]. The local tetrahedral and octahedral coordination of the Mn and V sites results in the crystal–field (CF) splitting of their 5-fold $3d$ orbital degeneracy. Tetrahedrally coordinated Mn has an $e_{g}$ lower in energy than $t_{2g}$, while the splitting is opposite for octahedrally coordinated V. Inter–electron Coulomb interactions and exchange anti–symmetry lead to Hund’s rule splitting of up and down spins, which is greater than the CF splitting.
In the stoichiometric crystalline environment, Mn has an outer shell high–spin $S=5/2$ configuration of $3d^{5}$ and a valence of $+2$: all 5 up–spin $3d$ orbitals are occupied giving $L_{z}=0$ (quenched total orbital moment), and the down spin ones are empty. V has a valence of $+3$, an outer shell configuration of $3d^{2}$, and $S=1$: in this case, 2 electrons must occupy the 3 $t_{2g}$ orbitals. In the high temperature cubic phase, these latter three are nearly degenerate, while in the low temperature tetragonal phase, where the unit cell is slightly compressed along the $c$–axis, the $xy$ is lowered in energy while $yz$, $zx$ remain degenerate. Thus, in the tetragonal (low–$T$) phase, one electron on V occupies the $xy$, and the second electron has the freedom to occupy either $yz$ or $zx$. Unlike Mn, the orbital angular momentum of V is not fully quenched: The partial occupation of the $yz$ and $zx$ gives an effective orbital angular momentum $L=1$ for V. The fact that $L\neq 0$ implies that there may be non–negligible effects of SOC in the V atoms [@plum87]. Further, this is a hint that the $yz$ and $zx$ could form complex linear combinations of one–electron states if it happens that $L_z=\pm1$, since only such a complex state can have a non–zero $L_z$. The freedom of the second electron of V to occupy $yz$, $zx$, or some linear combination of the two gives rise to the possibility of long–range orbital order in the low–$T$ phase. Two simple choices have been proposed for the orbital ordering in this spinel and both have been studied theoretically in mean field models. One is the Antiferro–Orbital Order (AFOO) with alternate occupation of the $yz$ and $zx$ along the $c$–axis, i.e.: the same orbital is occupied in a given $ab$–plane but the other orbital is occupied in the adjacent planes above and below[@tsun03; @gar08; @suzuki07], as shown in Fig. \[fig1\](a). This order has the space–group symmetry $I4_{1}/a$.
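The statement above that only a complex combination of $yz$ and $zx$ can carry a non–zero $L_z$ is easy to verify directly in the $|l=2,m\rangle$ basis. The sketch below uses the standard real-harmonic conventions for the $d$ orbitals; it is an illustrative check, not part of the electronic-structure calculation:

```python
import numpy as np

m = np.array([-2, -1, 0, 1, 2])              # |l=2, m> basis
Lz = np.diag(m).astype(complex)

# Real d orbitals in the |m> basis (standard conventions):
yz = np.zeros(5, complex)
zx = np.zeros(5, complex)
yz[[1, 3]] = 1j / np.sqrt(2), 1j / np.sqrt(2)   # (i/sqrt2)(|-1> + |+1>)
zx[[1, 3]] = 1 / np.sqrt(2), -1 / np.sqrt(2)    # (1/sqrt2)(|-1> - |+1>)

expval = lambda v: (v.conj() @ Lz @ v).real

# Real orbitals have <L_z> = 0 (quenched) ...
# ... while (yz +/- i*zx)/sqrt(2) are pure |m = -1> or |m = +1> states:
plus = (yz + 1j * zx) / np.sqrt(2)    # <L_z> = -1
minus = (yz - 1j * zx) / np.sqrt(2)   # <L_z> = +1
```

This is why the effective $L=1$ of the V $t_{2g}$ pair is activated only when SOC stabilizes a complex orbital occupation.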
The second is the Ferro–Orbital Order (FOO) where the same orbital is occupied on all V–atoms[@adachi05], giving the space–group $I4_{1}/amd$, as shown in Fig. \[fig1\](b). In the latter, if the orbital order is a complex linear combination of $yz$ and $zx$ there will be a non–zero orbital angular momentum and a magnetic moment associated with it[@tchern04]. Spin–orbit coupling can stabilize the finite orbital moment, since the energy is lower for anti–parallel alignment of $\vec{L}$ and $\vec{S}$. ![(a) Schematic illustration of the *initial* real antiferro–orbital order of the type I (AFOO I) with $L=0$ on the four corners of the V tetrahedron. The lower and upper horizontal bonds are in the $ab$ plane. The red spheres are the V atoms. The lower $ab$ plane has $yz$ orbitals occupied on all V, while the upper $ab$ plane has $zx$ occupied on all V. (b) Schematic illustration of the *initial* ferro orbital order on all four corners of the V tetrahedron where an electron occupies the same real linear combination of $yz$ and $zx$ on all V sites. Note that the self–consistent solution breaks this symmetry and results in an electron occupying alternately $\protect\psi _{+}=(\protect\psi _{yz}+\protect\psi _{zx})/\protect\sqrt{2}$ and $\protect\psi _{-}=(\protect\psi _{yz}-\protect\psi _{zx})/\protect\sqrt{2}$ along the $c$–axis. We refer to this order as antiferro–orbital order of type II (AFOO II). The indices [i]{}, [j]{}, [k]{}, [l]{} denote the inequivalent V sites in the FCC primitive cell. []{data-label="fig1"}](OrbitalOrders.jpg){width="0.5\columnwidth"} In both of the above proposals, the trigonal distortion of the VO$_{6}$ octahedra in the low–$T$ phase is not taken into account, but it is known to be large in MnV$_{2}$O$_{4}$ as compared to other vanadates.
While a slight trigonal distortion is present even in the high–$T$ cubic phase, there is a qualitative symmetry–lowering change and an increase in this distortion in the low–$T$ phase which lifts the residual degeneracy between $yz$ and $zx$ and within the $e_{g}$ manifold, and combined with the tetragonal distortion results in the mixing of all 5 $3d$ orbitals. In this case, the above OO proposals are not necessarily correct as these assume degeneracy between the $yz$, $zx$ orbitals. This low–$T$ trigonal distortion has indeed been observed in the previous first–principles work[@sarkar09] that used the local spin density approximation (LSDA) of density functional theory (DFT) [@DFTBook] supplemented by the correction due to the on–site Hubbard interaction $U$ [@AnisimovLDA+U] for correlation strengths $U>2$ eV. In that work, in addition to a tetragonal relaxation (compression) along the $c$–axis, structural relaxation of the O positions is performed and a trigonal distortion of the VO$_{6}$ octahedron with a concomitant lowering of symmetry from $I4_{1}/amd$ to $I4_{1}/a$ is found. By projecting the converged density onto an atomic orbital basis using so-called N–th order muffin–tin orbital (NMTO) downfolding [@AndersenNMTO], the authors of Ref.  find a different electron occupation order from the ones proposed above, namely, the first electron occupies the lowest energy eigenstate, and the second occupies the next higher energy eigenstate. The $3d$ energy eigenstates are the same on all V sites, but rotated alternately by $45^{\circ }$ along the $ab$–chains due to the staggered trigonal distortion. Thus, the same orbitals are occupied on all V sites, akin to the FOO, but nevertheless the space–group symmetry is $I4_{1}/a$, expected of AFOO, due to the trigonal distortion. The low–$T$ magnetic excitations of the compound have been mapped along high–symmetry directions using inelastic neutron scattering[@chung08; @gar08].
At the $\Gamma $ point, these excitations are gapped for the acoustic modes, indicating the presence of single–ion anisotropy, which essentially occurs due to the interplay between SOC and crystal–fields[@alders01]. In Ref. , the authors start with a nearest–neighbor Heisenberg Hamiltonian including the anisotropy term and calculate spin–wave spectra and corresponding eigenmodes using linear spin–wave theory (LSWT) for the non–collinear, tetragonal phase. By fitting the spectrum to inelastic neutron scattering data, they were able to determine the exchange couplings between Mn–V, V–V in the $ab$–plane, and V–V between $ab$–planes along the $c$–axis. They find all exchanges to be AFM with the following values: - $J_{\rm Mn-V} = -2.82$ meV - $J_{\rm V-V}^{ab} = -9.89$ meV - $J_{\rm V-V}^{c} = -3.08$ meV The authors point out that the interplanar coupling between V atoms, $J_{\rm V-V}^{c}$, along the $c$–axis is unusually large for AFOO because such an alternate orbital occupation in the vertical direction would yield negligible orbital overlap, and would also be ferromagnetic (wrong sign) by the Goodenough–Kanamori rules[@good63]. The alternate proposal, FOO, would be consistent with these results, but would have the wrong symmetry, $I4_{1}/amd$. The symmetry group of this spinel vanadate has been established conclusively as $I4_{1}/a$ by a synchrotron x–ray study[@suzuki07] which supports AFOO, but contradicts the large value of $J_{\rm V-V}^{c}$. A possible resolution of this puzzle is that the trigonal distortion has been ignored in these simple proposals. With trigonal distortion, we expect a more complex orbital ordering which has the requisite symmetry $I4_{1}/a$ and would give the observed (or fitted) $J_{\rm V-V}^{c}$ along the $c$–axis[@chung08]. This is exactly what has been found in the ab–initio work of Ref. . Their physical picture has received some support from a recent $^{51}$V NMR work of Ref.  and from the analytical model of Ref. .
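The logic of the LSWT fit, in particular why a gap at $\Gamma$ signals single–ion anisotropy, can be illustrated on a toy model. The sketch below is emphatically not the non–collinear pyrochlore calculation: it is a hypothetical 1D alternating-spin chain with $S_1=5/2$ and $S_2=1$ (mimicking the Mn and V spins) coupled antiferromagnetically, treated by Holstein–Primakoff plus a Bogoliubov step. Without an anisotropy term $D$, the acoustic magnon is exactly gapless at $k=0$:

```python
import numpy as np

def ferri_chain_modes(k, S1=2.5, S2=1.0, J=1.0):
    """Linear spin-wave modes of a toy 1D alternating-spin (S1, S2)
    antiferromagnetic chain (no single-ion anisotropy).  The 2x2
    (a_k, b^+_{-k}) Bogoliubov block is diagonalized numerically."""
    z = 2                                            # chain coordination
    A = z * J * S2                                   # a^+ a coefficient
    C = z * J * S1                                   # b^+ b coefficient
    B = z * J * np.sqrt(S1 * S2) * np.cos(k / 2.0)   # anomalous coupling
    dyn = np.array([[A, B], [-B, -C]])               # G @ H, G = diag(1,-1)
    return np.sort(np.abs(np.linalg.eigvals(dyn)))

acoustic, optical = ferri_chain_modes(k=0.0)
# acoustic -> 0 (Goldstone mode); optical -> z*J*(S1 - S2) = 3
```

Adding a $D$ term shifts the diagonal entries and opens a gap in the acoustic branch, which is the feature exploited in the experimental fit.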
In this work, we report our study of MnV$_{2}$O$_{4}$ based on the LSDA+$U$ method and using a linear muffin–tin orbital (LMTO) basis set to solve the electronic structure problem [@Andersen1975; @Savrasov1996]. We calculate the pair–wise interatomic magnetic exchange interactions ($J$) between all magnetic atoms using linear response theory and the magnetic force theorem [@liech87; @wan06], including the single–ion anisotropies ($D$) for Mn and V found by the exact diagonalization procedure[@alders01]. We then use the obtained $J$ and $D$ as parameters in a Heisenberg Hamiltonian with anisotropy to derive the spin–wave spectra in a semiclassical approximation. We explore three initial orbital ordering scenarios: \[1\] Antiferro, \[2\] Ferro, and \[3\] Complex ferro + SOC in the density matrix of the $3d$ shell of V to see how they affect the obtained exchange interactions. We also performed non–collinear magnetic electronic structure calculations. In our low-$T$ tetragonal structures, we explore the effects of two types of trigonal distortions of the ${\rm VO}_6$ octahedra: A small trigonal distortion, of order $2\%$ of the undistorted structure, with $I4_1/amd$ symmetry; and a larger trigonal distortion, of the type used in the relaxed structure of Ref.  (about $10\%$ of the undistorted structure), with an $I4_1/a$ symmetry. We find that the $J^{\prime}$s depend on both the size of the trigonal distortion and the Coulomb parameter $U$; we are thus faced with a two-parameter ‘trigonal–distortion/Coulomb–$U$’ space within which to search for a good match between experimental and theoretical $J^{\prime}$s. We find that the SOC complex ferro–orbital order gives $J^{\prime}$s which best match the experimental ones for small trigonal distortion and low $U$, and also for larger trigonal distortion and higher $U$. Our paper is organized as follows. We begin with a discussion of the proposed orbital orders and their electronic structures in Section II.
We present our results for exchange interactions and comparisons with experiment in Section III. We end with the conclusions in Section IV. Proposed Orbital Orders and their Electronic Structures ======================================================= We have done LSDA+$U$ calculations to model the electronic structure for all three thermodynamic phases of MnV$_{2}$O$_{4}$. We describe our results in the following subsections for the $T=0$ phase only since this is the phase which exhibits orbital ordering and non–collinear magnetism. Our results for the other finite–$T$ phases may be found in Ref. . For the magnetic phases we use the same values of $U$ and $J_H$ for both Mn and V correlated $3d$ shells. The use of the same $U$ on Mn and V is justified because these elements have atomic numbers 25 and 23, and are thus expected to have similar interaction strengths[@sarkar09]. The Coulomb and exchange parameters in the solid state are generally screened, and hence reduced by a considerable amount from their bare atomic values[@miyake08]. The structural parameters for all three phases are taken from experiment[@gar08]: In the cubic phase, the lattice constant is $16.0746$ a.u., and in the tetragonal phase it is $16.12$ a.u. with a small tetragonal distortion ratio of $\frac{c}{a}=0.98$. The non–collinear orbital ordered phase occurs when the temperature is reduced below $T_{S}=53$ K. This phase transition results simultaneously in: \[1\] a structural transition from cubic to tetragonal; \[2\] the canting of V moments from a *collinear* ferrimagnetic (FEM) to a $\mathbf{q}=0$ *non-collinear* FEM spin order with non–zero components in the $ab$–plane; and \[3\] a long–range orbital order in the V $t_{2g}$ manifold. 
We model the electronic structure of this phase using LSDA+$U$ method with $U=5$ eV and $J_H=1$ eV, but starting the self–consistency loop after imposition of the initial orbital order(s) in the Hubbard–$U$ density matrix (further described below), along with tetragonal distortion and two different magnetic configurations: \[1\] *collinear*, as in the intermediate phase, and \[2\] *non–collinear*, which is in fact the correct magnetic order for this phase. The converged charge density for the low–$T$ *collinear* calculation was used as the initial charge density for the correct low–$T$ *non-collinear* calculation. The orbital order that is finally obtained after reaching the self–consistency is taken to be the correct metastable solution within this approximation and specified initial condition(s). We initialize the V $3d$ density matrix to a particular orbital order by specifying orbital occupation numbers in the atomic basis. This means we initially specify only the diagonal components (occupation numbers) $\langle n_{xy\uparrow }\rangle $, $\langle n_{yz\uparrow }\rangle $, $\langle n_{zx\uparrow }\rangle $ of the density matrix for all four V atoms’ $3d$ shells and set the off–diagonal elements to zero. The full complex density matrix in the atomic basis is $\langle n_{m\sigma ,m^{\prime }\sigma ^{\prime }}\rangle $ (where $m,m^{\prime }$ and $\sigma ,\sigma ^{\prime }$ are the $3d$ orbital and spin indices respectively) and includes off–diagonal components as well. As a result of the electron–electron interactions, during the self–consistent cycle non–zero off–diagonal components of the density matrix develop (since the interactions mix the single–particle $3d$ orbitals at the Hartree–Fock (HF) mean–field level). This means the true occupied orbitals are some linear combination of the atomic basis functions. 
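This bookkeeping can be sketched with illustrative numbers (not values computed in this work) for a single V $t_{2g}$-$\uparrow$ block: off–diagonal elements that develop during the self–consistent cycle are handled by diagonalizing the converged matrix, whose eigenvectors are the occupied natural orbitals:

```python
import numpy as np

# Hypothetical converged t2g-up density matrix for one V site
# (basis order: xy, yz, zx).  A yz-zx off-diagonal element has
# developed during the self-consistent cycle; entries are illustrative.
n = np.array([[0.95, 0.00,   0.00],
              [0.00, 0.48,  0.46j],
              [0.00, -0.46j, 0.48]], dtype=complex)   # Hermitian

occ, vecs = np.linalg.eigh(n)          # natural-orbital occupations/states
order = np.argsort(occ)[::-1]          # most-occupied first
occ, vecs = occ[order], vecs[:, order]
# occ ~ [0.95, 0.94, 0.02]: two occupied orbitals -- xy, plus a complex
# combination of yz and zx with equal weights 1/sqrt(2), i.e. a state
# that can carry a non-zero orbital moment.
```

The same diagonalization, applied site by site, is what lets one read off the converged orbital order and compare it against the AFOO/FOO proposals.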
After convergence is reached, the final density matrix, which is no longer diagonal in the $(m\sigma ,m^{\prime }\sigma ^{\prime })$ basis, is diagonalized. The resulting eigenvectors and eigenvalues give the ‘correct’ single–particle HF wave functions and their occupation numbers respectively. In the basis of these eigenfunctions, the density matrix is once again diagonal, and its non–zero entries signify the true orbitals which are occupied within the mean–field approximation of LSDA+$U$. We are thus able to identify the orbital ordering that results after convergence is attained. We describe the final orbital orders below for the *collinear* magnetic solutions only, since we find that the electronic structures, and therefore the orbital orders, of the *non-collinear* configurations are not significantly different from the corresponding *collinear* ones as discussed below. Anti-Ferro Orbital Order I: $I4_{1}/a$ symmetry ----------------------------------------------- ![(a) The V $t_{2g}$-$\uparrow$ bands for $U=5$ eV in the low–$T$ tetragonal phase with a *collinear* ferrimagnetic spin configuration and real antiferro–orbital order of the type I (AFOO-I) as discussed in text. (b) Bands for the same setup as in (a) but with a *non-collinear* ferrimagnetic spin configuration. In both panels, the partial characters of the $xy$-$\uparrow$, $yz$-$\uparrow$, $zx$-$\uparrow$ orbitals are for the sublattice [i, j]{} V atoms. We see that due to the orbital ordering, the $zx$-$\uparrow$ is occupied while the $yz$-$\uparrow$ is somewhat less occupied. The occupations of these two orbitals are reversed for the sublattice [k, l]{} V atoms on the adjacent parallel $ab$ planes along the $c$-axis. The sublattice indices are defined in Figs. \[fig1\], \[fig5\]. There is a band–gap of $E_{\mathrm{gap}}=1.67$ eV.
[]{data-label="fig2"}](AFOO1-V-i.jpg){width="1.0\columnwidth"} In the low–$T$ orbitally ordered phase, the tetragonal distortion occurs so as to break the degeneracy of the $t_{2g}$ levels on both V and Mn. We first describe the case of small trigonal distortion. There is no orbital freedom in placing the electrons in the Mn $3d$ shell. In the V, the energy of $xy$ is lowered, so the first electron occupies $xy$. The second electron then has the freedom to occupy either of the remaining degenerate orbitals $yz$ or $zx$. Figure \[fig1\](a) shows the initial orbital occupations with $I4_{1}/a$ symmetry. In this scenario, the second electron of V occupies either $yz$ or $zx$ alternately along the $c$–axis (antiferro OO), and the same orbital within each $ab$ plane[@suzuki07; @gar08]. (Each V chain within an $ab$ plane has the same orbital occupied.) The final converged density matrices of the V $3d$ subspace show that the converged orbital order is not the same as the initial order, but one which is similar to that found in Ref. . That is, when we rotate the $3d$ density matrix from the global tetragonal coordinate system to the local trigonal one, a rotation by $45^{\circ}$, we find the same set of eigenstates for all V atoms, and the lowest two of these states in energy are occupied. We label this order ‘AFOO-I’; it preserves the $I4_{1}/a$ symmetry. The *collinear* spin fat bands of the V $t_{2g}$ electrons are shown in Fig. \[fig2\](a), and the same for *non–collinear* spins in Fig. \[fig2\](b). The occupations of the $t_{2g}$-$\uparrow$ bands, as shown by the partial characters, reflect the converged orbital order, as well as the FEM spin configuration. We also find that imposition of orbital order opens a gap of about $E_{\mathrm{gap}}=1.67$ eV at the Fermi level, leading to an insulating state. 
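The extraction of the converged orbital order from the density matrix, described in the previous section, amounts to a single Hermitian diagonalization. A minimal sketch follows; the $3\times 3$ $t_{2g}$ density matrix below is illustrative (hypothetical numbers), not an actual converged LSDA+$U$ result.

```python
import numpy as np

# Illustrative V t2g density matrix in the atomic {xy, yz, zx} basis
# (hypothetical numbers, not a converged LSDA+U output). Off-diagonal
# elements develop during the self-consistent cycle because the
# interactions mix the single-particle 3d orbitals at the HF level.
n = np.array([[0.95, 0.00,         0.00       ],
              [0.00, 0.45,         0.00 + 0.45j],
              [0.00, 0.00 - 0.45j, 0.45       ]])

assert np.allclose(n, n.conj().T)  # a density matrix is Hermitian

# Diagonalize: eigenvalues are occupation numbers of the 'correct'
# single-particle HF orbitals; eigenvectors express those orbitals
# as linear combinations of the atomic basis functions.
occ, orbitals = np.linalg.eigh(n)

for nk, vk in zip(occ[::-1], orbitals.T[::-1]):   # descending occupation
    print(f"occupation {nk:5.2f}  orbital {np.round(vk, 2)}")
```

For these inputs the two occupied natural orbitals are $xy$ and a complex combination of $yz$ and $zx$, which is exactly the kind of outcome discussed for the converged orders below.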
The qualitative features of the band structure and partial characters do not change upon canting the V moments to the non–collinear configuration: the band gap remains robust and the phase is still insulating. Next we describe the case of large trigonal distortion with the space-group symmetry $I4_1/a$. In this case we use $U=4.5$ eV, $J_H=1$ eV, along with the muffin-tin sphere radii specified in Ref. . We start with an initial uniform orbital order in which the *three* $t_{2g}$-$\uparrow$ orbitals are equally occupied, while the two $e_g$ orbitals are almost empty. We also start with a second initial orbital order consisting of equal occupations of all *five* $3d$-$\uparrow$ orbitals. In both cases, we found that the converged density matrices, partial DOS, and fat bands are identical to those of the calculation with small distortion. The charge densities on the V sites are rotated alternately within and between the V chains in the $ab$ plane; moreover, when we transform to the *local* trigonal coordinate system at each of the V sites, we obtain the same single-particle wavefunctions, showing that the same orbitals are occupied on each V site, but rotated alternately by 45$^{\circ}$ due to the trigonal distortion. Thus, both small and large trigonal distortions result in the *same* orbital order, ‘AFOO-I’. Anti-Ferro Orbital Order II: $I4_{1}/a$ symmetry ------------------------------------------------ ![(a) The V $t_{2g}$-$\uparrow$ bands for $U=5$ eV in the low–$T$ tetragonal phase with a *collinear* ferrimagnetic spin configuration and real antiferro–orbital order of the type II (AFOO-II) as discussed in the text. (b) Bands for the same setup as in (a) but with a *non-collinear* ferrimagnetic spin configuration. In both panels, the partial characters of the $xy$-$\uparrow$, $yz$-$\uparrow$, $zx$-$\uparrow$ orbitals are shown for the sublattice i, j V atoms. 
Note that the $yz$ and $zx$ partial characters have nearly identical dispersions due to their equal weight in the occupied orbital. There is again a band–gap for this orbital order. []{data-label="fig3"}](AFOO2-V-i.jpg){width="1.0\columnwidth"} The next simplest *initial* order has the second $t_{2g}$ electron occupying the same real linear combination of $yz$ and $zx$ on all V sites, with equal weight for both orbitals, see Fig. \[fig1\](b). This initial order has $I4_{1}/amd$ symmetry, and we implement only the small trigonal distortion. The real linear combination implies that the orbital angular momentum is zero, $L=0$. We implement this by setting the initial mean occupations $\langle n_{xy}\rangle =1$, $\langle n_{yz}\rangle =\langle n_{zx}\rangle =1/2$ (ferro OO) and the off–diagonal elements to zero. For this setup, the initial order *does not* persist until convergence is reached. Instead, there are significant non–zero off–diagonal elements, of the same order as the occupied diagonal elements, in the final density matrix. Upon diagonalizing this final matrix, the orbital order we obtain has the second electron occupying alternately $\psi _{+}=(\psi _{yz}+\psi _{zx})/\sqrt{2}$ and $\psi _{-}=(\psi _{yz}-\psi _{zx})/\sqrt{2}$ along the $c$–axis, which again has the same $I4_{1}/a$ symmetry considered in the preceding subsection. Thus, we start with an orbital order with $I4_{1}/amd$ symmetry, but the self–consistent solution breaks certain discrete symmetries and results in an order with $I4_{1}/a$ symmetry. We thus label this order ‘AFOO-II’. We note that this order is similar to the one obtained for ZnV$_2$O$_4$ using the same LSDA+$U$ scheme [@maitra07]. The *collinear* spin fat bands of the V $t_{2g}$ and $e_{g}$ are shown in Fig. \[fig3\](a), and the same for *non-collinear* spins in Fig. \[fig3\](b). Qualitative features of the band structures do not change significantly between the collinear and non–collinear spin configurations. 
In both plots, the occupations and dispersions of the $yz$ and $zx$ bands are nearly identical since these orbitals contribute equal weights to the true orbitals, although the relative sign of $yz$ and $zx$ in the linear combination may differ from one V atom to another. We also find an insulating band–gap, which in this case is smaller than for ‘AFOO-I.’ Complex Ferro Orbital Order: $I4_{1}/amd$ symmetry -------------------------------------------------- ![(a) The V $t_{2g}$-$\uparrow$ band characters for the low-$T$ tetragonal phase with a *collinear* FEM spin configuration and complex ferro–orbital order with spin–orbit coupling (SOC-FOO) as discussed in the text. (b) Bands for the same setup as in (a) but with a *non-collinear* ferrimagnetic spin configuration. In both panels, the partial characters of the $xy$-$\uparrow$, $yz$-$\uparrow$, $yz$-$\downarrow$ orbitals are shown for the sublattice i, j, k, l V atoms. There is a band-gap of $E_{\rm gap}=1.76$ eV. The V $xy$-$\downarrow$ band lies above $E_F$; the $zx$-$\uparrow$ bands coincide with the $yz$-$\uparrow$, so we omit them; and the $zx$-$\downarrow$ bands lie in the same energy region as the $yz$-$\downarrow$, so we omit them as well. []{data-label="fig4"}](SOCFOO-V-i.jpg){width="1.0\columnwidth"} We focus first on the case of small trigonal distortion. The last orbital order has one electron in $xy$ as before, and the second electron in the $L_{z}=-1$, $S_{z}=+1/2$ state, which is a complex linear combination of $yz$ and $zx$, on all V sites. This is an initial ferro–orbital order, but with SOC switched on and non–zero orbital angular momentum. The initial density matrix configuration persists until convergence. This calculation is carried out using LSDA+$U$+SO. This scenario is also illustrated by Fig. 
\[fig1\](b), except that each V atom now carries a non–zero orbital angular momentum of magnitude one due to the complex linear combination; hence, there is a uniform orbital order on all V atoms with $L=1$ in the $3d$ density matrix. The reason for choosing opposite $z$-projections for $\vec{L}$ and $\vec{S}$ is that the spin–orbit interaction lowers the energy for such a setup, compared to the case of the same sign for both $z$–projections. In Fig. \[fig4\](a) we present the band structure of MnV$_{2}$O$_{4}$, for the *collinear* magnetic configuration, with the V $t_{2g}$-$\uparrow $ partial characters of the SOC uniform orbital order. We find that the V $t_{2g}$-$\downarrow $ and $e_{g}$ characters lie above $E_{F}$, as expected. For Mn, all the $3d$-$\downarrow $ are below $E_{F}$, while the $3d$-$\uparrow $ are above. There is a band gap of $1.76$ eV. Within LSDA+$U$ a half–metallic solution was found in Ref. , with only the $\uparrow $–spin bands of the V atoms crossing $E_{F}$, a result which we have also confirmed [@thesis]. Our result is that inclusion of SOC in LSDA+$U$ opens a band gap, signaling a half-metal-to-insulator transition as the SO coupling parameter is switched on. Since we argue, based on exchange constant calculations, that the uniform complex ferro order is the correct orbital order, we predict a half-metal-to-insulator transition to occur in single crystalline MnV$_{2}$O$_{4}$ as the temperature goes below $T_{S}$. In Fig. \[fig4\](b) we present the corresponding band structure of the *non-collinear* magnetic configuration for this order, with the partial characters of the V $t_{2g}$ shown. We find that the Mn atoms carry no orbital moment, as expected, but the V atoms have an orbital moment $m_{o}=1.03$. The spin moments are $m_{s}=4.33$ for Mn and $m_{s}=1.71$ for V. Since the spin and orbital moments are antiparallel due to the spin–orbit coupling, the total moment for V is $m\approx 0.7$ in this phase. 
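The statements that the real combination of ‘AFOO-II’ carries $L=0$ while the complex combination here carries $L_z=-1$ can be verified with the textbook $L_z$ matrix restricted to the two–orbital subspace; this is an independent consistency check, not part of the LSDA+$U$ machinery:

```python
import numpy as np

# L_z restricted to the {d_yz, d_zx} doublet (hbar = 1), in the basis
# order (yz, zx). This standard matrix follows from d_yz, d_zx being
# real combinations of the m = +/-1 spherical harmonics.
Lz = np.array([[0, 1j],
               [-1j, 0]])

psi_complex = np.array([1, 1j]) / np.sqrt(2)   # (|yz> + i|zx>)/sqrt(2)
psi_real = np.array([1, 1]) / np.sqrt(2)       # (|yz> + |zx>)/sqrt(2)

Lz_complex = np.real(psi_complex.conj() @ Lz @ psi_complex)
Lz_real = np.real(psi_real.conj() @ Lz @ psi_real)
print(Lz_complex, Lz_real)   # -1.0 and 0.0
```

With $S_z=+1/2$, the $L_z=-1$ state makes $\vec{L}$ and $\vec{S}$ antiparallel, the configuration favored by the spin–orbit term as discussed above.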
When we perform the corresponding LSDA+$U$+SO calculation with the $I4_1/a$–symmetry large trigonal distortion, $U=4.5$ eV, and $J_H=1.0$ eV, we find that the converged density matrices are not significantly different from the ones obtained with the small $I4_1/amd$ trigonal distortion; with respect to the density matrices, therefore, the larger trigonal distortion has only a minor effect. However, the trigonal distortion does have a rather large effect on the exchange interactions, as described further below. The magnetic moments with the larger trigonal distortion are, for Mn atoms, $m_{s}=4.26$ and $m_{o}=0.0$ (the orbital moment is quenched); and for V atoms, $m_{s}=1.65$ and $m_{o}=0.87$, giving a total $m=0.78$, similar to what we obtained with the small trigonal distortion. We label the order obtained with spin-orbit coupling ‘SOC-FOO’. Results for Exchange Interactions ================================= Here we outline the spin–wave model and the ground–state spin configuration, and present the results for our calculated exchange constants $J$ and single–site anisotropy parameters $D$. The resulting spin–wave spectra of MnV$_{2}$O$_{4}$ and comparisons with the neutron scattering experiments are also given. Spin Wave Model --------------- The parameters of the model are: \[1\] the exchange constants $J$, derived from the LSDA+$U$(+SO) converged charge densities using linear response theory and the magnetic force theorem [@liech87; @wan06], and \[2\] the single–ion anisotropy parameters $D$, calculated using an exact–diagonalization atomic multiplet procedure [@alders01]. We input these parameters into a Heisenberg model Hamiltonian with anisotropy terms, minimize the classical energy to find the stable ground-state configuration, and calculate the spin–wave excitation spectra. 
The model Hamiltonian is: $$\begin{aligned} H_{\rm spin} &=& - \sum_{\langle {\rm ij} \rangle} J_{\rm ij} \vec{S_{\rm i}} \cdot \vec{S_{\rm j}} - \sum_{\langle {\rm ik} \rangle}J_{\rm ik} \vec{S_{\rm i}} \cdot \vec{S_{\rm k}} - \sum_{\langle {\rm il} \rangle}J_{\rm il} \vec{S_{\rm i}} \cdot \vec{S_{\rm l}} - \sum_{\langle {\rm jk} \rangle}J_{\rm jk}\vec{S_{\rm j}} \cdot \vec{S_{\rm k}} - \sum_{\langle {\rm jl} \rangle}J_{\rm jl} \vec{S_{\rm j}} \cdot \vec{S_{\rm l}} - \sum_{\langle {\rm kl} \rangle}J_{\rm kl}\vec{S_{\rm k}} \cdot \vec{S_{\rm l}} \nonumber \\ &-& J_{\rm Mn-V}\sum_{\langle {\rm (p,q)(i,j,k,l)} \rangle}(\vec{S_{\rm p}} + \vec{S_{\rm q}}) \cdot (\vec{S_{\rm i}}+\vec{S_{\rm j}}+\vec{S_{\rm k}}+\vec{S_{\rm l}}) - \sum_{\langle {\rm pq} \rangle}J_{\rm pq} \vec{S_{\rm p}} \cdot \vec{S_{\rm q}} + \sum_{\rm x=i,j,k,l,p,q}\vec{S_{\rm x}} \cdot \bar{D}_{\rm x} \cdot \vec{S_{\rm x}}. \label{eqn1}\end{aligned}$$ The subscripts on the $J$ label the four inequivalent V sublattices, $\mathrm{i}$, $\mathrm{j}$, $\mathrm{k}$, $\mathrm{l}$, and the two inequivalent Mn sublattices, $\mathrm{p}$, $\mathrm{q}$. The $J_{\mathrm{Mn-V}}$ is taken outside the summation because it has the same value for all pairs of Mn and V atoms. All the $J$ couplings are between nearest–neighbor atoms of two different sublattices, and each pair is counted only once in the summation over all sites. We ignore the next–nearest–neighbor couplings because we found them to be much smaller in magnitude. Spin Configuration ------------------ ![The $T=0$ non–collinear spin configuration of the V spins from Ref. 
.[]{data-label="fig5"}](NCSpinConfig3.jpg){width="0.9\columnwidth"} The low–$T$ spin structure is non–collinear for the V atoms, and collinear for the Mn atoms with respect to the $c$–axis. The pyrochlore lattice on which the V atoms sit is geometrically frustrated for nearest–neighbor isotropic ($J^{ab}=J^{c}$) AFM exchange. The frustrated pyrochlore interactions mean that there could be a macroscopic ground-state degeneracy. This frustration is, however, partially relieved in the low–$T$ phase by the presence of additional nearest–neighbor exchange interactions with the Mn atoms, the tetragonal distortion, and the orbital ordering. The last of these makes the V–V AFM exchange anisotropic: $J^{ab}\neq J^{c}$. It is well known that orbital or magnetic degeneracy can be lifted by the coupling of these degrees of freedom to the lattice via the Jahn–Teller effect [@jahn37; @tokura00]. The ground–state spin configuration selected by the system in the low–$T$ phase is non–collinear due to the combined effect of the frustration and the coupling of the V spins to the Mn spins, the V $t_{2g}$ orbitals, and the lattice. In this structure, the V–atom spins develop mutually perpendicular components in the $ab$ plane. The amount of canting away from the $c$–axis can be characterized by a single canting angle $\theta $. Given the values of all $J$ and $D$ in Eq. \[eqn1\], one can find the angle $\theta $ that minimizes the classical ground–state energy of the configuration (derivation given in Ref. ): $$\theta =\arccos \left[-\frac{3J_{\rm Mn-V}S_{\rm Mn}}{(D_{\rm V}^{z}-D_{\rm V}^{x,y}-2J_{\rm V-V}^{c}-2J_{\rm V-V}^{ab})S_{\rm V}}\right]. \label{eqn2}$$ The non–collinear spin configuration that achieves this energy minimum is shown in Figure \[fig5\] [@xcrysden]. 
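Eq. \[eqn2\] can be evaluated directly. As an illustration we plug in the SOC-FOO, $U=5$ eV couplings from Table \[table1\] together with the fitted anisotropies of Table \[table2\]; the classical spin lengths $S_{\rm Mn}=5/2$ ($3d^5$) and $S_{\rm V}=1$ ($3d^2$) are our assumption here:

```python
import numpy as np

# Couplings in meV: SOC-FOO, U = 5 eV column of Table 1 plus the fitted
# single-ion anisotropies of Table 2. Spin lengths S_Mn = 5/2 and
# S_V = 1 (assumed classical values for the 3d^5 and 3d^2 shells).
J_MnV, J_ab, J_c = -4.76, -10.88, -2.72
D_V_z, D_V_xy = 7.34, -4.056
S_Mn, S_V = 2.5, 1.0

# Eq. (2): canting angle of the V moments away from the c-axis
cos_theta = -3 * J_MnV * S_Mn / ((D_V_z - D_V_xy - 2 * J_c - 2 * J_ab) * S_V)
theta = np.degrees(np.arccos(cos_theta))
print(f"cos(theta) = {cos_theta:.3f}, theta = {theta:.1f} deg")
```

With these inputs the canting comes out at roughly $22^{\circ}$, i.e. a modest tilt of the V moments away from the $c$–axis, qualitatively as in Fig. \[fig5\].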
Exchange Constants ------------------

| (meV) | No OO, $U=0$ eV | No OO, $U=5$ eV | AFOO-I, $U=5$ eV | AFOO-II, $U=5$ eV | SOC-FOO, $U=5$ eV | Expt.[@chung08] |
|---|---|---|---|---|---|---|
| $J_{\mathrm{ii}}$ | -2.72 | 0.136 | 0.3264 | -0.04488 | 0.1496 | - |
| $J_{\mathrm{V-V}}^{ab}$ | -20.4 | -21.76 | -14.96 | -19.04 | -10.88 | -9.89 |
| $J_{\mathrm{V-V}}^{c}$ | -20.4 | -18.36 | -3.536 | -7.072 | -2.72 | -3.08 |
| $J_{\mathrm{Mn-V}}$ | -10.2 | -2.992 | -5.44 | -5.44 | -4.76 | -2.82 |
| $J_{\mathrm{pq}}$ | 1.2 | 2.167 | 2.72 | 2.72 | 2.72 | - |
| $J_{\mathrm{pp}}$ | -0.476 | 0.204 | 0.204 | 0.272 | 0.272 | - |

: Exchange constants (in meV) for the small trigonal distortion and the indicated orbital orders. []{data-label="table1"}

| (meV) | $U=4.5$ eV | $U=5.0$ eV | $U=5.5$ eV | $U=6.0$ eV | Expt.[@chung08] |
|---|---|---|---|---|---|
| $J_{\mathrm{ii}}$ | 0.449 | 0.35 | 0.3 | 0.272 | - |
| $J_{\mathrm{V-V}}^{ab}$ | -17.7 | -14.28 | -12.92 | -11.56 | -9.89 |
| $J_{\mathrm{V-V}}^{c}$ | -4.624 | -4.352 | -3.808 | -3.4 | -3.08 |
| $J_{\mathrm{Mn-V}}$ | -6.8 | -5.712 | -5.304 | -4.896 | -2.82 |
| $J_{\mathrm{pq}}$ | 2.72 | 2.584 | 2.448 | 2.312 | - |
| $J_{\mathrm{pp}}$ | 0.204 | 0.2 | 0.2 | 0.1768 | - |

: Exchange constants (in meV) for the ‘AFOO-I’ order with the large trigonal distortion, as a function of $U$. []{data-label="table3"}

| (meV) | $U=4.5$ eV | $U=5.0$ eV | $U=5.5$ eV | $U=6.0$ eV | Expt.[@chung08] |
|---|---|---|---|---|---|
| $J_{\mathrm{ii}}$ | 0.272 | 0.204 | 0.177 | 0.15 | - |
| $J_{\mathrm{V-V}}^{ab}$ | -15.64 | -12.24 | -10.61 | -9.11 | -9.89 |
| $J_{\mathrm{V-V}}^{c}$ | -5.8 | -4.352 | -3.536 | -2.788 | -3.08 |
| $J_{\mathrm{Mn-V}}$ | -6.12 | -5.44 | -4.896 | -4.352 | -2.82 |
| $J_{\mathrm{pq}}$ | 2.72 | 2.72 | 2.45 | 2.329 | - |
| $J_{\mathrm{pp}}$ | 0.272 | 0.272 | 0.231 | 0.231 | - |

: Exchange constants (in meV) for the ‘SOC-FOO’ order with the large trigonal distortion, as a function of $U$. []{data-label="table4"}

In Table \[table1\] we present the $J$ parameters calculated using the LSDA+$U$(+SO) method and the magnetic force theorem for the small trigonal distortion and the indicated $U$ values. In Tables \[table3\] and \[table4\] we present the $J^{\prime}$s for the large trigonal distortion: Table \[table3\] for ‘AFOO-I’ and Table \[table4\] for ‘SOC-FOO’. 
As the method computes the exchange constants in reciprocal space, we Fourier transform them and show only the nearest–neighbor exchange interactions between atoms of the different sublattices. For the spinel structure, any nearest–neighbor pair always involves two different sublattices. The values of $J_{\rm V-V}$ for no orbital order and $U=0$ eV are the same for all V-V pairs; but when $U=5$ eV, there is a tendency for anisotropy to develop: the in–plane $J_{\rm V-V}^{ab}$ becomes different from the out–of–plane $J_{\rm V-V}^{c}$. This shows that the anisotropy in the $J_{\rm V-V}$, and the orbital ordering which causes it, could both be interaction driven. When there is an orbital order, $J_{\rm V-V}$ is different along the $ab$ V chains and between the chains (along the $c$–axis), as expected. This is true even for the uniform orbital orders, because the exchange matrix elements of the Coulomb operator are different within the $ab$ plane and between the planes, as can be seen from the shapes of the occupied orbitals in Fig. \[fig1\](a,b). We have calculated the exchange constants within LSDA+$U$ and LSDA+$U$+SO taking into account both small and large trigonal distortions, with $I4_1/amd$ and $I4_1/a$ symmetry respectively. The larger trigonal distortion results in $J^{\prime}$s that are $50\%$–$80\%$ larger, with and without SOC. Thus, increasing the trigonal distortion while keeping $U=4.5$ eV degrades the agreement with the experimental spin waves. One explanation could be that $U=4.5$ eV is too small to describe the correlation effects in V. To check this, we also tried $U=4.5$, $5.0$, $5.5$, $6.0$ eV for the large trigonal distortion and found that the $J^{\prime}$s indeed decrease as $U$ increases, see Tables \[table3\], \[table4\]. By varying both the size of the trigonal distortion and the $U$, we are dealing with a two-parameter problem. 
Since neither the trigonal distortion nor $U$ is exactly known, we have presented our results as an exploration of the trends in the $J^{\prime}$s within this two-parameter space. Increasing $U$ brings down the values of the $J^{\prime}$s, as they typically scale as $t_{dd\sigma}^2/U$ for direct exchange between the V atoms. The trends in the variation of the $J^{\prime}$s within the two-parameter space indicate that the $J^{\prime}$s for SOC-FOO best describe the experimental spin waves for the small trigonal distortion with $U=5$ eV, Table \[table1\], and the larger trigonal distortion with $U=6$ eV, Table \[table4\]. Single–Ion Anisotropy ---------------------

|  | CF $E_{xy}$ (eV) | CF $E_{e_g}$ (eV) | Theory (meV) | Expt.[@chung08] (meV) |
|---|---|---|---|---|
| Mn $D^z$ | -0.016 | 1.0 | -0.1123 | -0.1024 |
| V $D^{x,y}$ | -0.024 | 0.4 | -4.056 | -4.04 |
| V $D^z$ | -0.024 | 0.4 | 7.34 | 2.79 |

: Table of calculated anisotropy constants for the V$^{3+}$ $3d^{2}$ and Mn$^{2+}$ $3d^{5}$ atomic shells. The crystal–field energies $E_{xy}$ and $E_{e_g}$ are measured with respect to $E_{yz,zx}=0$ eV. []{data-label="table2"}

The calculation of the single–ion anisotropy requires the total energies of the interacting atomic shell in a crystal–field environment, including the spin–orbit coupling. The method for its computation is described in Ref. , so here we merely present our results. The input parameters used in the total energy calculation are the SOC parameter $0.15$ eV and the Slater integrals $F_{0}=5.0$ eV, $F_{2}=7.6$ eV, $F_{4}=4.7$ eV. We then vary the direction of the magnetic moment by applying a small external magnetic field. The CF levels take the following values: the energies of $yz$ and $zx$ are both set to the reference value of $0.0$ eV; the $e_{g}$ level is varied from $0.2$ eV to $1.0$ eV in steps of $0.2$ eV; and the $xy$ level takes the energies $-0.024$ eV, $-0.016$ eV, $-0.008$ eV, $0.0$ eV. 
This gives a set of 20 different CF configurations. The case $E_{xy}=E_{yz/zx}$ represents a cubic CF, and $E_{xy}\neq E_{yz/zx}$ a tetragonal CF. The total atomic-shell energies thus obtained are fitted to a parabolic function of the polar angle $\theta $ representing the moment orientation, centered at $\theta =0$ in the case of $z$–axis anisotropy, and centered at $\theta =\pi /2$ in the case of $x/y$–axis anisotropy. The results of the parabolic fit that best match the experimentally known $D$ values are given in Table \[table2\] for both the V $3d^{2}$ and Mn $3d^{5}$ shells. The easy axis for Mn is the $z$ ($c$) axis, and for V it is either $x$ or $y$. The easy axis always has a negative anisotropy parameter, which means the energy is lowered when the spin projection along the easy axis is maximized. For Mn, the spin projection along $z$ tends to be maximized. V, however, also has a positive anisotropy parameter along the $z$–axis, so the V spin projection tends to be maximized along $x$ or $y$ and minimized along $z$[@chung08]. Thus, the V spin moment has a tendency to point in a non–collinear direction with respect to the $z$–axis. Our anisotropy computation is able to reproduce these signs as well as the magnitudes for the Mn ($3d^{5}$) and V ($3d^{2}$) shells. Looking at the anisotropy fit values, we see that the value of the Mn anisotropy reported in Ref.  is obtained for $E_{xy}=-0.016$ eV, $E_{e_{g}}=1.0$ eV, namely $D_{\rm Mn}^{z}=-0.1123$ meV, which is close to the literature value. For our fitted values of the V anisotropy, we do not find such a close match, but there are several CF values which give the anisotropy of Ref.  with the correct sign and order of magnitude. For example, $E_{xy}=-0.024$ eV, $E_{e_{g}}=0.4$ eV give $D_{\rm V}^{z}=7.34$ meV and $D_{\rm V}^{x,y}=-4.056$ meV, which can be compared to $D_{\rm V}^{z}=2.79$ meV and $D_{\rm V}^{x,y}=-4.04$ meV of Ref. . 
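The fitting step can be sketched with synthetic shell energies generated from the leading anisotropy term $E(\theta)=E_{0}+DS^{2}\cos ^{2}\theta$ (in the real calculation these energies come from the exact diagonalization of the interacting shell; the closed form here is an illustrative stand-in):

```python
import numpy as np

# Synthetic shell energies for the Mn 3d^5 shell: S = 5/2 and
# D = -0.1123 meV (Table 2). Near theta = 0, cos^2(theta) ~ 1 - theta^2,
# so a parabolic fit centered at theta = 0 recovers -D*S^2 as curvature.
S, D_true, E0 = 2.5, -0.1123, 0.0
theta = np.linspace(-0.2, 0.2, 41)            # moment polar angle (rad)
E = E0 + D_true * S**2 * np.cos(theta)**2     # meV

a2, a1, a0 = np.polyfit(theta, E, 2)          # parabolic fit
D_fit = -a2 / S**2
print(f"D_fit = {D_fit:.4f} meV (input {D_true} meV)")
```

The small residual difference between `D_fit` and the input comes from the $\theta^{4}$ term of $\cos^{2}\theta$, which the narrow fit window keeps at the percent level.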
One reason why our calculated $D_{\rm V}^{z}$ parameter differs by a large amount from the experimental value is that we have to tune the CF energy levels to match two different single–ion anisotropies simultaneously, and it was not possible to match both experimental $D$ values of V. Spin–Wave Spectra ----------------- ![In all panels, the red lines are the experimental and the black lines the theoretical spin–waves: (a) Spin–wave spectrum for the $I4_{1}/amd$ spin–orbit coupled ferro–orbital order (SOC–FOO) along the high–symmetry lines of the Brillouin zone. We find an excellent match between our theoretical spin waves and the experimental data of Ref. . (b) Spin–waves corresponding to the $I4_{1}/a$–symmetry AFOO–I order. The upper four V oscillation branches of the theoretical spin–waves are both too high in energy and more strongly dispersing than in the experimental plot. (c) Same as in (b), but for the AFOO–II order. Here the overestimate of the $J_{\rm V-V}$ is even greater than in (b). All theoretical spin-wave plots are for $U=5$ eV and the small trigonal distortion.[]{data-label="fig6"}](SpinWaves.jpg){width="0.83\columnwidth"} We developed a code to compute the linear spin–wave spectra for the non–collinear spin configuration. The program takes as input our computed values of $J$ and $D$. We first find the ground state, which will in general be a non–collinear configuration with the spins pointing along the local quantization axis as given by $\theta $ in Eq. \[eqn2\]. We then derive the Heisenberg equations of motion and numerically diagonalize the resulting system of linear equations. The resulting spin–wave spectra are plotted in Fig. \[fig6\](a) for the ‘SOC-FOO’ uniform orbital order, along with the experimental spin waves. 
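The numerical core of this procedure, building a bosonic Hamiltonian matrix at each $k$ and diagonalizing it with the bosonic metric, can be illustrated on a toy two–sublattice collinear ferrimagnetic chain; this is a stand-in for, not a reproduction of, the six–sublattice non–collinear model used in the paper:

```python
import numpy as np

# Toy linear spin-wave problem: a chain of alternating spins S_A (up)
# and S_B (down) with AFM nearest-neighbor exchange J > 0 (meV). The
# procedure mirrors the real calculation: build H(k) in the
# (a_k, b_{-k}^dagger) basis and diagonalize g @ H(k) with the bosonic
# metric g = diag(1, -1); magnon energies are the |eigenvalues|.
J, S_A, S_B = 5.0, 2.5, 1.0
g = np.diag([1.0, -1.0])

def magnon_energies(k):
    A = 2 * J * S_B                                  # a-boson diagonal term
    C = 2 * J * S_A                                  # b-boson diagonal term
    B = 2 * J * np.sqrt(S_A * S_B) * np.cos(k / 2)   # inter-sublattice term
    Hk = np.array([[A, B],
                   [B, C]])
    return np.sort(np.abs(np.linalg.eigvals(g @ Hk)))

acoustic, optical = magnon_energies(0.0)
print(acoustic, optical)   # ~0 (Goldstone mode) and 2*J*(S_A - S_B)
```

In this isotropic toy the acoustic branch is gapless at $k=0$; in MnV$_2$O$_4$ the analogous would–be acoustic modes at $\Gamma$ are gapped by the single–ion anisotropy terms of Eq. \[eqn1\], which are absent here.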
We find that the spin waves obtained from the $J$ and $D$ values of the SOC ferro–orbital order with the small trigonal distortion and $U=5$ eV match the experiment well, although other combinations of trigonal distortion and $U$ could also yield similar $J^{\prime}$s. We also note that the lower two modes are due to the oscillations of the Mn spins: the lower–energy branch is the symmetric mode and the higher–energy one the anti–symmetric mode[@chung08]. The upper four modes are oscillations of the V spins[@chung08]. For comparison, we show the spin waves for the other orbital orders, also obtained with the small trigonal distortion and $U=5$ eV, that *do not* match well with the experimental data. The model parameters for these orbital orders do give a reasonable spin canting angle when used in Eq. \[eqn2\], but the upper branches of the spin waves corresponding to the V oscillations are too high in energy and have a larger band–width in these plots (due to a considerable overestimate of the V–V exchange) compared to the correct one in Fig. \[fig6\](a). Figure \[fig6\](b) shows the spin waves for the ‘AFOO-I’ order. Figure \[fig6\](c) shows the spin waves for the ‘AFOO-II’ order, which is composed of real linear combinations of the $yz$ and $zx$ orbitals with the relative sign between $yz$ and $zx$ alternating between $ab$ layers along the $c$ axis. As Table \[table1\] shows, this order again gives a considerably greater V–V exchange than the experiment, and therefore the upper branches are much higher in energy and more strongly dispersing than in the experimental plot. We conclude that excellent agreement between our theoretical and the experimental spin–wave dispersions for all six oscillation modes is obtained for a setup with SOC ferro–orbital order and the $I4_{1}/amd$ small trigonal distortion, where the second $t_{2g}$ electron occupies the complex linear combination $(|yz\rangle \pm i|zx\rangle)/\sqrt{2}$ uniformly on all V–sites. 
The incorporation of the low-symmetry $I4_1/a$ large trigonal distortion tends to increase the $J_{\rm V-V}$’s by 50$\%$–80$\%$, but we find that a reasonable increase of the Coulomb parameter to $U=6$ eV yields $J^{\prime}$s that match the experimental ones. The trend we notice is that small trigonal distortion with lower $U$, as well as large trigonal distortion with higher $U$, both give $J^{\prime}$s that are close to the experimental ones; however, the former case with ‘SOC-FOO’ gives the best match of all the combinations we have tried. The other two orbital orders, ‘AFOO-I’ and ‘AFOO-II’, do not give such a good match with experiment throughout the Brillouin zone for the same values of distortion and $U$, so these orders may be ruled out. We further note that the spin–orbit coupling plays an important role in the orbital physics of the V atoms in MnV$_{2}$O$_{4}$. This is corroborated by the relatively strong single–ion anisotropy, as evidenced by the large gaps of the would–be acoustic modes at $\Gamma$. Conclusion ========== By computing the interatomic exchange constants with the LSDA+$U$(+SO) method and the magnetic force theorem under various imposed orbital–ordering scenarios, we have shown that the orbital order on the V sites of MnV$_{2}$O$_{4}$ is similar to a complex linear combination of $zx$ and $yz$ on all V sites. Our calculated spin–wave spectra for this order come closest to the experimental data. Further evidence for the complex order is the strong single–ion anisotropy experienced by the spin moments on the V sites, as well as the *reduction* of the V magnetic moment in the low–$T$ phase[@gar08], which could not be captured by LSDA+$U$ alone. 
We also predict, based on our $U=5$ eV, orbital–ordered band–structures, that the low–$T$ phase of MnV$_2$O$_4$ is a Mott–type insulator, and that a half–metal–to–insulator transition accompanies the simultaneous orbital ordering, structural distortion, and non-collinear moment transitions at $T_S=53$ K. **Acknowledgements.** The authors acknowledge useful discussions with Myung Joon Han, Rajiv Singh and Nick Curro. The work was supported by DOE SciDAC Grant No. SE-FC02-06ER25793 and by DOE Computational Material Science Network (CMSN) Grant No. DE-SC0005468. [99]{} Y. Tokura and N. Nagaosa, Science **288**, 462 (2000). R. Plumier and M. Sougi, Solid State Commun. **64**, 53 (1987); Physica B **155**, 315 (1989). V. O. Garlea, R. Jin, D. Mandrus, B. Roessli, Q. Huang, M. Miller, A. J. Schultz, and S. E. Nagler, Phys. Rev. Lett. **100**, 066404 (2008). S. Sarkar, T. Maitra, R. Valentí, and T. Saha-Dasgupta, Phys. Rev. Lett. **102**, 216405 (2009). H. Tsunetsugu and Y. Motome, Phys. Rev. B **68**, 060405(R) (2003). T. Suzuki, M. Katsumura, K. Taniguchi, T. Arima, and T. Katsufuji, Phys. Rev. Lett. **98**, 127203 (2007). K. Adachi, T. Suzuki, K. Kato, K. Osaka, M. Takata, and T. Katsufuji, Phys. Rev. Lett. **95**, 197202 (2005). O. Tchernyshyov, Phys. Rev. Lett. **93**, 157206 (2004). H. A. Jahn and E. Teller, Proc. R. Soc. A **161**, 200 (1937). For a review, see, e.g., *Theory of the Inhomogeneous Electron Gas*, edited by S. Lundqvist and N. H. March (Plenum, New York, 1983). , edited by V. I. Anisimov (Gordon and Breach Science Publishers, Amsterdam, 2000). O. K. Andersen and T. Saha–Dasgupta, Phys. Rev. B **62**, 16219 (2000). J.-H. Chung, J.-H. Kim, S.-H. Lee, T. J. Sato, T. Suzuki, M. Katsumura, and T. Katsufuji, Phys. Rev. B **77**, 054412 (2008). D. Alders, R. Coehoorn, and W. J. M. de Jonge, Phys. Rev. B **63**, 054407 (2001). J. B. Goodenough, *Magnetism and the Chemical Bond* (Interscience, New York, 1963); J. Kanamori, J. Phys. Chem. Solids **10**, 87 (1959). 
S.-H. Baek, N. J. Curro, K.-Y. Choi, A. P. Reyes, P. L. Kuhns, H. D. Zhou, and C. R. Wiebe, Phys. Rev. B **80**, 140406(R) (2009). G.-W. Chern, N. Perkins, and Z. Hao, Phys. Rev. B **81**, 125127 (2010). O. K. Andersen, Phys. Rev. B **12**, 3060 (1975). S. Y. Savrasov, Phys. Rev. B **54**, 16470 (1996). A. I. Liechtenstein, M. I. Katsnelson, V. P. Antropov, and V. A. Gubanov, J. Magn. Magn. Mater. **67**, 65 (1987). X. Wan, Q. Yin, and S. Y. Savrasov, Phys. Rev. Lett. **97**, 266403 (2006). R. Nanguneri, Ph.D. thesis (2012). T. Miyake and F. Aryasetiawan, Phys. Rev. B **77**, 085122 (2008). T. Maitra and R. Valentí, Phys. Rev. Lett. **99**, 126401 (2007). A. Kokalj, Comp. Mater. Sci. **28**, 155 (2003).
--- abstract: 'Using ATLAS data corresponding to $70 \pm 8\,{\mathrm{nb}^{-1}}$ of integrated luminosity from the 7 TeV proton-proton collisions at the LHC, distributions of relevant supersymmetry-sensitive variables are shown for the final state containing jets, missing transverse momentum and one isolated electron or muon. With increased integrated luminosities, selections based on these distributions will be used in the search for supersymmetric particles: it is thus important to show that the Standard Model backgrounds to these searches are under good control.' author: - 'M-H. GENEST on behalf of the ATLAS Collaboration' title: 'Distributions for one-lepton SUSY Searches with the ATLAS Detector' --- INTRODUCTION ============ If supersymmetry exists at the TeV scale and R-parity is conserved, the SUSY particles should be produced in pairs and decay to the lightest SUSY particle which would escape detection, thus leading to signatures containing jets, large missing transverse momentum and potentially one or more leptons. We present here a first comparison of the ATLAS data corresponding to $70 \pm 8\,{\mathrm{nb}^{-1}}$ of integrated luminosity from the $\sqrt{s}=7$ TeV proton-proton collisions to Monte Carlo simulations for some of the most important kinematical variables that are expected to be sensitive for SUSY searches. A more detailed description of these results can be found in [@confnote]. THE ONE-LEPTON SUSY ANALYSIS {#sec:analysis} ============================ In the one-lepton analysis, after a leptonic trigger requirement and a set of cleaning cuts to reject events containing jets which are consistent with calorimeter noise, cosmic rays or out-of-time energy deposits, the events are preselected by asking for at least two jets with transverse momentum ${p_\mathrm{T}}>30$ GeV and one isolated lepton (electron or muon) with ${p_\mathrm{T}}>20$ GeV. 
The signal region is then defined by applying two further cuts: ${\mathrm{E}_\mathrm{T}^\mathrm{miss}}>30$ GeV and ${m_\mathrm{T}}>100$ GeV, where ${\mathrm{E}_\mathrm{T}^\mathrm{miss}}$ is the missing transverse momentum, calculated as the negative of the vector sum of the transverse energies of all three-dimensional topological clusters in the calorimeter plus the transverse momenta of the selected well-isolated muons in the analysis, and ${m_\mathrm{T}}$ is the transverse mass of the lepton and ${\mathrm{E}_\mathrm{T}^\mathrm{miss}}$, defined as $m_\mathrm{T}^2 \equiv 2|{\bf p}_\mathrm{T}^{\ell}||{\mathrm{E}_\mathrm{T}^\mathrm{miss}}| - 2{\bf p}_\mathrm{T}^{\ell} \cdot \vec {\mathrm{E}_\mathrm{T}^\mathrm{miss}}$. The data are compared to the full-detector GEANT4 simulation, which is reconstructed using the same algorithms as the data. The Standard Model background processes considered in this analysis are QCD (PYTHIA), W/Z+jets (ALPGEN + HERWIG + Jimmy) and $t\bar{t}$ (MC@NLO + HERWIG + Jimmy). The PYTHIA QCD predictions were compared to a set of ALPGEN QCD samples; the differences were found to be well within the experimental uncertainties for the kinematic region explored. The QCD and W+jets backgrounds are normalized to the data in control regions defined as ${\mathrm{E}_\mathrm{T}^\mathrm{miss}}<40$ GeV and ${m_\mathrm{T}}<40$ GeV for the QCD background and $30<{\mathrm{E}_\mathrm{T}^\mathrm{miss}}<50$ GeV and $40<{m_\mathrm{T}}<80$ GeV for the W+jets background. As an example, the SU4 supersymmetric point (ISAJET + HERWIG + PROSPINO) is also shown in the plots with its cross section multiplied by 10; SU4 is a low-mass benchmark point close to the Tevatron limits and is defined by $m_0=200$ GeV, $m_{1/2}=160$ GeV, $A_0=-400$ GeV, $\tan\beta=10$ and $\mu>0$.
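The transverse-mass definition above can be checked with a short stand-alone computation; since $|{\bf p}_\mathrm{T}^{\ell}||{\mathrm{E}_\mathrm{T}^\mathrm{miss}}| - {\bf p}_\mathrm{T}^{\ell}\cdot\vec{\mathrm{E}}_\mathrm{T}^\mathrm{miss} = |{\bf p}_\mathrm{T}^{\ell}||{\mathrm{E}_\mathrm{T}^\mathrm{miss}}|(1-\cos\Delta\phi)$, only the magnitudes and the azimuthal opening angle are needed. The kinematic values in the sketch are invented toy numbers, not ATLAS data:

```python
import numpy as np

def transverse_mass(lep_pt, lep_phi, met, met_phi):
    """m_T^2 = 2 |p_T^lep| |E_T^miss| - 2 p_T^lep . E_T^miss,
    i.e. 2 pT MET (1 - cos(dphi)) in the transverse plane."""
    dphi = lep_phi - met_phi
    mt2 = 2.0 * lep_pt * met * (1.0 - np.cos(dphi))
    return np.sqrt(max(mt2, 0.0))

# Back-to-back lepton and MET (typical W -> l nu topology):
print(transverse_mass(40.0, 0.0, 40.0, np.pi))   # → 80.0 (GeV)
# Collinear lepton and MET gives m_T = 0:
print(transverse_mass(40.0, 1.2, 40.0, 1.2))     # → 0.0
```

The ${m_\mathrm{T}}>100$ GeV requirement suppresses W+jets events, for which $m_\mathrm{T}$ peaks at the Jacobian edge near the W mass.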
The most important sources of systematic uncertainties are considered: the uncertainty on the jet energy scale (which varies from 7-10$\%$ as a function of the jet ${p_\mathrm{T}}$ and $\eta$), the uncertainty on the W+jets and QCD normalizations (50$\%$), the uncertainty on the Z+jets normalization (60$\%$) and the uncertainty on the luminosity (11$\%$). The results are shown in Figures \[fig:met\]-\[fig:meff\]. Figure \[fig:met\] shows ${\mathrm{E}_\mathrm{T}^\mathrm{miss}}$ after the preselection for the electron and muon channels: there is reasonable agreement between the data and the Monte Carlo. While the low ${\mathrm{E}_\mathrm{T}^\mathrm{miss}}$ region is dominated by the QCD background, the W+jets background dominates at higher values; the supersymmetry model would yield even higher ${\mathrm{E}_\mathrm{T}^\mathrm{miss}}$ values. After applying the ${\mathrm{E}_\mathrm{T}^\mathrm{miss}}>30$ GeV cut, the ${m_\mathrm{T}}$ distributions for both channels are shown in Fig. \[fig:mot\] and exhibit good agreement between the data and Monte Carlo. Finally, Fig. \[fig:meff\] shows the effective mass distribution (${M_\mathrm{eff}}$, defined as ${M_\mathrm{eff}}\equiv \sum_{i=1}^{2} p_\mathrm{T}^{{jet},i} + p_\mathrm{T}^{{lep}} + {\mathrm{E}_\mathrm{T}^\mathrm{miss}})$ for the signal region, i.e. after a further cut on ${m_\mathrm{T}}>100$ GeV. The number of events found in the signal region is consistent with the expectations. The expected number of events is compared with the data at different stages of the analysis in Table \[tab:results\]. ![${\mathrm{E}_\mathrm{T}^\mathrm{miss}}$ distribution after the preselection for the electron (left) and muon (right) channels. The statistical and systematic uncertainties on the Monte Carlo prediction, added in quadrature, are shown as a yellow band on the plots. 
[]{data-label="fig:met"}](hEtmiss_el_FREIBURG "fig:"){height="2.5in"} ![${\mathrm{E}_\mathrm{T}^\mathrm{miss}}$ distribution after the preselection for the electron (left) and muon (right) channels. The statistical and systematic uncertainties on the Monte Carlo prediction, added in quadrature, are shown as a yellow band on the plots. []{data-label="fig:met"}](hEtmiss_mu_FREIBURG "fig:"){height="2.5in"} ![${m_\mathrm{T}}$ distribution after the cut on ${\mathrm{E}_\mathrm{T}^\mathrm{miss}}$ for the electron (left) and muon (right) channels.[]{data-label="fig:mot"}](hMt_after_METCut_el_FREIBURG "fig:"){height="2.5in"} ![${m_\mathrm{T}}$ distribution after the cut on ${\mathrm{E}_\mathrm{T}^\mathrm{miss}}$ for the electron (left) and muon (right) channels.[]{data-label="fig:mot"}](hMt_after_METCut_mu_FREIBURG "fig:"){height="2.5in"} ![${M_\mathrm{eff}}$ distribution after the cuts on ${\mathrm{E}_\mathrm{T}^\mathrm{miss}}$ and ${m_\mathrm{T}}$ for the electron (left) and muon (right) channels.[]{data-label="fig:meff"}](hMeff_after_METMTCut_el_FREIBURG "fig:"){height="2.5in"} ![${M_\mathrm{eff}}$ distribution after the cuts on ${\mathrm{E}_\mathrm{T}^\mathrm{miss}}$ and ${m_\mathrm{T}}$ for the electron (left) and muon (right) channels.[]{data-label="fig:meff"}](hMeff_after_METMTCut_mu_FREIBURG "fig:"){height="2.5in"}

  +------------------------------------------------------+------+----------------+------+----------------+
  | Selection                                            | Data | Monte Carlo    | Data | Monte Carlo    |
  +======================================================+======+================+======+================+
  | $p_\mathrm{T}(\ell)>20\,$GeV $\cap$ $\geq2$ jets     | 143  | 157 $\pm$ 85   | 40   | 37 $\pm$ 14    |
  | with $p_\mathrm{T}>30\,$GeV                          |      |                |      |                |
  +------------------------------------------------------+------+----------------+------+----------------+
  | $\cap~{\mathrm{E}_\mathrm{T}^\mathrm{miss}}>30\,$GeV | 13   | 16 $\pm$ 7     | 17   | 15 $\pm$ 7     |
  +------------------------------------------------------+------+----------------+------+----------------+
  | $\cap~{m_\mathrm{T}}>100\,$GeV                       | 2    | 3.6 $\pm$ 1.6  | 1    | 2.8 $\pm$ 1.2  |
  +------------------------------------------------------+------+----------------+------+----------------+

  : \[tab:results\] Number of events observed and predicted at several stages of the single lepton selection. As described in Section \[sec:analysis\], the Monte Carlo predictions have been normalised to the data in control regions which overlap all but the final selection.

CONCLUSION ========== The first $70 \pm 8\,{\mathrm{nb}^{-1}}$ of integrated luminosity collected with the ATLAS detector are analysed in an early search for new physics in the channel containing jets, missing transverse momentum and one lepton (electron or muon). The measurements are compared to simulations of the expected Standard Model background and generally show agreement with these expectations. The author would like to acknowledge support by the DFG cluster of excellence “Origin and Structure of the Universe” (www.universe-cluster.de). [9]{} The ATLAS Collaboration, “Early supersymmetry searches with jets, missing transverse momentum and one or more leptons with the ATLAS Detector”, ATLAS-CONF-2010-066, 2010.
ANALYTICAL INVESTIGATION OF ANTICIPATING CHAOS SYNCHRONIZATION IN TIME-DELAYED AND CASCADED SYSTEMS\ E. M. Shahverdiev [^1], S. Sivaprakasam and K. A. Shore [^2]\ School of Informatics, University of Wales, Bangor, Dean Street, Bangor, LL57 1UT, Wales, UK\ For the first time, using a modified Ikeda model it is demonstrated analytically that anticipating synchronization can be obtained in chaotic time-delay systems governed by two characteristic delay times. We derive existence and stability conditions for the dual-time anticipating synchronization manifold. We also show that increased anticipation times for chaotic time-delay systems with two characteristic delay times can be obtained by the use of cascaded systems.\  \ PACS number(s): 05.45.Xt, 05.45.Vx, 42.55.Px, 42.65.Sf\  \ 1. Introduction\ Seminal papers on chaos synchronization \[1\] have stimulated a wide range of research activity in laser physics, electronic circuits, chemical and biological systems, and secure communications; a recent comprehensive review of the subject is found in \[2\]. Time-delay systems \[3\] are ubiquitous in nature and technology, owing to finite signal transmission times, switching speeds and memory effects, and the study of synchronization phenomena in such systems is therefore of great practical importance. Time-delay systems are interesting because the dimension of their chaotic dynamics can be increased by increasing the delay time sufficiently \[4\]. From this point of view these systems are especially appealing for secure communication schemes. In addition, time-delay systems can be considered as a special case of spatio-temporal systems, see e.g. \[5\] and references therein. Recently \[6\] it was discovered that dissipative chaotic systems with a time-delayed feedback (memory) can drive identical systems in such a way that the driven system anticipates the driver by synchronizing with its future states.
Such behavior is a result of the interplay between delayed feedback and dissipation \[6\]. It was also demonstrated that, for small anticipation times, anticipating synchronization occurs in chaotic systems described by ordinary differential equations which include a delay due to the finite propagation time of the signal from the driver to the driven system (the so-called coupling delay). Anticipating synchronization \[6\] appears as a coincidence of shifted-in-time states of two coupled systems, but in this case, in contrast to lag synchronization, the driven system $y$ anticipates the driver $x$: $y(t)=x(t+\tau)$, or equivalently $x(t)=y_{\tau}(t)\equiv y(t-\tau)$ with $\tau >0$. In \[6\] anticipating chaos synchronization was studied in the case of a single delay time. In \[7\] it is demonstrated that by augmenting the phase space of the driven system (by considering a chain of driven systems), one can accomplish anticipation times that are multiples of the coupling delay time. Anticipating chaos synchronization for systems with two delay times (a delay in the coupled systems themselves and a coupling delay) was investigated [*numerically*]{} in \[8\]. The first experimental observation of anticipating synchronization in semiconductor lasers with optical feedback has been reported recently \[9\]. This experimental work opens up possibilities for practical use of the anticipating synchronization phenomenon. Synchronization of coupled chaotic systems restricts the evolution of the synchronized systems to the synchronization manifold and therefore eliminates some degrees of freedom of the joint system, thus leading to a significant reduction of complexity. In this context, from a fundamental point of view, new types of chaos synchronization, including anticipating synchronization, can be considered as novel ways of reducing the unpredictability of chaotic dynamics.
Possible practical applications of anticipating chaos synchronization may exploit the fact that the driven system [*anticipates*]{} the driver. For example, this phenomenon can be used for fast prediction (no computation is involved) by simply coupling an identical response system to the master system; in secure communications, anticipation of the future states of the transmitter (master laser) at the receiver (slave laser) end allows more time to decode the message; another possibility is the control of delay-induced instabilities in a wide range of non-linear systems. Anticipating synchronization may also be of interest for the understanding of natural information processing systems.\ In this paper, using a modified Ikeda model, we [*analytically*]{} generalize the concept of anticipating synchronization to the case of two delay times in the coupled systems, where the delay time in the coupling is different from the delay time in the coupled systems themselves. We derive existence and stability conditions for the corresponding anticipating synchronization manifold. Furthermore, we show analytically that increased anticipation times for chaotic time-delay systems, both with a single delay time in the driver system and with dual delay times, can be obtained by the use of cascaded chaotic systems.\ 2. Anticipating chaos synchronization in time-delayed systems with two characteristic delay times\ For clarity of presentation we reproduce here the definition of anticipating chaos synchronization in \[6\]:\ The driver system $$\hspace*{5cm}\frac{dx}{dt}=-\alpha x + f(x_{\tau}) \hspace*{8.3cm}(1)$$ synchronizes with a driven system of the form $$\hspace*{5cm}\frac{dy}{dt}=-\alpha y + f(x) \hspace*{8.6cm} (2)$$ on the anticipating synchronization manifold $$\hspace*{8cm}x=y_{\tau}.\hspace*{7.5cm}(3)$$ From eqs.(1-2) it follows that $\frac{dx}{dt}-\frac{dy_{\tau}}{dt}=-\alpha (x-y_{\tau}) + f(x_{\tau})-f(x_{\tau})=-\alpha (x-y_{\tau})$.
We define the error signal by the symbol $\Delta$: $\Delta=x-y_{\tau}$. Then $\frac{d\Delta}{dt}=-\alpha \Delta$. In many representative cases, chaos synchronization can be understood from the existence of a global Lyapunov function of the error signals \[10\]. Thus, by introducing the Lyapunov function $L=\frac{1}{2}\Delta^{2}$ we obtain that for $\alpha >0$ the anticipating synchronization manifold $x=y_{\tau}$ is globally attracting and asymptotically stable.\ Throughout this paper, to enhance the accessibility and practicality of our presentation, we confine ourselves to the demonstration of principles using specific examples: the modified Ikeda model \[8\] and the (conventional) Ikeda model \[6\].\ Consider the following modified version of the unidirectionally coupled Ikeda model \[8\]: $$\hspace*{-5cm}\frac{dx}{dt}=-\alpha x + m_{1} \sin x_{\tau_{1}},$$ $$\hspace*{4cm}\frac{dy}{dt}=-\alpha y + m_{2} \sin y_{\tau_{1}} + m_{3}\sin x_{\tau_{2}},\hspace*{6.5cm}(4)$$ where $\alpha$ is a positive constant; $m_{1}, m_{2}$ and $m_{3}$ are constants; $\tau_{1}$ is the feedback delay in the coupled systems; $\tau_{2}$ is the coupling delay.\ We shall now demonstrate analytically that $x=y_{\tau_{1} - \tau_{2}}$ with $\tau_{1} > \tau_{2}$ can be the anticipating synchronization manifold, find the existence and stability conditions for anticipating synchronization, and then compare the analytical results with numerical simulations.\ From eqs.(4) it follows that under the condition $$\hspace*{7cm}m_{1} = m_{2}+ m_{3},\hspace*{6.7cm}(5)$$ the dynamics of the error $\Delta =x-y_{\tau_{1} - \tau_{2}}$ obeys the following equation: $$\hspace*{6cm}\frac{d\Delta}{dt}=-r\Delta + s \Delta_{\tau_{1}},\hspace*{7.1cm}(6)$$ with $r=\alpha$ and $s=(m_{1}-m_{3})\cos x_{\tau_{1}}$. It is obvious that $\Delta=0$ is a solution of eq.(6).
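This conclusion can be checked by direct Euler integration of eqs.(4); the following is a minimal numerical sketch. The parameter values are illustrative choices (not taken from \[8\]) satisfying the existence condition $m_{1}=m_{2}+m_{3}$ and the sufficient stability condition $\alpha > \vert m_{2}\vert$ derived below:

```python
import numpy as np

# Illustrative parameters: m1 = m2 + m3 (existence) and alpha > |m2| (stability)
alpha, m1, m2, m3 = 1.0, 20.0, 0.5, 19.5
tau1, tau2 = 1.0, 0.5            # feedback delay tau1 and coupling delay tau2
dt = 1e-3
n = int(40.0 / dt)               # integrate up to t = 40
d1, d2 = int(tau1 / dt), int(tau2 / dt)

x, y = np.zeros(n), np.zeros(n)
x[:d1], y[:d1] = 0.1, 0.3        # constant (and different) history functions

# Explicit Euler integration of the dual-delay pair, eqs. (4)
for k in range(d1, n - 1):
    x[k + 1] = x[k] + dt * (-alpha * x[k] + m1 * np.sin(x[k - d1]))
    y[k + 1] = y[k] + dt * (-alpha * y[k] + m2 * np.sin(y[k - d1])
                            + m3 * np.sin(x[k - d2]))

# On the manifold x(t) = y(t - (tau1 - tau2)) the response anticipates
# the driver by tau1 - tau2; measure the mismatch after the transient.
s = d1 - d2
err = np.max(np.abs(x[3 * n // 4:] - y[3 * n // 4 - s:n - s]))
print("max anticipation error after transient:", err)
```

The error shrinks towards zero, consistent with the globally attracting manifold; choosing $\vert m_{2}\vert > \alpha$ instead (violating the sufficient condition) typically destroys the synchronization.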
The stability condition for the trivial solution $\Delta=0$ of eq.(6) can be found by investigating the positive-definite Krasovskii-Lyapunov functional $V(t)=\frac{1}{2}\Delta^{2} + \mu\int_{-\tau}^{0}\Delta^{2}(t+t_{1})dt_{1}$ (where $\mu >0$ is an arbitrary positive parameter). According to \[3-4\] the sufficient stability condition for the trivial solution of eq.(6) is $r>\vert s \vert$. The sufficient stability condition for the anticipating synchronization manifold $x=y_{\tau_{1} - \tau_{2}}$ then reads: $$\hspace*{6cm}\alpha > \vert m_{2}\vert.\hspace*{8.8cm}(7)$$ The condition $m_{1}=m_{2} + m_{3}$ is the existence (necessary) condition for anticipating synchronization in the unidirectionally coupled modified Ikeda model.\ Thus, in this section of the paper, we have for the first time derived analytically the existence and sufficient stability conditions for anticipating synchronization in the dual-time coupled modified Ikeda model.\ 3. Cascaded anticipation of chaos synchronization\ From the practical application point of view it is of great importance to achieve larger anticipation times between the driver and the driven systems. In this section we demonstrate that a cascaded response-system configuration can be used to achieve that aim. Synchronization between cascaded systems was demonstrated experimentally for the first time in \[12\] in semiconductor laser diodes with optical feedback. In \[7\] it is demonstrated that by augmenting the phase space of the driven system (by considering a chain of driven systems), one can accomplish anticipation times that are multiples of the coupling delay time.
Here we demonstrate that increased anticipation times for chaotic time-delay systems, both with a single delay time in the driver system and with dual delay times, can also be obtained by the use of a cascaded chaotic-systems configuration.\ Consider the situation when the driven system $y$ in eqs.(1-2) itself drives another response system $z$: $$\hspace*{0.1cm}\frac{dx}{dt}=-\alpha x + f(x_{\tau}),$$ $$\hspace*{0.1cm}\frac{dy}{dt}=-\alpha y + f(x),$$ $$\hspace*{7cm}\frac{dz}{dt}=-\alpha z + f(y).\hspace*{6.4cm} (8)$$ We demonstrate analytically that the driven system $z$ synchronizes with the driver system $x$ with the anticipation time $2\tau$. Let us calculate the following difference: $\frac{dx}{dt}-\frac{dz_{2\tau}}{dt}=-\alpha (x-z_{2\tau})+ f(x_{\tau})-f(y_{2\tau})$. Assume that anticipating synchronization between the $x$ and $y$ state variables has already taken place; then from $x=y_{\tau}$ we obtain $x_{\tau}=y_{2\tau}$, and we arrive at the error dynamics $\frac{d\Delta}{dt}=-\alpha \Delta$ for $\Delta=x-z_{2\tau}$. In other words, with two driven systems it is possible to double the anticipation time. It is straightforward to verify that having $n$ driven systems allows for anticipation times $n\tau$. Thus, using cascaded driven systems it is possible to obtain anticipation times that are multiples of the delay time in the coupled systems themselves; as mentioned above, anticipation times that are multiples of the coupling delay time are accomplished in \[7\]. We consider cascaded anticipating synchronization in the following coupled Ikeda systems with a single delay \[6\]: $$\hspace*{0.7cm}\frac{dx}{dt}=-\alpha x - \beta \sin x_{\tau},$$ $$\hspace*{0.6cm}\frac{dy}{dt}=-\alpha y - \beta \sin x,$$ $$\hspace*{7cm}\frac{dz}{dt}=-\alpha z - \beta \sin y,\hspace*{6cm}(9)$$ where $\alpha >0, \beta >0$.
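The doubling of the anticipation time in the cascade (9) can be verified numerically; a minimal Euler sketch follows. The parameter values are illustrative (not from \[6\]); any $\alpha>0$ works here, because the error dynamics contracts exactly at rate $\alpha$:

```python
import numpy as np

alpha, beta, tau = 1.0, 20.0, 1.0       # illustrative values
dt = 1e-3
n, d = int(30.0 / dt), int(tau / dt)

x, y, z = np.zeros(n), np.zeros(n), np.zeros(n)
x[:d], y[:d], z[:d] = 0.2, -0.1, 0.4    # different constant histories

# Euler integration of the driver -> response -> response chain, eqs. (9)
for k in range(d, n - 1):
    x[k + 1] = x[k] + dt * (-alpha * x[k] - beta * np.sin(x[k - d]))
    y[k + 1] = y[k] + dt * (-alpha * y[k] - beta * np.sin(x[k]))
    z[k + 1] = z[k] + dt * (-alpha * z[k] - beta * np.sin(y[k]))

lo = 3 * n // 4                          # discard the transient
err_y = np.max(np.abs(x[lo:] - y[lo - d:n - d]))          # manifold x = y_tau
err_z = np.max(np.abs(x[lo:] - z[lo - 2 * d:n - 2 * d]))  # manifold x = z_{2 tau}
print(err_y, err_z)
```

Both errors decay to numerical noise, i.e. $y$ anticipates $x$ by $\tau$ and $z$ by $2\tau$, as claimed.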
Using the error dynamics approach one can find that $x=y_{\tau}$ and $x=z_{2\tau}$ are the anticipating chaos synchronization manifolds for the system (9).\ Next we investigate the possibility of obtaining large anticipation times for chaotic systems with [*dual*]{} delay times. Let the driven system $y$ from eqs.(4) drive another response system $z$: $$\hspace*{5cm}\frac{dz}{dt}=-\alpha z + m_{4} \sin z_{\tau_{1}} + m_{5}\sin y_{\tau_{2}}.\hspace*{5.5cm}(10)$$ We shall show that systems (4) and (10) provide an anticipation time of $2(\tau_{1} - \tau_{2})$. To verify this, we investigate the error dynamics for $\Delta=x-z_{2(\tau_{1}-\tau_{2})}$: $$\hspace*{5cm}\frac{d\Delta}{dt}=-\alpha \Delta + m_{1} \sin x_{\tau_{1}} - m_{4}\sin z_{3\tau_{1}-2\tau_{2}} - m_{5} \sin y_{2\tau_{1}-\tau_{2}}.\hspace*{2cm}(11)$$ Assuming that synchronization between the driver $x$ and the slave system $y$ has already taken place, i.e. using $x=y_{\tau_{1} - \tau_{2}}$ and $y_{2\tau_{1} - \tau_{2}}=x_{\tau_{1}}$, under the existence condition $m_{1}=m_{4} + m_{5}$ we also obtain a sufficient stability condition for the anticipating synchronization manifold $x=z_{2(\tau_{1}-\tau_{2})}$: $\alpha > \vert m_{4}\vert$. It is clear that with $n$ response systems one can obtain the anticipation time $n(\tau_{1} - \tau_{2})$.\ Thus, we have demonstrated that an increased number of driven systems allows larger anticipation times for the coupled dual-time chaotic systems.\ In conclusion, we have analytically investigated the phenomenon of anticipating synchronization in unidirectionally coupled time-delayed modified Ikeda systems with two characteristic delay times. We have found that the anticipation time is the difference between the delay time in the coupled systems and the coupling delay time, and we have derived both existence and sufficient stability conditions for the anticipating synchronization manifold.
In order to exploit the capability of [*anticipating future states*]{} of the master system, it is of great importance to obtain increased anticipation times. We have demonstrated here that the concept of cascaded slave systems can provide large anticipation times for dual-time chaotic time-delay systems.\ This work is supported by UK EPSRC under grants GR/R22568/01 and GR/N63093/01.\ [99]{} L. M. Pecora and T. L. Carroll, Phys. Rev. Lett. [**64**]{}, 821 (1990); E. Ott, C. Grebogi and J. A. Yorke, Phys. Rev. Lett. [**64**]{}, 1196 (1990). CHAOS, Special issue on chaos synchronization, [**7**]{}, No. 4 (1997); G. Chen and X. Dong, From Chaos to Order: Methodologies, Perspectives and Applications (World Scientific, Singapore, 1998); Handbook of Chaos Control, Ed. H. G. Schuster (Wiley-VCH, Weinheim, 1999). J. K. Hale and S. M. V. Lunel, Introduction to Functional Differential Equations (Springer, New York, 1993); O. Diekmann [*et al.*]{}, Delay Equations (Springer, New York, 1995); L. E. El’sgol’ts and S. B. Norkin, Introduction to the Theory and Applications of Differential Equations with Deviating Arguments (Academic Press, New York, 1973). K. Pyragas, Phys. Rev. E [**58**]{}, 3067 (1998). C. Masoller, Chaos [**7**]{}, 455 (1997). H. U. Voss, Phys. Rev. E [**61**]{}, 5115 (2000). H. U. Voss, Phys. Rev. Lett. [**87**]{}, No. 2 (2001). C. Masoller, Phys. Rev. Lett. [**86**]{}, 2782 (2001). S. Sivaprakasam, E. M. Shahverdiev and K. A. Shore, Phys. Rev. Lett. [**87**]{}, 154101 (2001). R. He and P. G. Vaidya, Phys. Rev. A [**46**]{}, 7387 (1992); E. M. Shahverdiev, Phys. Rev. E [**60**]{}, 3905 (1999). Software for delay differential equations: Time-Delay System Toolbox: http://fde.usaaa.ru. S. Sivaprakasam and K. A. Shore, Optics Letters [**26**]{}, 253 (2001). [^1]: Permanent address: Institute of Physics, 370143 Baku, Azerbaijan [^2]: Electronic address: alan@sees.bangor.ac.uk
--- abstract: 'Regularization by denoising (RED) is a powerful framework for solving imaging inverse problems. Most RED algorithms are iterative batch procedures, which limits their applicability to very large datasets. In this paper, we address this limitation by introducing a novel online RED (On-RED) algorithm, which processes a small subset of the data at a time. We establish the theoretical convergence of $\text{On-RED}$ in convex settings and empirically discuss its effectiveness in non-convex ones by illustrating its applicability to phase retrieval. Our results suggest that On-RED is an effective alternative to the traditional RED algorithms when dealing with large datasets.' author: - | Zihui Wu Yu Sun Jiaming Liu Ulugbek S. Kamilov\ Washington University in St. Louis\ [{ray.wu, sun.yu, jiaming.liu, kamilov}@wustl.edu]{}\ bibliography: - 'egbib.bib' title: Online Regularization by Denoising with Applications to Phase Retrieval --- Introduction {#Sec:Introduction} ============ The recovery of an unknown image $\xbm \in \R^n$ from a set of noisy measurements is crucial in many applications, including computational microscopy [@Tian.Waller2015], astronomical imaging [@Starck.etal2002], and phase retrieval [@Candes.etal2012]. The problem is usually formulated as a regularized optimization $$\label{Eq:RegularizedOptimization} \xbmhat = \argmin_{\xbm \in \R^n} \left\{f(\xbm)\right\} \quad\text{with}\quad f(\xbm) = g(\xbm) + h(\xbm),$$ where $g$ is the data-fidelity term that ensures consistency with the measurements, and $h$ is the regularizer that imposes prior knowledge on the unknown image.
Popular methods for solving such optimization problems include the family of proximal methods, such as the proximal gradient method (PGM) [@Figueiredo.Nowak2003; @Daubechies.etal2004; @Bect.etal2004; @Beck.Teboulle2009a] and the alternating direction method of multipliers (ADMM) [@Eckstein.Bertsekas1992; @Afonso.etal2010; @Ng.etal2010; @Boyd.etal2011], due to their compatibility with non-differentiable regularizers [@Rudin.etal1992; @Figueiredo.Nowak2001; @Elad.Aharon2006]. ![Conceptual illustration of *online regularization by denoising (On-RED)*. The proposed algorithm uses a *random subset of noisy measurements* at every iteration to reconstruct a high-quality image using a *convolutional neural network (CNN)* denoiser.[]{data-label="fig:Schema"}](figures/schema){width="\linewidth"} Recent work has demonstrated the benefit of using denoisers as priors for solving imaging inverse problems [@Sreehari.etal2016; @Chan.etal2016; @Brifman.etal2016; @Teodoro.etal2016; @Zhang.etal2017a; @Meinhardt.etal2017; @Kamilov.etal2017; @Sun.etal2018a; @Sun.etal2018b; @Metzler.etal2018]. One popular framework, known as *plug-and-play priors (PnP)* [@Venkatakrishnan.etal2013], extends traditional proximal methods by replacing the proximal operator with a general denoising function. This grants PnP a remarkable flexibility in choosing image priors, but also complicates its analysis due to the lack of an explicit objective function. An alternative strategy for leveraging denoisers is the *regularization by denoising (RED)* framework [@Romano.etal2017], which formulates an explicit regularizer $h$ for certain classes of denoisers [@Romano.etal2017; @Reehorst.Schniter2019]. Recent work has shown the effectiveness of RED under sophisticated denoisers for many different image reconstruction tasks [@Romano.etal2017; @Metzler.etal2018; @Reehorst.Schniter2019; @Sun.etal2019a].
For example, Metzler *et al.* [@Metzler.etal2018] demonstrated the state-of-the-art performance of RED for phase retrieval by using the DnCNN denoiser [@Zhang.etal2017]. Typical PnP and RED algorithms are iterative *batch* procedures, which means that they process the entire set of measurements at every iteration. This type of batch processing of data is known to be inefficient when dealing with large datasets [@Bottou.Bousquet2007; @Kim.etal2013]. Recently, an online variant of PnP [@Sun.etal2018a] has been proposed to address this problem, yet such an algorithm is still missing for the RED framework. In order to address this gap, we propose an *online* extension of RED, called *online regularization by denoising (On-RED)*. Unlike its batch counterparts, On-RED adopts online processing of data by using only a random subset of measurements at a time (see Figure \[fig:Schema\] for a conceptual illustration). This empowers the proposed method to scale effectively to datasets that are too large for batch processing. Moreover, On-RED can fully leverage the flexibility offered by deep learning by using *convolutional neural network (CNN)* denoisers. The key contributions of this paper are as follows: - We propose a novel On-RED algorithm for online processing of measurements. We provide the theoretical convergence analysis of the algorithm under several transparent assumptions. In particular, given a convex $g$ and a nonexpansive denoiser, which does not necessarily correspond to any explicit $h$, our analysis shows that On-RED converges to a fixed point at the worst-case rate of $O(1/\sqrt{t})$. - We validate the effectiveness of On-RED for phase retrieval from *Coded Diffraction Patterns* (CDP) [@Candes.etal2012] under a CNN denoiser. Numerical results demonstrate the empirical fixed-point convergence of On-RED in this non-convex setting and show its potential for processing large datasets under nonconvex $g$.
Background {#Sec:Background} ========== In this section, we first review the problem of regularized image reconstruction and then introduce some related work. Inverse Problems in Imaging --------------------------- Consider the inverse problem of recovering $\xbm \in \R^n$ from measurements $\ybm \in \R^m$ specified by the linear system $$\label{Eq:MeasurementSystem} \ybm = \Hbm\xbm + \ebm,$$ where the measurement matrix $\Hbm \in \R^{m\times n}$ characterizes the response of the system, and $\ebm$ is usually assumed to be additive white Gaussian noise (AWGN). When the inverse problem is nonlinear, the measurement operator can be generalized to a mapping $\Hbm: \R^n \rightarrow \R^m$. A common example is the problem of *phase retrieval (PR)*, which corresponds to the following nonlinear system $$\label{Eq:PhaseRetrieval} \ybm = \Hbm(\xbm) + \ebm, \quad\text{with}\quad \Hbm(\xbm) = |\Abm \xbm|$$ where $|\cdot|$ denotes an element-wise absolute value, and ${\Abm \in \C^{m \times n}}$ is the measurement matrix. Due to their ill-posedness, inverse problems are often formulated as the regularized optimization problem introduced above. A widely-used data-fidelity term is the least-square loss $$\label{Eq:L2norm} g(\xbm) = \frac{1}{2}\| \ybm - \Hbm(\xbm) \|_2^2,$$ which penalizes the mismatch to the measurements in terms of the $\ell_2$-norm. In particular, for the PR problem, the data-fidelity becomes $\frac{1}{2}\| \ybm - |\Abm \xbm| \|_2^2$, which is known to be *non-convex*. Two common choices for the regularizer include the sparsity-enhancing $\ell_1$ penalty $h(\xbm) = \tau\|\xbm\|_1$ and the total variation (TV) penalty $h(\xbm) = \tau\|\Dbm\xbm\|_1$, where $\tau>0$ controls the strength of regularization and $\Dbm$ denotes the discrete gradient operator [@Rudin.etal1992; @Tibshirani1996; @Candes.etal2006; @Donoho2006; @Kamilov2017]. Two popular methods for solving this problem are PGM and ADMM.
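The non-convex PR data-fidelity term can be made concrete in a few lines. The sketch below uses toy dimensions and a random complex Gaussian measurement matrix (illustrative assumptions, not the paper's CDP setup), together with the standard amplitude-flow form of the gradient of $\frac{1}{2}\|\ybm - |\Abm\xbm|\|_2^2$, valid wherever $\Abm\xbm$ has no zero entries:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, m_meas = 16, 64                     # toy dimensions (illustrative)
A = (rng.standard_normal((m_meas, n_pix))
     + 1j * rng.standard_normal((m_meas, n_pix))) / np.sqrt(2.0 * m_meas)
x_true = rng.standard_normal(n_pix)
y = np.abs(A @ x_true)                     # noiseless magnitude measurements

def g(x):
    """Data fidelity 0.5 * || y - |Ax| ||_2^2 (non-convex in x)."""
    return 0.5 * np.sum((y - np.abs(A @ x)) ** 2)

def grad_g(x):
    """Amplitude-flow gradient Re{A^H ((|Ax| - y) * Ax/|Ax|)}, for Ax != 0."""
    u = A @ x
    return np.real(A.conj().T @ (u - y * u / np.abs(u)))

print(g(x_true))                           # → 0.0 at the true signal
```

Note the global sign ambiguity of PR: $g(-\xbm_{\text{true}})$ is also zero, which is one reason the problem is non-convex.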
They circumvent the differentiation of non-smooth regularizers by using a mathematical concept called the *proximal map* [@Moreau1965] $$\label{Eq:ProximalOperator} \prox_{\tau h}(\zbm) \defn \argmin_{\xbm \in \R^n} \left\{\frac{1}{2}\|\xbm-\zbm\|_2^2 + \tau h(\xbm)\right\}.$$ A close inspection of this definition reveals that the proximal map actually corresponds to an image denoiser based on regularized optimization. This mathematical equivalence led to the development of PnP and RED. Plug-and-play algorithms ------------------------ Consider the ADMM iteration $$\begin{aligned} \label{Eq:ADMM} %\textbf{input: } $\xbm^0 \in \R^n$, $\sbm^0 = \zerobm$, $\rho > 0$ and $\gamma > 0$ \zbm^k &\leftarrow \prox_{\tau g}(\xbm^{k-1} - \sbm^{k-1}) \nonumber \\ \xbm^k &\leftarrow \prox_{\tau h}(\zbm^k + \sbm^{k-1}) \\ \sbm^k &\leftarrow \sbm^{k-1} + (\zbm^k - \xbm^k) \nonumber,\end{aligned}$$ where $k\geq1$ denotes the iteration number. In this iteration, the regularization is imposed by $\prox_{\tau h}: \R^n \rightarrow \R^n$, which denotes the proximal map of $h$.
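As a concrete instance of such a proximal map, the sparsity-enhancing $\ell_1$ penalty $h(\xbm)=\tau\|\xbm\|_1$ from the previous subsection admits the well-known closed-form solution, elementwise soft-thresholding; a minimal sketch:

```python
import numpy as np

def prox_l1(z, tau):
    """prox_{tau ||.||_1}(z): elementwise soft-thresholding by tau."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

z = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
print(prox_l1(z, 0.5))   # entries within [-0.5, 0.5] are set to zero,
                         # the rest are shrunk toward zero by 0.5
```

Replacing this hand-derived map by a learned denoiser is precisely the substitution PnP makes below.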
**input:** $\xbm^0 \in \R^n$, $\tau > 0$, and $\sigma > 0$ $\nabla g(\xbm^{k-1}) \leftarrow \mathsf{fullGradient}(\xbm^{k-1})$ $\Gsf(\xbm^{k-1}) \leftarrow \nabla g(\xbm^{k-1}) + \tau(\xbm^{k-1}-\Dsf_\sigma(\xbm^{k-1}))$ $\xbm^k \leftarrow \xbm^{k-1} - \gamma \Gsf(\xbm^{k-1})$ \[euclidendwhile\] **input:** $\xbm^0 \in \R^n$, $\tau > 0$, $\sigma > 0$, and $B \geq 1$ $\nablahat g(\xbm^{k-1}) \leftarrow \mathsf{minibatchGradient}(\xbm^{k-1}, B)$ $\Gsfhat(\xbm^{k-1}) \leftarrow \nablahat g(\xbm^{k-1}) + \tau(\xbm^{k-1}-\Dsf_\sigma(\xbm^{k-1}))$ $\xbm^k \leftarrow \xbm^{k-1} - \gamma \Gsfhat(\xbm^{k-1})$ \[euclidendwhile\] Inspired by the equivalence that the proximal map is a denoiser, Venkatakrishnan *et al.* [@Venkatakrishnan.etal2013] introduced the PnP framework based on ADMM by replacing $\prox_{\tau h}$ in the ADMM iteration with a general denoising function $\Dsf_\sigma : \R^n \rightarrow \R^n$ $$\label{Eq:PnPADMM} %\textbf{input: } $\xbm^0 \in \R^n$, $\sbm^0 = \zerobm$, $\rho > 0$ and $\gamma > 0$ \xbm^k \leftarrow \Dsf_{\sigma}(\zbm^k + \sbm^{k-1})\nonumber \\$$ where $\sigma > 0$ controls the strength of denoising. This simple replacement enables PnP to regularize the problem by using advanced denoisers, such as BM3D [@Dabov.etal2007] and DnCNN. Numerical experiments show that PnP achieves the state-of-the-art performance in many applications. Similar PnP algorithms have been developed using PGM [@Kamilov.etal2017], primal-dual splitting [@Ono2017], and approximate message passing (AMP) [@Metzler.etal2016; @Fletcher.etal2018]. Considerable effort has been made to understand the theoretical convergence of the PnP algorithms [@Sreehari.etal2016; @Chan.etal2016; @Meinhardt.etal2017; @Teodoro.etal2017; @Buzzard.etal2017; @Sun.etal2018a; @Ryu.etal2019]. Recently, Sun *et al.* [@Sun.etal2018a] proposed an online PnP algorithm based on PGM, named PnP-SPGM, and analyzed its fixed-point convergence using the monotone operator theory [@Bauschke.Combettes2017].
This paper extends their results to the RED framework by introducing a new algorithm and analyzing its theoretical convergence. Regularization by Denoising --------------------------- The RED framework, proposed by Romano *et al.* [@Romano.etal2017], is an alternative way to leverage image denoisers. RED has proved successful in many regularized reconstruction tasks, including image deblurring [@Romano.etal2017], super-resolution [@Mataev.etal2019], and phase retrieval [@Metzler.etal2018]. The framework aims to find a fixed point $\xbm^\ast$ that satisfies $$\begin{aligned} \label{Eq:FixedPoints} \Gsf(\xbm^\ast) = \nabla g(\xbm^\ast) + \tau (\xbm^\ast - \Dsf_\sigma(\xbm^\ast)) = 0,\end{aligned}$$ where $\tau>0$ and $\nabla g$ denotes the gradient of $g$. Equivalently, $\xbm^\ast$ lies in the zero set of $\Gsf: \R^n \rightarrow \R^n$ $$\begin{aligned} \label{Eq:ZeroSet} \xbm^\ast \in \zer(\Gsf) \defn \{\xbm \in \R^n\;|\; \Gsf(\xbm) = 0\}.\end{aligned}$$ Romano *et al.* discussed several RED algorithms for finding such an $\xbm^\ast$. One popular algorithm is gradient descent (summarized in Algorithm \[alg:RED\]) $$\begin{aligned} \label{Eq:REDupdate} \xbm^k &\leftarrow \xbm^{k-1} - \gamma (\nabla g(\xbm^{k-1}) + \Hsf(\xbm^{k-1})) \nonumber \\ &\text{with}\quad \Hsf(\xbm) \defn \tau(\xbm - \Dsf_\sigma(\xbm)),\end{aligned}$$ where $\gamma>0$ is the step size. They justified $\Hsf(\cdot)$ as the gradient of an explicit function under certain conditions.
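The update (\[Eq:REDupdate\]) amounts to a few lines of code. The sketch below is our own illustration: it uses a toy quadratic data fidelity $g(\xbm) = \frac{1}{2}\|\xbm-\ybm\|_2^2$ and a contractive (hence nonexpansive) stand-in denoiser $\Dsf_\sigma(\xbm) = \xbm/2$, chosen so that the fixed point can be checked analytically.

```python
import numpy as np

def red_gd(grad_g, denoise, x0, tau=0.5, gamma=0.1, iters=200):
    """RED gradient descent: x <- x - gamma*(grad g(x) + tau*(x - D(x)))."""
    x = x0.copy()
    for _ in range(iters):
        x = x - gamma * (grad_g(x) + tau * (x - denoise(x)))
    return x

# toy problem: grad g(x) = x - y, denoiser D(x) = x/2 (nonexpansive)
y = np.array([1.0, 2.0, 3.0])
x_hat = red_gd(lambda v: v - y, lambda v: 0.5 * v, np.zeros(3))
# the fixed point solves (x - y) + 0.5*(x - x/2) = 0, i.e., x = 0.8*y
```

Note how the denoising residual $\tau(\xbm - \Dsf_\sigma(\xbm))$ biases the solution away from the pure data-fit minimizer $\ybm$.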
In particular, when the denoiser $\Dsf_\sigma$ is locally homogeneous and has a symmetric Jacobian [@Romano.etal2017; @Reehorst.Schniter2019], $\Hsf$ corresponds to the gradient of the following regularizer $$\label{Eq:Regularizer} h(\xbm) = \frac{\tau}{2}\xbm^\Tsf(\xbm-\Dsf_\sigma(\xbm)).$$ Given a closed-form objective function, one can use classical optimization theory to analyze the convergence of RED algorithms [@Romano.etal2017]. On the other hand, fixed-point convergence has also been established without an explicit objective function [@Reehorst.Schniter2019; @Sun.etal2019a]. Reehorst *et al.* [@Reehorst.Schniter2019] have shown that RED proximal gradient methods (RED-PG) converge to a fixed point by utilizing the monotone operator theory. Sun *et al.* [@Sun.etal2019a] have established the worst-case convergence for the block coordinate variant of the RED algorithm (BC-RED) under a nonexpansive $\Dsf_\sigma$. In this paper, we extend the analysis of BC-RED in [@Sun.etal2019a] to the randomized processing of measurements instead of image blocks, which opens up applications requiring the processing of a large number of measurements. Online Regularization by Denoising ================================== We now introduce the proposed online RED (On-RED), which processes the measurements in an online fashion. The online processing of measurements is especially beneficial for problems with the following data-fidelity term $$\label{Eq:ComponentData} g(\xbm) = \E[g_i(\xbm)] = \frac{1}{I}\sum_{i = 1}^I g_i(\xbm),$$ which is composed of $I$ component functions $g_i(\xbm)$, each evaluated only on the subset $\ybm_i$ of the measurements $\ybm$. The cost of computing the full gradient $$\label{Eq:ComponentGradient} \nabla g(\xbm) = \E[\nabla g_i(\xbm)] = \frac{1}{I}\sum_{i = 1}^I \nabla g_i(\xbm)$$ is proportional to the total number of components $I$. Note that the expectation in (\[Eq:ComponentData\]) and (\[Eq:ComponentGradient\]) is taken over a uniformly distributed random variable ${i \in \{1,\dots, I\}}$.
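A quick numerical illustration (ours, with synthetic stand-ins for the component gradients): averaging $B$ uniformly sampled component gradients yields an unbiased estimate of $\nabla g$ whose mean squared error decays roughly as $1/B$, which is the mechanism On-RED exploits.

```python
import numpy as np

rng = np.random.default_rng(0)
comp = [rng.standard_normal(5) for _ in range(100)]  # stand-ins for grad g_i(x)
full = np.mean(comp, axis=0)                         # the full gradient

def minibatch_err(B, trials=20000):
    """Monte Carlo estimate of E||grad g - hat grad g||^2 for a B-sample average."""
    idx = rng.integers(len(comp), size=(trials, B))  # i.i.d. uniform indices
    est = np.mean(np.asarray(comp)[idx], axis=1)     # minibatch averages
    return np.mean(np.sum((est - full) ** 2, axis=1))

v1, v4 = minibatch_err(1), minibatch_err(4)          # the ratio is close to 4
```

This empirical $1/B$ decay is exactly what the bounded-variance assumption introduced below formalizes.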
Large $I$ effectively precludes the use of batch GM-RED algorithms because of large memory requirements or impractical computation times. The key idea of On-RED is to approximate the gradient at every iteration by averaging $B \ll I$ component gradients $$\label{Eq:StochGrad} \nablahat g(\xbm) = \frac{1}{B}\sum_{b = 1}^B \nabla g_{i_b}(\xbm),$$ where $i_1, \dots, i_B$ are independent random indices distributed uniformly over $\{1, \dots, I\}$. The *minibatch* size parameter $B \geq 1$ controls the number of gradient components used at every iteration. Algorithm \[alg:OnRED\] summarizes the algorithmic details of On-RED, where the operation $\mathsf{minibatchGradient}$ computes the averaged gradient with respect to the selected minibatch components. Note that at each iteration, the minibatch is randomly sampled from the entire set of measurements. In the next section, we present the theoretical convergence analysis of On-RED. Convergence Analysis under Convexity {#Sec:Theory} ==================================== The fixed-point convergence of averaged operators is established by the Krasnosel’skii-Mann theorem [@Bauschke.Combettes2017], which was applied in the aforementioned analyses of the PnP [@Sun.etal2018a] and RED [@Reehorst.Schniter2019; @Sun.etal2019a] algorithms. Here, our analysis extends these results to the online processing of measurements and provides explicit worst-case convergence rates for On-RED. Note that our analysis does not assume that $\mathsf{H}$ corresponds to any explicit regularizer $h$. We first introduce the assumptions necessary for our analysis and then present the main results. \[As:DataFitConvexity\] We make the following assumptions on the data-fidelity term $g$: 1. The component functions $g_i$ are all convex and differentiable, with Lipschitz continuous gradients sharing the same constant $L > 0$. 2.
At every iteration, the gradient estimate is unbiased and has a bounded variance: $$\E[\nablahat g(\xbm)] = \nabla g(\xbm),\;\; \E[\|\nabla g(\xbm)-\nablahat g(\xbm)\|_2^2] \leq \frac{\nu^2}{B},$$ for some constant $\nu > 0$. Assumption \[As:DataFitConvexity\](a) implies that the overall data-fidelity term $g$ is also convex and has a Lipschitz continuous gradient with constant $L$. Assumption \[As:DataFitConvexity\](b) states that the minibatch gradient is an unbiased estimate of the full gradient. The bounded variance assumption is standard in the analysis of online and stochastic algorithms [@Ghadimi.Lan2016; @Bernstein.etal2018; @Xu.etal2018; @Sun.etal2018a]. \[As:NonemptySet\] Let the operator $\Gsf$ have a nonempty zero set $\zer(\Gsf) \neq \varnothing$. The distance between the farthest point in $\zer(\Gsf)$ and the sequence $\{\xbm^{k}\}_{k=0,1,\cdots}$ generated by On-RED is bounded by a constant $R_0$ $$\max_{\xbmast \in \zer(\Gsf)} \|\xbm^k-\xbmast\|_2 \leq R_0,\quad k\geq 0.$$ This assumption indicates that the iterates of On-RED lie within a Euclidean ball of bounded radius around $\zer(\Gsf)$. \[As:NonexpansiveDen\] Given $\sigma>0$, the denoiser $\Dsf_\sigma$ is a nonexpansive operator such that $$\| \Dsf_\sigma(\xbm) - \Dsf_\sigma(\ybm)\|_2 \leq \|\xbm-\ybm\|_2,\quad \forall\, \xbm,\ybm \in \R^n.$$ Since the proximal operator is nonexpansive [@Parikh.Boyd2014], it automatically satisfies this assumption. Nonexpansive CNN denoisers can also be trained by using spectral normalization techniques [@Sun.etal2019a]. Under the above assumptions, we now establish the convergence theorem for On-RED. \[Thm:ConvThm1\] Run On-RED for $t \geq 1$ iterations under Assumptions \[As:DataFitConvexity\]-\[As:NonexpansiveDen\] using a fixed step-size $\gamma \in (0,1/(L+2\tau)]$ and a fixed minibatch size $B\geq1$.
Then, we have $$\begin{aligned} \E &\left[\min_{k \in \{1, \dots, t\}} \|\Gsf(\xbm^{k-1})\|_2^2\right] \nonumber \\ &\leq\E\left[\frac{1}{t}\sum_{k = 1}^t \|\Gsf(\xbm^{k-1})\|_2^2\right] \nonumber \\ &\leq\frac{(L+2\tau)}{\gamma} \left[\frac{\nu^2\gamma^2}{B} + \frac{2\gamma\nu}{\sqrt{B}}R_0 + \frac{R^2_0}{t} \right]. \nonumber\end{aligned}$$ See Section \[Sec:Proof\]. This theorem shows that the expected accuracy of the convergence of On-RED to an element of $\zer(\Gsf)$ improves with smaller $\gamma$ and larger $B$; as $t \rightarrow \infty$, the last term vanishes. For example, we obtain a convergence rate of $O(1/\sqrt{t})$ by setting $\gamma = 1/(L+2\tau)$ and $B = t$ $$\E\left[\frac{1}{t}\sum_{k = 1}^t \|\Gsf(\xbm^{k-1})\|_2^2\right]\leq \frac{C}{\sqrt{t}},$$ where $C>0$ is a constant and we use the bound $\frac{1}{t}\leq \frac{1}{\sqrt{t}}$, valid for $t \geq 1$. ![image](figures/truth){width="\linewidth"} ![image](figures/step_and_batch){width="\linewidth"} Numerical Simulation for Phase Retrieval ======================================== In this section, we test the performance of On-RED on a nonconvex phase retrieval problem with *coded diffraction patterns (CDP)*. The state-of-the-art performance of RED for this problem was shown by Metzler *et al.* [@Metzler.etal2018]. Here, we investigate the convergence of On-RED and show its effectiveness in reducing the per-iteration complexity of the traditional batch GM-RED. Our results show the potential of On-RED to scale to a large number of measurements under powerful denoisers that do not correspond to explicit regularizers. Experiment Setup ---------------- In CDP, the object $\xbm \in \mathbb{R}^{n}$ is illuminated by a coherent light source. A known random phase mask modulates the light; the modulation code is denoted by $\bm{M}_i$ for the $i$th measurement. In this work, each entry of $\bm{M}_i$ is drawn uniformly from the unit circle in the complex plane.
The light undergoes far-field Fraunhofer diffraction and a camera measures its intensity $\ybm_i \in \mathbb{R}_{+}^{n}$. Since Fraunhofer diffraction can be modeled by a 2D Fourier transform, the $i$th data-fidelity term of this phase reconstruction problem can be formulated as $$g_i(\xbm)=\frac{1}{2}\|\ybm_i-|\bm{F}\bm{M}_i \xbm|\|_{2}^{2},$$ where $\bm{F}$ denotes the 2D discrete Fourier transform, computed via the fast Fourier transform (FFT). The total data-fidelity term for all the measurements then becomes $$g(\xbm) = \E[g_i(\xbm)] = \frac{1}{I}\sum_{i = 1}^I g_i(\xbm).$$ Notably, this problem is well suited for On-RED because it has the same formulation as (\[Eq:ComponentData\]). In the experiments, we reconstruct six $256 \times 256$ standard grayscale natural images, displayed in Figure \[fig:truth\]. The simulated measurements are corrupted by AWGN corresponding to 25 dB of input signal-to-noise ratio (SNR), defined as $$\operatorname{SNR}(\hat{\ybm},\ybm)=20\log_{10}\frac{\|\ybm\|}{\|\ybm-\hat{\ybm}\|},$$ where $\hat{\ybm}$ represents the noisy vector and $\ybm$ denotes the ground truth. We also use SNR as a quantitative measure of reconstruction quality. We used DnCNN$^\ast$ as our CNN denoiser in the experiments. The architecture of DnCNN$^\ast$ is illustrated in Figure \[fig:Schema\] and was adapted from the popular DnCNN. We generated training examples by adding AWGN to images from BSD400 and applying standard data augmentation strategies, including flipping, rotation, and rescaling. We used the residual learning technique, where DnCNN$^\ast$ predicts the noise image from the input. The network was trained to minimize the following loss $$\mathcal{L}_\theta= \frac{1}{n} \sum_{i=1}^{n} \left\{\|f_\theta(\xbm_i) - \ybm_i\|_2^2 + \|f_\theta(\xbm_i) - \ybm_i\|_1\right\},$$ where $\xbm_i$ is the noisy input, $\ybm_i$ is the noise, and $f_\theta$ represents DnCNN$^\ast$.
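Assuming an orthonormal FFT and a real-valued $\xbm$, the chain rule gives the component gradient $\nabla g_i(\xbm) = \mathrm{Re}\{(\bm{F}\bm{M}_i)^{\mathsf{H}}[(|\zbm|-\ybm_i)\odot \zbm / |\zbm|]\}$ with $\zbm = \bm{F}\bm{M}_i\xbm$. A NumPy sketch of this gradient (ours, not the authors' code; the loss is nondifferentiable where $|\zbm|=0$, so the magnitude is clamped):

```python
import numpy as np

def cdp_grad(x, y, M):
    """Gradient of g_i(x) = 0.5*||y - |F M x|||^2 for real x.

    M is the random phase mask (unit-modulus entries); F is the
    orthonormal 2D FFT, so F^H is the orthonormal inverse FFT.
    """
    z = np.fft.fft2(M * x, norm="ortho")
    mag = np.maximum(np.abs(z), 1e-12)           # guard against |z| = 0
    r = (np.abs(z) - y) * z / mag                # residual back in the z-domain
    return np.real(np.conj(M) * np.fft.ifft2(r, norm="ortho"))
```

The `minibatchGradient` operation of On-RED would simply average such gradients over the sampled illumination indices.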
The hyperparameters for the experiments in \[Subsec:conv\_exp\] and \[Subsec:perf\_exp\] are listed in Table \[Tab:Param\]. All algorithms start from $\xbm^{0}=\zerobm$, where $\zerobm \in \mathbb{R}^{n}$ is the all-zero vector. The value of $\tau$ for each image was optimized for the best SNR performance with respect to the ground-truth test images. The values of $B$ and $I$ are chosen only to illustrate the potential of On-RED for dealing with large datasets.

Table \[Tab:Param\]: hyperparameters used in the experiments.

| | Description | \[Subsec:conv\_exp\] | \[Subsec:perf\_exp\] |
|---|---|---|---|
| $\xbm^{0}$ | initial point of reconstructions | $\zerobm$ | $\zerobm$ |
| $\sigma$ | input noise level for denoisers | $5$ | $5$ |
| $\tau$ | level of regularization in RED | $0.2$ | optimized |
| $\gamma$ | step size | $\frac{1}{L+2\tau} \cdot \{1, \frac{1}{3}, \frac{1}{9}\}$ | $\frac{1}{L+2\tau}$ |
| $B$ | minibatch size at every iteration | $\{10, 20, 30\}$ | $1$ |
| $I$ | batch size | $40$ | $6$ |

Table \[Tab:Distance\]: convergence results for different step sizes $\gamma$ and minibatch sizes $B$.

| **Denoiser** | $\gamma=\frac{1}{L+2\tau}$ | $\gamma=\frac{1}{3(L+2\tau)}$ | $\gamma=\frac{1}{9(L+2\tau)}$ | $B=10$ | $B=20$ | $B=30$ |
|---|---|---|---|---|---|---|
| **TV** | 8.65e-5 | 2.36e-5 | 9.43e-6 | 8.65e-5 | 2.81e-5 | 9.81e-6 |
| **BM3D** | 8.01e-5 | 1.59e-5 | 9.10e-6 | 8.01e-6 | 2.72e-5 | 8.93e-6 |
| **DnCNN$^\ast$** | 7.63e-5 | 1.94e-6 | 5.03e-6 | 7.63e-5 | 2.72e-5 | 8.88e-6 |

![image](figures/examples){width="\linewidth"}

Convergence of On-RED {#Subsec:conv_exp} --------------------- Theorem \[Thm:ConvThm1\] implies that the expected accuracy improves for a smaller step size $\gamma$ and a larger minibatch size $B$. In order to numerically evaluate the convergence, we define the following normalized accuracy $$\textit{Norm. Acc.} \defn \|\Gsf(\xbm^k)\|_2^2 / \|\Gsf(\xbm^0)\|_2^2,$$ where $\Gsf$ is defined in (\[Eq:FixedPoints\]). As the sequence $\{\xbm^k\}_{k=0,1,\dots}$ converges to a fixed point in $\zer(\Gsf)$, the normalized accuracy decreases to zero.
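A toy end-to-end illustration (ours, with quadratic components and an identity denoiser, so all names and the problem are assumptions) of running On-RED while tracking this normalized accuracy:

```python
import numpy as np

def on_red_normacc(grads, denoise, x0, tau, gamma, B, iters, seed=0):
    """Run On-RED on component gradients `grads` and record the normalized
    accuracy ||G(x^k)||^2 / ||G(x^0)||^2, with G(x) = grad g(x) + tau*(x - D(x))."""
    rng = np.random.default_rng(seed)
    I, x = len(grads), x0.copy()
    G = lambda v: sum(gi(v) for gi in grads) / I + tau * (v - denoise(v))
    acc0, acc = np.sum(G(x) ** 2), []
    for _ in range(iters):
        idx = rng.integers(I, size=B)                     # minibatch indices
        g_hat = sum(grads[i](x) for i in idx) / B         # minibatch gradient
        x = x - gamma * (g_hat + tau * (x - denoise(x)))  # On-RED update
        acc.append(np.sum(G(x) ** 2) / acc0)
    return x, acc

# toy problem: g_i(x) = 0.5*||x - y_i||^2, identity denoiser (nonexpansive)
ys = [np.full(3, float(i)) for i in range(5)]             # g minimized at mean = 2
grads = [lambda x, y=y: x - y for y in ys]
x, acc = on_red_normacc(grads, lambda v: v, np.zeros(3),
                        tau=0.5, gamma=0.01, B=4, iters=3000)
```

As in the experiments, the recorded `acc` curve decays toward zero (up to the stochastic noise floor set by $\gamma$ and $B$).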
Figure \[fig:step\_and\_batch\] (left) shows the evolution of the convergence accuracy for $\gamma \in \{\frac{1}{L+2\tau}, \frac{1}{3(L+2\tau)}, \frac{1}{9(L+2\tau)}\}$ with DnCNN$^\ast$. Here, $L$ denotes the Lipschitz constant defined in Assumption \[As:DataFitConvexity\] and $\tau$ is the RED regularization parameter. We observe that the empirical performance of On-RED using DnCNN$^\ast$ is consistent with Theorem \[Thm:ConvThm1\], as the accuracy improves with smaller step size. Moreover, Figure \[fig:step\_and\_batch\] (right) numerically evaluates the convergence accuracy of On-RED for minibatch sizes $B \in \{10, 20, 30\}$. This plot shows that the convergence accuracy improves as the minibatch size $B$ becomes larger. Therefore, the dependence of the convergence accuracy on both the step size $\gamma$ and the minibatch size $B$ follows the trend predicted by Theorem \[Thm:ConvThm1\], even for this nonconvex problem. We note that a similar trend holds for the BM3D and TV denoisers as well; Table \[Tab:Distance\] summarizes the convergence results for all three denoisers. Benefits of On-RED with a CNN Denoiser {#Subsec:perf_exp} -------------------------------------- In this subsection, we show the performance and efficiency of On-RED in solving CDP. To understand the potential of On-RED to scale to large datasets, we consider the scenario where the number of illuminations processed at every iteration is fixed to one. Table \[Tab:SNR\] provides the SNR performance of the different algorithms. GM-RED (fixed 1) uses 1 fixed measurement and On-RED ($B=1$) uses 1 random measurement out of 6 total measurements at every iteration, so they have the same per-iteration computation cost. On-RED outperforms GM-RED by 4.54 dB and 4.99 dB under BM3D and DnCNN$^\ast$, respectively, because it effectively uses all the measurements. We also note that the average SNR of the *stochastic gradient method (SGM)* ($B=1$) is higher than that of GM-RED (fixed 1) for both denoisers.
This implies that the online processing in SGM boosts the SNR more than the regularization in GM-RED. By combining online processing and advanced denoisers, On-RED largely improves the reconstruction performance, approaching that of the batch algorithm GM-RED (6), which uses all 6 measurements.

Table \[Tab:SNR\]: SNR (dB) of the reconstructions.

| Image | SGM ($B=1$) | GM-RED (fixed 1), BM3D | GM-RED (fixed 1), DnCNN$^\ast$ | On-RED ($B=1$), BM3D | On-RED ($B=1$), DnCNN$^\ast$ | GM-RED (fixed 6), DnCNN$^\ast$ |
|---|---|---|---|---|---|---|
| *Barbara* | 27.37 | 26.04 | 26.15 | 30.95 | 31.50 | 32.59 |
| *Boat* | 27.68 | 26.90 | 27.53 | 31.65 | 32.61 | 33.17 |
| *Lenna* | 27.65 | 26.55 | 27.58 | 31.47 | 32.54 | 33.20 |
| *Monarch* | 27.51 | 24.76 | 26.34 | 29.66 | 31.31 | 32.63 |
| *Parrot* | 27.20 | 27.98 | 28.07 | 31.61 | 32.10 | 33.48 |
| *Pepper* | 27.08 | 26.14 | 25.85 | 30.29 | 31.39 | 32.58 |
| **Average** | 27.42 | 26.40 | 26.92 | 30.94 | 31.91 | 32.94 |

Visual illustrations of *Barbara*, *Parrot*, and *Pepper* are given in Figure \[fig:examples\]. It is clear that the images reconstructed by On-RED (1) preserve features lost by GM-RED (1), such as the stripes in *Barbara*, the white feathers in *Parrot*, and the stems in *Pepper*. Moreover, these features in the On-RED (1) reconstructions show no visual difference from the GM-RED (6) results, as illustrated by columns 4, 5, and 6. This indicates that the online algorithm approaches the image quality of the batch algorithm at a lower per-iteration complexity. Conclusion ========== In this paper, we proposed an online algorithm for the Regularization by Denoising framework. We provided a theoretical convergence proof under a few transparent assumptions, together with a detailed analysis in a convex problem setting. We then applied On-RED to a nonconvex phase retrieval problem with coded diffraction patterns to show its convergence.
The performance of On-RED with our learned denoiser DnCNN$^\ast$ demonstrates that On-RED is compatible with powerful denoisers that do not correspond to explicit regularizers. Our results show that On-RED has the potential to solve data-intensive problems involving a large number of measurements by reducing the per-iteration computation cost. Proof of Theorem \[Thm:ConvThm1\] {#Sec:Proof} ================================= We consider the following two operators $$\Psf \defn \Isf - \gamma\Gsf \quad\text{and}\quad \Psfhat \defn \Isf - \gamma\Gsfhat, \nonumber$$ where $\Psfhat$ is the online variant of $\Psf$. The iterates of On-RED can be expressed as $$\xbm^{k} = \Psfhat(\xbm^{k-1}) = \xbm^{k-1} - \gamma\Gsfhat(\xbm^{k-1}), \;\;\text{with}\;\; \Gsfhat = \nablahat g + \Hsf.$$ Note also the following equivalence $$\xbm^\ast \in \zer(\Gsf) \quad\Leftrightarrow\quad \xbm^\ast \in \fix(\Psf).$$ \[Prp:StochP\] Consider the operator $\Psf$ and its online variant $\Psfhat$. If the data-fidelity term $g$ satisfies Assumption \[As:DataFitConvexity\], then we have $$\E[\Psfhat (\xbm)] = \Psf(\xbm),\;\; \E[\|\Psf(\xbm)-\Psfhat(\xbm)\|_2^2] \leq \frac{\gamma^2\nu^2}{B}.$$ First, we can show that $$\E[\Gsfhat (\xbm)] = \E[\nablahat g(\xbm)] + \Hsf(\xbm) = \Gsf(\xbm)$$ and $$\E[\|\Gsf(\xbm) - \Gsfhat (\xbm)\|_2^2] = \E[\|\nabla g(\xbm) - \nablahat g(\xbm)\|_2^2] \leq \frac{\nu^2}{B}.$$ Then, we can prove the desired result: $$\E[\Psfhat (\xbm)] = \xbm - \gamma\E[\Gsfhat(\xbm)] = \Psf(\xbm)$$ and $$\E[\|\Psf(\xbm) - \Psfhat (\xbm)\|_2^2] = \gamma^2\;\E[\|\Gsf(\xbm) - \Gsfhat(\xbm)\|_2^2] \leq \frac{\gamma^2\nu^2}{B}.$$ \[Prp:NonexpansiveP\] Let the denoiser $\Dsf_\sigma$ satisfy Assumption \[As:NonexpansiveDen\] and let $\nabla g$ be $L$-Lipschitz continuous.
For any $\gamma \in (0, 1/(L+2\tau)]$, the operator $\Psf$ is nonexpansive: $$\|\Psf(\xbm) - \Psf(\ybm)\|_2 \leq \|\xbm - \ybm\|_2\quad \forall \xbm,\ybm \in \R^n.$$ The proposition is a direct consequence of part (c) of the proof of Theorem 1 (Section A) in the Supplementary Material of [@Sun.etal2019a], obtained by setting $\Usf=\Usf^\Tsf=\Isf$ and $\Gsf_i = \Gsf$, which corresponds to the full-gradient RED algorithm of (\[Eq:REDupdate\]). Now we prove Theorem \[Thm:ConvThm1\]. Consider a single iteration $\xbm^k = \Psfhat(\xbm^{k-1})$; then we can write for any $\xbm^\ast\in\zer(\Gsf)$ that $$\begin{aligned} \label{Eq:FistStochBound} \nonumber&\|\xbm^k - \xbmast\|_2^2 = \|\Psfhat(\xbm^{k-1})-\Psf(\xbmast)\|_2^2\\ \nonumber&= \|\Psfhat(\xbm^{k-1})-\Psf(\xbm^{k-1})+\Psf(\xbm^{k-1})-\Psf(\xbmast)\|_2^2 \\ \nonumber&= \|\Psf(\xbm^{k-1})-\Psf(\xbmast)\|_2^2 + \|\Psfhat(\xbm^{k-1})-\Psf(\xbm^{k-1})\|_2^2 \\ \nonumber&\quad\quad + 2(\Psfhat(\xbm^{k-1})-\Psf(\xbm^{k-1}))^\Tsf(\Psf(\xbm^{k-1})-\Psf(\xbmast)) \\ &\leq \|\xbm^{k-1}-\xbmast\|_2^2 - \left(\frac{\gamma}{L+2\tau}\right)\|\Gsf(\xbm^{k-1})\|_2^2 \\ \nonumber&\quad\quad + \|\Psfhat(\xbm^{k-1})-\Psf(\xbm^{k-1})\|_2^2 \\ \nonumber&\quad\quad + 2\|\Psfhat(\xbm^{k-1})-\Psf(\xbm^{k-1})\|_2 \cdot \|\Psf(\xbm^{k-1})-\Psf(\xbmast)\|_2,\end{aligned}$$ where we use the Cauchy-Schwarz inequality and adapt the bound (14) in part (d) of the proof of Theorem 1 (Section A) in the Supplementary Material of [@Sun.etal2019a] by setting $\Usf=\Usf^\Tsf=\Isf$ and $\Gsf_i = \Gsf$.
According to Assumption \[As:NonemptySet\] and Proposition \[Prp:NonexpansiveP\], we have $$\label{Eq:NonexpansiveFromInitial} \|\Psf(\xbm^{k-1})-\Psf(\xbmast)\|_2 \leq \|\xbm^{k-1}-\xbmast\|_2 \leq R_0.$$ Additionally, by using Jensen’s inequality, we have for all $\xbm \in \R^n$ that $$\begin{aligned} \label{Eq:JensenSimplification} \E&\left[\|\Psf(\xbm)-\Psfhat(\xbm)\|_2\right] = \E\left[\sqrt{\|\Psf(\xbm)-\Psfhat(\xbm)\|_2^2}\right] \nonumber \\ &\leq \sqrt{\E\left[\|\Psf(\xbm)-\Psfhat(\xbm)\|_2^2\right]} \leq \frac{\gamma \nu}{\sqrt{B}}.\end{aligned}$$ By rearranging and taking the conditional expectation of (\[Eq:FistStochBound\]) and using these bounds, we obtain $$\begin{aligned} \E&\left[\|\xbm^k-\xbmast\|_2^2 - \|\xbm^{k-1}-\xbmast\|_2^2 \mid \xbm^{k-1}\right] \\ \nonumber&\leq \frac{2\gamma \nu}{\sqrt{B}}R_0 + \frac{\gamma^2 \nu^2}{B} - \left(\frac{\gamma}{L+2\tau}\right)\|\Gsf(\xbm^{k-1})\|_2^2,\end{aligned}$$ which can be reorganized as $$\begin{aligned} \|\Gsf(\xbm^{k-1})&\|_2^2 \leq \left(\frac{L+2\tau}{\gamma}\right)\Big[\frac{\gamma^2\nu^2}{B} + \frac{2\gamma\nu}{\sqrt{B}}R_0 \\ &+\E\left[\|\xbm^{k-1}-\xbmast\|_2^2 - \|\xbm^k-\xbmast\|_2^2 \mid \xbm^{k-1}\right]\Big].\end{aligned}$$ By averaging the inequality over $t \geq 1$ iterations, taking the total expectation, and dropping the last term, we obtain $$\begin{aligned} \E&\left[\frac{1}{t}\sum_{k = 1}^t \|\Gsf(\xbm^{k-1})\|_2^2\right] \\ &\leq \frac{L+2\tau}{\gamma} \left[\frac{\gamma^2 \nu^2}{B} + \frac{2\gamma \nu }{\sqrt{B}}R_0 + \frac{R_0^2}{t}\right],\end{aligned}$$ where we apply the law of total expectation and Assumption \[As:NonemptySet\]. This establishes Theorem \[Thm:ConvThm1\].
--- abstract: 'This paper considers a network where a node wishes to transmit a source message to a legitimate receiver in the presence of an eavesdropper. The transmitter secures its transmissions employing a sparse implementation of Random Linear Network Coding (RLNC). A tight approximation to the probability of the eavesdropper recovering the source message is provided. The proposed approximation applies to both the cases where transmissions occur without feedback or where the reliability of the feedback channel is impaired by an eavesdropper jamming the feedback channel. An optimization framework for minimizing the intercept probability by optimizing the sparsity of the RLNC is also presented. Results validate the proposed approximation and quantify the gain provided by our optimization over solutions where non-sparse RLNC is used.' author: - 'Andrea Tassi, Robert J. Piechocki, and Andrew Nix [^1]' bibliography: - 'IEEEabrv.bib' - 'papers.bib' title: | [On Intercept Probability Minimization under\ Sparse Random Linear Network Coding]{} --- Sparse random network coding, intercept probability, physical layer security, secrecy outage probability. Introduction ============ Due to the broadcast nature of the medium, wireless communications can be vulnerable to eavesdropping. Physical layer security strategies, operating at the lower layers of the protocol stack, aim to achieve the secrecy of transmitted messages. In particular, an eavesdropper is prevented from recovering any of the packets broadcast by a source node (*per-packet secrecy*) by optimizing the transmission rate [@4529264]. In this paper, we advance and compare against the framework for physical layer security presented in [@6777406], and more recently in [@7214217].
In particular, we refer to a system model where achieving per-packet secrecy is not necessary if the transmitted packets are a function of a source message intended to be delivered to a legitimate receiver and if, in order to recover the source message, a receiver has to collect at least a target number of packets [@1023595]. As observed in [@6777406] and [@7214217], this assumption is met by Random Linear Network Coding (RLNC) [@8281108], where a source node generates a stream of coded packets by linearly combining the source packets forming a source message. The legitimate receiver or an eavesdropper can recover the source message only after successfully receiving a number of linearly independent coded packets equal to the number of source packets defining the source message. We secure communications by minimizing the intercept probability, defined as the probability of an eavesdropper recovering the source message intended for a legitimate receiver. Unlike [@6777406; @7214217], the devised proposal applies both when the legitimate receiver acknowledges the successful reception of a message to the source and when it does not. This is achieved by establishing our theoretical framework under conditions where the transmission of acknowledgment messages takes place over a feedback channel that is not assumed to be fully reliable. In particular, our performance investigation focuses on attacks where an eavesdropper attempts to increase its intercept probability by jamming the feedback channel, thus increasing the probability of the acknowledgment message not being successfully received and forcing the source node to keep transmitting coded packets even after the legitimate receiver has successfully recovered a source message.
To counter this attack, we show how the intercept probability can be significantly reduced by adopting a sparse implementation of RLNC, where the number of non-zero elements in the encoding matrix is smaller than in classic RLNC [@7335581]. In this paper, we provide the following key contributions: - Existing expressions for the intercept probability apply only to the extreme cases where the legitimate receiver either does not acknowledge the successful reception of a source message to the source or where the acknowledgment message is transmitted over a fully reliable feedback channel. By resorting to a novel Markov chain-based model, we propose a generic approximation of the intercept probability that is also applicable when the feedback channel is impaired by an arbitrary erasure probability. - By employing a sparse implementation of RLNC, we devise a novel optimization strategy for optimizing the sparsity of the code and thus minimizing the intercept probability when the feedback channel is jammed. The rest of the paper is organized as follows. Section \[sec.SM\] describes the considered system model. Section \[sec.PA\] presents our novel approximation of the intercept probability, and Section \[sec.OM\] shows how the sparsity of the code can be optimized to minimize the intercept probability. The accuracy of the proposed approximation and the effectiveness of our optimization model are assessed in Section \[sec.NR\]. Finally, in Section \[sec.CL\], we draw our conclusions. System Model {#sec.SM} ============ We consider a system model where a node (Alice) wishes to transmit a source message to a legitimate receiving node (Bob) in the presence of an eavesdropper (Eve), over a broadcast channel. Bob and Eve experience packet error probabilities equal to $\epsilon_{\mathrm{B}}$ and $\epsilon_{\mathrm{E}}$, respectively.
We assume that the packet erasures experienced by Bob and Eve occur as statistically independent events and, based on a general condition for physical layer security over Wyner’s wiretap channel model [@bloch_barros_2011 Chapter 1], that $\epsilon_{\mathrm{B}} \leq \epsilon_{\mathrm{E}}$ [@1055917]. It directly follows from [@6777406; @7214217] that, for $\epsilon_{\mathrm{B}} > \epsilon_{\mathrm{E}}$, the average number of coded packets successfully received by Bob is smaller than that received by Eve; thus, the average number of coded packet transmissions that Eve needs to recover a source message is inevitably smaller than the number Bob needs. That is, for $\epsilon_{\mathrm{B}} > \epsilon_{\mathrm{E}}$, the secrecy capacity of a multicast or broadcast communication system cannot be improved by only employing strategies based on rateless codes; alternative physical layer security techniques achieving per-packet secrecy have to be used. The investigation of scenarios where $\epsilon_{\mathrm{B}} > \epsilon_{\mathrm{E}}$ is beyond the scope of this paper. Alice segments the source message into $K$ source packets and linearly combines the source packets at random to obtain $\Hat{N}$ coded packets for transmission, according to the sparse RLNC principle defined as follows. [ Each coded packet $\mathbf{c}_j$ is obtained as $\mathbf{c}_j = \sum_{i = 1}^K g_{i,j} \cdot \mathbf{s}_i$, where $g_{i,j}$ follows the probability law [@7335581] $$\label{eq.pl} \mathbb{P}\left(g_{i,j} = v\right) = \left\{ \begin{array}{l l} p & \quad \text{if $v = 0$}\\ \displaystyle\frac{1-p}{q-1} & \quad \text{otherwise,}\\ \end{array} \right.$$ where $\frac{1}{q} < p < 1$ and $q$ is the size of the finite field $\mathbb{F}_q$ over which the network coding operations are performed. The bigger $p$, the more likely that $g_{i,j}$ is equal to $0$.
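To make the sampling rule concrete, the following sketch (our illustration, not from the paper) draws coding coefficients according to the probability law in (\[eq.pl\]) and, for the binary case $q = 2$, checks decodability by computing the rank of a decoding matrix over $\mathbb{F}_2$; a general $q$ would require full finite-field arithmetic.

```python
import numpy as np

def sample_coeffs(K, N, p, q, rng=None):
    """K x N coding matrix: each entry is 0 with probability p and otherwise
    uniform over the q-1 non-zero elements of GF(q)."""
    rng = np.random.default_rng(0) if rng is None else rng
    zeros = rng.random((K, N)) < p
    vals = rng.integers(1, q, size=(K, N))   # uniform over {1, ..., q-1}
    return np.where(zeros, 0, vals)

def gf2_rank(M):
    """Rank over GF(2) via Gaussian elimination (binary matrices only)."""
    M = M.copy() % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]  # move the pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]              # eliminate column c elsewhere
        rank += 1
    return rank

# Bob decodes once the defect K - rank(M_B) of his decoding matrix reaches 0
K = 4
M_B = sample_coeffs(K, 8, p=0.5, q=2)
decodable = (K - gf2_rank(M_B) == 0)
```

Larger $p$ makes all-zero or linearly dependent columns more likely, which is precisely why sparsity slows down Eve's (and Bob's) accumulation of rank.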
Thus, the average number of source packets contributing to the generation of a coded packet is a function of $p$. Classic RLNC corresponds to coding coefficients drawn uniformly at random over $\mathbb{F}_q$, i.e., $p = 1/q$ [@8248799].]{} Let $n_\mathrm{B}$ and $n_\mathrm{E}$ be the number of coded packets successfully received by Bob and Eve, for $0 \leq n_\mathrm{B} \leq \Hat{N}$ and $0 \leq n_\mathrm{E} \leq \Hat{N}$, respectively. Column by column, Bob and Eve populate a $K \times n_{\mathrm{B}}$ and a $K \times n_{\mathrm{E}}$ decoding matrix, $\mathbf{M}_{\mathrm{B}}$ and $\mathbf{M}_{\mathrm{E}}$ respectively, with the coding vectors associated with the coded packets they successfully receive. Bob and Eve recover the source message as soon as the defect of their decoding matrix, defined as $\mathrm{def}(\mathbf{M}_{\mathrm{X}}) = K - \mathrm{rank}(\mathbf{M}_{\mathrm{X}})$, is equal to zero, for $\mathrm{X} = \mathrm{B}$ and $\mathrm{X} = \mathrm{E}$, respectively [@8281108]. As soon as the source message has been successfully recovered, Bob transmits an acknowledgment message to Alice over a feedback channel. Alice stops broadcasting coded packets as soon as the feedback is successfully received or when $\Hat{N} > K$ coded packets have been broadcast. The acknowledgment message is retransmitted whenever Bob detects a new coded packet transmission pertaining to a source message that he has already recovered. The detection of new packet transmissions is assumed to be fully reliable. The feedback channel is assumed to be independent of and separate from the broadcast channel used to transmit coded packets. Erasures of acknowledgment messages occur with probability $\epsilon_{\mathrm{K}}$, for $0 < \epsilon_{\mathrm{K}} \leq 1$. Performance Analysis {#sec.PA} ====================
controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; (176,193) – (256.06,269.62) ; (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; (273,193) – (329.3,268.4) ; (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; (380.5,192.67) – (453.63,270.22) ; (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; (156.5,193) .. controls (143.7,221.57) and (142.53,240.43) .. (157.79,269.66) ; (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; (262.5,193) .. controls (249.7,221.57) and (248.53,240.43) .. (263.79,269.66) ; (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; (362.5,194) .. controls (349.7,222.57) and (348.53,242.4) .. (363.79,271.66) ; (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; (467.5,193) .. controls (483.26,217.63) and (483.5,238.37) .. (470.12,269.57) ; (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; (143,63.5) .. controls (143,62.67) and (143.67,62) .. (144.5,62) – (398,62) .. controls (398.83,62) and (399.5,62.67) .. (399.5,63.5) – (399.5,149.83) .. controls (399.5,150.66) and (398.83,151.33) .. (398,151.33) – (144.5,151.33) .. controls (143.67,151.33) and (143,150.66) .. (143,149.83) – cycle ; (143,166.67) .. controls (143,166.3) and (143.3,166) .. (143.67,166) – (398.83,166) .. controls (399.2,166) and (399.5,166.3) .. (399.5,166.67) – (399.5,205.33) .. controls (399.5,205.7) and (399.2,206) .. (398.83,206) – (143.67,206) .. controls (143.3,206) and (143,205.7) .. (143,205.33) – cycle ; (144.5,257.84) .. 
controls (144.5,257.56) and (144.73,257.33) .. (145,257.33) – (489,257.33) .. controls (489.27,257.33) and (489.5,257.56) .. (489.5,257.84) – (489.5,286.83) .. controls (489.5,287.11) and (489.27,287.33) .. (489,287.33) – (145,287.33) .. controls (144.73,287.33) and (144.5,287.11) .. (144.5,286.83) – cycle ; (434.5,62.22) .. controls (434.5,61.73) and (434.9,61.33) .. (435.39,61.33) – (486.61,61.33) .. controls (487.1,61.33) and (487.5,61.73) .. (487.5,62.22) – (487.5,150.11) .. controls (487.5,150.6) and (487.1,151) .. (486.61,151) – (435.39,151) .. controls (434.9,151) and (434.5,150.6) .. (434.5,150.11) – cycle ; (144,212.67) .. controls (144,212.3) and (144.3,212) .. (144.67,212) – (399.83,212) .. controls (400.2,212) and (400.5,212.3) .. (400.5,212.67) – (400.5,251.33) .. controls (400.5,251.7) and (400.2,252) .. (399.83,252) – (144.67,252) .. controls (144.3,252) and (144,251.7) .. (144,251.33) – cycle ; (434,166.67) .. controls (434,166.3) and (434.3,166) .. (434.67,166) – (487.83,166) .. controls (488.2,166) and (488.5,166.3) .. (488.5,166.67) – (488.5,205.33) .. controls (488.5,205.7) and (488.2,206) .. (487.83,206) – (434.67,206) .. controls (434.3,206) and (434,205.7) .. (434,205.33) – cycle ; (434,212.67) .. controls (434,212.3) and (434.3,212) .. (434.67,212) – (487.83,212) .. controls (488.2,212) and (488.5,212.3) .. (488.5,212.67) – (488.5,251.33) .. controls (488.5,251.7) and (488.2,252) .. (487.83,252) – (434.67,252) .. controls (434.3,252) and (434,251.7) .. 
(434,251.33) – cycle ; (214,122) node [$\vdots $]{}; (339,122) node [$\vdots $]{}; (420,122) node [$\vdots $]{}; (169,139) node \[scale=0.7\] [$( 2,K,0)$]{}; (267,139) node \[scale=0.7\] [$( 2,K-1,0)$]{}; (340,134) node [$\dotsc $]{}; (376,139) node \[scale=0.7\] [$( 2,1,0)$]{}; (458,139) node \[scale=0.7\] [$( 2,0,0)$]{}; (169,185) node \[scale=0.7\] [$( 1,K,0)$]{}; (267,185) node \[scale=0.7\] [$( 1,K-1,0)$]{}; (340,181) node [$\dotsc $]{}; (376,185) node \[scale=0.7\] [$( 1,1,0)$]{}; (457,184) node \[scale=0.7\] [$( 1,0,0)$]{}; (170,231) node \[scale=0.7\] [$( 0,K,0)$]{}; (267,231) node \[scale=0.7\] [$( 0,K-1,0)$]{}; (340,226) node [$\dotsc $]{}; (376,231) node \[scale=0.7\] [$( 0,1,0)$]{}; (457,231) node \[scale=0.7\] [$( 0,0,0)$]{}; (170,278) node \[scale=0.7\] [$( 0,K,1)$]{}; (267,278) node \[scale=0.7\] [$( 0,K-1,1)$]{}; (340,274) node [$\dotsc $]{}; (376,278) node \[scale=0.7\] [$( 0,1,1)$]{}; (457,278) node \[scale=0.7\] [$( 0,0,1)$]{}; (171,77) node \[scale=0.7\] [$( K,K,0)$]{}; (269,77) node \[scale=0.7\] [$( K,K-1,0)$]{}; (340,73) node [$\dotsc $]{}; (377,77) node \[scale=0.7\] [$( K,1,0)$]{}; (461,77) node \[scale=0.7\] [$( K,0,0)$]{}; (466,269) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [${\displaystyle 0}$]{}; (383,270) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$1$]{}; (281,270) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$K-1$]{}; (180,270) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$K$]{}; (450,241) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$K+1$]{}; (365,223) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$K+2$]{}; (277,223) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$2K$]{}; (185,223) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$2K+1$]{}; (475,176) 
node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$2K+2$]{}; (363,177) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$2K+3$]{}; (257,177) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$3K+1$]{}; (157,177) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$3K+2$]{}; (158,131) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$4K+3$]{}; (167,67) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$( K+1)^{2} +K$]{}; (253,131) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$4K+2$]{}; (364,131) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$3K+4$]{}; (450,131) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$3K+3$]{}; (268,67) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$( K+1)^{2} +K-1$]{}; (374,67) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$( K+1)^{2} +1$]{}; (452,68) node \[scale=0.5,color=[rgb, 255:red, 208; green, 2; blue, 27 ]{} ,opacity=1 \] [$( K+1)^{2}$]{}; (166,55) node \[scale=0.7\] \[align=left\] [Eq. ]{}; (135,185) node \[scale=0.7,rotate=-270\] \[align=left\] [Eq. ]{}; (136,231) node \[scale=0.7,rotate=-270\] \[align=left\] [Eq. ]{}; (461,54) node \[scale=0.7\] \[align=left\] [Eq. ]{}; (496,185) node \[scale=0.7,rotate=-270\] \[align=left\] [Eq. ]{}; (496,231) node \[scale=0.7,rotate=-270\] \[align=left\] [Eq. ]{}; (450,294) node \[scale=0.7\] \[align=left\] [$\text{Absorbing States}$]{}; We derive the probability of Eve recovering the source message, i.e., the intercept probability, by means of the Markov chain $\mathcal{M}$ (shown in Fig. \[fig.amc\]) where its states are defined as follows. 
\[def.state\] We say that $\mathcal{M}$ is in state $(d_\mathrm{B},d_\mathrm{E},\delta)$ if $\mathrm{def}(\mathbf{M}_{\mathrm{B}}) = d_\mathrm{B}$, $\mathrm{def}(\mathbf{M}_{\mathrm{E}}) = d_\mathrm{E}$, and the ACK has not ($\delta = 0$) or has ($\delta = 1$) been successfully received by Alice. From Definition \[def.state\], we observe that the total number of states defining $\mathcal{M}$ is $2(K+1)^2$, which directly follows from the fact that: (i) the maximum value of the defects $d_\mathrm{B}$ and $d_\mathrm{E}$ is $K$ (corresponding to the cases when Bob and Eve have not successfully received any coded packet), and (ii) an ACK can either be received ($\delta = 1$) or not ($\delta = 0$). After a coded packet transmission, assuming $d_{\mathrm{B}} \geq 1$, the rank of $\mathbf{M}_{\mathrm{B}}$ increases by one if and only if Bob receives a coded packet that is linearly independent of those previously received. Equivalently, the rank of $\mathbf{M}_{\mathrm{B}}$ can increase by at most one after a single coded packet transmission, i.e., the defect of $\mathbf{M}_{\mathrm{B}}$ can be reduced by at most one per coded packet transmission. The same holds true for Eve. As for the value of $\delta$, Bob will attempt to acknowledge the successful recovery of a source message as soon as $d_{\mathrm{B}}$ becomes equal to $0$. For these reasons, all the $K(K+1)$ states where $d_\mathrm{B} \geq 1$ and $\delta = 1$ cannot be reached and can be disregarded. Thus, we will only consider the remaining $2(K+1)^2 - K(K+1) = (K+1)\cdot(K+2)$ states. Assuming the system is in state $(K,K,0)$ and ignoring self-transition loops, Fig. \[fig.amc\] shows that $\mathcal{M}$ exhibits non-null transition probabilities toward states $(K-1,K,0)$, $(K-1,K-1,0)$ and $(K,K-1,0)$, corresponding to the cases when Bob, both Bob and Eve, or just Eve successfully receive a linearly independent coded packet, respectively.
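The state-space bookkeeping above can be checked directly; the following minimal Python sketch (an illustration, not the authors' code) enumerates the reachable states and verifies that their number equals $(K+1)\cdot(K+2)$:

```python
# Enumerate the reachable states (d_B, d_E, delta) of the Markov chain M:
# states with d_B >= 1 and delta = 1 are unreachable, since Bob only
# acknowledges once his defect has dropped to zero.

def reachable_states(K):
    states = []
    for d_B in range(K + 1):
        for d_E in range(K + 1):
            for delta in (0, 1):
                if delta == 1 and d_B >= 1:
                    continue  # unreachable: ACK cannot precede recovery
                states.append((d_B, d_E, delta))
    return states

K = 5
states = reachable_states(K)
# 2(K+1)^2 total states minus K(K+1) unreachable ones = (K+1)(K+2)
assert len(states) == 2 * (K + 1) ** 2 - K * (K + 1) == (K + 1) * (K + 2)
print(len(states))  # 42 for K = 5
```

The same enumeration can also serve as the index set when assembling the transition matrix row by row.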
Since Bob cannot transmit an ACK message before a source message has been recovered, the transition probability toward any state where $\delta=1$ is zero. We then label the remaining $(K+1)\cdot(K+2)$ states. \[def.labeling\] Each state takes a numeric label ranging from $0$ to $(K+1)\cdot(K+2) - 1$. If $\delta = 1$, the label of a state is equal to $d_\mathrm{E}$; otherwise, it is equal to $(d_\mathrm{B} + 1)(K+1) + d_\mathrm{E}$. Furthermore, in order to derive the probability transition matrix of $\mathcal{M}$, we prove the following lemma. \[lem.trans\] Assume that $\mathbf{M}_{\mathrm{X}}$ is a $K \times (t+1)$ matrix whose first $t$ columns are linearly independent, for $\mathrm{X} \in \{\mathrm{B},\mathrm{E}\}$ and $1 \leq t \leq K-1$. If $p > \frac{1}{q}$, the probability $\mathrm{W}_t$ of $\mathbf{M}_{\mathrm{X}}$ having rank $t+1$ can be approximated as follows: $$\mathrm{W}_t \cong (1-p^K) \exp\left(-\sum_{\ell = 2}^{t+1}\binom{t}{\ell - 1} \frac{\pi_{\ell,K}}{(1-p^K)^\ell}\right), \label{eq.lm.1}$$ where $\pi_{1,r} = \rho_{1,r}$, $\pi_{\ell,r} = \rho_{\ell,r} - \sum_{s = 1}^{\ell - 1} \binom{\ell - 1}{s} \rho_{s,\ell}\pi_{\ell-s,r}$ and $\rho_{c,r} = \left[\frac{1}{q} \left(1+(q-1)\left(1-\frac{q(1-p)}{q-1}\right)^{c}\right)\right]^r.$ If $p = \frac{1}{q}$, then $\mathrm{W}_t = 1-\frac{1}{q^{K-t}}$. Let $\mathrm{R}_{K,t+1} = \mathbb{P}\left[\mathrm{rank}(\mathbf{M}_{\mathrm{X}}) = t+1\right]$ be the probability of matrix $\mathbf{M}_{\mathrm{X}}$ having rank $t+1$, and let $\mathbf{M}_{\mathrm{X},t}$ be the $K \times t$ matrix defined by the first $t$ columns of $\mathbf{M}_{\mathrm{X}}$. The relation $$\label{eq.app.0} \mathrm{W}_t = \frac{\mathbb{P}\left[\mathrm{rank}(\mathbf{M}_{\mathrm{X}}) = t+1\right]}{\mathbb{P}\left[\mathrm{rank}(\mathbf{M}_{\mathrm{X},t}) = t\right]}$$ holds true due to the fact that if $\mathbf{M}_{\mathrm{X}}$ has rank $t+1$ then its first $t$ columns are linearly independent.
From [@8248799 Theorem 3.1], in the case of an $r \times c$ sparse random matrix over $\mathbb{F}_q$, it follows that $$\label{eq.6} \mathrm{R}_{r,c} \cong (1 - p^r)^c \exp\left(-\sum_{\ell = 2}^c\binom{c}{\ell}\frac{\pi_{\ell,r}}{(1-p^r)^\ell}\right),$$ for $r \geq c$. Thus, by substituting  in  and by noting that [$\binom{t}{\ell-1} = \binom{t+1}{\ell} - \binom{t}{\ell}$]{},  holds. Finally, the case when $p = 1/q$ directly follows from [@8248799 Eq. (2)]. From , the probability transition matrix $\mathbf{P}$ of $\mathcal{M}$ can be approximated by means of the following lemma. \[eq.lem.P\] The probability $\mathrm{P}_{i,j}$ of moving from state $i$ to state $j$ can be approximated as follows (only non-zero probabilities are listed): - If $(K+1)(K-\tau+2) - K \leq i \leq (K+1)(K-\tau+2) - 1$, for $\tau = 0, \ldots, (K-2)$, $$\label{8}{\footnotesize \hspace{-7mm}\mathrm{P}_{i,j}\!\! \cong\!\! \left\{ \begin{array}{l l} \epsilon_\mathrm{B}(1\!-\!\epsilon_\mathrm{E})\mathrm{W}_{K-d_\mathrm{E}} & \hspace{0mm}\text{if $j = i -1 \wedge d_\mathrm{B} \geq d_\mathrm{E}$}\\ (1\!-\!\epsilon_\mathrm{E})[\mathrm{W}_{K-d_\mathrm{E}}\!\!-\!\!(1\!-\!\epsilon_\mathrm{E}) \mathrm{W}_{K-d_\mathrm{B}}] & \hspace{0mm}\text{if $j = i -1 \wedge d_\mathrm{B} < d_\mathrm{E}$}\\ \epsilon_\mathrm{E}(1\!-\!\epsilon_\mathrm{B})\mathrm{W}_{K-d_\mathrm{B}} & \hspace{0mm}\text{if $j = i - K-1 \wedge d_\mathrm{E} \geq d_\mathrm{B}$}\\ (1\!-\!\epsilon_\mathrm{B})[\mathrm{W}_{K-d_\mathrm{B}}\!\! -\!\!(1\!-\!\epsilon_\mathrm{B})\mathrm{W}_{K-d_\mathrm{E}}] & \hspace{0mm}\text{if $j = i - K-1 \wedge d_\mathrm{E} < d_\mathrm{B}$}\\ (1\!-\!\epsilon_\mathrm{B})(1\!-\!\epsilon_\mathrm{E})\mathrm{W}_{K-\min(d_\mathrm{B},d_\mathrm{E})} & \text{if $j = i - K - 2$}\\ 1 - \sum_{\stackrel{j = \{i-1,i-K-1,}{i-K-2\}}} \mathrm{P}_{i,j} & \text{if $j = i$}\\ \end{array} \right.}$$ - If $2K+3 \leq i \leq 3K+2$, $$\label{9}{\footnotesize \hspace{-7mm}\mathrm{P}_{i,j}\!\! \cong\!\!
\left\{ \begin{array}{l l} \epsilon_\mathrm{B}(1\!-\!\epsilon_\mathrm{E})\mathrm{W}_{K-d_\mathrm{E}} & \hspace{-4mm}\text{if $j = i -1\wedge\, d_\mathrm{B} \geq d_\mathrm{E}$}\\ (1\!-\!\epsilon_\mathrm{E})[\mathrm{W}_{K-d_\mathrm{E}}\!\!-\!\!(1\!-\!\epsilon_\mathrm{E}) \mathrm{W}_{K-d_\mathrm{B}}] & \hspace{-4mm}\text{if $j = i -1\wedge\, d_\mathrm{B} < d_\mathrm{E}$}\\ \epsilon_\mathrm{K}\epsilon_\mathrm{E}(1\!-\!\epsilon_\mathrm{B})\mathrm{W}_{K-d_\mathrm{B}} & \hspace{-10mm}\text{if $j = i - K-1\wedge\,d_\mathrm{E} \geq d_\mathrm{B}$}\\ \epsilon_\mathrm{K}(1\!-\!\epsilon_\mathrm{B})[\mathrm{W}_{K-d_\mathrm{B}}\!\! -\!\!(1\!-\!\epsilon_\mathrm{B})\mathrm{W}_{K-d_\mathrm{E}}] & \hspace{1mm}\text{if $j = i - K-1$}\\ &\hspace{3mm}\text{$\wedge\,d_\mathrm{E} < d_\mathrm{B}$}\\ \epsilon_\mathrm{K}(1\!-\!\epsilon_\mathrm{B})(1\!-\!\epsilon_\mathrm{E})\mathrm{W}_{K-\min(d_\mathrm{B},d_\mathrm{E})} & \hspace{1mm}\text{if $j = i - K - 2$}\\ (1\!-\!\epsilon_\mathrm{K})\epsilon_\mathrm{E}(1\!-\!\epsilon_\mathrm{B})\mathrm{W}_{K-d_\mathrm{B}} & \hspace{-10mm}\text{if $j = i - 2K-2\wedge\,d_\mathrm{E} \geq d_\mathrm{B}$}\\ \!\!(1\!-\!\epsilon_\mathrm{K})(1\!-\!\epsilon_\mathrm{B})[\mathrm{W}_{K-d_\mathrm{B}}\!\! -\!\!(1\!-\!\epsilon_\mathrm{B})\mathrm{W}_{K-d_\mathrm{E}}] & \hspace{1mm}\text{if $j = i - 2K-2$}\\ & \hspace{3mm}\text{$\wedge\,d_\mathrm{E} < d_\mathrm{B}$}\\ (1\!-\!\epsilon_\mathrm{K})(1\!-\!\epsilon_\mathrm{B})(1\!-\!\epsilon_\mathrm{E})\mathrm{W}_{K-\min(d_\mathrm{B},d_\mathrm{E})} & \hspace{1mm}\text{if $j = i - 2K - 3$}\\ 1 - \sum_{\stackrel{j = \{i-1,i-K-1,i-K-2,}{i-2K-2,i-2K-3\}}} \mathrm{P}_{i,j} & \hspace{1mm}\text{if $j = i$}\\ \end{array} \right.}$$ - If $K+2 \leq i \leq 2K+1$, $$\label{10}{\footnotesize \hspace{-7mm}\mathrm{P}_{i,j}\!\! \cong\!\! 
\left\{ \begin{array}{l l} \epsilon_\mathrm{K}(1-\epsilon_\mathrm{E})\mathrm{W}_{K-d_\mathrm{E}} & \quad\text{if $j = i-1$}\\ (1-\epsilon_\mathrm{K})(1-\epsilon_\mathrm{E})\mathrm{W}_{K-d_\mathrm{E}} & \quad\text{if $j = i-K-1$}\\ (1-\epsilon_\mathrm{K})[1-(1-\epsilon_\mathrm{E})\mathrm{W}_{K-d_\mathrm{E}}] & \quad\text{if $j = i-K-2$}\\ \epsilon_\mathrm{K}[1-(1-\epsilon_\mathrm{E})\mathrm{W}_{K-d_\mathrm{E}}] & \quad\text{if $j = i$}\\ \end{array} \right.}$$ - If $i = (K+1)(K-\tau+1)$, for $\tau = 0, \ldots, (K-2)$, $$\label{11}{\footnotesize \hspace{-7mm}\mathrm{P}_{i,j} \!\!\cong\!\! \left\{ \begin{array}{l l} (1-\epsilon_\mathrm{B})\mathrm{W}_{K-d_\mathrm{B}} & \quad\text{if $j = i - K - 1$}\\ 1 - (1-\epsilon_\mathrm{B})\mathrm{W}_{K-d_\mathrm{B}} & \quad\text{if $j = i$}\\ \end{array} \right.}$$ - If $i = 2(K+1)$, $$\label{12}{\footnotesize \hspace{-7mm}\mathrm{P}_{i,j} \!\!\cong\!\! \left\{ \begin{array}{l l} (1-\epsilon_\mathrm{K})(1-\epsilon_\mathrm{B})\mathrm{W}_{K-d_\mathrm{B}} & \quad\text{if $j = i - 2K - 2$}\\ \epsilon_\mathrm{K}(1-\epsilon_\mathrm{B})\mathrm{W}_{K-d_\mathrm{B}} & \quad\text{if $j = i - K - 1$}\\ 1-(1-\epsilon_\mathrm{B})\mathrm{W}_{K-d_\mathrm{B}} & \quad\text{if $j = i$}\\ \end{array} \right.}$$ - If $i = K+1$, $$\label{13}{\footnotesize \hspace{-7mm}\mathrm{P}_{i,j} \!\!=\!\! \left\{ \begin{array}{l l} (1-\epsilon_\mathrm{K}) & \quad\text{if $j = i-K-1$}\\ \epsilon_\mathrm{K} & \quad\text{if $j = i$} \end{array} \right.}$$ - If $0 \leq i \leq K$, the state is an absorbing state and, hence, $\mathrm{P}_{i,i} = 1$. [We consider the case as per . In particular, we consider the case where $j = i-1$, which we can informally regard as the case where a state transition occurs *horizontally*, from left to right (see Fig. \[fig.amc\]).]{} As such, Bob will either not correctly receive a coded packet, with probability $\epsilon_\mathrm{B}$, or he will receive a coded packet without reducing the defect of $\mathbf{M}_\mathrm{B}$.
Conversely, Eve successfully receives a coded packet that reduces the defect of $\mathbf{M}_\mathrm{E}$. That is, $$\begin{aligned} \mathrm{P}_{i,j} &{}={}& \epsilon_\mathrm{B}(1-\epsilon_\mathrm{E})\mathbb{P}[\mathrm{rank}(\mathrm{M}_\mathrm{E}) = K-d_\mathrm{E}]\\ &&{}+{} (1-\epsilon_\mathrm{B})(1-\epsilon_\mathrm{E})\notag\\ &&{}\cdot{}\mathbb{P}[\mathrm{rank}(\mathrm{M}_\mathrm{B}) = K-d_\mathrm{B} \wedge \,\mathrm{rank}(\mathrm{M}_\mathrm{E}) = K-d_\mathrm{E}+1],\notag\end{aligned}$$ since $\mathbf{M}_\mathrm{B}$ and $\mathbf{M}_\mathrm{E}$ are statistically correlated. Thus, we have the following cases. If $d_\mathrm{B} \geq d_\mathrm{E}$, the probability of $\mathrm{M}_\mathrm{B}$ not reducing its defect while $\mathrm{M}_\mathrm{E}$ does is expected to be small. Thus, the term $\mathbb{P}[\mathrm{rank}(\mathrm{M}_\mathrm{B}) = K-d_\mathrm{B}\wedge\mathrm{rank}(\mathrm{M}_\mathrm{E}) = K-d_\mathrm{E}+1]$ can be disregarded, and relation $\mathrm{P}_{i,j} \geq \epsilon_\mathrm{B}(1-\epsilon_\mathrm{E})\mathbb{P}[\mathrm{rank}(\mathrm{M}_\mathrm{E}) = K-d_\mathrm{E}]$ holds. If $d_\mathrm{B} < d_\mathrm{E}$, the term $\mathbb{P}[\mathrm{rank}(\mathrm{M}_\mathrm{B}) = K-d_\mathrm{B}\wedge\mathrm{rank}(\mathrm{M}_\mathrm{E}) = K-d_\mathrm{E}+1]$ can be approximated by subtracting the probability of $\mathbf{M}_\mathrm{B}$ reducing its defect from the probability of $d_\mathrm{E}$ being reduced as a result of a successfully received coded packet. From [@8281108 Lemma 3.2], it follows that $\mathrm{P}_{i,j} \geq \epsilon_\mathrm{B}(1-\epsilon_\mathrm{E})\mathbb{P}[\mathrm{rank}(\mathrm{M}_\mathrm{E}) = K-d_\mathrm{E}] +\, (1-\epsilon_\mathrm{B})(1-\epsilon_\mathrm{E}) \Big(\mathbb{P}[\mathrm{rank}(\mathrm{M}_\mathrm{E}) = K-d_\mathrm{E}+1] -{}{} \,\mathbb{P}[\mathrm{rank}(\mathrm{M}_\mathrm{B}) = K-d_\mathrm{B}+1]\Big)$. [The same reasoning holds true when $j = i - K - 1$ and we informally say that the transition occurs *vertically*, from top to bottom.
In that case, the third and fourth cases of  follow by simply substituting $\mathrm{E}$ with $\mathrm{B}$ in the first and second cases of the same relation. Let us now consider the situation where $j = i - K -2$, which corresponds to the case where both $\mathrm{M}_\mathrm{B}$ and $\mathrm{M}_\mathrm{E}$ reduce their defect as a result of a successfully received coded packet. In this case, we informally say that the transition occurs *diagonally*.]{} That is, both Bob and Eve successfully receive a coded packet with probability $(1-\epsilon_\mathrm{B})(1-\epsilon_\mathrm{E})$. Since $\mathbf{M}_\mathrm{B}$ and $\mathbf{M}_\mathrm{E}$ are statistically correlated, from [@8281108 Lemma 3.2], it follows that $\mathrm{P}_{i,j}$ is upper-bounded by the product of $(1\!-\!\epsilon_\mathrm{B})(1\!-\!\epsilon_\mathrm{E})$ and the probability of $\mathrm{M}_\mathrm{t}$ reducing its defect, where the index $\mathrm{t} \in \{\mathrm{B},\mathrm{E}\}$ signifies the matrix with the smallest defect between $\mathrm{M}_\mathrm{B}$ and $\mathrm{M}_\mathrm{E}$. We then approximate $\mathrm{P}_{i,j}$ with the aforementioned upper-bound. As for the cases when $i$ fulfills the conditions for , from Fig. \[fig.amc\], we observe that the probability of having a horizontal transition ($j = i - 1$) can be approximated as per the first and second cases of . Once again, the probability of having a vertical transition can be approximated according to the third and fourth cases of , multiplied by $(1-\epsilon_\mathrm{K})$ or $\epsilon_\mathrm{K}$ if the transition leads to a state where the ACK message has ($\delta = 1$) or has not ($\delta = 0$) been successfully delivered, respectively. The same reasoning holds true for the diagonal transitions. When $i$ fulfills the conditions for , the transition probability can be seen as a special case of  where $\mathrm{W}_{K-d_\mathrm{B}}$ is $0$, as the defect of $\mathbf{M}_\mathrm{B}$ is $0$.
Relations  and  are special cases of  and , respectively, where only vertical transitions are considered and $\mathrm{W}_{K-d_\mathrm{E}}$ is $0$, as $d_\mathrm{E}$ is equal to $0$. When $i$ fulfills the condition for , both $d_\mathrm{B}$ and $d_\mathrm{E}$ are equal to $0$ – thus, the system remains in the state with label $K+1$ for as long as the ACK message cannot be successfully delivered. Finally, the first $K + 1$ states are absorbing, as Bob can successfully acknowledge to Alice the recovery of the source message and the transmission of coded packets is subsequently halted. From Lemma \[eq.lem.P\], it follows that $\mathcal{M}$ does not contain any cycles other than self-loops. For these reasons, $\mathbf{P}$ is a lower-triangular matrix with non-zero diagonal elements, which makes $\mathbf{P}$ invertible in the real field. Finally, the intercept probability can be obtained as follows. \[th.th\] For a given probability $p$ and a maximum number of coded packet transmissions $\Hat{N}$, the intercept probability $\mathrm{I}_{\Hat{N}}(p)$ can be approximated as $$\mathrm{I}_{\Hat{N}}(p) \cong \sum_{\stackrel{j \in \small\{\tau(K+1),} {\text{for $\tau = 0, \ldots, (K+1)$}\small\}}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathbf{P}^{\Hat{N}}\left((K+1)^2 + K,j\right), \label{eq.th}$$ where $\mathbf{P}^{\Hat{N}}\left(s,t\right)$ signifies the $(s,t)$-th element of the matrix $\mathbf{P}$ raised to the power $\Hat{N}$, for $s$ and $t = 0, \ldots, (K+1)^2 + K$. The system starts from state $(K,K,0)$, i.e., from the state with label $(K+1)^2 + K$, with probability $1$. The term $\mathrm{I}_{\Hat{N}}(p)$ is equal to the probability of the system being in any of the states having $d_\mathrm{E}$ equal to $0$, for a given $\Hat{N}$. From Definition \[def.labeling\], we observe that the states with labels $\tau(K+1)$, for $\tau = 0, \ldots, (K+1)$, are associated with those cases where Eve has successfully recovered the information message.
That is,  holds. Optimization Model {#sec.OM} ================== We define the Intercept Minimization (IM) problem as follows: $$\begin{aligned} \text{IM} & \quad \min_{p} \,\, \mathrm{I}_{\Hat{N}}(p) \label{IM.of}\\ \text{s.t.} & \quad \mathrm{D}_{\Hat{N}}(p) \geq \Hat{D} \label{IM.c1}\end{aligned}$$ where $\mathrm{D}_{\Hat{N}}(p)$ signifies the probability of Bob recovering the source message. For a given value of $p$ and $\Hat{N}$, constraint  ensures that Bob recovers the source message with at least probability $\Hat{D}$. [From [@7335581; @8248799], it follows that the average number of coded packet transmissions needed to recover a source message increases as $p$ increases. Thus, not only Eve but also Bob is expected to require more coded packet transmissions to recover a source message. To prevent the IM problem from minimizing the intercept probability by increasing the value of $p$ at the expense of the number of coded packet transmissions, constraint  not only imposes a minimum threshold on the probability of Bob recovering a source message but also ensures that a source message has to be recovered within $\Hat{N}$ coded packet transmissions. As such, if we consider the case where one coded packet transmission takes place in one time slot, the proposed optimization framework ensures the delivery of a source message with a probability greater than or equal to $\Hat{D}$ in $\Hat{N}$ time slots or less.]{} By following the same reasoning as in Theorem \[th.th\], the term $\mathrm{D}_{\Hat{N}}(p)$ can be approximated as $\sum_{j = 0}^{K+1} \mathbf{P}^{\Hat{N}}\left((K+1)^2 + K,j\right)$. However, as discussed in the proof of Lemma \[eq.lem.P\], the proposed approximation of $\mathrm{P}_{i,j}$ is likely to over-estimate both $\mathrm{I}_{\Hat{N}}(p)$ and $\mathrm{D}_{\Hat{N}}(p)$ – thus making approximation  an empirical upper-bound of the system intercept probability, but leading to a potential overestimate of the probability of Bob recovering the source message.
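Assuming, as the analysis suggests, that $\mathrm{D}_{\Hat{N}}(p)$ is non-increasing in $p$, the constraint boundary $\mathrm{D}_{\Hat{N}}(p) = \Hat{D}$ can be located by a one-dimensional root search. The sketch below is a generic bisection with a hypothetical stand-in for $\mathrm{D}_{\Hat{N}}$ (the closed form `D` is invented for illustration only, not the expression used in the paper):

```python
# Generic bisection on f(p) = D(p) - D_hat over a bracket [lo, hi] with
# f(lo) >= 0 >= f(hi), which matches a non-increasing D. The function D below
# is a hypothetical stand-in for D_N(p), used purely for illustration.

def bisect_root(f, lo, hi, tol=1e-10, max_iter=200):
    flo = f(lo)
    for _ in range(max_iter):
        if hi - lo < tol:
            break
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)  # root lies in the upper half
        else:
            hi = mid               # root lies in the lower half
    return 0.5 * (lo + hi)

D_hat = 0.95
D = lambda p: 1.0 - p ** 4  # toy non-increasing "delivery probability"
p_star = bisect_root(lambda p: D(p) - D_hat, lo=0.25, hi=1.0)
# p_star is the largest p still satisfying D(p) >= D_hat, here 0.05 ** 0.25
```

In the paper's setting, the stand-in `D` would be replaced by the evaluation of $\mathrm{D}_{\Hat{N}}(p)$, and the bracket endpoints by $q^{-1}$ and a value of $p$ violating the constraint.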
For the sake of solving the IM problem, $\mathrm{D}_{\Hat{N}}(p)$ is approximated by directly employing , as per [@8248799 Eq. (2), Theorem 3.1]: $$\label{eq.22} \mathrm{D}_{\Hat{N}}(p) \cong \sum_{n = K}^{\Hat{N}} \binom{\Hat{N}}{n} (1-\epsilon_\mathrm{B})^n \epsilon_\mathrm{B}^{\Hat{N}-n} \mathrm{R}_{n,K}.$$ The IM problem can then be solved as follows. From , it follows that the term $\sum_{\ell = 2}^{t+1}\binom{t}{\ell - 1} \frac{\pi_{\ell,K}}{(1-p^K)^\ell}$ is a non-decreasing function of $p$, which makes $\mathrm{W}_t$ a non-increasing function of $p$. That is, for a given $\Hat{N}$, the higher $p$, the less likely it is for the system to be in any of the states with label $\tau(K+1)$, for $\tau = 0, \ldots, (K+1)$, i.e., the less likely it is for Eve to recover the source message. In the following section, we will show that the proposed approximation of the intercept probability $\mathrm{I}_{\Hat{N}}(p)$ is, for the most part, a non-increasing function of $p$, for $q^{-1} \leq p < 1$ and $\epsilon_\mathrm{K} \geq 0.85$. Similarly,  is a non-increasing function of $p$, which makes  a non-increasing function as well. For these reasons, the solution of the IM problem is given by the real root of $\mathrm{D}_{\Hat{N}}(p) - \Hat{D} = 0$, which can be obtained by employing the bisection method. Numerical Results {#sec.NR} ================= This section compares the derived expression of the intercept probability with Monte Carlo simulations, and solves the IM problem for different configurations. The code needed to reproduce our results is available online[^2]. Fig. \[fig.1\] compares the expression of the intercept probability as per  with Monte Carlo simulations, for $K = 20$, $q = \{2,2^4\}$ and $\Hat{N} = 2K$. We also set Bob’s and Eve’s packet error probabilities equal to $\epsilon_\mathrm{B} = \{0.01,0.05,0.1\}$ and $\epsilon_\mathrm{E} = \epsilon_\mathrm{B} + 0.25$, respectively. In particular, Fig.
\[fig.1.1\] shows that, for $q = 2$,  is a tight empirical approximation of the intercept probability – the maximum Mean Squared Error (MSE) between the simulations and our proposed approximation  is equal to $0.933\cdot 10^{-3}$, for $\epsilon_\mathrm{B} = 0.01$, $\epsilon_\mathrm{E} = 0.26$ and $\epsilon_\mathrm{K} = 1$. For $q = 2^4$, Fig. \[fig.1.2\] shows that the intercept probability is almost constant for $2^{-4}\leq p \leq 0.73$, which follows from the fact that both $\rho_{c,r}$ and $\pi_{\ell,K}$ approach $0$ as $q$ grows (see Lemma \[lem.trans\]), and hence, $\mathrm{W}_t$ can be approximated with $(1-p^K)$. The proposed approximation becomes looser only when the probability $p$ of a source packet not taking part in the generation of a coding vector is very large ($p \geq 0.8$). [From Fig. \[fig.1\], we also observe that  is an empirical upper-bound of the intercept probability both in the case of $q = 2$ and $2^4$, for $\epsilon_\mathrm{K} \geq 0.85$ and $\epsilon_\mathrm{K} \geq 0.9$, respectively. In addition, for $p \geq 0.8$ and $\epsilon_\mathrm{K} \geq 0.85$, the simulated $\mathrm{I}_\mathrm{\Hat{N}}(p)$ sharply decreases as the value of $p$ approaches $0.9$: the probability of generating all-zero coding vectors sharply increases, thus making it more unlikely for both Eve and Bob to recover a source message – for instance, if the value of $p$ increases from $0.8$ to $0.9$, the probability of generating an all-zero coded packet increases from $0.012$ to $0.12$, for $K = 20$. For $\epsilon_\mathrm{K} \leq 0.5$ or $\epsilon_\mathrm{K} \leq 0.85$, for $q = 2$ and $2^4$, respectively, the intercept probability increases with $p$, for $0.75 \leq p \leq 0.85$. That is, as $\epsilon_\mathrm{K}$ decreases, the number of coded packets transmitted after Bob has already recovered the source message decreases as well.
This affects the probability of Eve recovering the source message, and hence, the overall value of $\mathrm{I}_\mathrm{\Hat{N}}(p)$ is reduced by up to $0.05$. In these cases, from Lemma \[lem.trans\], we note that some composite transition probabilities are non-decreasing functions of $p$ and, in this case, their effect can be appreciated in the overall expression of $\mathrm{I}_\mathrm{\Hat{N}}(p)$. Assuming $\epsilon_\mathrm{E} = \epsilon_\mathrm{B} = 0$, $K = 20$ and that $\mathcal{M}$ transitions from $(4,5,0)$ to $(3,4,0)$ and then to $(3,3,0)$, the overall probability of these transitions occurring is $\mathrm{W}_{K-4}(\mathrm{W}_{K-4} - \mathrm{W}_{K-3})$, which is a non-decreasing function of $p$ when $0.7 \leq p \leq 0.87$ and $0.7 \leq p \leq 0.9$, for $q = 2$ and $2^4$, respectively.]{} [Fig. \[fig.2\] compares the intercept probability obtained by employing the proposed IM problem with the state-of-the-art performance of a system model as per [@6777406; @7214217] where $p = 1/q$ and hence, the classic RLNC is used. In particular, Fig. \[fig.2\] shows the *intercept probability gain*, defined as the difference between the intercept probability values obtained by using the classic RLNC and the intercept probability that we get by setting $p$ equal to the solution of the IM problem $p^\star$ – namely, $\mathrm{I}_\mathrm{\Hat{N}}(1/q) - \mathrm{I}_\mathrm{\Hat{N}}(p^\star)$.]{} In order to show the intercept probability gain effectively achieved, both $\mathrm{I}_\mathrm{\Hat{N}}(1/q)$ and $\mathrm{I}_\mathrm{\Hat{N}}(p^\star)$ are obtained by employing Monte Carlo simulations. \[fig.1\] [Let us consider Fig. \[fig.2.1\], for $\epsilon_\mathrm{B} = 0.05$, $\epsilon_\mathrm{E} = 0.2$, $K = 5$ and $q = 2$. In the case of $\epsilon_\mathrm{K} = 1$, the intercept probability gain sharply increases and reaches its maximum of $0.196$ for $\Hat{N} = 17$. As $\epsilon_\mathrm{K}$ decreases, the intercept probability gain decreases as well.
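Both the all-zero coding vector figures quoted above and the bisection-based solution of the IM problem can be sketched numerically. The following is a minimal Python sketch; the function `R(n, K, p)` is a hypothetical placeholder for $\mathrm{R}_{n,K}$ (whose exact expression comes from [@8248799] and is not reproduced here), so the sketch illustrates the monotone root-finding, not the exact model.

```python
from math import comb

def p_all_zero(p, K):
    # Each of the K source packets is left out of a coding vector with
    # probability p, so the vector is all-zero with probability p**K.
    return p ** K

def D_hat(p, N_hat, K, eps_B, R):
    # Binomially weighted sum in the spirit of Eq. (22): n packets out of
    # N_hat reach Bob, each delivered with probability (1 - eps_B).
    # R(n, K, p) is a user-supplied (here: hypothetical) model of R_{n,K}.
    return sum(comb(N_hat, n) * (1 - eps_B) ** n * eps_B ** (N_hat - n)
               * R(n, K, p) for n in range(K, N_hat + 1))

def solve_im(N_hat, K, eps_B, R, D_target, lo, hi, tol=1e-9):
    # Bisection for the real root of D_hat(p) - D_target = 0 on [lo, hi];
    # valid because D_hat is monotone in p, as argued in the text.
    f = lambda p: D_hat(p, N_hat, K, eps_B, R) - D_target
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For $K = 20$, `p_all_zero(0.8, 20)` and `p_all_zero(0.9, 20)` reproduce the values $0.012$ and $0.12$ quoted above, while `solve_im` shows why monotonicity of $\mathrm{D}_{\Hat{N}}(p)$ makes the bisection method sufficient.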
In particular, for $\Hat{N} = 17$ and $\epsilon_\mathrm{K} = 0.85$, the intercept probability gain reduces to $0.15$. For $q = 2^4$, the intercept probability gain is generally larger. That is, for $\epsilon_\mathrm{K} = 1$ and $\epsilon_\mathrm{K} = 0.85$, the intercept probability gain reaches its maxima of $0.25$ and $0.27$, respectively, for $\Hat{N}=75$. With regard to Fig. \[fig.2.2\], as $\epsilon_\mathrm{E}$ increases to $0.3$, the intercept probability gain reaches the value of $0.33$ and $0.36$, for $q = 2$ and $q = 2^4$, respectively. As $K$ is set equal to $20$, the intercept probability gains associated with $q = 2$ and $q = 2^4$ are comparable.]{} We also note that, as $\epsilon_\mathrm{K}$ decreases, the intercept probability gain is expected to decrease, as the chances of Eve successfully receiving enough coded packets to recover the source message are impaired by the reduced probability of Alice having to unnecessarily broadcast coded packets due to the loss of acknowledgement messages from Bob. [In Figs. \[fig.2.3\] and \[fig.2.4\], Bob’s packet error probability is doubled ($\epsilon_\mathrm{B} = 0.1$). Yet, the intercept probability gains are comparable to those in the cases where $\epsilon_\mathrm{B}$ was equal to $0.05$. Since in Fig. \[fig.2\] the difference $\epsilon_\mathrm{E} - \epsilon_\mathrm{B}$ is fixed and set equal to $0.15$ or $0.25$, we can conclude that the value of the intercept probability gain is determined by the difference in the packet error probability between Eve and Bob, for a given $p$ and $\Hat{N}$.]{} Conclusions {#sec.CL} =========== We present a novel strategy for approximating the intercept probability for networks where secrecy is achieved by employing a sparse implementation of RLNC. The proposed approximation is general and applies to the cases where transmissions are not acknowledged or when they are and the eavesdropper jams the feedback channel.
We also propose an optimization framework for minimizing the intercept probability by increasing the sparsity of the RLNC in use. Numerical results empirically establish that the proposed approximation for the intercept probability is tight, for practical network and transmission parameters. Our optimization framework ensures a reduction of the intercept probability of up to $82\%$ compared to the case where classic RLNC is used. \ \[fig.2\] Acknowledgments =============== The authors would like to thank Oliver Johnson (University of Bristol, Bristol, UK) for the insightful discussions and precious feedback. [^1]: Copyright (c) 2015 IEEE. Personal use of this material is permitted. However, permission to use this material for any other purposes must be obtained from the IEEE by sending a request to [pubs-permissions@ieee.org]{}. A. Tassi, R. J. Piechocki and A. Nix are with the Department of Electrical and Electronic Engineering, University of Bristol, UK (e-mail: [{A.Tassi,R.J.Piechocki,Andy.Nix}@bristol.ac.uk]{}). [^2]: <https://github.com/andreatassi/SparseRLNC>.
--- abstract: 'The fields of cavity quantum electrodynamics and magnetism have recently merged into *‘cavity spintronics’*, investigating a quasiparticle that emerges from the strong coupling between standing electromagnetic waves confined in a microwave cavity resonator and the quanta of spin waves, magnons. This phenomenon is now expected to be employed in a variety of devices for applications ranging from quantum communication to dark matter detection. To be successful, most of these applications require a great degree of control over the coupling strength, resulting in intensive efforts to understand the coupling through a variety of different approaches. Here, the electromagnetic properties of both resonator and magnetic samples are investigated to provide a comprehensive understanding of the coupling between these two systems. Because the coupling is a consequence of the excitation vector fields, which directly interact with magnetisation dynamics, a highly-accurate electromagnetic perturbation theory is employed which allows for predicting the resonant hybrid mode frequencies for any field configuration within the cavity resonator, without any fitting parameters. The coupling is shown to be strongly dependent not only on the excitation vector fields and the sample’s magnetic properties but also on the sample’s shape. These findings are illustrated by applying the theoretical framework to two distinct experiments: a magnetic sphere placed in a three-dimensional resonator, and a rectangular, magnetic prism placed on a two-dimensional resonator. The theory provides comprehensive understanding of the overall behaviour of strongly coupled systems and it can be easily modified for a variety of other systems.' author: - 'Rair Macêdo\*' - 'Rory C. Holland' - 'Paul G. Baity' - 'Karen L. Livesey' - 'Robert L. Stamps' - 'Martin P. Weides' - 'Dmytro A. Bozhko' title: An Electromagnetic Approach to Cavity Spintronics --- Dr. R. Macêdo, R. C. Holland, Dr. P. Baity, Prof. Martin P.
Weides\ James Watt School of Engineering, Electronics & Nanoscale Engineering Division, University of Glasgow, Glasgow G12 8QQ, United Kingdom\ Email Address: Rair.Macedo@glasgow.ac.uk\ Dr. K. L. Livesey\ Center for Magnetism and Magnetic Materials, Department of Physics and Energy Science, University of Colorado Colorado Springs, Colorado Springs, Colorado 80918, USA\ School of Mathematical and Physical Sciences, The University of Newcastle, Callaghan NSW 2308, Australia\ Prof. R. L. Stamps\ Department of Physics and Astronomy, University of Manitoba, Winnipeg, Manitoba, MB R3T 2N2, Canada\ Dr. D. A. Bozhko\ James Watt School of Engineering, Electronics & Nanoscale Engineering Division, University of Glasgow, Glasgow G12 8QQ, United Kingdom\ Center for Magnetism and Magnetic Materials, Department of Physics and Energy Science, University of Colorado Colorado Springs, Colorado Springs, Colorado 80918, USA\ Introduction ============ $ $ The concept of using electromagnetic waves at millimetre wavelengths trapped within resonators to probe quantum properties of matter is no stranger to us. In fact, it dates back to the 1940s, when Purcell and colleagues published an abstract which was later presented at the 1946 Spring Meeting of the American Physical Society [@purcell46]. In that work, they showed that the transitions between energy levels, which correspond to different orientations of the nuclear spin in the presence of a static applied magnetic field, can couple to a resonant circuit. This coupling could then be measured through changes in the quality factor of the system. Their work was the stepping stone to the field of cavity quantum electrodynamics [@walther06]. Interestingly enough, in that same year Griffiths also used standing waves in a microwave resonator to measure the effective high-frequency permeability of ferromagnets [@griffiths46] which then led to Kittel’s theory of ferromagnetic resonances [@kittel48].
More recently, these two – once distinct – lines of research have come together in a newly designated area of research known as *cavity spintronics* which is concerned with studying ‘cavity magnon-polaritons’ [@goryachev14; @zhang14; @zhang15b]. These are hybrid light–matter quasiparticles originating from the strong coupling between magnons (the quanta of spin waves) and electromagnetic waves bound inside a microwave cavity resonator [@zhang14]. One of the most fascinating aspects of these hybrid cavity-magnon systems is the potential to combine light and magnetism and, by doing so, it should be possible to combine quantum information with spintronics [@tabuchi15; @lachance_quirion19]. In addition, this emergent phenomenon can also be used to engineer devices including gradient memory devices [@zhang15], ferromagnetic haloscopes for axion detection [@crescini18; @flower19], and radiofrequency-to-optical transducers [@hisatomi16]. In order to fully exploit cavity-magnon hybrid quasiparticles for applications, a deep understanding of the coupling strength is required. The coupling strength determines the degree of coherent information exchange, and thus, plays a crucial role when constructing any devices employing cavity spintronics. As an example of recent efforts into fully understanding cavity magnon-polariton coupling, we can cite Zhang and colleagues’ findings[@zhang17] on the observation of exceptional points (where the two-level system’s eigenfrequencies coalesce) in a cavity magnon–polariton system upon tuning the magnon–photon coupling strength. In addition, the optimisation of the coupling conditions has been shown to be a vital aspect of obtaining non-Markovian dynamics in a multi magnet-cavity hybrid system employed as a coherent, long-lifetime, broadband and multimode gradient memory with a 100-ns storage time [@zhang15].
Mechanisms to control the coupling strength have so far included changing the position of the sample within the resonator [@harder18], voltage induced control [@kaur16], as well as varying the temperature of the system [@flaig17]. More recently, a two-port cavity approach has been implemented using two-[@bhoi19; @zhang19] and three-dimensional[@boventer19a; @boventer19b] systems as a way to achieve level attraction as well as coherent manipulation of energy exchange in the time domain [@wolz19]. These are only a few examples of the intensifying interest to fully understand and manipulate the coupling behaviour in hybrid cavity spintronic systems. However, up to now most works have neglected how the excitation vector fields within the resonator can modify the coupling of the hybrid modes and, more importantly, how these fields directly interact with magnetisation dynamics. This includes the direction and profile of the cavity fields. A few different models have been used to describe the magnet-cavity system, one of which is the harmonic coupling model. This treats the magnet and cavity as two coupled harmonic oscillators (microscopically [@goryachev14] or macroscopically [@harder16; @proskurin19]). Another is the dynamic phase correlation model which looks at impedance changes due to charge motion generated by spin precession inside the cavity – thus relating the system to Ampère’s and Faraday’s laws [@cao19; @harder16]. While these models have captured much of the nature of hybrid cavity-spin systems, they still do not consider the full effect of complex driving fields on the spin dynamics. In addition, they require the introduction of various experimentally extracted parameters. Here, we demonstrate experimentally that by modifying the position of the sample inside a resonator as well as changing the sample’s shape, it is possible to drastically change the coupling strength.
We explain the results with an elegant theory for predicting the hybrid magnon-polariton frequencies, without any fitting parameters and without any phenomenological terms. The theory couples the fundamental magnetic torque equation to Maxwell’s equations. The theory and experiment show remarkable agreement. To demonstrate that the theoretical method is generally applicable to any magnet-cavity system we use two illustrative cases: we start with a microwave cavity resonator where linearly polarised excitation is obtained, and place a magnetic Yttrium Iron Garnet (Y$_3$Fe$_5$O$_{12}$ or simply YIG)[@cherepanov93; @serga10] sphere inside. We then change the position of the sphere to exemplify how the coupling strength can be drastically modified with small changes in the microwave field profile at the sample position. Further, we investigate similar behaviour in a different cavity resonator – namely a two-dimensional waveguide resonator. Using a perturbation method, we provide a theoretical framework to describe the behaviour of cavity spintronic systems based on self-consistent electromagnetic theories. This allows for an accurate verification of our experimental findings using analytical expressions for the field profile inside the cavity and accounting for its coupling with specific magnetic permeability tensor components. This tensor is obtained from the magnetic torque equation (i.e., the Landau-Lifshitz equation) and can be used to treat magnets of various types and shapes. Hence, our theoretical framework is very general and can be tailored to fit a variety of different hybrid systems. Finally, we expect that by being able to fully understand the behaviour of these systems, we open up new avenues for exchange and manipulation of information through cavity spintronic devices; in both classical and quantum regimes.
Theoretical Framework ===================== $ $ Before listing our main findings, it will be necessary to revisit two well-known concepts in magnetism and microwave engineering: the response of magnetisation to an oscillating magnetic field, characterised through a dynamic susceptibility; and electromagnetic perturbation theory in a microwave resonator. These are essential for a faithful theoretical description of cavity-magnon hybridisation. Magnetic response through a dynamic susceptibility -------------------------------------------------- $ $ Let us start by looking at ferromagnetic resonances. This, in general, happens when a steady magnetic field, $\mathbf{H_{0}}$, is applied to a spin system wherein the total magnetic moment, $\mathbf{M}$, will coherently precess about its equilibrium orientation. Resonance will occur when an oscillating magnetic field is applied with frequency equal to the natural Larmor frequency of the magnet. The behaviour can be semi-classically described by the equation of motion of magnetisation \[the Landau-Lifshitz (LL) equation\] [@gurevichBook]: $$\begin{aligned} \label{llg} \frac{\partial\mathbf{M}}{\partial t} = -\gamma\mu_0(\mathbf{M}\times \mathbf{H_{eff}}).\end{aligned}$$ Here, the magnetisation is given by $\mathbf{M} = \hat{\mathbf{z}}M_s + \mathbf{m}e^{j\omega t}$, with $M_s$ being the saturation magnetisation, $\gamma$ is the gyromagnetic ratio, and $\omega$ is the angular frequency. Note that magnetic damping is ignored for now but it will later be taken into account phenomenologically. The effective field, $\mathbf{H_{eff}}$, acting on $\mathbf{M}$ includes contributions from the various energy terms such as Zeeman, dipole-dipole, exchange and anisotropy. Here we consider that it contains terms due to the oscillating field $\mathbf{h}$ and the externally applied magnetic field $\mathbf{H_{0}}$ along the $z$ direction.
To account for the shape of magnetic samples, we also include contributions from a demagnetising field which can be written as $\mathbf{H_{D}} = -\overleftrightarrow{D}\cdot\mathbf{M}$, where $\overleftrightarrow{D}$ denotes the demagnetising tensor $\mathrm{diag}(D_x,D_y,D_z)$ [@brown62]. The effective field can then be written as $\mathbf{H_{eff}} = \mathbf{H_D}+\hat{\mathbf{z}}H_{0}+\mathbf{h}e^{j\omega t}$. After applying these definitions to Eq. (\[llg\]), one arrives at the relation between the oscillating magnetisation, $\mathbf{m}$, and the oscillating magnetic field, $\mathbf{h}$: $$\begin{aligned} \label{mhs} \begin{bmatrix} m_x \\ m_y \\ \end{bmatrix} = \underbrace{\begin{bmatrix} \chi_{xx} & j\chi_{xy} \\ -j\chi_{yx} & \chi_{yy} \\ \end{bmatrix}}_{\overleftrightarrow{\chi}_m(\omega)} \begin{bmatrix} h_x \\ h_y \\ \end{bmatrix}.\end{aligned}$$ Here, $\overleftrightarrow{\chi}_m(\omega)$ is the high-frequency magnetic susceptibility which is a second rank tensor. This tensor is often used to describe the electromagnetic response of magnetic materials. It is noteworthy that the nonzero off-diagonal elements are well known to give rise to various nonreciprocal effects [@camley87; @macedo19], which are the basis for a number of important device applications [@how05; @fuller1987]. Before looking at the behaviour of a magnet inside a microwave cavity, some intuition can be gained by exploring Eq. (\[mhs\]) under a few different circumstances. For this we will look at $\chi_{xx}$, which will be the only component necessary throughout the remainder of this work – note that, for completeness, the other components of $\overleftrightarrow{\chi}_m(\omega)$ are given in the Methods section.
This component is given by $$\label{Xxx} \chi_{xx}(\omega) = \frac{\chi_a}{1-(\omega/\omega_0)^2}$$ where the resonance frequency $\omega_0$ is given by $$\omega_0^2 = \gamma^2\mu_0^2[H_0+(D_x-D_z)M_s]\times[H_0+(D_y-D_z)M_s]$$ and $$\label{Xa} \chi_{a} = \frac{M_s}{H_{0}+(D_x-D_z)M_s}.$$ The simplest case to interpret here is that of a ferromagnetic sphere, such as the one depicted in Fig. \[fig:MagPrecess\](a). Due to the symmetry of the system, the demagnetising factors are the same in all directions, thus cancelling themselves out in the equations outlined above. In this special case, the resonance frequency is now simply $\omega_0 = \gamma\mu_0 H_{0}$, which is the natural precession frequency of a magnetic dipole in a constant magnetic field. We can also see that Eq. (\[Xxx\]) is reduced to the well-known form $\chi_{xx}(\omega) = \omega_m\omega_0/(\omega_0^2-\omega^2)$ with $\omega_m = \gamma\mu_0 M_s$. For the case of a ferromagnetic rectangular prism \[such as the one depicted in Fig. \[fig:MagPrecess\](b)\] on the other hand, all demagnetising factors are non-zero [@aharoni98] so that both the resonance frequency, $\omega_0$, and the permeability tensor components, such as $\chi_{xx}$, have a strong dependence on the components of $\mathbf{H_D}$ as outlined in Eqs. (\[Xxx\]-\[Xa\]). Note that the demagnetising factors are approximate for rectangular prisms since the demagnetising fields are in fact nonuniform [@aharoni98]. A comparison between both cases is given in Fig. \[fig:MagPrecess\](c) where the solid lines are for a ferromagnetic sphere and the dashed lines are for a rectangular prism. It is then evident that in both cases the susceptibility component $\chi_{xx}$ has a singularity at $\omega=\omega_0$. However, the resonance is shifted to higher frequencies if the demagnetising fields for each direction differ from each other.
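The expressions above can be evaluated numerically. The following Python sketch uses nominal, illustrative parameters (the values of $\gamma$, $M_s$, $H_0$ and the prism demagnetising factors are assumptions, not values fitted to our samples):

```python
import math

MU0 = 4 * math.pi * 1e-7      # vacuum permeability (H/m)
GAMMA = 1.76e11               # gyromagnetic ratio (rad s^-1 T^-1), nominal
MS = 1.4e5                    # YIG-like saturation magnetisation (A/m), nominal

def omega0(H0, Ms, D):
    # Kittel-type resonance frequency for demagnetising factors
    # D = (Dx, Dy, Dz), with the static field applied along z.
    Dx, Dy, Dz = D
    return GAMMA * MU0 * math.sqrt(
        (H0 + (Dx - Dz) * Ms) * (H0 + (Dy - Dz) * Ms))

def chi_xx(omega, H0, Ms, D):
    # Diagonal susceptibility component: chi_xx = chi_a / (1 - (w/w0)^2).
    Dx, Dy, Dz = D
    chi_a = Ms / (H0 + (Dx - Dz) * Ms)
    return chi_a / (1 - (omega / omega0(H0, Ms, D)) ** 2)

H0 = 2.0e5                    # applied field (A/m), illustrative
SPHERE = (1 / 3, 1 / 3, 1 / 3)
PRISM = (0.4, 0.5, 0.1)       # assumed unequal factors for a prism-like sample
```

For the sphere the demagnetising factors cancel and `omega0` reduces to $\gamma\mu_0 H_0$; for the prism-like factors the resonance moves to a higher frequency, as in Fig. \[fig:MagPrecess\](c).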
![image](Figure_1.png){width="0.5\linewidth"} It is important to point out that for a sphere $\chi_{xx}(\omega)=\chi_{yy}(\omega)$. This is not the case for a rectangular prism with different demagnetising factors along the $x$ and $y$ directions. This can be intuitively understood by looking at the cartoons in Fig. \[fig:MagPrecess\](a)-(b). For a sphere, the magnetisation dynamics are the same along $x$ and $y$, which is in stark contrast to the case of a rectangular prism, where $m_x$ and $m_y$ differ from one another. Thus, one should expect that $\chi_{xx}(\omega)\neq\chi_{yy}(\omega)$. We will not discuss this further as $\chi_{yy}(\omega)$ will not be used in the remainder of this work. Therefore, we will now move on to the electromagnetic perturbation theory method to describe a cavity-magnon system. However, the implications of the off-diagonal elements in Eq. (\[mhs\]), and of the different forms of $\overleftrightarrow{\chi}_m(\omega)$ when field configurations and polarisation states within a microwave cavity are changed, will soon be discussed. Perturbation Theory for Cavity Magnon-Polaritons ------------------------------------------------ $ $ In practical applications, the resonance frequency of a microwave cavity resonator, $\omega_c$, can be easily modified by the smallest change in shape or size, or by a small piece of material placed inside the cavity. While the effects of these perturbations can often be difficult to quantify, they can be calculated accurately by employing perturbation theory. This holds if one assumes that the fields of a cavity with a small shape or material perturbation inside do not greatly deviate from those of the empty cavity. In recent cavity magnon-polariton experiments, a microwave cavity resonator is modified by introducing a small piece of magnetic material within the cavity.
Up to now, most of the works in cavity spintronics have used approximations or oscillator models to describe the coupling and overall behaviour of the system [@harder16; @bourhill19]. If the magnetic sample is small enough compared to the cavity volume, however, the effects of the sample and the coupling between magnon-cavity can be accurately probed using perturbation theory. A short derivation of the most general equations is presented below, and then the results for the specific geometry experimentally studied here are derived. We start by looking at an unperturbed cavity state; that of an empty cavity, resonating in only one of its normal modes at frequency $\omega_c$. Let the oscillating electric and magnetic fields within the cavity be $\mathbf{E_c}$ and $\mathbf{h_c}$, respectively, proportional to $e^{j\omega_c t}$. Under these conditions, one can write Maxwell’s equations as: \[allEH0\] $$\begin{aligned} \nabla \times \mathbf{h_c} = j\omega_c\varepsilon_0\mathbf{E_c} \label{DxH0} \\ \nabla \times \mathbf{E_c} = -j\omega_c\mu_0\mathbf{h_c} \label{DxE0}. \end{aligned}$$ On introducing a small ferrite sample into the cavity, the cavity will then resonate at a new frequency $\omega$ [@waldron1957]. Thus, Eqs. (\[DxH0\]) and (\[DxE0\]) have to be rewritten as follows: \[allEH\] $$\begin{aligned} \nabla \times \mathbf{h} = j\omega\varepsilon_0\mathbf{E} +\mathbf{J_e} \label{DxH} \\ \nabla \times \mathbf{E} = -j\omega\mu_0\mathbf{h} +\mathbf{J_m} \label{DxE}, \end{aligned}$$ where $\mathbf{J_e}$ and $\mathbf{J_m}$ are the sample’s dielectric and magnetic contributions which only exist in the region occupied by the perturbing material and are zero elsewhere in the cavity [@fuller1987]. We can write these quantities as $\mathbf{J_e} = j\omega\varepsilon_0\overleftrightarrow{\chi}_e(\omega)\cdot \mathbf{E} \label{Je}$ and $ \mathbf{J_m} = -j\omega\mu_0\overleftrightarrow{\chi}_m(\omega)\cdot\mathbf{h} \label{Jm}$.
Here, $\overleftrightarrow{\chi}_e(\omega)$ and $\overleftrightarrow{\chi}_m(\omega)$ are the electric and magnetic susceptibility contributions of the ferrite (both written in tensor form for a more general description). Following common vector algebraic operations [@pozar], we can obtain the following relation: $$\label{w-w0Js} \omega-\omega_c = j\frac{\displaystyle\int_{\delta v}(\mathbf{J_e}\cdot\mathbf{E_c^*}-\mathbf{J_m}\cdot\mathbf{h_c^*})\mathrm{d}v}{\displaystyle\int_{v}(\varepsilon_0\mathbf{E_c^*}\cdot\mathbf{E}+\mu_0\mathbf{h_c^*}\cdot\mathbf{h})\mathrm{d}v},$$ where $\delta v$ is the sample volume, and $v$ is the volume of the empty cavity. This expression is exact, given the perturbative assumptions made in Eqs. (\[allEH\]), and could be evaluated if the configuration of $\mathbf{E}$ and $\mathbf{h}$ for the perturbed cavity were known. In general, this can be hard to estimate. For cavity measurements in which the samples are small enough, however, one can assume that $\mathbf{E} = \mathbf{E_c}$ and $\mathbf{h} = \mathbf{h_c}$ everywhere inside the cavity. For simplicity, and for the remainder of this work, we can also consider that there are no dielectric contributions from the sample and it responds only to the $\mathbf{h_c}$ field of the cavity, so that we can make $\mathbf{E_c} = 0$. This way, Eq. (\[w-w0Js\]) can be rewritten as $$\label{w-w0} \omega-\omega_c = -\omega_c\frac{\displaystyle\int_{\delta v}\mu_0[\overleftrightarrow{\chi}_m(\omega)\cdot\mathbf{h_c}]\cdot\mathbf{h_c^*}\mathrm{d}v}{2\displaystyle\int_{v}\mathbf{h_c}^2 \mathrm{d}v}.$$ Dependency on the Distribution of a Linearly Polarised Field ============================================================ $ $ We will now apply the theory detailed so far to understand the coupling between microwaves in a cavity resonator and magnons. We will start by looking at the simple case of a microwave field in a 3D cavity exciting magnons in a YIG sphere such as that depicted in Fig.
\[fig:Field\_g\_Position\](a). In order to gain some insight into the effect of the field configuration on the coupling, we have experimentally probed the behaviour of the hybrid system as the position of a magnetic sphere inside the cavity is changed, so that it experiences different field directions. For this, we have used a rectangular microwave cavity, such as the one shown in Fig. \[fig:Field\_g\_Position\](b), with capacitive coupling generating a TE$_{11}$ mode. The oscillating $\mathbf{h_c}$ intensity profile is also shown in Fig. \[fig:Field\_g\_Position\](b) with anti-node at $x = 27$ mm, $y = 2.5$ mm and $z = 3.75$ mm, also marked as $A$ (details on the experimental setup are given in the methods section). By placing a small magnetic sample (YIG sphere of diameter 0.5 mm) in the anti-node of $\mathbf{h_c}$ we obtain the Rabi splitting [@miller05] displayed in Fig. \[fig:Field\_g\_Position\](c). This has often been referred to as level repulsion of the coupled magnon-cavity system and is a classic feature of the hybridisation between these two systems. In this case, the macroscopic coupling strength, $g$, is often associated with the width of the splitting at $\omega_c=\omega_0$, which is where the effect of hybridisation is greatest. In the strong coupling regime these are related by $2g = |\omega_a-\omega_b|=\omega_{gap}$ [@harder16]. Here, $\omega_a$ and $\omega_b$ are the eigenfrequencies for the two modes (branches) seen in Fig. \[fig:Field\_g\_Position\](c). The effect of placing the sample away from the anti-node of $\mathbf{h_c}$ is shown in Fig. \[fig:Field\_g\_Position\](d) for a sample placed at position $B$ ($y = 10$ mm) and in Fig. \[fig:Field\_g\_Position\](e) for a sample at position $C$ ($y = 15$ mm) where $\omega_{gap}$ is very small as $\mathbf{h_c}$ is close to vanishing – positions A, B, and C are drawn in Fig. \[fig:Field\_g\_Position\](b).
![(a) Behaviour of magnetisation excited by a linearly polarised excitation such as shown in (b) when the sample is placed inside of a rectangular cavity microwave resonator. We show a cross-sectional field configuration at $z$ = 3.75 mm generated by capacitive coupling (simulated with COMSOL). Experimental spectra and perturbation theory (dashed) lines of the Rabi splitting close to $\omega_c$=$\omega_0$ for the YIG sphere placed at positions (c) $A$ ($y$ = 2.5 mm), (d) $B$ ($y$ = 10 mm), and (e) $C$ ($y$ = 15 mm). In part (f) we give a full map of the width of the Rabi splitting $\omega_{gap}$ for any given $x$–$y$ position. (g) Experimental points and theoretical lines of $\omega_{gap}$ as the sample is moved within the microwave cavity (along $y$ and at $x$ = 27 mm) through positions $A$, $B$ and $C$ \[see panel (b)\].[]{data-label="fig:Field_g_Position"}](Figure_2.png){width="0.95\linewidth"} Employing perturbation theory by combining Eq. (\[w-w0\]) and Eq. (\[mhs\]), we can predict the behaviour of $\omega_{gap}$. For this, we consider that in the YIG sphere used in our experiment, the effect of an applied field $\mathbf{H_{0}}$ directed along $z$ is to induce precession that can only couple with the components of $\mathbf{h_c}$ along the $x$ and $y$ directions, $h_{cx}$ and $h_{cy}$, respectively. If we concentrate on the behaviour of the sample moved from the anti-node to the node of $\mathbf{h_c}$ (from $y = 2.5$ mm to the point $y = 18$ mm, but always at $x = 27$ mm), we can neglect $h_{cy}$ as it is much smaller than $h_{cx}$ at all points. This means the sample is always excited by a linearly polarised field. We can then rewrite Eq.
(\[w-w0\]) as: $$\label{w-w0usingmu} \frac{\omega-\omega_c}{\omega_c} =-\chi_{xx}(\omega)~\frac{\displaystyle\int_{\delta v}\mu_0|h_{cx}|^2\mathrm{d}v}{2\displaystyle\int_{v}\mathbf{h_c}^2 \mathrm{d}v}=-\frac{\omega_0\omega_m}{\omega_0^2-\omega^2} \frac{W_p}{W_c}.$$ For simplicity, we write the quantities relating to the oscillating fields in Eq. (\[w-w0usingmu\]) as $W_c$ and $W_p$, respectively. Since $W_c$ is the energy stored in the empty cavity, we can write it as $W_c=1/2(\varepsilon_0v)$ – note that this holds for simple rectangular cavities such as the ones considered here. Since we can neglect $h_{cy}$ for the positions of interest, here we are able to reduce the form of $W_p$ in Eq. (\[w-w0usingmu\]) to $W_p = \int_{\delta v}\mu_0|h_{cx}|^2\mathrm{d}v$ [^1] – with $h_{cx}$ being equivalent to $h_x$ from Eq. (\[mhs\]). Because we are interested in the behaviour at frequencies close to both the cavity and magnet resonance frequencies we can use the relation $\omega_0^2-\omega^2\approx(\omega_0-\omega)2\omega_0$ to find $$(\omega-\omega_c)(\omega-\omega_0) = \frac{1}{2}\omega_c\omega_m\frac{W_p}{W_c}.$$ We can then solve this for $\omega$, which yields $$\label{w_a_b} \omega_{a,b}=\frac{1}{2}\left[\omega_c+\omega_0\pm\sqrt{(\omega_c-\omega_0)^2+2\omega_c\omega_m\frac{W_p}{W_c}}\right].$$ These are the eigenfrequencies of the cavity-magnon hybrid system and, using the magnetic parameters for YIG, i.e. the same as those used in Fig. \[fig:MagPrecess\](c), we obtain the dashed lines in Fig. \[fig:Field\_g\_Position\](c)-(e), which are in excellent agreement with the experimental contour data. These relations can be used to calculate the size of the Rabi splitting, which at $\omega_c=\omega_0$ is given by: $$\label{DeltaW} \omega_{gap}=(\omega_a-\omega_b)|_{\omega_c=\omega_0} = \sqrt{2\omega_c\omega_m \frac{W_p}{W_c}}.$$ The heat map shown in Fig.
\[fig:Field\_g\_Position\](f) summarises the behaviour of $\omega_{gap}$ as a function of both the $x$ and $y$ positions within the resonator, calculated from Eq. (\[DeltaW\]). This clearly shows that the behaviour of $\omega_{gap}$ strongly reflects the intensity of $\mathbf{h_c}$ given in Fig. \[fig:Field\_g\_Position\](b). Fig. \[fig:Field\_g\_Position\](g) shows how the predicted values of $\omega_{gap}$ from the analytical expressions from perturbation theory (green dashed lines) match the experimental points (green dots) as the sample is moved within the resonator shown in part (b) along the $y$-axis. A few main remarks should be made here: - The coupling constant has been previously estimated by various models fitting experimental data. However, as seen here, perturbation theory is an effective way to exactly calculate $g$ without any need for experimental fitting parameters. - In order to solve Eq. (\[w-w0usingmu\]), it is not necessary to make any approximations such as the ones made here to obtain Eq. (\[w\_a\_b\]). However, the approximations work well close to the splitting. - Finally, at our initial sample position \[shown as $A$ in Fig. \[fig:Field\_g\_Position\](b)\] the $\mathbf{h_c}$-field within the cavity is close to its maximum value, and lies in the $\hat{\mathbf{x}}$ direction. While the oscillating field, $\mathbf{h_c}$, gains other components as we move the sample away from the anti-node, it is still always linearly polarised; and thus, nothing changes for the perturbation theory. However, the coupling dramatically changes from maximal (at the anti-node of $\mathbf{h_c}$) to vanishing (at the anti-node of $\mathbf{E_c}$). Furthermore, the perturbation theory described here, with some modifications, can be readily applied to microwave resonators and transmission lines of any kind by describing the field distribution.
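For concreteness, Eqs. (\[w\_a\_b\]) and (\[DeltaW\]) can be evaluated with a few lines of Python; the cavity frequency, $\omega_m$ and the filling-factor ratio $W_p/W_c$ below are illustrative assumptions rather than the measured values:

```python
import math

def hybrid_modes(w_c, w_0, w_m, fill):
    # Eigenfrequencies of the cavity-magnon hybrid system, Eq. (w_a_b),
    # where fill = W_p / W_c is the filling-factor ratio.
    root = math.sqrt((w_c - w_0) ** 2 + 2 * w_c * w_m * fill)
    return 0.5 * (w_c + w_0 + root), 0.5 * (w_c + w_0 - root)

def rabi_gap(w_c, w_m, fill):
    # Width of the splitting at the crossing point w_c = w_0, Eq. (DeltaW).
    return math.sqrt(2 * w_c * w_m * fill)

# Illustrative values: a 7 GHz cavity, a YIG-like w_m, small filling factor.
w_c = 2 * math.pi * 7.0e9
w_m = 2 * math.pi * 4.9e9
fill = 1.0e-4                 # assumed W_p / W_c; depends on sample position
```

At zero detuning the branch separation returned by `hybrid_modes` equals `rabi_gap` exactly, and since $\omega_{gap} \propto \sqrt{W_p/W_c}$, moving the sample from the anti-node towards the node of $\mathbf{h_c}$ (shrinking `fill`) collapses the splitting, as in Fig. \[fig:Field\_g\_Position\](g).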
It can also be used for a variety of different magnetic samples by obtaining the appropriate permeability tensors (considering shapes, structuring, or composition). A case example is now given. Magnetic Thin-Film Stripes Coupled to a Transmission Line Resonator =================================================================== $ $ Now that we have investigated a simple case, against which we have verified the validity of the perturbation theory for various field configurations, we move on to a more complicated resonator. Up to now we have discussed a large, three-dimensional system, where the resonator is on the order of a few centimetres. Cavity spintronic devices are often of these dimensions in order to enhance the coupling rates. However, in order to easily integrate these with either silicon-based or superconducting quantum circuits, for example, it is necessary to reduce the system’s dimensions towards on-chip scalable devices. The first studies on such on-chip cavity spintronic devices were by Hou et al. [@hou2019] and Li et al. [@li19]. We will therefore now look at a system similar to the ones investigated in those studies, where a micrometre-sized stripe of magnetic material is placed on a planar (two-dimensional) superconducting resonator. One way to achieve this is to employ a coplanar waveguide structure with two gaps in the centre conductor, such as in the schematic shown in Fig. \[fig:2Dresonator\](a). The gaps in the centre conductor form planar capacitors acting as dielectric mirrors, which in turn generate standing waves. These determine the resonance frequency through the separation length between the capacitors, $l$, which is a multiple of the half wavelength $\lambda$/2 [@goppl08]. ![(a) Diagram of the set-up used here, where a 2D coplanar waveguide resonator generates an oscillating magnetic field which couples to a magnetic thin-film stripe.
A static field, $\mathbf{H_{0}}$, is applied along the sample’s long axis (along $z$) and the oscillating magnetic field, $\mathbf{h_c}$, at the sample position only has a component along the $x$ direction – denoted as $h_{cx}$. A full schematic of the magnetic sample and oscillating magnetisation is given in (b). (c) Spectra of the hybrid magnon-resonator modes calculated close to the Rabi splitting ($\omega_c$=$\omega_0$). Here, we considered the magnetic sample to be a Py (Ni$_{80}$Fe$_{20}$) rectangular prism (14$\times$0.03$\times$900 $\mu$m$^3$) and the resonance frequency of the resonator is $\omega_0/2\pi$=5.0 GHz. The solid lines are for no damping \[using Eq. (\[w\_a\_b\_demag\])\] and the dashed lines take damping into account.[]{data-label="fig:2Dresonator"}](2Dresonator_Demag.png){width="0.95\linewidth"} The magnetic thin-film stripe is placed in the centre of the resonator, and an external magnetic field is used to set the magnon mode frequency near the cavity resonance frequency. However, as opposed to the case of a sphere, the magnetic precession drastically changes due to the shape of the sample, as shown in Fig. \[fig:2Dresonator\](b). The confined dimensions now induce highly elliptical precession, which can be quantified through demagnetising factors in the susceptibility, as discussed in Eq. (\[mhs\]). In order to obtain the condition for ferromagnetic resonance, the sample is positioned so that the oscillating magnetic field generated by the centre conductor is perpendicular to the static field. Here, the relevant component of the oscillating field $\mathbf{h_c}$ at the sample position is along the $x$ direction \[in Fig. \[fig:2Dresonator\](b) this is depicted as $h_{0x}$\]. The more complex field profiles inside the two-dimensional resonator, compared to the 3D cavity discussed earlier, only marginally affect the perturbation method described above.
In fact, the main difference for this particular case is the calculation of the fields exciting the magnetic sample, contained in $W_p$, and the total energy stored in the resonator $W_c$ \[both discussed near Eq. (\[w\_a\_b\])\]. These quantities can no longer be calculated analytically as we have done in previous sections, but they can easily be estimated using electromagnetic solvers such as HFSS or COMSOL (See supplemental information for details). Once those are estimated, the eigenfrequencies can be obtained using: $$\label{w_a_b_demag} \omega_{a,b}=\frac{1}{2}\left[\omega_c+\omega_0\pm\sqrt{(\omega_c-\omega_0)^2+2\chi_a\omega_c\omega_0 \frac{W_p}{W_c}}\right].$$ Note that this equation is slightly different from Eq. (\[w\_a\_b\]). This is because we now have to account for the demagnetising fields and use the full form of $\chi_{xx}$ as given in Eq. (\[Xxx\]). The resulting Rabi splitting calculated using Eq. (\[w\_a\_b\_demag\]) is shown in Fig. \[fig:2Dresonator\](c) as the solid lines. For this, we have used Ni$_{80}$Fe$_{20}$ (Permalloy) as the example material for the magnetic thin-film stripe with parameters $\mu_0M_s=1$ T and $\gamma/2\pi = 28$ GHz/T [@li19]. The demagnetising parameters are $D_x$ = 0.0052, $D_y$ = 0.9947, and $D_z$ = 0.00008 [@aharoni98] and the oscillating magnetic field $h_{0x}$ at the sample position and energy stored in the system $W_c$ were calculated using COMSOL (see supplemental information). Knowing the fields in the resonator and the dimensions of the magnetic sample, it is also straightforward to estimate the coupling constant $g$ through the width of the splitting using the relation: $$\label{DeltaWdemag} \omega_{gap}=(\omega_a-\omega_b)|_{\omega_c=\omega_0} = \sqrt{2\chi_a\omega_c\omega_0 \frac{W_p}{W_c}}.$$ This yields $\omega_{gap} =$ 0.330 GHz. Effect of Damping on the Coupling Strength ------------------------------------------ $ $ Our calculated $\omega_{gap}$ using Eq. 
(\[DeltaWdemag\]) is 25 MHz higher compared to the case reported by Li and co-workers [@li19]. While we have used the same resonance frequency, material parameters, and sample dimensions as reported in their work, there is one property we have neglected so far: damping. This was not necessary when looking at YIG spheres, as in that case the linewidth of the magnetic resonance is small enough that it does not affect the eigenfrequencies obtained from perturbation theory. In magnetic thin films, however, such linewidths are not only a result of intrinsic damping but are often broadened by various surface and interface non-uniformities as well as sample defects – known as inhomogeneous broadening. In general, the effect of damping and dissipation can be introduced by replacing $\omega_0$ with a complex frequency $\omega_0\rightarrow\omega_0'+j\omega_0''$, or simply by replacing $H_{0}$ with a complex magnetic field $H_{0}\rightarrow H_{0}'+j\Delta H_{0}$, where $\Delta H_0$ is the width of the resonance curve at half height. Applying the former description to Eq. (\[w-w0usingmu\]) we obtain the dashed lines in Fig. \[fig:2Dresonator\](c) using $\omega_0''=$ 0.122 GHz as measured by Li and co-workers [@li19]. The Rabi splitting for a Py thin-film stripe when dissipation is considered is clearly smaller than when no damping is taken into account. With dissipation, Eq. (\[w\_a\_b\_demag\]) which quantifies $\omega_{gap}$ now becomes: $$\label{Dw_demag_damp} \omega_{gap}=(\omega_a-\omega_b)|_{\omega_c=\omega_0} = \sqrt{2\chi_a\omega_c\omega_0' \frac{W_p}{W_c}+j\omega_0''\left(2\chi_a\omega_c\frac{W_p}{W_c}+j\omega_0''\right)},$$ and for the case shown in Fig. \[fig:2Dresonator\](c), we can use Eq. (\[Dw\_demag\_damp\]) to find $\omega_{gap}$ = 0.305 GHz. Noting that $\omega_{gap}=2g$, this is in excellent agreement with the coupling reported by Li and co-workers, $g/2\pi$ = 0.152 GHz [@li19].
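The effect of damping on the splitting can be reproduced numerically. The sketch below works in units of $\omega/2\pi$ in GHz, backs out the product $2\chi_a\omega_c W_p/W_c$ from the undamped splitting of 0.330 GHz quoted above (an assumption, since the individual factors are not restated here), and then evaluates Eq. (\[Dw\_demag\_damp\]) with $\omega_0''=$ 0.122 GHz; the observable splitting is the real part of the complex result:

```python
import numpy as np

f_c = 5.0             # resonator frequency (GHz), equal to f_0' on resonance
f0_pp = 0.122         # magnon dissipation rate (GHz), as measured by Li et al.
gap_undamped = 0.330  # undamped splitting (GHz) from Eq. (DeltaWdemag)

# Back out X = 2*chi_a*omega_c*W_p/W_c (in GHz) from the undamped splitting:
# gap^2 = 2*chi_a*w_c*w_0*W_p/W_c  =>  X = gap^2 / f_0.
X = gap_undamped**2 / f_c

# Eq. (Dw_demag_damp): the damped splitting is complex; its real part is
# the observable gap, slightly reduced with respect to the undamped value.
gap_damped = np.sqrt(X * f_c + 1j * f0_pp * (X + 1j * f0_pp))
gap_obs = gap_damped.real  # close to the 0.305 GHz quoted in the text
```

This makes explicit that the reduction comes almost entirely from the $(j\omega_0'')^2$ term, which subtracts $\omega_0''^2$ under the square root.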
Scattering Parameter and Quality Factor from Perturbation Theory ================================================================ $ $ As we have seen so far, cavity perturbation theory in itself is an extremely efficient method to measure the coupling strength of hybrid systems. However, it is also useful to employ this technique in order to compute scattering parameters. These quantities are often measured by VNAs in spectroscopic experiments, much like the data we have discussed in Fig. \[fig:Field\_g\_Position\](c)-(e). We can then employ a scattering matrix formalism in order to investigate how microwave radiation interacts with the hybrid system. In the vicinity of the resonances, the behaviour of the cavity resonator and magnet can both be represented as lumped circuits. This way we can assume that a voltage wave $a_1$ is incident on an arbitrary microwave device. The wave is scattered and some of its energy goes into a reflected wave $a_2$ and part into a transmitted wave $b_2$. Therefore the scattering parameter $S_{11}(\omega)$ (which we examined in Fig. \[fig:Field\_g\_Position\]) is given by the ratio $a_2/a_1$ [@han96]. Moreover, in order to account for both resonances of the hybrid system, we take the product of the responses at the two perturbation-theory eigenfrequencies, so that $S_{11}(\omega) = S^{(a)}_{11}(\omega)\times S^{(b)}_{11}(\omega)$, with [@luiten05]: $$S^{(a,b)}_{11}(\omega) = \frac{\beta-1-jQ[\omega/\omega_{(a,b)}-\omega_{(a,b)}/\omega]}{\beta+1+jQ[\omega/\omega_{(a,b)}-\omega_{(a,b)}/\omega]}.$$ Here, $\beta$ is the coupling coefficient, which determines whether the system is *undercoupled* ($\beta <1$), *overcoupled* ($\beta >1$), or critically coupled ($\beta =1$).
We also take the quality factor $Q$ to be [@probst15]: $$Q=\frac{\omega_{a,b}'}{\omega_{a,b}''},$$ where $\omega_{a,b}'$ is the real part of either eigenfrequency $a$ or $b$ calculated with the equations from perturbation theory, such as Eq. (\[w\_a\_b\]), and $\omega_{a,b}''$ denotes the equivalent imaginary part. ![(a) Scattering parameter $|S_{11}|$ calculated from the quality factor, $Q_p$, from perturbation theory as a function of both input frequency $\omega$ and externally applied magnetic field $\mathbf{H_{0}}$. (b) Comparison between experimental and theoretical $|S_{11}|$ spectra at $\omega_c$=$\omega_0$. The solid line is for the theory \[vertical cut in (a)\] and the dashed line is for experimental data \[a vertical cut in Fig. \[fig:Field\_g\_Position\](b)\]. Here we have considered the system to be slightly overcoupled with $\beta$=$1.05$, and the dissipation for the two systems was taken to be $\omega_0''$=$10^{-3}$ and $\omega_r''$=$10^{-4}$. Scattering parameters for a two-dimensional microwave resonator coupled to a YIG thin-film stripe are also given using perturbation theory (c) compared to experimental data (d). []{data-label="fig:S11fromQ"}](Q_Sparameter.png){width="0.7\linewidth"} In Fig. \[fig:S11fromQ\](a) we show the calculated scattering parameter mirroring that shown in Fig. \[fig:Field\_g\_Position\](c) at the node of the magnetic field inside the resonator. In Fig. \[fig:S11fromQ\](b) we show a comparison between the experimental and theoretical $|S_{11}|$ parameters obtained using perturbation theory at $\omega_c=\omega_0$. Note that to obtain these it was necessary to consider dissipation for both the cavity and the magnetic system, as just discussed for the case of thin-film stripes. This can again be accounted for by making $\omega_c = \omega_c'+j\omega_c''$ and $\omega_0 = \omega_0'+j\omega_0''$, where $\omega_c''$ and $\omega_0''$ are the dissipation in the cavity and in the magnetic sample, respectively.
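The product form of $S_{11}$ described above can be sketched as follows, assuming illustrative values for $\beta$, the quality factors, and the two eigenfrequencies (these numbers are placeholders, not the experimental parameters):

```python
import numpy as np

def s11_single(w, w_res, beta, Q):
    """Single-resonance reflection coefficient S11^(a,b) defined above."""
    d = Q * (w / w_res - w_res / w)
    return (beta - 1 - 1j * d) / (beta + 1 + 1j * d)

def s11_hybrid(w, w_a, w_b, beta, Q_a, Q_b):
    """Product form S11 = S11^(a) * S11^(b) for the hybrid system."""
    return s11_single(w, w_a, beta, Q_a) * s11_single(w, w_b, beta, Q_b)

# Assumed parameters: slightly overcoupled (beta = 1.05), Q = 1e3 for both
# branches, and hybrid eigenfrequencies split around 5 GHz.
w_a, w_b = 2 * np.pi * 5.15e9, 2 * np.pi * 4.85e9
w = np.linspace(0.9 * w_b, 1.1 * w_a, 2001)
spectrum = np.abs(s11_hybrid(w, w_a, w_b, 1.05, 1e3, 1e3))
# Two reflection dips appear, one at each hybrid eigenfrequency.
```

At critical coupling ($\beta=1$) the dips reach zero exactly on resonance, while far from both resonances $|S_{11}|\rightarrow 1$, as expected for a nearly lossless reflection.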
The same principles outlined above can also be applied to more complex resonator systems, as is the case of the two-dimensional resonator. In Fig. \[fig:S11fromQ\](c) we show the theoretical spectra for a thin-film YIG stripe coupled to a superconducting in-plane resonator. The YIG sample size is 5$\times$10$\times$50 $\mu$m$^3$ with magnetic properties as discussed in Fig. \[fig:MagPrecess\], and the eigenfrequencies were obtained as done in the previous section. We can compare this directly with the experimental data for the same system shown in Fig. \[fig:S11fromQ\](d) [@baity20], finding excellent agreement with our theoretical data. Conclusion ========== $ $ To fully understand the electromagnetic behaviour of cavity-magnon hybrid systems we have employed a versatile and self-consistent theory which is an excellent tool to estimate the coupling strength $g$ (or width of the Rabi splitting $\omega_{gap}$) without any fitting parameters. This technique allows us to describe the direct interaction between the microwave excitation vector fields and the magnetisation dynamics. The direction, profile and intensity of microwave fields can dramatically alter magnon-cavity hybridised states, i.e. cavity magnon-polaritons. The understanding presented here is particularly relevant for technological applications based on cavity spintronics. For instance, such systems are expected to aid bidirectional conversion between radio-frequency waves and light [@hisatomi16; @lambert19]. Moreover, as cavity magnon-polaritons can also couple with qubits [@Lachance-Quirion20], they are also expected to be used as an aid to quantum information processing [@tabuchi15; @lachance_quirion19]. In both cases, engineering as well as understanding the coupling are crucial steps in optimising the conversion and/or information exchange.
In our work, through perturbation theory, we are able to predict as well as gain further insight into the nature of the coupling between microwave cavities and magnons in a rigorous manner, for any resonator with any field configuration and for any sample geometry (e.g. spheres, thin films, pillars, etc.). Our theory is particularly relevant not only from a fundamental point of view but also practically, as the behaviour of the coupling must be fully understood in order to engineer and optimise cavity spintronic devices. While the constant $g$ is more often obtained from experimental data and incorporated into models, such as the circuit model [@harder16], there have been efforts to describe the coupling with no fitting parameters [@bourhill19; @zhang14]. This, however, is done through phenomenological oscillator models where the total spin number is key. The work presented here employs the magnetic susceptibility of a sphere and a rectangular prism to find the magnon-cavity coupling. The magnetic susceptibility can be found for vastly more complicated magnetic systems, where exchange interactions and dipolar interactions are important to consider. This involves solving the Landau-Lifshitz equation numerically for an array of spins, rather than for a macrospin as was done here. Accurate prediction is therefore possible for the coupling strength between electromagnetic cavity waves and magnons in samples with complicated shapes or spin orderings. Our findings show excellent agreement with recently published works towards the miniaturisation of hybrid systems and provide a new avenue to predict the coupling not only to extremely low-damping magnetic films such as YIG [@baity20], but also to highly damped metallic thin films [@li19; @hou2019]. Methods ======= \ $ $ Further to $\chi_{xx}$ – which was given in the main text as Eq. (\[Xxx\]) – the other components of the susceptibility tensor, $\overleftrightarrow{\chi}_m(\omega)$, given in Eq.
\[mhs\] are: \[XyyNXxy\] $$\begin{aligned} \chi_{yy}(\omega) = \frac{\chi_b}{1-(\omega/\omega_0)^2}\\ \chi_{xy}(\omega) = \frac{\omega\omega_m}{\omega_0^2[1-(\omega/\omega_0)^2]}. \end{aligned}$$ where $$\chi_{b} = \frac{M_s}{H_{0}+(D_y-D_z)M_s}.$$ \ $ $ For the three-dimensional rectangular resonator measurements, microwave signals were supplied by port 1 of a Rohde & Schwarz ZVA 40 vector network analyser (VNA) and signals reflected from or transmitted through the cavity were sent to port 2 of the VNA. The capacitive coupling to the cavity was tuned by adjusting the length of the SMA connector contacts, which extended into the body of the cavity. **Supporting Information** Supporting Information is available from the Wiley Online Library or from the author. **Acknowledgements** We would like to thank Tim Wolz and Isabella Boventer for their most useful discussions as well as their comments on the manuscript. This work was supported by the European Research Council (ERC) under the Grant Agreement 648011, the Initiative and Networking Fund of the Helmholtz Association, the Leverhulme Trust and the University of Glasgow through LKAS funds. D. A. Bozhko acknowledges support from the Alexander von Humboldt Foundation. R. Holland was supported by the Engineering and Physical Sciences Research Council (EPSRC) through the Vacation Internships Scheme. [10]{} \[1\][`#1`]{} E. M. Purcell, H. C. Torrey, R. V. Pound, *Phys. Rev.* **1946**, *69* 37. H. Walther, B. T. Varcoe, B.-G. Englert, T. Becker, *Reports on Progress in Physics* **2006**, *69*, 5 1325. J. H. E. Griffiths, *Nature* **1946**, *158*, 4019 670. C. Kittel, *Physical Review* **1948**, *73*, 2 155. M. Goryachev, W. G. Farr, D. L. Creedon, Y. Fan, M. Kostylev, M. E. Tobar, *Phys. Rev. Applied* **2014**, *2* 054002. X. Zhang, C.-L. Zou, L. Jiang, H. X. Tang, *Phys. Rev. Lett.* **2014**, *113* 156401. D. Zhang, X.-M. Wang, T.-F. Li, X.-Q. Luo, W. Wu, F. Nori, J. You, *npj Quantum Information* **2015**, *1*, 1 1. Y. 
Tabuchi, S. Ishino, A. Noguchi, T. Ishikawa, R. Yamazaki, K. Usami, Y. Nakamura, *Science* **2015**, *349*, 6246 405. D. Lachance-Quirion, Y. Tabuchi, A. Gloppe, K. Usami, Y. Nakamura, *Applied Physics Express* **2019**, *12*, 7 070101. X. Zhang, C.-L. Zou, N. Zhu, F. Marquardt, L. Jiang, H. X. Tang, *Nature communications* **2015**, *6* 8914. N. Crescini, D. Alesini, C. Braggio, G. Carugno, D. Di Gioacchino, C. S. Gallo, U. Gambardella, C. Gatti, G. Iannone, G. Lamanna, C. Ligi, A. Lombardi, A. Ortolan, S. Pagano, R. Pengo, G. Ruoso, C. C. Speake, L. Taffarello, *The European Physical Journal C* **2018**, *78*, 9 703. G. Flower, J. Bourhill, M. Goryachev, M. E. Tobar, *Physics of the Dark Universe* **2019**, *25* 100306. R. Hisatomi, A. Osada, Y. Tabuchi, T. Ishikawa, A. Noguchi, R. Yamazaki, K. Usami, Y. Nakamura, *Phys. Rev. B* **2016**, *93* 174427. D. Zhang, X.-Q. Luo, Y.-P. Wang, T.-F. Li, J. You, *Nature communications* **2017**, *8*, 1 1368. M. Harder, Y. Yang, B. M. Yao, C. H. Yu, J. W. Rao, Y. S. Gui, R. L. Stamps, C.-M. Hu, *Phys. Rev. Lett.* **2018**, *121* 137203. S. Kaur, B. M. Yao, J. W. Rao, Y. S. Gui, C.-M. Hu, *Applied Physics Letters* **2016**, *109*, 3 032404. H. Maier-Flaig, M. Harder, S. Klingler, Z. Qiu, E. Saitoh, M. Weiler, S. Geprägs, R. Gross, S. T. B. Goennenwein, H. Huebl, *Applied Physics Letters* **2017**, *110*, 13 132401. B. Bhoi, B. Kim, S.-H. Jang, J. Kim, J. Yang, Y.-J. Cho, S.-K. Kim, *Phys. Rev. B* **2019**, *99* 134426. X. Zhang, A. Galda, X. Han, D. Jin, V. M. Vinokur, Strong coupling-enabled broadband non-reciprocity, **2019**. I. Boventer, M. Kläui, R. Mac[ê]{}do, M. Weides, *New Journal of Physics* **2019**, *21*, 12 125001. I. Boventer, C. Dörflinger, T. Wolz, R. Macêdo, R. Lebrun, M. Kläui, M. Weides, *Phys. Rev. Research* **2020**, *2* 013154. T. Wolz, A. Stehli, A. Schneider, I. Boventer, R. Mac[ê]{}do, A. V. Ustinov, M. Kl[ä]{}ui, M. Weides, *Communications Physics* **2020**, *3*, 1 1. M. Harder, L. Bai, C. Match, J. 
Sirker, C. Hu, *Science China Physics, Mechanics [&]{} Astronomy* **2016**, *59*, 11 117511. I. Proskurin, R. Mac[ê]{}do, R. L. Stamps, *New Journal of Physics* **2019**, *21*, 9 095003. Y. Cao, P. Yan, *Phys. Rev. B* **2019**, *99* 214415. V. Cherepanov, I. Kolokolov, V. L’vov, *Physics Reports* **1993**, *229*, 3 81 . A. A. Serga, A. V. Chumak, B. Hillebrands, *Journal of Physics D: Applied Physics* **2010**, *43*, 26 264002. A. G. Gurevich, G. A. Melkov, *Magnetization oscillations and waves*, CRC press, Inc, Boca Raton, Florida, **1996**. W. Brown, *Magnetostatic Principles in ferromagnetism*, (Series of monographs on selected topics in solid state physics). North-Holland Publishing Company, **1962**. R. Camley, *Surface Science Reports* **1987**, *7*, 3 103 . R. Macêdo, R. E. Camley, *Phys. Rev. B* **2019**, *99* 014437. H. How, *Magnetic Microwave Devices. In Encyclopedia of RF and Microwave Engineering, K. Chang (Ed.)*, 2425–2461, John Wiley & Sons, Inc., Hoboken, New Jersey., ISBN 9780471654506, **2005**. A. J. Baden-Fuller, *Ferrites at microwave frequencies*, 23. Peter Perogrinus Ltd., London, United Kingdom, **1987**. A. Aharoni, *Journal of Applied Physics* **1998**, *83*, 6 3432. M. Göppl, A. Fragner, M. Baur, R. Bianchetti, S. Filipp, J. M. Fink, P. J. Leek, G. Puebla, L. Steffen, A. Wallraff, *Journal of Applied Physics* **2008**, *104*, 11 113904. J. Bourhill, V. Castel, A. Manchec, G. Cochet, Spectroscopy of magnetic materials for universal characterisation of cavity-magnon polariton coupling strength, **2019**. R. Waldron, *Proceedings of the IEE-Part B: Radio and Electronic Engineering* **1957**, *104*, 6S 307. D. Pozar, *Microwave Engineering, 4th Edition*, John Wiley & Sons, Inc, New York, **2011**. R. Miller, T. E. Northup, K. M. Birnbaum, A. Boca, A. D. Boozer, H. J. Kimble, *Journal of Physics B: Atomic, Molecular and Optical Physics* **2005**, *38*, 9 S551. J. T. Hou, L. Liu, *Phys. Rev. Lett.* **2019**, *123* 107702. Y. Li, T. 
Polakovic, Y.-L. Wang, J. Xu, S. Lendinez, Z. Zhang, J. Ding, T. Khaire, H. Saglam, R. Divan, J. Pearson, W.-K. Kwok, Z. Xiao, V. Novosad, A. Hoffmann, W. Zhang, *Phys. Rev. Lett.* **2019**, *123* 107701. D. Han, Y. Kim, M. Kwon, *Review of Scientific Instruments* **1996**, *67*, 6 2179. A. Luiten, *Q-Factor Measurements. In Encyclopedia of RF and Microwave Engineering, K. Chang (Ed.)*, 3948–3964, John Wiley & Sons, Inc., Hoboken, New Jersey., ISBN 9780471654506, **2005**. S. Probst, F. B. Song, P. A. Bushev, A. V. Ustinov, M. Weides, *Review of Scientific Instruments* **2015**, *86*, 2 024706. P. Baity, $et.$ $al.$, *Unpublished* **2020**. N. J. Lambert, A. Rueda, F. Sedlmeir, H. G. L. Schwefel, *Advanced Quantum Technologies* **2020**, *3*, 1 1900077. D. Lachance-Quirion, S. P. Wolski, Y. Tabuchi, S. Kono, K. Usami, Y. Nakamura, *Science* **2020**, *367*, 6476 425. Supplemental Material {#supplemental-material .unnumbered} ===================== Supplemental 1: Coplanar waveguide Resonator Construction in COMSOL {#supplemental-1-coplanar-waveguide-resonator-construction-in-comsol .unnumbered} ------------------------------------------------------------------- ![(a) COMSOL model of a coplanar waveguide resonator. (b) Expanded view of the couplers at one of the ends of the centre conductor showing the in-plane intensity of magnetic field. (c) Expanded view of the centre conductor at the sample position showing the intensity on the oscillating magnetic field as well as the field direction. At the position where the sample is placed (directly above the centre conductor) the field is uniform and along the $x$ axis - which we have termed in our perturbation theory $h_{0x}$. []{data-label="fig:Sup1"}](Sup1.png){width="0.98\linewidth"} COMSOL 5.5 was used to model the CPW shown in Fig. \[fig:Sup1\](a). For this model, an eigenfrequency study in the RF module was computed. The geometry displayed in Fig. 
\[fig:Sup1\](a) consists of a substrate block (12.17 $\times$ 1.0 $\times$ 0.5) mm, an enclosing air block (13.17 $\times$ 1.0 $\times$ 1.0) mm and a 2D work plane on which the CPW pattern was drawn. The central conductor had a width of 20 $\mu$m, and the gaps had a width of 10 $\mu$m. 11.17 mm of conductor was placed between the two couplers. The substrate was modelled as silicon with a relative permittivity of 11.7, relative permeability of 1 and conductivity of 1.0$\times$10$^{-12}$ S/m. The enclosure was modelled as air with a relative permittivity and relative permeability of 1 and conductivity of 0. The metallized surface was modelled as a lossless perfect electrical conductor. The conductor was excited by a multielement uniform lumped port. The minimum mesh size was set to be half the width of the couplers, 2 $\mu$m. A frequency-domain step was added to the study to allow for the calculation of S-parameters. This study identified a resonance of this structure at 5.07 GHz. The strength of the magnetic field, $\mathbf{H_0}$, at varying heights and positions, as well as a volume integration of the magnetic field energy, $W_c$, was also found. [^1]: In the most general case, where the excitation vector fields lie in both $x$ and $y$ directions, $W_p$ should be given by: $$W_p = \displaystyle\int_{\delta v}[|h_{cx}|^2+|h_{cy}|^2+j(h_{cy}h_{cx}^*-h_{cx}h_{cy}^*)]dv.$$
--- abstract: 'A Wigner function representation of multi-band quantum transport theory is developed in this paper. The equations are derived using non-equilibrium Green’s function formulation with the generalized Kadanoff-Baym ansatz and the multi-band $\bf{k.p}$ Hamiltonian including spin. The results are applied to a two-band resonant inter-band tunneling structure.' address: - 'Department of Physics and Engineering Physics, Stevens Institute of Technology, Hoboken, New Jersey 07030' - 'Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, North Carolina 27695' author: - 'Mehmet Burcin Unlu, Bernard Rosen, and Hong-Liang Cui' - Peiji Zhao title: 'Multi-band Wigner Function Formulation of Quantum Transport' --- Wigner function equation, multi-band semiconductor systems 72.10.-d, 79.60.Jv Introduction ============ The single band approximation [@key-25] is the most often used approach in quantum device models. In this approximation, inter-band processes in the structure are ignored and the boundary conditions for the model are oversimplified. Single band electron transport models have been applied to large band-gap semiconductor heterostructures (e.g. AlAs/GaAs/AlAs). Multi-band quantum transport has attracted attention due to the existence of various inter-band tunneling structures and especially resonant inter-band tunneling structures (RITSs). It is possible to achieve multiple negative differential resistance (NDR) regions in these structures, and they have been shown to exhibit high peak-to-valley ratios in their current-voltage characteristics. RITSs are based upon the type-I, the type-II staggered and the type-II broken-gap band alignments. Recently [@key-5], it was theoretically shown that a type-II staggered band-gap resonant tunneling diode can exhibit oscillations in the THz region. A significant amount of inter-band current can be present in a staggered band-gap structure.
The coupling between the conduction and the valence bands is considered to be the dominant mechanism in these structures. Therefore, for a correct description of electron transport, multi-band effects must be included [@key-6]. We do not consider an external magnetic field, but the extension is quite straightforward. For direct band-gap semiconductors the conduction band near $k=0$ has the same symmetry properties (spherical symmetry) as the $|S>$ atomic orbital ($l=0,m_{l}=0$). On the other hand, the valence band near $k=0$ has the symmetry of the p-orbitals, $|X>,|Y>$ and $|Z>$ (p-orbitals are antisymmetric and $l=1,m_{l}=-1,0,1$). An eight-band model [@key-7] is obtained with the inclusion of spin, under which these states become doubly degenerate: $|S\uparrow>,|X\uparrow>,|Y\uparrow>,|Z\uparrow>,|S\downarrow>,|X\downarrow>,|Y\downarrow>,|Z\downarrow>$. The spin-orbit interaction lifts the six-fold degeneracy of the valence band and splits it into a four-fold degenerate and a two-fold degenerate level. If the spin-orbit coupling is considered in the energy band calculation of a semiconductor, the Bloch states become a mixture of spin up and spin down states. This becomes important in asymmetric quantum well devices. Inversion asymmetry of the bulk or the confining potential causes spin splitting even in zero magnetic field due to the spin-orbit interaction. Therefore the inclusion of spin and the spin-orbit interaction in the quantum transport equations is important. Derivations of quantum transport equations have usually been based on the first-order gradient expansion [@key-30]. This approximation is based on the assumption that the “fast” quantum variations can be separated from the “slow” macroscopic variations, and it causes the loss of information related to quantum processes such as interference and tunneling, which are crucial in nano-scale devices.
Buot and Jensen [@key-25], [@key-8] presented an alternative derivation for single-band devices and provided an exact integral form of the quantum transport equation which is capable of accurately describing full quantum effects. The first-order gradient expansion is still needed to simplify the collision terms after the derivation of the transport equations. Their approach has been generalized to multi-band transport in this work. The Wigner function modeling of quantum transport in single-band resonant tunneling structures has been quite popular in the literature due to its success in dealing with dissipation and open boundary conditions [@key-25]. Similarly, it is expected that one should be able to model multi-band transport in resonant tunneling structures using the Wigner function. There are a number of works on multi-band Wigner function representations of quantum transport in the literature. Miller and Neikirk [@key-9] used the Wigner function for multi-subband transport in double barrier resonant tunneling structures. Demei et al. presented a multi-band Wigner function formulation without spin [@key-10], [@key-11]. Zhao et al. [@key-12] showed that the multi-band quantum transport equations can be decoupled to reduce the number of equations to be solved. Borgioli [@key-13] employed a two-band Kane model to derive Wigner function equations for resonant inter-band tunneling diodes. The aim of this work is to develop a complete theory of the multi-band Wigner function for transport in nano-scale devices. This has been accomplished by using the non-equilibrium Green’s function methodology, which is known to be the most complete description of quantum transport. The results give us the Wigner function formulation of multi-band systems based on ${\bf k.p}$ theory. The results can be easily simplified by symmetry arguments from the band structure of the system under study.
The derived multi-band Wigner function equations, which are also capable of describing zero-magnetic-field spin transport devices, are the first in the literature. In subsection 1.1 of the introduction we present preliminaries on the $\bf{k.p}$ method; then, in subsection 1.2, the non-equilibrium Green’s function method in phase space is given. We derive the Wigner function equations for multi-band systems in section 2. Finally, in section 3, we apply the formalism to a simple one-dimensional two-band resonant inter-band tunneling diode. $\bf{k.p}$ Hamiltonian ---------------------- The Schrödinger equation for the lattice periodic part of the Bloch functions can be written as [@key-24] $$\begin{aligned} [\frac{\hat{p}^{2}}{2m_{0}}+V({\bf r})+\frac{\hbar^{2}k^{2}}{2m_{0}}+\frac{\hbar}{m_{0}}{\bf k}.\hat{{\bf p}} +\frac{\hbar}{4m_{0}^{2}c^{2}}({\bf {\bf \hat{{\bf \sigma}}}}\times\bigtriangledown V).\hat{{\bf p}}\nonumber \\+\frac{\hbar^{2}}{4m_{0}^{2}c^{2}}({\bf \hat{{\bf \sigma}}}\times\bigtriangledown V).{\bf k}]|n{\bf k}>=\varepsilon_{n{\bf k}}|n{\bf k}>. \end{aligned}$$ We express the bulk band matrix element $H_{ab}$ of the Hamiltonian in second-order $\bf{k.p}$ theory as $$H_{ab}=D_{ab}^{(2)\alpha\gamma}k_{\alpha}k_{\gamma}+D_{ab}^{(1)\alpha}k_{\alpha}+(D_{aa}^{(0)}+V_{a}({\bf r}))\delta_{ab}$$ where the indices $\alpha$ and $\gamma$ are summed over $x$, $y$, and $z$, and $a$, $b$ include both the band and the spin indices. $V({\bf r})$ is a spin-independent self-consistent potential. Note that for heterostructures $k_{\alpha}$ is replaced by $-i{\bf \bigtriangledown}_{\alpha}$.
We define a vector $\pi$ as $$\mathbf{\pi}=\mathbf{p}+\frac{\hbar}{4m_{0}c^{2}}(\bf{\hat{\sigma}}\times\mathbf{\bigtriangledown}V).$$ So, $$D_{ab}^{(2)\alpha\gamma}=\frac{\hbar^{2}}{2m_{0}}\delta_{ab}\delta_{\alpha\gamma}+(\frac{\hbar}{m_{0}})^{2}\sum_{r}\frac{\pi_{ar}^{\alpha}\pi_{rb}^{\gamma}}{(\frac{E_{a}+E_{b}}{2}-E_{r})},$$ noting that the second term arises from Löwdin renormalization and is needed to include the interactions with remote bands; we denote these remote states by the index $r$. These interactions are usually ignored in Kane models, so the $D_{ab}^{(2)\alpha\gamma}(a\neq b)$ terms vanish. The part of $H_{ab}$ linear in $k$ includes the inter-band coupling ($\bf{k.p}$ interaction) and the spin-orbit interaction terms, $$D_{ab}^{(1)\alpha}=\frac{\hbar}{m_{0}}\pi_{ab}^{\alpha}=\frac{\hbar}{m_{0}}p_{ab}^{\alpha}+\frac{\hbar^{2}}{4m_{0}^{2}c^{2}}(\hat{\mathbf{\sigma}}\times\mathbf{\bigtriangledown}V)_{ab}^{\alpha}$$ where $$\pi_{ab}^{\alpha}=<U_{a}|\hat{p}^{\alpha}|U_{b}>+\frac{\hbar}{4m_{0}c^{2}}<U_{a}|(\hat{\mathbf{\sigma}}\times{\bf \bigtriangledown}V)^{\alpha}|U_{b}>$$ for $a\neq b$. Note that $\pi_{ba}^{\alpha}=(\pi_{ab}^{\alpha})^{*}$ and $\pi_{aa}^{\alpha}=0$ (which implies that the $D_{aa}^{(1)\alpha}$ terms vanish). The terms $p_{ab}^{\alpha}=<U_{a}|\hat{p}^{\alpha}|U_{b}>$ are the inter-band momentum matrix elements and measure the strength of the coupling between the various bands. Note that $\pi_{aa}^{\alpha}=0$ even if the band minimum is at some point other than ${\bf k}=0$. The term $\frac{\hbar}{m_{0}}p_{ab}^{\alpha}$ is usually written in terms of a real parameter $P$ originally defined by Kane; the value of this parameter is known for any given material. $$P=-\frac{i\hbar}{m_{0}}<S|\hat{p}_{x}|X>=-\frac{i\hbar}{m_{0}}<S|\hat{p}_{y}|Y>=-\frac{i\hbar}{m_{0}}<S|\hat{p}_{z}|Z>.$$ The band edge is denoted by $D_{aa}^{(0)}$, such that $D_{aa}^{(0)}=E_{a}(\mathbf{k}=0)$. We write $D_{aa}^{(0)}+V_{a}({\bf r})$ as $E_{a}({\bf r})$ in the calculations.
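As a concrete illustration of the structure of $H_{ab}$, the sketch below assembles a minimal two-band (conduction-valence) matrix for a single $k$ along $x$, with the remote-band ($r$) terms dropped and units chosen so that $\hbar^2/2m_0=1$. The numerical values of $E_c$, $E_v$, and $P$, and the $\pm i$ phase convention for the Kane coupling, are illustrative assumptions, not material parameters:

```python
import numpy as np

def kane_two_band(k, E_c, E_v, P):
    """Two-band H_ab = (hbar^2 k^2 / 2 m0) delta_ab + D^(1) k + E_a delta_ab,
    with remote-band (Lowdin) terms dropped; units with hbar^2/2m0 = 1."""
    kin = k**2                        # free-electron term, same in both bands
    return np.array([[E_c + kin, 1j * P * k],
                     [-1j * P * k, E_v + kin]])

# Illustrative (hypothetical) band edges and Kane coupling.
H = kane_two_band(0.1, 1.5, 0.0, 9.0)
evals = np.linalg.eigvalsh(H)         # real band energies, ascending
```

Because $\pi_{ba}^{\alpha}=(\pi_{ab}^{\alpha})^{*}$, the matrix is hermitian by construction, and the off-diagonal $Pk$ coupling repels the two bands, widening the gap at finite $k$ relative to the bare band-edge separation.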
The non-equilibrium Green’s function formalism in phase space ------------------------------------------------------------- The multi-band Green’s function [@key-2] is defined by $$G_{ab}(1,2)=-\frac{i}{\hbar}<\psi_{a}(1)\psi_{b}^{\dagger}(2)>_{C}$$ where $C$ denotes that the time arguments lie on a contour rather than on the real-time axis. The expectation value is defined in a grand-canonical ensemble. We define the space-time arguments $1=({\bf \mathbf{r}_{1},t_{1})}$, $2=({\bf {\bf \mathbf{r}}_{2},t_{2})}$. $\psi_{a}$ is the field operator for electrons. The indices $a$ and $b$ include both the band and the spin indices. The equation of motion of the band electron Green’s function is written as (sum over repeated indices) $$[i\hbar\delta_{a\beta}\frac{\partial}{\partial t_{1}}-H_{a\beta}(1)]G_{\beta b}(1,2)=\delta_{ab}\delta(1-2)+\int d3\Sigma_{a\beta}(1,3)G_{\beta b}(3,2),$$ and the adjoint equation is given by $$G_{a\beta}(1,2)[-i\hbar\delta_{\beta b}\frac{\partial}{\partial t_{2}}-H_{\beta b}(2)]=\delta_{ab}\delta(1-2)+\int d3G_{a\beta}(1,3)\Sigma_{\beta b}(3,2),$$ where the self-energy is denoted by $\Sigma(1,2)$. It describes the scattering of electrons by other electrons, phonons and impurities. Throughout the paper, Greek indices are used to denote the repeated indices to be summed over. The generalized Kadanoff-Baym (GKB) equation describes the time evolution of the electron correlation function $G_{aa}^{<}(1,2)$ in the band $a$. It should be noted that, in the equal-time limit ($t_{1}=t_{2}$), the off-diagonal ($a\neq b$) lesser Green’s functions correspond to inter-band polarizations in energy-band space and inter-spin-band polarizations in spin space, whereas the diagonal Green’s functions give the spin-up and spin-down particle densities in each band. 
The time evolution of $G^{<}(1,2)$ given by the GKB equation can be written, using the Langreth algebra, as $$\begin{aligned} i\hbar(\frac{\partial}{\partial t_{1}}+\frac{\partial}{\partial t_{2}})G_{ab}^{<}(1,2) & = & [H,G^{<}](1,2)_{ab}+[\Sigma^{<},\textrm{Re}G^{R}](1,2)_{ab}\nonumber \\ & & +\frac{i}{2}\{\Sigma^{<},A\}(1,2)_{ab}-\frac{i}{2}\{\Gamma,G^{<}\}(1,2)_{ab},\label{gkb1}\end{aligned}$$ where $[\,,\,]$ denotes the commutator and $\{\,,\,\}$ the anticommutator. Above, the spectral function is defined as $$A_{ab}(1,2)=i[G_{ab}^{>}(1,2)-G_{ab}^{<}(1,2)]=-2\textrm{Im}[G_{ab}^{R}(1,2)],$$ and the dissipation function is $$\Gamma_{ab}(1,2)=i[\Sigma_{ab}^{>}-\Sigma_{ab}^{<}]=-2\textrm{Im}[\Sigma_{ab}^{R}(1,2)].\label{deneme}$$ Switching to center-of-mass and relative coordinates (Wigner coordinates) is done by defining $$\mathbf{R}=\frac{\mathbf{{\bf \mathbf{r}_{1}+\mathbf{r}_{2}}}}{2};T=\frac{ t_{1}+t_{2}}{2},$$ $$\mathbf{v}={\mathbf r}_{2}-{\mathbf r}_{1};t=t_{2}-t_{1}.$$ The four-dimensional (3+1) crystal momentum and its conjugate lattice coordinate are represented as $p=({\bf \mathbf{p}},E)$, $r=(\mathbf{R},T)$. Note that we use $\hbar\mathbf{k}$ as the crystal momentum when the matrix elements of the Hamiltonian are considered; therefore $\hbar{\bf \mathbf{k}}$ and $\mathbf{p}$ are used interchangeably for the crystal momentum. The Weyl-Wigner representation, $W[\hat{O}]=O(p,r)$, of any operator $\hat{O}(1,2)$ [@key-14], [@key-15] is defined by $$O(p,r)=\int dv\exp(\frac{i}{\hbar}p.v)<R-\frac{v}{2}|\hat{O}|R+\frac{v}{2}>.$$ It is very important to note that $O(p,r)$ is real if $\hat{O}$ is hermitian. 
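To make the Weyl-Wigner representation concrete, the following sketch (our own, in units with $\hbar=1$ by default; the grid and the test operator are arbitrary choices) discretizes the transform above and checks numerically that $O(p,r)$ is real for a hermitian $\hat{O}$.

```python
import numpy as np

def wigner_transform(O, dx, hbar=1.0):
    """Discrete Weyl-Wigner transform O(p, R) of an operator matrix
    O[i, j] = <x_i|O|x_j> on a uniform grid. The relative coordinate v
    is sampled at even separations v = 2 m dx, the simplest
    discretization of the integral over v."""
    N = O.shape[0]
    W = np.zeros((N, N), dtype=complex)   # indices: (R grid point, p grid point)
    for j in range(N):                    # center point R = x_j
        mmax = min(j, N - 1 - j)
        # matrix elements <R - v/2| O |R + v/2>
        corr = np.array([O[j - m, j + m] for m in range(-mmax, mmax + 1)])
        v = 2.0 * dx * np.arange(-mmax, mmax + 1)
        for k, p in enumerate(np.fft.fftfreq(N, d=dx) * 2 * np.pi * hbar):
            W[j, k] = np.sum(np.exp(1j * p * v / hbar) * corr) * 2 * dx
    return W

# A hermitian test operator: projector onto a Gaussian wave packet
x = np.linspace(-5, 5, 64)
psi = np.exp(-x**2 / 2); psi /= np.linalg.norm(psi)
O = np.outer(psi, psi.conj())            # hermitian by construction
W = wigner_transform(O, x[1] - x[0])
assert np.max(np.abs(W.imag)) < 1e-10    # O(p, r) is real for hermitian O
```

The reality follows because the $\pm v$ contributions pair into complex conjugates when $\hat{O}$ is hermitian, which is exactly what the assertion checks.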
Let $\hat{C}=\hat{A}\hat{B}$; then the differential form of the Weyl transform of the product of two operators is $$\begin{aligned} W[\hat{C}]=C(p,r) & = & \exp(i\hat{\Lambda})A(p,r)B(p,r)\nonumber \\ & = & \exp(-i\hat{\Lambda})B(p,r)A(p,r),\end{aligned}$$ where $$\hat{\Lambda}=\frac{\hbar}{2}[\frac{\partial^{(A)}}{\partial r}\frac{\partial^{(B)}}{\partial p}-\frac{\partial^{(A)}}{\partial p}\frac{\partial^{(B)}}{\partial r}].$$ The partial derivative $\partial^{(A)}$ acts only on $A$ and $\partial^{(B)}$ only on $B$. To obtain the Wigner function equation, it is necessary to switch to a phase-space description of the GKB equation. Taking the Weyl-Wigner transform of both sides of the equation (\[gkb1\]) gives the GKB equation in the phase-space-energy-time domain $$\begin{aligned} i\hbar\frac{\partial}{\partial T}G_{ab}^{<}(p,r) & = & \exp(i\hat{\Lambda})[H,G^{<}](p,r)_{ab}+\exp(i\hat{\Lambda})[\Sigma^{<},ReG^{R}](p,r)_{ab}\nonumber \\ & & +\frac{i}{2}\exp(i\hat{\Lambda})\{\Sigma^{<},A\}(p,r)_{ab}-\frac{i}{2}\exp(i\hat{\Lambda})\{\Gamma,G^{<}\}(p,r)_{ab}.\label{gkb2}\end{aligned}$$ For any operators $A$ and $B$, the integral representations of $\exp(i\hat{\Lambda})A(p,r)B(p,r)$ and $\exp(-i\hat{\Lambda})A(p,r)B(p,r)$ in (3+1) dimensions can be written as [@key-2] $$\begin{aligned} \exp(\pm i\hat{\Lambda})A(p,r)B(p,r) & = & \frac{1}{(h^{4})^{2}}\int dr_{1}dp_{1}dr_{2}dp_{2}\exp[\frac{i}{\hbar}p_{1}.(r-r_{2})]\exp[\frac{i}{\hbar}r_{1}.(p-p_{2})]\nonumber \\ & & \times A(p\pm\frac{p_{1}}{2},r\mp\frac{r_{1}}{2})B(p_{2},r_{2}).\label{integ rep1}\end{aligned}$$ Defining $$K_{A}^{\pm}(p,r-r_{2};r,p-p_{2})=\int dp_{1}dr_{1}\exp(\frac{i}{\hbar}p_{1}.(r-r_{2}))\exp(\frac{i}{\hbar}r_{1}.(p-p_{2}))A(p\pm\frac{p_{1}}{2},r\mp\frac{r_{1}}{2}),$$ the equation (\[gkb2\]) becomes $$\begin{aligned} i\hbar\frac{\partial}{\partial T}G_{ab}^{<}(p,r) & = & \frac{1}{(h^{4})^{2}}\int dp_{2}dr_{2}K_{H_{a\beta}}^{c}(p,r-r_{2};r,p-p_{2})G_{\beta 
b}^{<}(p_{2},r_{2})\nonumber \\ & & +\frac{1}{(h^{4})^{2}}\int dp_{2}dr_{2}K_{\Sigma_{a\beta}^{<}}^{c}(p,r-r_{2};r,p-p_{2})ReG_{\beta b}^{R}(p_{2},r_{2})\nonumber \\ & & +\frac{i}{2(h^{4})^{2}}\int dp_{2}dr_{2}K_{\Sigma_{a\beta}^{<}}^{s}(p,r-r_{2};r,p-p_{2})A_{\beta b}(p_{2},r_{2})\nonumber \\ & & -\frac{i}{2(h^{4})^{2}}\int dp_{2}dr_{2}K_{\Gamma_{a\beta}}^{s}(p,r-r_{2};r,p-p_{2})G_{\beta b}^{<}(p_{2},r_{2}),\label{gkb3}\end{aligned}$$ where $K_{A}^{s,c}(p,r-r_{2};r,p-p_{2})=K_{A}^{+}(p,r-r_{2};r,p-p_{2})\pm K_{A}^{-}(p,r-r_{2};r,p-p_{2}).$ The multi-band Wigner function is found by taking the energy integral of the Weyl-Wigner transformed $G_{ab}^{<}$: $$f_{ab}(\mathbf{p},\mathbf{R},T)=\int dE(-i)G_{ab}^{<}(\mathbf{p},E;\mathbf{R},T).\label{wigner1}$$ Note that the indices $a$, $b$ include both spin and band. The total Wigner function of the multi-band system with spin can be written as the summation over the band and the spin indices, $$f(\mathbf{p},\mathbf{r},\kappa)=\sum_{c,d}\sum_{m,m'=\uparrow,\downarrow}\sigma_{m,m'}^{\kappa}f_{cd}^{mm'}(\mathbf{p},\mathbf{r}),$$ where $c$ and $d$ are band indices, and $m$ and $m^{'}$ are spin indices. $\kappa$ takes the values $0$, $x$, $y$, and $z$; $\sigma^{0}$ is the unit matrix and the others are the Pauli spin matrices [@key-16]. Therefore each intra-band and inter-band component of the Wigner function becomes a $2\times2$ matrix in spin space. 
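As a small self-contained check of this spin decomposition (our own sketch, not from the paper), one can decompose a $2\times2$ spin-resolved matrix at a single phase-space point into its $\kappa$ components and invert the decomposition using Pauli orthogonality; the numerical values below are arbitrary.

```python
import numpy as np

# sigma^0 is the unit matrix, the others are the Pauli spin matrices
sigma = {
    "0": np.eye(2, dtype=complex),
    "x": np.array([[0, 1], [1, 0]], dtype=complex),
    "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_components(f_spin):
    """f(kappa) = sum_{m,m'} sigma^kappa_{m,m'} f^{mm'} for one 2x2
    spin-resolved Wigner (or density) matrix."""
    return {k: np.sum(s * f_spin) for k, s in sigma.items()}

def spin_matrix(components):
    """Invert the decomposition, f^{mm'} = (1/2) sum_kappa
    conj(sigma^kappa_{m,m'}) f(kappa), using Pauli orthogonality."""
    return 0.5 * sum(np.conj(s) * components[k] for k, s in sigma.items())

# spin-up population 0.7, spin-down 0.3, a small spin coherence
f = np.array([[0.7, 0.1 - 0.05j], [0.1 + 0.05j, 0.3]])
comp = pauli_components(f)
assert np.isclose(comp["0"], 1.0)        # charge component = total density
assert np.isclose(comp["z"], 0.4)        # spin polarization along z
assert np.allclose(spin_matrix(comp), f) # decomposition is invertible
```

The off-diagonal entries of `f` play the role of the inter-spin-band polarizations mentioned earlier, while `comp["0"]` and `comp["z"]` carry the charge and spin densities.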
The Wigner Function Equations for Multi-band Systems ==================================================== Under the assumption that the self-energies are slowly varying with respect to the center-of-mass coordinates, equation (\[gkb3\]) reduces to $$\begin{aligned} i\hbar\frac{\partial}{\partial T}G_{ab}^{<}(p,r) & = & \frac{1}{(h^{4})^{2}}\int dp_{2}dr_{2}K_{H_{a\beta}}^{c}(p,r-r_{2};r,p-p_{2})G_{\beta b}^{<}(p_{2},r_{2})\nonumber \\ & & +\Sigma_{a\beta}^{>}(p,r)G_{\beta b}^{<}(p,r)-\Sigma_{a\beta}^{<}(p,r)G_{\beta b}^{>}(p,r),\label{eq:0.2.1}\end{aligned}$$ using $$i[\Sigma_{a\beta}^{<}A_{\beta b}-\Gamma_{a\beta}G_{\beta b}^{<}]=\Sigma_{a\beta}^{>}G_{\beta b}^{<}-\Sigma_{a\beta}^{<}G_{\beta b}^{>}.$$ The self-energy function can be written as [@key-17] $$\Sigma_{ab}^{>,<}(1,2)=iG_{ab}^{>,<}(1,2)D^{>,<}(1,2).\label{selfenergy1}$$ The Weyl-Wigner transform of the above equation (\[selfenergy1\]) gives $$\Sigma_{ab}^{>,<}(p,r)=\frac{i}{h^{4}}\int dqG_{ab}^{>,<}(p+q)D^{>,<}(q).$$ Assuming the phonon bath is in equilibrium, the Fourier transforms of the phonon Green’s functions can be written as $$D^{<}(\mathbf{q},E^{'})=-ihM_{\mathbf{q}}^{2}[(N_{\mathbf{q}}+1)\delta(E^{'}-\Omega_{\mathbf{q}})+N_{\mathbf{q}}\delta(E^{'}+\Omega_{\mathbf{q}})],$$ $$D^{>}(\mathbf{q},E^{'})=-ihM_{\mathbf{q}}^{2}[(N_{\mathbf{q}}+1)\delta(E^{'}+\Omega_{\mathbf{q}})+N_{\mathbf{q}}\delta(E^{'}-\Omega_{\mathbf{q}})],$$ where $M_{\mathbf{q}}$ is the electron-phonon scattering matrix element. 
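The equilibrium occupation $N_{\mathbf{q}}$ and the emission/absorption weights $N_{\mathbf{q}}+\frac{1}{2}\pm\frac{1}{2}\eta$ that these propagators produce in the scattering functions can be tabulated directly. A small sketch (ours; the 36 meV phonon energy is an illustrative choice, not taken from the paper):

```python
import numpy as np

kB = 1.380649e-23  # J/K

def bose_einstein(Omega, T):
    """Equilibrium phonon occupation N_q for phonon energy Omega (J)."""
    return 1.0 / np.expm1(Omega / (kB * T))

def scattering_weights(Omega, T):
    """Weights N_q + 1/2 +- eta/2 entering Sigma^< and Sigma^> for the
    two branches eta = +1 and eta = -1 of the energy delta functions."""
    N = bose_einstein(Omega, T)
    lesser = {+1: N + 1.0, -1: N}    # N + 1/2 + eta/2
    greater = {+1: N, -1: N + 1.0}   # N + 1/2 - eta/2
    return lesser, greater

Omega = 36e-3 * 1.6022e-19           # illustrative optical-phonon energy
lesser, greater = scattering_weights(Omega, 300.0)
N = bose_einstein(Omega, 300.0)
# detailed balance: (N + 1)/N = exp(Omega / kB T)
assert np.isclose((N + 1.0) / N, np.exp(Omega / (kB * 300.0)))
```

The swap of the $N_{\mathbf{q}}$ and $N_{\mathbf{q}}+1$ factors between `lesser` and `greater` mirrors the swap of the delta-function arguments between $D^{<}$ and $D^{>}$ above.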
Therefore, inclusion of the phonon scattering gives the following scattering functions: $$\Sigma_{ab}^{<}=\sum_{\eta=+1,-1}\frac{1}{h^{3}}\int d{\bf q}G_{ab}^{<}(\mathbf{p}+\mathbf{q},E+\eta\Omega_{\mathbf{q}},r)M_{\mathbf{q}}^{2}(N_{\mathbf{q}}+\frac{1}{2}+\frac{1}{2}\eta),$$ $$\Sigma_{ab}^{>}=\sum_{\eta=+1,-1}\frac{1}{h^{3}}\int d{\bf q}G_{ab}^{>}(\mathbf{p}+\mathbf{q},E+\eta\Omega_{\mathbf{q}},r)M_{\mathbf{q}}^{2}(N_{{\bf \mathbf{q}}}+\frac{1}{2}-\frac{1}{2}\eta).$$ The first term on the right-hand side of the equation (\[eq:0.2.1\]) can be written as $$\exp(i\Lambda)[H,G^{<}](p,r)_{ab}=\exp(i\Lambda)H_{a\beta}(p,r)G_{\beta b}^{<}(p,r)-\exp(-i\Lambda)H_{\beta b}(p,r)G_{a\beta}^{<}(p,r).$$ The integral representation, using the equation (\[integ rep1\]), becomes $$\begin{aligned} \exp(i\Lambda)[H,G^{<}](p,r)_{ab} & = & \frac{1}{(h^{4})^{2}}\int dr_{1}dp_{1}dr_{2}dp_{2}\exp(\frac{i}{\hbar}p_{1}.(r-r_{2}))\exp(\frac{i}{\hbar}r_{1}.(p-p_{2}))\nonumber \\ & & \times[H_{a\beta}({\bf p}+\frac{{\bf p}_{1}}{2},{\bf r}-\frac{{\bf r}_{1}}{2})G_{\beta b}^{<}(p_{2},r_{2})\nonumber \\ & & -H_{\beta b}({\bf p}-\frac{{\bf p}_{1}}{2},{\bf r}+\frac{{\bf r_{1}}}{2})G_{a\beta}^{<}(p_{2},r_{2})],\end{aligned}$$ where $$\begin{aligned} H_{ab}(p\pm\frac{p_{1}}{2},r\mp\frac{r_{1}}{2}) & = & D_{ab}^{(2)\alpha\gamma}(p_{\alpha}\pm\frac{p_{1\alpha}}{2})(p_{\gamma}\pm\frac{p_{1\gamma}}{2})\nonumber \\ & & +D_{ab}^{(1)\alpha}(p_{\alpha}\pm\frac{p_{1\alpha}}{2})+(D_{aa}^{(0)}+V_{a}(\mathbf{r}\mp\frac{\mathbf{r}_{1}}{2}))\delta_{ab}.\end{aligned}$$ At this point, since the purpose of the paper is to derive a Boltzmann-type transport equation, it is useful to make the quasiparticle approximation to get the form of the spectral function. 
The free generalized Kadanoff-Baym (FGKB) ansatz for multi-band systems is stated as [@key-18] $$-iG_{ab}^{<}(\mathbf{p},E,\mathbf{r},T)=hf_{ab}(\mathbf{p},\mathbf{r},T)\delta(E-\frac{E_{a}(\mathbf{p})+E_{b}({\bf \mathbf{p}})}{2}),$$ $$iG_{ab}^{>}(\mathbf{p},E,\mathbf{r},T)=h(\delta_{ab}-f_{ab}(\mathbf{p},\mathbf{r},T))\delta(E-\frac{E_{a}(\mathbf{p})+E_{b}(\mathbf{p})}{2}).$$ Using the FGKB ansatz, one can simplify the scattering functions so that the equation of motion for $G^{<}$ in the phase-space-energy-time domain becomes $$\begin{aligned} i\hbar\frac{\partial}{\partial T}G_{ab}^{<}(\mathbf{p},E,\mathbf{r},T) & = & D_{a\beta}^{(2)\alpha\gamma}p_{\alpha}p_{\gamma}G_{\beta b}^{<}-D_{\beta b}^{(2)\alpha\gamma}p_{\alpha}p_{\gamma}G_{a\beta}^{<}\nonumber \\ & & +\frac{\hbar}{2i}D_{a\beta}^{(2)\alpha\gamma}[p_{\alpha}\frac{\partial}{\partial r_{\gamma}}+p_{\gamma}\frac{\partial}{\partial r_{\alpha}}]G_{\beta b}^{<}+\frac{\hbar}{2i}D_{\beta b}^{(2)\alpha\gamma}[p_{\alpha}\frac{\partial}{\partial r_{\gamma}}+p_{\gamma}\frac{\partial}{\partial r_{\alpha}}]G_{a\beta}^{<}\nonumber \\ & & -\frac{\hbar^{2}}{4}D_{a\beta}^{(2)\alpha\gamma}\frac{\partial}{\partial r_{\alpha}}\frac{\partial}{\partial r_{\gamma}}G_{\beta b}^{<}+\frac{\hbar^{2}}{4}D_{\beta b}^{(2)\alpha\gamma}\frac{\partial}{\partial r_{\alpha}}\frac{\partial}{\partial r_{\gamma}}G_{a\beta}^{<}\nonumber \\ & & +D_{a\beta}^{(1)\alpha}[p_{\alpha}+\frac{\hbar}{2i}\frac{\partial}{\partial r_{\alpha}}]G_{\beta b}^{<}-D_{\beta b}^{(1)\alpha}[p_{\alpha}-\frac{\hbar}{2i}\frac{\partial}{\partial r_{\alpha}}]G_{a\beta}^{<}\nonumber \\ & & +\delta_{a\beta}\frac{1}{h^{3}}\int dp_{2}dr_{1}\exp(\frac{i}{\hbar}r_{1}.(p-p_{2}))[D_{aa}^{(0)}+V_{aa}({\bf r}-\frac{{\bf r}_{1}}{2})]G_{\beta b}^{<}(p_{2},r)\nonumber \\ & & -\delta_{\beta b}\frac{1}{h^{3}}\int dp_{2}dr_{1}\exp(\frac{i}{\hbar}r_{1}.(p-p_{2}))[D_{\beta\beta}^{(0)}+V_{\beta\beta}({\bf r}+\frac{{\bf r}_{1}}{2})]G_{a\beta}^{<}(p_{2},r)\nonumber \\ & & 
+[\Sigma_{a\beta}^{>}G_{\beta b}^{<}-\Sigma_{a\beta}^{<}G_{\beta b}^{>}]\end{aligned}$$ where $p_{\alpha}$ denotes $p_{x},p_{y},p_{z}$, $\frac{\partial}{\partial r_{\alpha}}$ denotes $\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}$, and there is summation over Greek indices as usual. Using the equation (\[wigner1\]), the equations for the multi-band Wigner functions of an open system in weak contact with a phonon heat bath can be written as $$\begin{aligned} i\hbar\frac{\partial}{\partial T}f_{ab}(\mathbf{p},\mathbf{r},T) & = & D_{a\beta}^{(2)\alpha\gamma}p_{\alpha}p_{\gamma}f_{\beta b}(\mathbf{p},\mathbf{r},T)-D_{\beta b}^{(2)\alpha\gamma}p_{\alpha}p_{\gamma}f_{a\beta}(\mathbf{p},\mathbf{r},T)\nonumber \\ & & +\frac{\hbar}{2i}D_{a\beta}^{(2)\alpha\gamma}[p_{\alpha}\frac{\partial}{\partial r_{\gamma}}+p_{\gamma}\frac{\partial}{\partial r_{\alpha}}]f_{\beta b}(\mathbf{p},\mathbf{r},T)\nonumber \\ & & +\frac{\hbar}{2i}D_{\beta b}^{(2)\alpha\gamma}[p_{\alpha}\frac{\partial}{\partial r_{\gamma}}+p_{\gamma}\frac{\partial}{\partial r_{\alpha}}]f_{a\beta}(\mathbf{p},\mathbf{r},T)\nonumber \\ & & -\frac{\hbar^{2}}{4}D_{a\beta}^{(2)\alpha\gamma}\frac{\partial}{\partial r_{\alpha}}\frac{\partial}{\partial r_{\gamma}}f_{\beta b}(\mathbf{p},\mathbf{r},T)+\frac{\hbar^{2}}{4}D_{\beta b}^{(2)\alpha\gamma}\frac{\partial}{\partial r_{\alpha}}\frac{\partial}{\partial r_{\gamma}}f_{a\beta}(\mathbf{p},\mathbf{r},T)\nonumber \\ & & +D_{a\beta}^{(1)\alpha}[p_{\alpha}+\frac{\hbar}{2i}\frac{\partial}{\partial r_{\alpha}}]f_{\beta b}(\mathbf{p},\mathbf{r},T)-D_{\beta b}^{(1)\alpha}[p_{\alpha}-\frac{\hbar}{2i}\frac{\partial}{\partial r_{\alpha}}]f_{a\beta}(\mathbf{p},\mathbf{r},T)\nonumber \\ & & +\delta_{a\beta}\frac{1}{h^{3}}\int dp_{2}dr_{1}\exp(\frac{i}{\hbar}r_{1}.(p-p_{2}))[D_{aa}^{(0)}+V_{aa}({\bf r}-\frac{{\bf r_{1}}}{2})]f_{\beta b}(\mathbf{p}_{2},\mathbf{r},T)\nonumber \\ & & -\delta_{\beta b}\frac{1}{h^{3}}\int 
dp_{2}dr_{1}\exp(\frac{i}{\hbar}r_{1}.(p-p_{2}))[D_{\beta\beta}^{(0)}+V_{\beta\beta}({\bf r}+\frac{{\bf r_{1}}}{2})]f_{a\beta}(\mathbf{p}_{2},\mathbf{r},T)\nonumber \\ & & +\sum_{\eta=+1,-1}\frac{1}{h^{3}}\int d\mathbf{q}(\delta_{a\beta}-f_{a\beta}(\mathbf{p}+\mathbf{q},\mathbf{r},T))f_{\beta b}({\bf \mathbf{p}},\mathbf{r},T)\nonumber \\ & & \times\delta(\frac{E_{a}({\bf \mathbf{p}}+{\bf \mathbf{q}})+E_{\beta}({\bf \mathbf{p}}+{\bf \mathbf{q}})}{2}-\frac{E_{\beta}({\bf \mathbf{p}})+E_{b}(\mathbf{p})}{2}+\eta\Omega_{\mathbf{q}})M_{\mathbf{q}}^{2}(N_{\mathbf{q}}+\frac{1}{2}-\frac{1}{2}\eta)\nonumber \\ & & -\sum_{\eta=+1,-1}\frac{1}{h^{3}}\int d\mathbf{q}f_{a\beta}(\mathbf{{\bf \mathbf{p}}+{\bf \mathbf{q}}},\mathbf{r},T)(\delta_{\beta b}-f_{\beta b}({\bf \mathbf{p}},\mathbf{r},T))\label{multibandwigner}\\ & & \times\delta(\frac{E_{a}({\bf \mathbf{p}}+{\bf \mathbf{q}})+E_{\beta}({\bf \mathbf{p}}+{\bf \mathbf{q}})}{2}-\frac{E_{\beta}({\bf \mathbf{p}})+E_{b}(\mathbf{p})}{2}+\eta\Omega_{\mathbf{q}})M_{\mathbf{q}}^{2}(N_{\mathbf{q}}+\frac{1}{2}+\frac{1}{2}\eta)\nonumber \end{aligned}$$ The third and the fourth terms on the right-hand side of equation (\[multibandwigner\]) are the usual drift terms. For $a\neq b$, the first, the second, the fifth and the sixth terms fail to cancel each other only when Löwdin renormalization is included; if the effects of the remote bands are ignored, these terms cancel each other. The seventh and eighth terms explicitly give the $\mathbf{k.p}$ and spin-orbit interactions. The ninth and tenth terms give the potential terms. The last two terms correspond to electron-phonon scattering; the relaxation-time approximation can be made for these. An important simplification occurs when the in-plane wave vector is taken to be zero. This approximation gives a set of spin-independent quantum transport equations. If the structure under consideration has inversion symmetry, as in diamond structures, the equations can be simplified further. 
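As a minimal illustration of the relaxation-time approximation mentioned above (our own sketch; the relaxation time, time step and distribution shape are made-up numbers), the two phonon collision terms are replaced by $-(f-f_{eq})/\tau$, which drives the distribution exponentially toward equilibrium:

```python
import numpy as np

def rta_collision(f, f_eq, tau):
    """Relaxation-time approximation to the phonon collision terms:
    the scattering integrals are replaced by -(f - f_eq)/tau."""
    return -(f - f_eq) / tau

tau, dt = 1e-13, 1e-15               # illustrative: 0.1 ps relaxation, 1 fs step
p = np.linspace(-1, 1, 101)
f_eq = np.exp(-p**2 / 0.1)           # illustrative equilibrium shape
f = np.roll(f_eq, 10)                # a distribution displaced in momentum
start_err = np.max(np.abs(f - f_eq))
for _ in range(1000):                # explicit Euler steps, ~1 ps of relaxation
    f = f + dt * rta_collision(f, f_eq, tau)
assert np.max(np.abs(f - f_eq)) < start_err   # monotone relaxation
```

After 1000 steps the deviation has decayed by roughly $(1-dt/\tau)^{1000}\approx e^{-10}$, so `f` is numerically indistinguishable from `f_eq`.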
Note that there is no inversion symmetry in zinc-blende structures (bulk inversion asymmetry), and spin degeneracy in zinc-blende-type heterostructures is lifted even at zero magnetic field. Usually, this splitting is very small and can be ignored. However, recently, resonant intra-band and inter-band spin filters were proposed based on the zero-magnetic-field spin splitting of the conduction band in the case of structural inversion asymmetry (Rashba effect) [@key-19], [@key-20]. We are going to discuss the Wigner function modeling of this kind of device in future papers. The total current density is the sum of intra-band and inter-band components [@key-14], $$J(r)=\frac{q}{h^{3}}\int d\mathbf{p}\frac{\partial H_{\alpha\beta}}{\partial\mathbf{p}}f_{\beta\alpha}(\mathbf{p},r).\label{current}$$ The particle density in each band is written as $$n_{a}=\frac{1}{h^{3}}\int d\mathbf{p}f_{aa}(\mathbf{p},r).\label{particle}$$ Let us consider a two-band model ($a,b=1,2$) without scattering and neglect the effects of the remote bands. 
Then the first component of the Wigner equation (\[multibandwigner\]) becomes $$\begin{aligned} i\hbar\frac{\partial f_{11}(\mathbf{p},r)}{\partial T} & = & \frac{\hbar}{i}D_{11}^{(2)\alpha\gamma}[p_{\alpha}\frac{\partial}{\partial r_{\gamma}}+p_{\gamma}\frac{\partial}{\partial r_{\alpha}}]f_{11}(\mathbf{p},r)\nonumber \\ & & +D_{12}^{(1)\alpha}[p_{\alpha}+\frac{\hbar}{2i}\frac{\partial}{\partial r_{\alpha}}]f_{21}(\mathbf{p},r)-D_{21}^{(1)\alpha}[p_{\alpha}-\frac{\hbar}{2i}\frac{\partial}{\partial r_{\alpha}}]f_{12}(\mathbf{p},r)\nonumber \\ & & +\frac{1}{h^{3}}\int dp_{2}dr_{1}\exp(\frac{i}{\hbar}r_{1}.(p-p_{2}))\nonumber \\ & & \times[V_{1}(\textrm{{\bf r}}-\frac{{\bf r}_{1}}{2})-V_{1}({\bf r}+\frac{{\bf r}_{1}}{2})]f_{11}(\mathbf{p}_{2},r).\end{aligned}$$ The rest of the equations are as follows: $$\begin{aligned} i\hbar\frac{\partial f_{12}(\mathbf{p},r)}{\partial T} & = & \frac{\hbar}{2i}[D_{11}^{(2)\alpha\gamma}+D_{22}^{(2)\alpha\gamma}][p_{\alpha}\frac{\partial}{\partial r_{\gamma}}+p_{\gamma}\frac{\partial}{\partial r_{\alpha}}]f_{12}(\mathbf{p},r)\nonumber \\ & & +D_{12}^{(1)\alpha}[p_{\alpha}+\frac{\hbar}{2i}\frac{\partial}{\partial r_{\alpha}}]f_{22}(\mathbf{p},r)-D_{12}^{(1)\alpha}[p_{\alpha}-\frac{\hbar}{2i}\frac{\partial}{\partial r_{\alpha}}]f_{11}(\mathbf{p},r)\nonumber \\ & & +\frac{1}{h^{3}}\int dp_{2}dr_{1}\exp(\frac{i}{\hbar}r_{1}.(p-p_{2}))\nonumber \\ & & \times[E_{1}(\textrm{{\bf r}}-\frac{{\bf r}_{1}}{2})-E_{2}({\bf r}+\frac{{\bf r}_{1}}{2})]f_{12}(\mathbf{p}_{2},r),\end{aligned}$$ $$\begin{aligned} i\hbar\frac{\partial f_{22}(\mathbf{p},\mathbf{r},T)}{\partial T} & = & \frac{\hbar}{i}D_{22}^{(2)\alpha\gamma}[p_{\alpha}\frac{\partial}{\partial r_{\gamma}}+p_{\gamma}\frac{\partial}{\partial r_{\alpha}}]f_{22}(\mathbf{p},r)\nonumber \\ & & +D_{21}^{(1)\alpha}[p_{\alpha}+\frac{\hbar}{2i}\frac{\partial}{\partial r_{\alpha}}]f_{12}(\mathbf{p},r)-D_{12}^{(1)\alpha}[p_{\alpha}-\frac{\hbar}{2i}\frac{\partial}{\partial 
r_{\alpha}}]f_{21}(\mathbf{p},r)\nonumber \\ & & +\frac{1}{h^{3}}\int dp_{2}dr_{1}\exp(\frac{i}{\hbar}r_{1}.(p-p_{2}))\nonumber \\ & & \times[V_{2}(\textrm{{\bf r}}-\frac{{\bf r}_{1}}{2})-V_{2}({\bf r}+\frac{{\bf r}_{1}}{2})]f_{22}(\mathbf{p}_{2},r),\end{aligned}$$ and $f_{21}=f_{12}^{*}$. 1-Dimensional Two-band Kane Model of Resonant Inter-band Tunneling Diode ======================================================================== Resonant inter-band tunneling structures (RITS) are based on the interaction between the conduction and valence bands, and the transport is in the growth direction. For narrow band-gap RIT structures, the coupling of the in-plane momentum to the transverse momentum component becomes important; therefore a realistic modeling of these structures requires a considerable amount of computational work. The simplest choice is a two-band model that is suitable for large and mid-band-gap Type I RITS [@key-21]. The $\mathbf{k}$ vector is taken in the $z$ direction and the inversion asymmetry is neglected. Therefore the in-plane momentum vanishes ($k_{x}=k_{y}=0$), so that the heavy-hole state is decoupled and the Hamiltonian matrix becomes block-diagonal [@key-22]. The remote bands are also neglected. Therefore the Hamiltonian is reduced to a spin-independent three-band model (conduction, light and split-off bands) [@key-23]: $$\left[\begin{array}{ccc} E_{c}(z)+\frac{p_{z}^{2}}{2m_{0}} & \sqrt{\frac{2}{3}}\frac{P_{cv}}{m_{0}}p_{z} & -\sqrt{\frac{1}{3}}\frac{P_{cv}}{m_{0}}p_{z}\\ -\sqrt{\frac{2}{3}}\frac{P_{cv}}{m_{0}}p_{z} & E_{lh}(z)+\frac{p_{z}^{2}}{2m_{0}} & 0\\ \sqrt{\frac{1}{3}}\frac{P_{cv}}{m_{0}}p_{z} & 0 & E_{so}(z)+\frac{p_{z}^{2}}{2m_{0}}\end{array}\right],$$ where $P_{cv}=i\sqrt{\frac{m_{0}E_{p}}{2}}$. Instead of ignoring the split-off band, Sirtori et al. [@key-23] presented an improved two-band model (conduction and “effective valence band”) that can be obtained through a unitary transformation. 
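The three-band matrix above can be checked numerically; the sketch below (ours, with dimensionless parameter values of our own choosing) builds it with $P_{cv}=i\sqrt{m_{0}E_{p}/2}$ and verifies that the sign pattern of the off-diagonal couplings makes it hermitian.

```python
import numpy as np

def kane_3band(pz, Ec, Elh, Eso, Ep, m0=1.0):
    """Three-band (conduction, light-hole, split-off) k.p Hamiltonian at
    longitudinal momentum pz, with P_cv = i*sqrt(m0*Ep/2). Energies and
    momenta are in arbitrary consistent units; values are illustrative."""
    Pcv = 1j * np.sqrt(m0 * Ep / 2.0)
    free = pz**2 / (2.0 * m0)
    c = Pcv * pz / m0                 # common inter-band coupling factor
    H = np.array([
        [Ec + free, np.sqrt(2.0 / 3.0) * c, -np.sqrt(1.0 / 3.0) * c],
        [-np.sqrt(2.0 / 3.0) * c, Elh + free, 0.0],
        [np.sqrt(1.0 / 3.0) * c, 0.0, Eso + free],
    ])
    return H

H = kane_3band(pz=0.3, Ec=1.5, Elh=0.0, Eso=-0.3, Ep=20.0)
assert np.allclose(H, H.conj().T)     # hermitian because P_cv is imaginary
E = np.linalg.eigvalsh(H)             # three real band energies
```

Because $P_{cv}$ is purely imaginary, the antisymmetric sign pattern of the first column and first row is exactly what hermiticity requires.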
The $2\times2$ Hamiltonian is $$\left[\begin{array}{cc} E_{c}(z)+\frac{p_{z}^{2}}{2m_{0}} & \frac{P_{cv}}{m_{0}}p_{z}\\ \frac{P_{cv}^{*}}{m_{0}}p_{z} & E_{v}(z)+\frac{p_{z}^{2}}{2m_{0}}\end{array}\right]$$ where $E_{v}=\frac{2E_{lh}+E_{so}}{3}$ is the effective valence band edge. Therefore the equations for the components of the Wigner function become $$\begin{aligned} i\hbar\frac{\partial f_{cc}(p_{z},z,t)}{\partial t} & = & -\frac{i\hbar p_{z}}{m_{0}}\frac{\partial f_{cc}(p_{z},z,t)}{\partial z}\nonumber \\ & & +\frac{1}{h}\int dz_{1}dp_{z2}\exp(\frac{i}{\hbar}z_{1}(p_{z}-p_{z2}))[E_{c}(z-\frac{z_{1}}{2})-E_{c}(z+\frac{z_{1}}{2})]f_{cc}(p_{z2},z,t)\nonumber \\ & & +\frac{p_{z}}{m_{0}}P_{cv}f_{vc}(p_{z},z,t)-\frac{i\hbar}{2m_{0}}P_{cv}\frac{\partial f_{vc}(p_{z},z,t)}{\partial z}\nonumber \\ & & +\frac{p_{z}}{m_{0}}P_{cv}f_{cv}(p_{z},z,t)+\frac{i\hbar}{2m_{0}}P_{cv}\frac{\partial f_{cv}(p_{z},z,t)}{\partial z},\end{aligned}$$ $$\begin{aligned} i\hbar\frac{\partial f_{cv}(p_{z},z,t)}{\partial t} & = & -\frac{i\hbar p_{z}}{m_{0}}\frac{\partial f_{cv}(p_{z},z,t)}{\partial z}\nonumber \\ & & +\frac{1}{h}\int dz_{1}dp_{z2}\exp(\frac{i}{\hbar}z_{1}(p_{z}-p_{z2}))[E_{c}(z-\frac{z_{1}}{2})-E_{v}(z+\frac{z_{1}}{2})]f_{cv}(p_{z2},z,t)\nonumber \\ & & +\frac{p_{z}}{m_{0}}P_{cv}f_{vv}(p_{z},z,t)-\frac{i\hbar}{2m_{0}}P_{cv}\frac{\partial f_{vv}(p_{z},z,t)}{\partial z}\nonumber \\ & & -\frac{p_{z}}{m_{0}}P_{cv}f_{cc}(p_{z},z,t)-\frac{i\hbar}{2m_{0}}P_{cv}\frac{\partial f_{cc}(p_{z},z,t)}{\partial z},\end{aligned}$$ $$\begin{aligned} i\hbar\frac{\partial f_{vv}(p_{z},z,t)}{\partial t} & = & -\frac{i\hbar p_{z}}{m_{0}}\frac{\partial f_{vv}(p_{z},z,t)}{\partial z}\nonumber \\ & & +\frac{1}{h}\int dz_{1}dp_{z2}\exp(\frac{i}{\hbar}z_{1}(p_{z}-p_{z2}))[E_{v}(z-\frac{z_{1}}{2})-E_{v}(z+\frac{z_{1}}{2})]f_{vv}(p_{z2},z,t)\nonumber \\ & & -\frac{p_{z}}{m_{0}}P_{cv}f_{cv}(p_{z},z,t)+\frac{i\hbar}{2m_{0}}P_{cv}\frac{\partial f_{cv}(p_{z},z,t)}{\partial z}\nonumber \\ & & 
-\frac{p_{z}}{m_{0}}P_{cv}f_{vc}(p_{z},z,t)-\frac{i\hbar}{2m_{0}}P_{cv}\frac{\partial f_{vc}(p_{z},z,t)}{\partial z},\end{aligned}$$ and $f_{vc}=f_{cv}^{*}$. Note that the above equations are for the conduction and valence band electrons, and $E_{a}(z)=E_{a}(0)+V_{a}(z)$. The current density can be calculated using the equation (\[current\]) and is given by $$J=J_{intra}+J_{inter}=\frac{e}{h}\int dp_{z}\frac{p_{z}}{m_{0}}(f_{cc}(p_{z})+f_{vv}(p_{z}))+\frac{2e}{h}\int dp_{z}\sqrt{\frac{m_{0}E_{p}}{2}}\frac{1}{m_{0}}\textrm{Im}[f_{cv}(p_{z})],$$ where $\textrm{Im}[f_{cv}]$ denotes the imaginary part of $f_{cv}$. The particle densities in each band, using the equation (\[particle\]), are $$n_{c}=\frac{1}{h}\int dp_{z}f_{cc}(p_{z}),$$ $$n_{v}=\frac{1}{h}\int dp_{z}f_{vv}(p_{z}).$$ Conclusions =========== In this paper we developed a multi-band Wigner function formalism including spin, which has a profound effect especially in narrow band-gap semiconductors. We employed the $\bf{k.p}$ Hamiltonian to derive the quantum transport equations for multi-band semiconductors using the non-equilibrium Green’s function methodology for systems weakly coupled to a phonon bath. It was shown that, in addition to the drift, potential and scattering terms that exist in the single-band form of the Wigner function equation, there are terms arising from inter-band coupling and spin-orbit interaction. These terms are the source of the off-diagonal components of the Wigner function in energy-band space and spin space. A two-band Kane model of a resonant inter-band tunneling diode was presented. The current and particle densities were derived for this simple model. We are going to discuss the numerical solution of the two-band and the three-band equations and present the simulation results in future papers. Acknowledgments =============== This work was supported by a grant from the Army Research Office’s Defense University Research Initiative on Nanotechnology (DURINT). We thank Greg Recine, Robert Murawski, and Vadim Puller for helpful discussions. 
[10]{} H. Haug and A. Jauho, Quantum Kinetics in Transport and Optics of Semiconductors (Springer, 1996). F. A. Buot and K. L. Jensen, Phys. Rev. B, 42, 9429, (1990). D. Woolard, W. Zhang, B. Gelmont, Proc. of International Semiconductor Device Research Symposium, December 10-12, 2003. B. Gelmont, D. Woolard, W. Zhang, T. Globus, Solid State Electronics, 46, 1513, (2002). E. O. Kane, in Physics of III-V Compounds, Semiconductors and Semimetals, Vol. 1, 75, (Academic Press, 1966). L. P. Kadanoff and G. Baym, Quantum Statistical Mechanics, (Benjamin, 1962). F. A. Buot, J. of Stat. Phys., 61, 1223, (1990). D. R. Miller and D. P. Neikirk, Appl. Phys. Lett., 58, 2803, (1994). L. Demeio, L. Barletti, P. Bordone and C. Jacoboni, Wigner function for multiband transport in semiconductors, Transport Theory and Statistical Physics, 32 (3-4), 321-339 (2003). L. Demeio, P. Bordone and C. Jacoboni, Numerical and analytical applications of multiband transport in semiconductors, Proc. XXIII Symposium on Rarefied Gas Dynamics, Whistler, BC, Canada, July 20-25, 2002. P. Zhao, N. J. M. Horing and H. L. Cui, Phil. Mag. B, 80, 1359, (2000). G. Borgioli, G. Frosali, P. F. Zweifel, Transport Theory Stat. Phys. 32, (2003). W. W. Chow and S. W. Koch, Semiconductor-Laser Fundamentals: Physics of the Gain Materials (Springer Verlag, 1999). F. A. Buot, Superlattices and Microstructures, 11, 103, (1992). F. A. Buot, Phys. Rev. B, 10, 3700, (1974). R. F. O’Connell and E. P. Wigner, Phys. Rev. A, 30, 2613, (1984). G. D. Mahan, Physics Reports, 145, 251, (1987). R. Binder and S. W. Koch, Prog. Quant. Electr., 19, 307, (1995). D. Z.-Y. Ting and X. Cartoxia, Appl. Phys. Lett., 81, 4198, (2002). T. Koga, J. Nitta, H. Takayanagi and S. Datta, Phys. Rev. Lett., 88, 126601, (2002). M. Sweeny and J. Xu, Appl. Phys. Lett., 54, 546, (1989). G. Bastard, Wave Mechanics Applied to Semiconductor Heterostructures (Les Editions de Physique, 1988). C. Sirtori, F. Capasso and J. Faist, Phys. Rev. 
B, 50, 8663, (1994).
--- abstract: 'Using symplectic Floer homology, Seidel associated a module to each mapping class of a compact connected oriented two-manifold of genus bigger than one. We compute this module for mapping classes which do not have any pseudo-Anosov components in the sense of Thurston’s theory of surface diffeomorphisms. The Nielsen-Thurston representative of such a class is shown to be monotone. The formula for the Floer homology is obtained from a topological separation of fixed points and a separation mechanism for Floer connecting orbits. As examples, we consider the geometric monodromy of isolated plane curve singularities. In this case, the formula for the Floer homology is particularly simple.' address: 'ETH Zürich, HG G36.1, Rämistrasse 101, 8092 Zürich' author: - Ralf Gautschi title: | Floer homology of algebraically finite\ mapping classes --- Introduction ============ This work is concerned with the computation of Floer homology groups of symplectomorphisms of compact 2-manifolds. In the case of a 2-sphere, each symplectomorphism is exact and its Floer homology equals, by Floer’s original work [@F4], the singular homology of the 2-sphere. In the case of a torus, the Floer homology of linear symplectomorphisms was computed by Po[ź]{}niak [@Po]. In this article, we consider the case of a compact connected oriented 2-manifold $\Sigma$ of genus $\geq2$. It was shown by Seidel [@S2] that to every mapping class $g$ of $\Sigma$, there is associated a ${\mathbb{Z}}_2$-graded vector space ${HF_*}(g)$ over the field ${\mathbb{Z}}_2$, which is equipped with an additional module structure over $H^*(\Sigma;{\mathbb{Z}}_2)$. In short, ${HF_*}(g)$ is the symplectic Floer homology of an area preserving and so-called monotone representative of $g$. The first computational result in this context is also due to Seidel. 
In [@S1] it was shown that if $g$ is the mapping class of a positive Dehn twist along an embedded circle $C\subset\Sigma$, then ${HF_*}(g)\cong H_*(\Sigma\setminus C;{\mathbb{Z}}_2)$. Our starting point is the following definition given in detail in Section \[se:diff\]: an orientation preserving diffeomorphism $\phi$ of $\Sigma$ is called of finite type, if $\Sigma$ can be obtained by piecing together $\phi$-invariant 2-manifolds $\Sigma'$ with boundary such that $\phi|\Sigma'$ is either periodic, a flip-twist map or a twist map without fixed points. For the terminology we refer to Section \[se:diff\]. The significance of this definition lies in the following fact: due to Nielsen-Thurston theory of surface diffeomorphisms, every mapping class without pseudo-Anosov components admits a representative which is of finite type. Such a mapping class is called algebraically finite. Our main result is a formula for the Floer homology of finite type diffeomorphisms. We use the following notation. If $\phi$ is such a diffeomorphism, we denote by $\Sigma_0$ the union of the $\Sigma'$ where $\phi$ restricts to the identity. By ${\partial}_+\Sigma_0$, we denote the union of components of ${\partial}\Sigma_0$ where, in a neighborhood, $\phi$ is a right-handed twist. \[thm:main1\] If $\phi$ is a diffeomorphism of finite type, then $\phi$ is monotone with respect to some $\phi$-invariant area form and $${HF_*}(\phi) \cong H_*(\Sigma_0,{\partial}_+\Sigma_0;{\mathbb{Z}}_2)\oplus {\mathbb{Z}}_2^{\Lambda(\phi|\Sigma\setminus\Sigma_0)}.$$ Here, $\Lambda$ denotes the Lefschetz number. The $H^*(\Sigma;{\mathbb{Z}}_2)$-action on the r.h.s. is split and given as follows. On the first summand, it is the ordinary cap product. On the second, $1\in H^0(\Sigma;{\mathbb{Z}}_2)$ acts by the identity and any element of $H^1(\Sigma;{\mathbb{Z}}_2)\oplus H^2(\Sigma;{\mathbb{Z}}_2)$ by the zero map. 
As suggested by the formula above, the Floer complex of $\phi$ splits into a complex associated to $\phi|\Sigma_0$ and one associated to $\phi|\Sigma\setminus\Sigma_0$. On the one hand, this follows from a purely topological separation of fixed points for finite type diffeomorphisms. On the other hand, there is also a separation mechanism for Floer connecting orbits, i.e., after suitably perturbing $\phi$, every connecting orbit starting and ending in $\Sigma_0$ does not leave $\Sigma_0$. For the precise statement we refer to Section \[se:corbits\]. A natural source of examples is provided by the theory of singularities. To every isolated plane curve singularity is associated a compact connected oriented 2-manifold with boundary, the Milnor fiber, and an isotopy class of orientation preserving diffeomorphisms of the Milnor fiber which are the identity near the boundary, called the geometric monodromy. The precise definitions are given in Section \[se:singularity\]. \[thm:main2\] Let $M\subset\Sigma$ be the Milnor fiber of an isolated plane curve singularity and $g$ be the mapping class which is obtained by extending the geometric monodromy trivially to $\Sigma$. Then $${HF_*}(g) \cong H_*(\Sigma,M;{\mathbb{Z}}_2),$$ where $H^*(\Sigma;{\mathbb{Z}}_2)$ acts by cap product. A special case of Theorem \[thm:main2\] is the following generalization of Seidel’s formula for the Floer homology of a Dehn twist [@S1]. For the notation see Section \[se:singularity\]. \[cor:dehn1\] Let $(C_1,\dots,C_k)$ be an $A_k$-configuration of circles in $\Sigma$. Let $g$ be the mapping class of the product $\tau_{1}\circ\cdots\circ\tau_{k}$, where $\tau_i$ denotes the positive Dehn twist along $C_i$. Then $${HF_*}(g) \cong H_*(\Sigma,C_1\cup\cdots\cup C_k;{\mathbb{Z}}_2),$$ where $H^*(\Sigma;{\mathbb{Z}}_2)$ acts by cap product. The same formula holds, if $g$ is the class of $\tau_{\sigma 1}\circ\cdots\circ\tau_{\sigma k}$, where $\sigma$ is a cyclic permutation of $k$ elements. 
The proof of Theorem \[thm:main2\] is given in Section \[se:singularity\]. Besides Theorem \[thm:main1\], it relies on an additional result about the geometric monodromy. This result is stated in Proposition \[prop:monodromy\] and proven in Appendix \[ap:monodromy\]. The proof uses the well-known theorem of A’Campo [@AC2] and L[ê]{} [@L] that the Lefschetz number of the geometric monodromy vanishes, together with the theory of splice diagrams which is due to Eisenbud and Neumann [@EN]. Appendix B includes a summary of the relevant results on splice diagrams from [@EN] and is therefore quite lengthy. We would like to point out that Proposition \[prop:monodromy\] also follows from A’Campo’s work [@AC1], [@AC4]. Finally, we mention that there is a version of Floer homology theory for diffeomorphisms of compact oriented 2-manifolds with boundary. It assigns a pair of vector spaces ${HF_*}(g,+),{HF_*}(g,-)$ to every isotopy class $g$ of orientation preserving diffeomorphisms which are the identity near the boundary. The sign corresponds to two different perturbations of $g$ near the boundary. In Appendix \[ap:open\] we give an outline of this version of Floer homology. Using Proposition \[prop:monodromy\], we confirm in Section \[se:singularity\] a conjecture of Seidel [@S5 Page 23] in the case of plane curve singularities. \[thm:main3\] If $g$ is the geometric monodromy of an isolated plane curve singularity, then $${HF_*}(g,+) = 0.$$ The rest of the article is organized as follows. In Section \[se:monofloer\] we recall the basic facts about monotone symplectomorphisms and Floer homology. For background material on symplectic Floer homology in two dimensions we refer to [@S2] and the references given therein. Section \[se:diff\] is devoted to diffeomorphisms of finite type and their properties relevant for Floer homology. We compute the fixed point classes, establish monotonicity and show that the symplectic action is exact.
At several points we use a topological proposition about products of disjoint Dehn twists which was already used in [@S1]. We give a proof of this proposition in Appendix \[ap:dehn\]. The result about the fixed point classes, Proposition \[prop:fclass\], is already contained in the work of Jiang and Guo [@JG] on the Nielsen number of surface diffeomorphisms. For the sake of completeness, we give an independent proof. In Section \[se:main\], the results from the previous sections are put together to prove Theorem \[thm:main1\]. We would like to point out that the proof of Theorem \[thm:main1\] is very much inspired by Seidel’s computation of the Floer homology of a Dehn twist [@S1]. In particular our result about the Floer connecting orbits, Proposition \[prop:corbits\], is a generalization of the corresponding result for Dehn twists [@S1 Lemma 4]. Our approach to the proof, however, is slightly different from Seidel’s original approach. This has led to a shorter proof and to a wider range of application, see the remark at the end of Section \[se:corbits\]. The idea of looking at algebraically finite mapping classes was the fruit of a week of stimulating discussions with Paul Seidel at Ecole polytechnique. I am indebted to him for devoting time and sharing his insight. This is part of my PhD thesis and I am very grateful to my supervisor Dietmar Salamon for all his advice during the preparation of this work. I thank Eduard Zehnder for many helpful suggestions. Monotonicity and Floer homology {#se:monofloer} =============================== In this section we discuss the notion of monotonicity as defined in [@S2] and give an outline of its significance for Floer homology in two dimensions. For a more detailed account we refer to the original article. At the end of this section we give two criteria for monotonicity which we use in the next section. Throughout this article, $\Sigma$ denotes a closed connected and oriented 2-manifold of genus $\geq2$.
In this section, we also fix an area form $\omega$ on $\Sigma$. Let $\phi\in\operatorname{Symp}(\Sigma,\omega)$, the group of $\omega$-preserving diffeomorphisms of $\Sigma$. The mapping torus of $\phi$, $${T_{\phi}}= {\mathbb{R}}\times\Sigma/(t+1,x)\sim(t,\phi(x)),$$ is a 3-manifold fibered over $S^1={\mathbb{R}}/{\mathbb{Z}}$. There are two natural second cohomology classes on ${T_{\phi}}$, denoted by $[{\omega_{\phi}}]$ and $c_\phi$. The first one is represented by the closed two-form ${\omega_{\phi}}$ which is induced from the pullback of $\omega$ to ${\mathbb{R}}\times\Sigma$. The second is the Euler class of the vector bundle $$V_\phi = {\mathbb{R}}\times T\Sigma/(t+1,\xi_x)\sim(t,{\mathrm{d}}\phi_x\xi_x),$$ which is of rank 2 and inherits an orientation from $T\Sigma$. $\phi\in\operatorname{Symp}(\Sigma,\omega)$ is called [**monotone**]{}, if $$[\omega_\phi] = (\operatorname{area}_\omega(\Sigma)/\chi(\Sigma))\cdot c_\phi$$ in $H^2({T_{\phi}};{\mathbb{R}})$; $\operatorname{Symp}^m(\Sigma,\omega)$ denotes the set of monotone symplectomorphisms. Now $H^2({T_{\phi}};{\mathbb{R}})$ fits into the following short exact sequence $$\label{eq:cohomology} 0 {\longrightarrow}\frac{H^1(\Sigma;{\mathbb{R}})}{\operatorname{im}(\operatorname{id}-\phi^*)} \stackrel{\delta}{{\longrightarrow}} H^2({T_{\phi}};{\mathbb{R}}) \stackrel{\iota^*}{{\longrightarrow}} H^2(\Sigma;{\mathbb{R}}) {\longrightarrow}0.$$ The map $\delta$ is defined as follows. Let $\rho:{[0,1]}{\rightarrow}{\mathbb{R}}$ be a smooth function which vanishes near $0$ and $1$ and satisfies $\int_0^1\!\rho\,{\mathrm{d}}t=1$. If $\theta$ is a closed 1-form on $\Sigma$, then $\rho\cdot\theta\wedge{\mathrm{d}}t$ defines a closed 2-form on $T_\phi$, and one sets $$\delta[\theta] = [\rho\cdot\theta\wedge{\mathrm{d}}t].$$ The map $\iota:\Sigma{\hookrightarrow}T_\phi$ assigns to each $x\in\Sigma$ the equivalence class of $(1/2,x)$. Note that $\iota^*{\omega_{\phi}}=\omega$ and $\iota^*c_\phi$ is the Euler class of $T\Sigma$.
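For orientation (our own example, not used later), take $\phi=\operatorname{id}$: then ${T_{\phi}}=S^1\times\Sigma$ and $\operatorname{im}(\operatorname{id}-\phi^*)=0$, so the sequence reduces to the K\"unneth decomposition of $H^2(S^1\times\Sigma;{\mathbb{R}})$:

```latex
% The case \phi = \mathrm{id}: T_\phi = S^1 \times \Sigma
0 \longrightarrow H^1(\Sigma;{\mathbb{R}})
  \stackrel{\delta}{\longrightarrow} H^2(S^1\times\Sigma;{\mathbb{R}})
  \stackrel{\iota^*}{\longrightarrow} H^2(\Sigma;{\mathbb{R}})
  \longrightarrow 0,
\qquad
H^2(S^1\times\Sigma;{\mathbb{R}})
  \cong \big(H^1(S^1;{\mathbb{R}})\otimes H^1(\Sigma;{\mathbb{R}})\big)
        \oplus H^2(\Sigma;{\mathbb{R}}),
```

with $\delta$ corresponding to $[\theta]\mapsto[{\mathrm{d}}t]\otimes[\theta]$ up to sign.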
Hence, by the exact sequence (\[eq:cohomology\]), there exists a unique class $m(\phi)\in H^1(\Sigma;{\mathbb{R}})/\operatorname{im}(\operatorname{id}-\phi^*)$ satisfying $$\delta\,m(\phi) = [{\omega_{\phi}}]-(\operatorname{area}_\omega(\Sigma)/\chi(\Sigma))\cdot c_\phi,$$ where $\chi$ denotes the Euler characteristic. Therefore, $\phi$ is monotone if and only if $m(\phi)=0$.\ We recall the fundamental properties of $\operatorname{Symp}^m(\Sigma,\omega)$ from [@S2]. By $\operatorname{Diff}^+(\Sigma)$, we denote the group of orientation preserving diffeomorphisms of $\Sigma$.\ (Naturality)\[page:natur\] If $\phi\in\operatorname{Symp}^m(\Sigma,\omega),\psi\in\operatorname{Diff}^+(\Sigma)$, then $\psi^{-1}\phi\psi\in\operatorname{Symp}^m(\Sigma,\psi^*\omega)$.\ (Isotopy) Let $(\psi_t)_{t\in{[0,1]}}$ be an isotopy in $\operatorname{Symp}(\Sigma,\omega)$, i.e. a smooth path with $\psi_0=\operatorname{id}$. Then $$m(\phi\circ\psi_1)=m(\phi)+[\operatorname{Flux}(\psi_t)_{t\in{[0,1]}}]$$ in $H^1(\Sigma;{\mathbb{R}})/\operatorname{im}(\operatorname{id}-\phi^*)$; see [@S2 Lemma 6]. For the definition of the flux homomorphism see [@MS].\ (Inclusion) The inclusion $\operatorname{Symp}^m(\Sigma,\omega){\hookrightarrow}\operatorname{Diff}^+(\Sigma)$ is a homotopy equivalence. This follows from the isotopy property, surjectivity of the flux homomorphism and Moser’s isotopy theorem [@Mo].
Furthermore, the Earle-Eells theorem [@EE] implies that every connected component of $\operatorname{Symp}^m(\Sigma,\omega)$ is contractible.\ (Floer homology) To every $\phi\in\operatorname{Symp}^m(\Sigma,\omega)$ symplectic Floer homology theory assigns a ${\mathbb{Z}}_2$-graded vector space ${HF_*}(\phi)$ over ${\mathbb{Z}}_2$, with an additional multiplicative structure, called the quantum cap product, $$H^*(\Sigma;{\mathbb{Z}}_2)\otimes{HF_*}(\phi){\longrightarrow}{HF_*}(\phi).$$ Each $\psi\in\operatorname{Diff}^+(\Sigma)$ induces an isomorphism ${HF_*}(\phi)\cong{HF_*}(\psi^{-1}\phi\psi)$ of $H^*(\Sigma;{\mathbb{Z}}_2)$-modules.\ (Invariance) If $\phi,\phi'\in\operatorname{Symp}^m(\Sigma,\omega)$ are isotopic, then ${HF_*}(\phi)$ and ${HF_*}(\phi')$ are naturally isomorphic as $H^*(\Sigma;{\mathbb{Z}}_2)$-modules. This is proven in [@S2 Page 7].\ Now let $g$ be a mapping class of $\Sigma$, i.e. an isotopy class of $\operatorname{Diff}^+(\Sigma)$. Pick an area form $\omega$ and a representative $\phi\in\operatorname{Symp}^m(\Sigma,\omega)$ of $g$. Then ${HF_*}(\phi)$ is an invariant of $g$, which is denoted by ${HF_*}(g)$. Note that ${HF_*}(g)$ is independent of the choice of an area form $\omega$ by Moser’s isotopy theorem [@Mo] and naturality of Floer homology. Let $\phi\in\operatorname{Symp}^m(\Sigma,\omega)$. We give a brief outline of the definition of ${HF_*}(\phi)$ in the special case where all the fixed points of $\phi$ are non-degenerate. This means that for all $y\in\operatorname{Fix}(\phi)$, $\det(\operatorname{id}-{\mathrm{d}}\phi_y)\ne0$. In particular, it follows that $\operatorname{Fix}(\phi)$ is a finite set and the ${\mathbb{Z}}_2$-vector space $${CF_*}(\phi) := {\mathbb{Z}}_2^{\operatorname{Fix}(\phi)}$$ admits a ${\mathbb{Z}}_2$-grading with $(-1)^{\deg y}=\operatorname{sign}(\det(\operatorname{id}-{\mathrm{d}}\phi_y))$, for all $y\in\operatorname{Fix}(\phi)$. The Floer boundary operator is defined as follows.
Let $J=(J_t)_{t\in{\mathbb{R}}}$ be a smooth path of $\omega$-compatible complex structures on $\Sigma$ such that $J_{t+1}=\phi^*J_t$. For $y^\pm\in\operatorname{Fix}(\phi)$, let ${\mathcal{M}}(y^-,y^+;J,\phi)$ denote the space of smooth maps $u:{\mathbb{R}}^2{\rightarrow}\Sigma$ which satisfy the Floer equations $$\label{eq:corbit} \left\{\begin{array}{l} u(s,t) = \phi(u(s,t+1)), \\ {\partial}_s u + J_t(u){\partial}_t u = 0, \\ \lim_{s{\rightarrow}\pm\infty}u(s,t) = y^\pm. \end{array}\right.$$ One way to think of the Floer equations is in terms of the symplectic action. Let ${\Omega_{\phi}}=\{y\in C^{\infty}({\mathbb{R}},\Sigma)\,|\,y(t)=\phi(y(t+1))\}$ denote the twisted loop space. The action form is the one-form $\alpha_\omega$ on ${\Omega_{\phi}}$ defined by $$\label{eq:daction} \alpha_\omega(y)\xi = \int_0^1\omega\big(\frac{{\mathrm{d}}y}{{\mathrm{d}}t}(t),\xi(t)\big)\,{\mathrm{d}}t,$$ where $y\in{\Omega_{\phi}}$ and $\xi\in T_y{\Omega_{\phi}}$, i.e. $\xi(t)\in T_{y(t)}\Sigma$ and $\xi(t)={\mathrm{d}}\phi_{y(t+1)}\xi(t+1)$ for all $t\in{\mathbb{R}}$. If $\xi,\xi'\in T_y{\Omega_{\phi}}$, then $\int_0^1\omega(\xi'(t),J_t\xi(t)){\mathrm{d}}t$ defines a metric on ${\Omega_{\phi}}$. The negative gradient lines of $\alpha_\omega$ with respect to this metric are solutions of (\[eq:corbit\]). Now to every $u\in{\mathcal{M}}(y^-,y^+;J,\phi)$ is associated a Fredholm operator ${\mathrm{D}}_u$ which linearizes (\[eq:corbit\]) in suitable Sobolev spaces. The index of this operator is given by the so-called Maslov index $\mu(u)$, which satisfies $\mu(u)=\deg(y^+)-\deg(y^-)\text{ mod }2$. For a generic $J$, every $u\in{\mathcal{M}}(y^-,y^+;J,\phi)$ is regular, meaning that ${\mathrm{D}}_u$ is onto. Hence, by the implicit function theorem, ${\mathcal{M}}_k(y^-,y^+;J,\phi)$ is a smooth $k$-dimensional manifold, where ${\mathcal{M}}_k(y^-,y^+;J,\phi)$ denotes the subset of those $u\in{\mathcal{M}}(y^-,y^+;J,\phi)$ with $\mu(u)=k\in{\mathbb{Z}}$.
Translation of the $s$-variable defines a free ${\mathbb{R}}$-action on ${\mathcal{M}}_1(y^-,y^+;J,\phi)$ and hence the quotient is a discrete set of points. Assume for the moment that for all $y^\pm\in\operatorname{Fix}(\phi)$ this quotient is a finite set and let $n(y^-,y^+)\in{\mathbb{Z}}_2$ denote its cardinality mod 2. Define the linear map $${\partial}_J:{CF_*}(\phi){\longrightarrow}CF_{*+1}(\phi)\quad\text{by}\quad \operatorname{Fix}(\phi)\ni y {\longmapsto}\sum_{z} n(y,z)z.$$ That ${\partial}_J$ is of degree $1$ follows from the equation relating the index and the degree. That ${\partial}_J$ is a boundary operator, i.e. that ${\partial}_J\circ{\partial}_J=0$, is due to the so-called gluing theorem. For this theorem to hold, as well as for ${\mathcal{M}}_1(y^-,y^+;J,\phi)/{\mathbb{R}}$ to be a finite set, one needs certain bounds on the energy. Note that bubbling is not an issue here, since $\pi_2(\Sigma)=0$. The energy of a map $u:{\mathbb{R}}^2{\rightarrow}\Sigma$ is given by $$E(u) = \int_{{\mathbb{R}}}\int_0^1 \omega\big({\partial}_tu(s,t),J_t{\partial}_tu(s,t)\big)\,{\mathrm{d}}t{\mathrm{d}}s.$$ It is proven in [@S2 Lemma 9] that if $\phi$ is monotone, then the energy is constant on each ${\mathcal{M}}_k(y^-,y^+;J,\phi)$. It follows that $({CF_*}(\phi),{\partial}_J)$ is a chain complex and that its homology is an invariant of $\phi$, denoted by ${HF_*}(\phi)$, i.e. it is independent of $J$. Next we introduce the quantum cap product on ${HF_*}(\phi)$. For this, choose a Morse function $f:\Sigma{\rightarrow}{\mathbb{R}}$ and set $${CM^*}(f) := {\mathbb{Z}}_2^{\operatorname{Crit}(f)},$$ with a ${\mathbb{Z}}$-grading given by the Morse index $\operatorname{ind}_f$. Choose a Riemannian metric on $\Sigma$ such that $\nabla\!f$ is a Morse-Smale vector field. 
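Before continuing with the Morse complex, we note that once the counts $n(y,z)$ are determined, the passage from $({CF_*}(\phi),{\partial}_J)$ to ${HF_*}(\phi)$ is plain linear algebra over ${\mathbb{Z}}_2$: $\dim{HF_*}(\phi)=\dim\ker{\partial}_J-\dim\operatorname{im}{\partial}_J$. The following sketch illustrates only this last step; the boundary matrix is an invented toy example, not a Floer-theoretic computation.

```python
# Hedged sketch (our own illustration, not from the text): once the counts
# n(y, z) are known, passing from (CF_*(phi), d_J) to HF_*(phi) is linear
# algebra over the field Z_2.  The boundary matrix below is an invented toy
# example, not a Floer-theoretic computation.

def rank_gf2(rows):
    """Rank of a 0/1 matrix over Z_2, by Gaussian elimination."""
    rows = [list(r) for r in rows]
    if not rows:
        return 0
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def matmul_gf2(a, b):
    """Product of square 0/1 matrices over Z_2."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

def floer_dimension(boundary):
    """dim ker d - dim im d for a differential d with d o d = 0 over Z_2.

    `boundary` is the matrix of d_J : CF -> CF in the basis Fix(phi)."""
    n = len(boundary)
    assert rank_gf2(matmul_gf2(boundary, boundary)) == 0, "d o d must vanish"
    r = rank_gf2(boundary)
    return (n - r) - r  # dim ker - dim im

# Toy differential on four generators: d(y_1) = y_2, all other generators closed.
d = [[0, 0, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
print(floer_dimension(d))  # prints 2
```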
If $x^\pm\in\operatorname{Crit}(f)$ and $\operatorname{ind}_f(x^+)=\operatorname{ind}_f(x^-)+1$, denote by $l(x^-,x^+)\in{\mathbb{Z}}_2$ the number mod 2 of positive gradient lines going from $x^-$ to $x^+$. Define the Morse coboundary operator $$\delta_{\nabla\!f}:{CM^*}(f){\longrightarrow}CM^{*+1}(f)\quad\text{by}\quad \operatorname{Crit}(f)\ni x {\longmapsto}\sum_y l(x,y)y.$$ The cohomology of $({CM^*}(f),\delta_{\nabla\!f})$ is naturally isomorphic to $H^*(\Sigma;{\mathbb{Z}}_2)$, see [@Sc]. Now by a suitable choice of the function $f$ or the metric, we may assume that for all $y^\pm\in\operatorname{Fix}(\phi),x\in\operatorname{Crit}(f)$ and $k\in{\mathbb{Z}}$, the evaluation map $$\eta_k:{\mathcal{M}}_k(y^-,y^+;J,\phi){\longrightarrow}\Sigma,\quad u{\longmapsto}u(0,0),$$ is transverse to the unstable manifold $W^u(\nabla\!f,x)\subset\Sigma$. Note that the dimension of $W^u(\nabla\!f,x)$ is $2-\operatorname{ind}_f(x)$. Hence, if $k=\operatorname{ind}_f(x)$, then $\eta_k^{-1}(W^u(\nabla\!f,x))$ is a discrete set of points. It is in fact a finite set, see [@S2 page 8]. To prove this one uses the Gromov-Floer compactification of the moduli spaces and the fact that $\pi_2(\Sigma)=0$. Denote by $q(x;y^-,y^+)\in{\mathbb{Z}}_2$ the cardinality mod 2 of $\eta_k^{-1}(W^u(\nabla\!f,x))\subset{\mathcal{M}}_k(y^-,y^+;J,\phi)$, where $k=\operatorname{ind}_f(x)$. Define the linear map $$\label{eq:qca} {CM^*}(f)\otimes{CF_*}(\phi){\longrightarrow}{CF_*}(\phi),\quad x\otimes y{\longmapsto}\sum_{z}q(x;y,z)z.$$ It can be shown that this is a chain map and that the induced map on homology is independent of $\nabla\!f$ and $J$. It is called the quantum cap product. For details we refer to [@S2] and the references given therein. If $\phi$ has degenerate fixed points, one needs to perturb equations (\[eq:corbit\]) in order to define the Floer homology. Equivalently, one could say that the action form needs to be perturbed. At this point Seidel’s approach differs from the usual one.
He uses a larger class of perturbations, but such that the perturbed action form is still cohomologous to the unperturbed one. As a consequence, the usual invariance of Floer homology under Hamiltonian isotopies is extended to the stronger property stated above. This ends the general discussion of Floer homology. To compute the Floer homology of a mapping class one needs to pick a monotone representative. We now give two criteria for monotonicity which we use later on. Let $\omega$ be an area form on $\Sigma$ and $\phi\in\operatorname{Symp}(\Sigma,\omega)$. \[lemma:monotone1\] Assume that every class $\alpha\in\ker(\operatorname{id}-\phi_*)\subset H_{1}(\Sigma;{\mathbb{Z}})$ is represented by a map $\gamma:S{\rightarrow}\operatorname{Fix}(\phi)$, where $S$ is a compact oriented 1-manifold. Then $\phi$ is monotone. By dualizing the exact sequence (\[eq:cohomology\]), we get the following exact sequence for homology with real coefficients $$\label{eq:homology} 0{\longrightarrow}H_2(\Sigma;{\mathbb{R}})\stackrel{\iota_*}{{\longrightarrow}} H_2({T_{\phi}};{\mathbb{R}})\stackrel{\hat{{\partial}}}{{\longrightarrow}} \ker(\operatorname{id}-\phi_*)\subset H_1(\Sigma;{\mathbb{R}}),$$ where $\hat{{\partial}}$ is dual to $\delta$. Hence, $\phi$ is monotone if and only if $$\langle m(\phi),\alpha\rangle=0, \quad \forall\;\alpha\in\ker(\operatorname{id}-\phi_*)\subset H_{1}(\Sigma;{\mathbb{R}}).$$ If we think of $H_1(\Sigma;{\mathbb{Z}})$ as a lattice in $H_1(\Sigma;{\mathbb{R}})$, it is furthermore enough to consider $\alpha\in H_1(\Sigma;{\mathbb{Z}})\cap\ker(\operatorname{id}-\phi_*)$.\ Let $\gamma:S{\rightarrow}\operatorname{Fix}(\phi)$ and define $u:S\times S^1{\rightarrow}{T_{\phi}}$ by $u(s,t)=(t,\gamma(s))$. From the definition of $\delta$ above, it is straightforward to check that $$\langle\delta\alpha,[u]\rangle = \langle\alpha,[\gamma]\rangle,$$ for all $\alpha\in H^1(\Sigma;{\mathbb{R}})$, i.e. that $\hat{{\partial}}[u]=[\gamma]$. Here, the brackets denote homology classes.
Now on the one hand, since ${\partial}_t u(s,t)=(1,0)$, we have that $u^*{\omega_{\phi}}=0$ and hence $\langle[{\omega_{\phi}}],[u]\rangle=0$. On the other hand, $\langle c_\phi,[u]\rangle=0$. This is because the bundle $u^*V_\phi$ is isomorphic to the bundle $\gamma^*T\Sigma\times S^1$, which is trivial. Hence, it follows that $$\langle m(\phi),[\gamma]\rangle = \langle[{\omega_{\phi}}],[u]\rangle- (\operatorname{area}_\omega(\Sigma)/\chi(\Sigma))\langle c_\phi,[u]\rangle = 0.$$ This proves the lemma. \[lemma:monotone2\] If $\phi^k$ is monotone for some $k>0$, then $\phi$ is monotone. If $\phi$ is monotone, then $\phi^k$ is monotone for all $k>0$. Recall that ${T_{\phi}}$ is the orbit space of the ${\mathbb{Z}}$-action $n\cdot(t,x) = (t+n,\phi^{-n}(x))$, where $n\in{\mathbb{Z}}$ and $(t,x)\in{\mathbb{R}}\times\Sigma$. If we only divide out by the subgroup $k{\mathbb{Z}}$, for $k\in{\mathbb{N}}_{>0}$, we naturally get the mapping torus of $\phi^k$. Further dividing by ${\mathbb{Z}}/k{\mathbb{Z}}$ defines the $k$-fold covering map $p_k:T_{\phi^k}{\rightarrow}{T_{\phi}}$. It is straightforward to check that $$p_k^*[\omega_\phi] = [\omega_{\phi^k}] \quad\text{and}\quad p_k^*c_\phi = c_{\phi^k}. \label{eq:iterate}$$ The first equality follows immediately from the definitions. To prove the second, note that $$p_k^*V_\phi = p_k^*\big(({\mathbb{R}}\times T\Sigma)/{\mathbb{Z}}\big) \cong ({\mathbb{R}}\times T\Sigma)/k{\mathbb{Z}}\cong V_{\phi^k},$$ where the ${\mathbb{Z}}$-action on ${\mathbb{R}}\times T\Sigma$ is given by $n\cdot(t,\xi_x)=(t+n,{\mathrm{d}}\phi_x^{-n}\xi_x)$, for $n\in{\mathbb{Z}}$ and $\xi_x\in T_x\Sigma$. The lemma follows from (\[eq:iterate\]) and the fact that $p_k^*$ is injective. To prove injectivity, define the map $a^k_*:H^2(T_{\phi^k};{\mathbb{R}}){\rightarrow}H^2(T_\phi;{\mathbb{R}})$ by averaging differential forms; $a^k_*$ is a left inverse of $p_k^*$, i.e. $a^k_*\circ p_k^*=\operatorname{id}$. This ends the proof of the lemma.
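Spelled out (in our notation: $g_0,\dots,g_{k-1}$ denote the deck transformations of the covering $p_k$, and the deck-invariant average descends to ${T_{\phi}}$), the averaging map and the identity $a^k_*\circ p_k^*=\operatorname{id}$ read:

```latex
% Averaging over the deck transformations g_j of p_k : T_{\phi^k} \to T_\phi
a^k_*[\beta] \;=\; \frac{1}{k}\sum_{j=0}^{k-1}\big[(g_j)^*\beta\big],
\qquad
a^k_*\,p_k^*[\alpha]
  \;=\; \frac{1}{k}\sum_{j=0}^{k-1}\big[(g_j)^*p_k^*\alpha\big]
  \;=\; [\alpha],
```

since $p_k\circ g_j=p_k$ for every $j$; in particular $p_k^*$ is injective.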
Diffeomorphisms of finite type {#se:diff} ============================== We begin with the basic definition. Note that $S^1$ is always identified with ${\mathbb{R}}/{\mathbb{Z}}$. \[def:ftype\] We call $\phi\in\operatorname{Diff}^+(\Sigma)$ of [**finite type**]{} if the following holds. There is a $\phi$-invariant finite union $N\subset\Sigma$ of disjoint non-contractible annuli such that:\ (1) $\phi|\Sigma\setminus N$ is periodic, i.e. there exists $\ell>0$ such that $\phi^\ell|\Sigma\setminus N=\operatorname{id}$.\ (2) Let $N'$ be a connected component of $N$ and $\ell'>0$ be the smallest integer such that $\phi^{\ell'}$ maps $N'$ to itself. Then $\phi^{\ell'}|N'$ is given by one of the following two models with respect to some coordinates $(q,p)\in{[0,1]}\times S^1$:\ (twist map) $(q,p){\longmapsto}(q,p-f(q))$ \ (flip-twist map) $(q,p){\longmapsto}(1-q,-p-f(q))$, \ where $f:{[0,1]}{\rightarrow}{\mathbb{R}}$ is smooth and strictly monotone. A twist map is called [**positive/negative**]{} if $f$ is increasing/decreasing.\ (3) Let $N'$ and $\ell'$ be as in (2). If $\ell'=1$ and $\phi|N'$ is a twist map, then $\operatorname{im}(f)\subset[0,1]$, i.e. $\phi|\text{int}(N')$ has no fixed points.\ (4) If two connected components of $N$ are homotopic, then the corresponding local models of $\phi$ are either both positive or both negative twists. \(i) Let $\phi$ be a diffeomorphism of finite type and $\ell$ be as in (1). Then $\phi^\ell$ is the product of (multiple) [**Dehn twists**]{} “along $N$”. Moreover, two parallel Dehn twists have the same sign, by (4). We say that $\phi$ has [**uniform twists**]{}, if $\phi^\ell$ is the product of only positive, or only negative Dehn twists.\ (ii) A mapping class of $\Sigma$ is called [**algebraically finite**]{} if it does not have any pseudo-Anosov components in the sense of Thurston’s theory of surface diffeomorphisms. Every such class is represented by a diffeomorphism of finite type.
To see this, recall Thurston’s classification theorem, [@Th Theorem 4]: for every mapping class of $\Sigma$, there exists a diffeomorphism $\phi$ representing the class and a $\phi$-invariant finite union $C\subset\Sigma$ of non-contractible disjoint circles such that:\ $(1')$ The components of $C$ are pairwise non-homotopic,\ $(2')$ If $\Sigma'$ is a $\phi$-invariant union of connected components of $\Sigma\setminus C$, then $\phi|\Sigma'$ is isotopic to either a periodic or a pseudo-Anosov map.\ The set $C$ is called a reducing set. Starting with a mapping class without pseudo-Anosov components, one first chooses a minimal reducing set $C$, meaning that it has the minimal number of components of all reducing sets. Minimality guarantees that after isotoping the Nielsen-Thurston representative $\phi$ on the complement of a tubular neighborhood $N$ of $C$ to a periodic map, $\phi|N$ does not have periodic components. One can thus achieve condition (2) above, by isotoping $\phi|N$ relative to ${\partial}N$. If (3) is not satisfied, this is achieved in a last step by introducing further components of $C$, violating $(1')$, but such that (4) still holds.\ (iii) The term algebraically finite goes back to Nielsen [@Ni]. Fried [@Fr] defined the notion of algebraically finite diffeomorphism in any dimension. In two dimensions, these are special representatives of algebraically finite mapping classes. Fried’s definition, however, is adapted to the theory of dynamical systems. For our purpose, a representative which is of the special type defined above is most convenient.\ (iv) The term flip-twist map is taken from [@JG]. The rest of this section is devoted to the study of diffeomorphisms of finite type. The points of interest for Floer homology are: fixed point classes, monotonicity and action. The results we obtain are used in Section \[se:main\] to compute the Floer homology.
From now on, $\phi$ denotes a diffeomorphism of finite type and $N$ the associated $\phi$-invariant union of annuli. By $\Sigma_0$ we denote the union of the components of $\Sigma\setminus\text{int}(N)$, where $\phi$ restricts to the identity. Furthermore, we denote by $\ell$ the smallest positive integer such that $\phi^\ell$ restricts to the identity on $\Sigma\setminus N$. The first proposition describes the set of fixed point classes of $\phi$. It is a special case of a theorem by B. Jiang and J. Guo [@JG], which gives for any mapping class a representative that realizes its Nielsen number. \[prop:fclass\] Each fixed point class of $\phi$ is either a connected component of $\Sigma_0$ or consists of a single fixed point. A fixed point $x$ of the second type satisfies $\det(\operatorname{id}-{\mathrm{d}}\phi_x)>0$. The crucial step in our proof of this proposition is to prove it in the special case of products of disjoint Dehn twists. For this, we refer to Appendix \[ap:dehn\]. First note that if $x\in\operatorname{Fix}(\phi)\cap\text{int}(N)$, then $\phi$ restricted to the component of $N$ containing $x$, is a flip-twist map and $x=(\frac{1}{2},-\frac{1}{2}f(\frac{1}{2}))$ or $(\frac{1}{2},\frac{1}{2}-\frac{1}{2}f(\frac{1}{2}))$. Now let $x\ne y$ be arbitrary fixed points in the same fixed point class. We prove in three steps that $x$ and $y$ are in the same connected component of $\Sigma_0$. [1]{} $x$ and $y$ are in the same component of either $\text{int}(N)$ or $\Sigma\setminus\text{int}(N)$. Note that every connected component of $\Sigma\setminus\text{int}(N)$ is a connected component of $\operatorname{Fix}(\phi^\ell)$. Similarly, if $x\in\text{int}(N)$, then $x$ is contained in a fixed circle of $\phi^\ell$. Such a circle is also a connected component of $\operatorname{Fix}(\phi^\ell)$. By Corollary \[cor:dehn\] in Appendix \[ap:dehn\], however, every connected component of $\operatorname{Fix}(\phi^\ell)$ is a fixed point class of $\phi^\ell$.
Since $x$ and $y$ are in the same fixed point class of $\phi^\ell$, this proves claim 1. Denote by $M$ the connected component of $N$ or $\Sigma\setminus\text{int}(N)$, containing $x$ and $y$. [2]{} $x$ and $y$ are in the same fixed point class of $\phi|M$. By assumption, there exists a map $u:{[0,1]}^2{\rightarrow}\Sigma$ with $$\quad u(0,t) = x,\quad u(1,t) = y \quad\text{and}\quad u(s,1)=\phi(u(s,0)),$$ for all $s,t\in[0,1]$. By Corollary \[cor:dehn\], we can assume that $u(s,0)\in M$, for all $s\in{[0,1]}$. Let $M'$ be the union of $M$ and a tubular neighborhood of ${\partial}M$. We prove that $u$ can be deformed in the interior of ${[0,1]}^2$ such that its image is contained in $M'$. First note that by a small perturbation, we may assume that $u$ is transverse to ${\partial}M'$. Hence $u^{-1}({\partial}M')\subset{[0,1]}^2$ is a 1-dimensional submanifold with boundary ${\partial}u^{-1}({\partial}M')=u^{-1}({\partial}M')\cap{\partial}{[0,1]}^2=\emptyset$. Every component of $u^{-1}({\partial}M')$ is therefore a circle and bounds a disk in ${[0,1]}^2$. The restriction of $u$ to such a disk represents an element of $\pi_2(\Sigma,{\partial}M')$. Since $\pi_2(\Sigma,{\partial}M')=0$, $u$ can be deformed in the interior of ${[0,1]}^2$ to a map $v$ such that the number of components of $v^{-1}({\partial}M')$ is less than that of $u^{-1}({\partial}M')$. It follows inductively that $u$ can be deformed in the interior of ${[0,1]}^2$ to a map $w$ with $w^{-1}({\partial}M')=\emptyset$. This proves claim 2 and we are left with [3]{} Let $\varphi$ be either a flip-twist map or a non-trivial orientation preserving periodic diffeomorphism of a compact connected surface $M$ of Euler characteristic $\leq0$. Then each fixed point class of $\varphi$ consists of a single point. For a flip-twist map, this is checked explicitly by using the model. The other case was first proven in [@J]. We repeat the argument here. First assume that $M$ is closed.
The uniformization theorem states that in every conformal class of metrics on $M$, there is a unique metric of constant curvature $-1$ if $\chi(M)<0$ or 0 if $\chi(M)=0$. This implies that the unique representative of a $\varphi$-invariant conformal class (such a class exists since $\varphi$ has finite order) is itself $\varphi$-invariant. Hence we can pick a $\varphi$-invariant metric of constant curvature $-1$ or 0 on $M$ and lift $\varphi$ to an isometry $\tilde{\varphi}$ of the universal cover $\tilde{M}$ of $M$. $\tilde{M}$ is either isometric to the hyperbolic plane ${\mathbb{H}}^2$ or the Euclidean plane ${\mathbb{R}}^2$. Let $x\in\operatorname{Fix}(\varphi)$ and let $\tilde{\varphi},\tilde{x}$ be lifts of $\varphi,x$ to $\tilde{M}$, such that $\tilde{\varphi}(\tilde{x})=\tilde{x}$. Note that a fixed point of $\varphi$ is in the same class as $x$ if and only if it can be lifted to a fixed point of $\tilde{\varphi}$. Assume by contradiction that $\tilde{y}\ne\tilde{x}$ is a fixed point of $\tilde{\varphi}$. It follows that the unique geodesic going through $\tilde{x}$ and $\tilde{y}$ is pointwise fixed by $\tilde{\varphi}$. In particular, since $\tilde{\varphi}$ preserves orientation, ${\mathrm{d}}\tilde{\varphi}_{\tilde{x}}=\operatorname{id}$. This implies that $\tilde{\varphi}=\operatorname{id}$, because an isometry of ${\mathbb{H}}^2$ or ${\mathbb{R}}^2$ is determined by its value and differential at one point. This proves claim 3 in the case that $M$ is closed. The case ${\partial}M\ne\emptyset$ is reduced to the above case by gluing two copies of $M$ together along a $\varphi$-invariant tubular neighborhood of ${\partial}M$. The glued manifold is closed and of Euler characteristic $\leq0$; $\varphi$ extends to a non-trivial diffeomorphism $\varphi'$, which is orientation preserving and of finite order. Hence, every fixed point class of $\varphi'$ is a single point. The same therefore holds for $\varphi$. This ends the proof of claim 3.
Finally we have [4]{} If $x\in\operatorname{Fix}(\phi)\setminus\Sigma_0$, then $\det(\operatorname{id}-{\mathrm{d}}\phi_x)>0$. The point $x$ is a fixed point of either a flip-twist map or an orientation preserving non-trivial isometry. In the first case, the assertion is checked by using the local model. Similarly, one checks in the second case that $\det(\operatorname{id}-{\mathrm{d}}\phi_x)\leq0$ if and only if ${\mathrm{d}}\phi_x=\operatorname{id}$. As shown in the proof of claim 3, however, ${\mathrm{d}}\phi_x=\operatorname{id}$ implies that $x\in\Sigma_0$, which is a contradiction. The next issue is monotonicity. First note that if $\omega'$ is an area form on $\Sigma$ which is the standard form ${\mathrm{d}}q\wedge{\mathrm{d}}p$ with respect to the $(q,p)$-coordinates on $N$, then $\omega:=\frac{1}{\ell}\sum_{i=1}^\ell(\phi^i)^*\omega'$ is standard on $N$ and $\phi$-invariant, i.e. $\phi\in\operatorname{Symp}(\Sigma,\omega)$. To prove that $\omega$ can be chosen such that $\phi\in\operatorname{Symp}^m(\Sigma,\omega)$, we distinguish two cases: uniform and non-uniform twists. In the first case we have the following stronger statement. \[prop:monotone3\] If $\phi$ has uniform twists and $\omega$ is a $\phi$-invariant area form, then $\phi\in\operatorname{Symp}^m(\Sigma,\omega)$. By Lemma \[lemma:monotone2\], it is enough to prove that $\phi^\ell$ is monotone with respect to any $\phi$-invariant area form. Replace $\phi$ by $\phi^\ell$. By the uniform twist condition, $\phi$ is the product of disjoint Dehn twists which are all, say, positive. We prove that $\phi$ satisfies the hypothesis of Lemma \[lemma:monotone1\] and is therefore monotone. We use the Picard-Lefschetz formula for the action of a positive Dehn twist on $H_1(\Sigma;{\mathbb{Z}})$: if the twist is along $C\subset\Sigma$, then $\alpha\mapsto\alpha-(\alpha\cdot[C])[C]$, where $\alpha\in H_1(\Sigma;{\mathbb{Z}})$ and $[C]$ denotes the homology class of $C$ with respect to some orientation.
The dot stands for the intersection pairing. Let $C_1,\dots,C_n\subset\Sigma$ be the disjoint non-contractible circles along which $\phi$ twists. Choose orientations of the $C_i$. Let $\alpha\in\ker(\operatorname{id}-\phi_*)\subset H_1(\Sigma;{\mathbb{Z}})$. We claim that for all $i=1,\dots,n$, $\alpha\cdot[C_i]=0$. This is equivalent to the condition that $\alpha$ is represented by a map $S{\rightarrow}\operatorname{Fix}(\phi)$, where $S$ is a compact oriented 1-manifold, and therefore ends the proof of the proposition. Since the $C_i$ are pairwise disjoint, it follows from the Picard-Lefschetz formula that $$\alpha = \phi_*\alpha = \alpha - \sum_{i=1}^n (\alpha\cdot[C_i])[C_i]$$ and hence that $\sum_{i=1}^n (\alpha\cdot[C_i])[C_i]=0$. Pairing with $\alpha$, we get $\sum_{i=1}^n (\alpha\cdot[C_i])^2 = 0$, which implies that $\alpha\cdot[C_i]=0$, for all $i=1,\dots,n$. In the non-uniform case, monotonicity is a more subtle point and does not hold for arbitrary $\phi$-invariant area forms. \[prop:monotone4\] If $\phi$ does not have uniform twists, there exists a $\phi$-invariant area form $\omega$ such that $\phi\in\operatorname{Symp}^m(\Sigma,\omega)$. Moreover, $\omega$ can be chosen such that it is the standard form ${\mathrm{d}}q\wedge{\mathrm{d}}p$ on $N$. The strategy of the proof is the following. Assume first that $\phi|\Sigma\setminus N=\operatorname{id}$. We begin by defining a $\phi$-invariant area form $\omega$ with $\operatorname{area}_\omega(\Sigma)=-\chi(\Sigma)$. Then we construct, for every class $[\gamma]\in\ker(\operatorname{id}-\phi_*)$, a class $[\Gamma]\in H_2({T_{\phi}};{\mathbb{Z}})$ such that $$\hat{{\partial}}[\Gamma]=[\gamma] \quad\text{and}\quad \langle[{\omega_{\phi}}],[\Gamma]\rangle=-\langle c_\phi,[\Gamma]\rangle,$$ with $\hat{{\partial}}$ as defined in the sequence (\[eq:homology\]). Finally, we show how the general case is reduced to the case above. We start with the following set-up.
Fix a union $\tilde{N}\subset\Sigma$ of disjoint, non-contractible, and pairwise non-homotopic annuli such that $\phi|\Sigma\setminus{\tilde{N}}=\operatorname{id}$. Moreover, for every connected component $N'$ of $\tilde{N}$, let $\ell'$ be a non-zero integer and $f:[0,1]{\rightarrow}{\mathbb{R}}$ a smooth monotone function with $f(0)=0$, $f(1)=\ell'$, such that $\phi|N'$ is an $\ell'$-fold Dehn twist, given by the model $(q,p)\mapsto(q,p-f(q))$ for $(q,p)\in{[0,1]}\times S^1$. We emphasize that not only the function $f$ but also the local chart of $N'$ is fixed for the rest of the proof. In a first step, we choose for every component $N'$ of ${\tilde{N}}$ an embedded circle $C'\subset N'$ as follows. Consider the set $\text{graph}(-f)\subset{[0,1]}\!\times\![-\ell',0]$ if $\ell'>0$, respectively ${[0,1]}\!\times\![0,-\ell']$ if $\ell'<0$. See the figures below.\
Figure 1: $\text{graph}(-f)$ for a positive twist.\
Figure 2: $\text{graph}(-f)$ for a negative twist.\
For any $a\in(0,1)$, the complement of the union of $\text{graph}(-f)$ and the set $\{q=a\}$ has four components. If $\phi|N'$ is a positive twist, we choose $a$ such that the left upper component (indicated with a $-$ sign in Figure 1) and the right lower component (indicated with a $+$ sign) have the same area with respect to the standard area form on ${[0,1]}\!\times\![-\ell',0]$.
If $\phi|N'$ is a negative twist, "left upper" is replaced by "left lower" and "right lower" by "right upper", and the signs are interchanged. In both cases, we set $C':=\{a\}\times S^1\subset N'$, with orientation induced from $S^1$. The purpose of this construction will become clear below. Let $C$ denote the union of the loops $C'$. Let $\Sigma_1,\dots,\Sigma_m\subset\Sigma$ denote the closures of the connected components of $\Sigma\setminus C$. Since the $C'$ are disjoint, non-contractible, and pairwise non-homotopic, it follows that $\chi(\Sigma_j)<0$ for all $j=1,\dots,m$. Now choose an area form $\omega$ on $\Sigma$ such that $$\label{eq:mono4area}\begin{split} \operatorname{area}_\omega(\Sigma_j) &= -\chi(\Sigma_j) \quad\text{for all }j=1,\dots,m,\\ \omega|{\tilde{N}}&= {\varepsilon}\cdot{\mathrm{d}}q\wedge{\mathrm{d}}p, \end{split}$$ where ${\varepsilon}>0$ is sufficiently small. By the first condition, we have that $\operatorname{area}_\omega(\Sigma)=-\chi(\Sigma)$, and from the second it follows that $\phi^*\omega=\omega$. We now prove in several steps that $[\omega_\phi]=-c_\phi$ in $H^2(T_\phi;{\mathbb{R}})$. Let $S$ be a compact oriented 1-manifold and $\gamma:S{\rightarrow}\Sigma$ an immersion which is transverse to $C$. Moreover, assume that $[\gamma]=[\phi\circ\gamma]$ in $H_1(\Sigma;{\mathbb{Z}})$. The goal is to lift the 1-cycle $\gamma$ to a 2-cycle $\Gamma$ in ${T_{\phi}}$. For this, we first define a 2-chain $A$ in $\Sigma$ which satisfies $${\partial}A=\gamma-\phi\!\circ\!\gamma-\sum_{i=1}^n \ell_i\big([\gamma]\cdot[C_i]\big)C_i,$$ where we think of the right-hand side as a 1-chain. Here, we have introduced a numbering of the components of $C$. The chain $A$ can be described as follows (compare Figures 1 and 2): at every intersection point of $\gamma$ and $C$ where $\gamma$ runs in the positive $q$-direction, there is a local contribution to $A$ given by the regions in Figures 1 and 2 which are labelled by $\pm$.
The sign of the contribution is as indicated in the figure. If $\gamma$ runs in the negative $q$-direction, the signs are interchanged. Note that by our choice of $\omega$, we have that $\int_A\omega=0$. Next, we use that $\gamma$ is homologous to $\phi\circ\gamma$, i.e. that $$\sum_{i=1}^n \ell_i([\gamma]\!\cdot\![C_i])[C_i]=0.$$ This means that there exist integers $k_1,\dots,k_m$ such that $${\partial}\big(\sum_{j=1}^m k_j\Sigma_j\big) = \sum_{i=1}^n \ell_i\big([\gamma]\cdot[C_i]\big)C_i.$$ We can now define the 2-chain $\Gamma$ in ${T_{\phi}}={[0,1]}\times\Sigma/(0,\phi(x))\sim(1,x)$ by $$\Gamma:=-[0,1/2]\times(\phi\circ\gamma) -\{1/2\}\times\big(A+{\textstyle\sum_{j=1}^m k_j\Sigma_j}\big) -[1/2,1]\times\gamma.$$ By construction, $\Gamma$ is a cycle; indeed $$\begin{split} {\partial}\Gamma &= \{0\}\times(\phi\circ\gamma) - \{1/2\}\times(\phi\circ\gamma) - \{1/2\}\times{\partial}A \\ &\quad {}- \{1/2\}\times{\partial}(\textstyle{\sum_{j=1}^m}k_j\Sigma_j) +\{1/2\}\times\gamma - \{1\}\times\gamma \\ &= 0. \end{split}$$ By a calculation similar to the one in the proof of Lemma \[lemma:monotone1\], it furthermore follows that $\hat{{\partial}}[\Gamma]=[\gamma]$. Claim 1. $\langle[{\omega_{\phi}}],[\Gamma]\rangle=-\sum_{j=1}^m k_j\operatorname{area}_\omega(\Sigma_j)$. Only the middle summand of $\Gamma$ contributes to $\langle[{\omega_{\phi}}],[\Gamma]\rangle$. Since $A$ has vanishing $\omega$-area, this already proves claim 1. Claim 2. $\langle c_\phi,[\Gamma]\rangle=-\sum_{j=1}^m k_j\chi(\Sigma_j)$. To prove this, we use the following property of the Euler class. If a smooth section $s:T_\phi{\rightarrow}V_\phi$ is transverse to the zero-section, then $s^{-1}(0)\subset T_\phi$ is a submanifold of codimension 2 and its homology class, with respect to a suitable orientation, is Poincaré-dual to $c_\phi$. In particular, $\langle c_\phi,[u]\rangle$ equals the intersection number $[s^{-1}(0)]\cdot[u]$, for any $[u]\in H_2(T_{\phi};{\mathbb{Z}})$.
The orientation of $s^{-1}(0)$ at a point $x$ is defined as follows. Let $\{e_1,e_2,e_3\}$ be an oriented basis of $T_xT_\phi$ such that $e_1$ is tangent to $s^{-1}(0)$. Then $e_1$ is declared positively oriented if $\{e_1,e_2,e_3,{\mathrm{d}}s_xe_2,{\mathrm{d}}s_xe_3\}$ is an oriented basis of $$T_{(x,0)}V_\phi\cong {\mathbb{R}}\oplus T_x\Sigma\oplus T_x\Sigma.$$ We now define a smooth section of $V_\phi$. To begin with, we choose a vector field $\xi$ on $\Sigma$ with only non-degenerate zeros and such that $\xi|{\tilde{N}}={\partial}/{\partial}q$. Furthermore, we require that $\xi^{-1}(0)$ is disjoint from $\operatorname{im}(\gamma)$. That such a vector field exists is a standard result in differential topology. By the Poincaré-Hopf Theorem, the sum of the indices of $\xi$ over all zeros in $\Sigma_j$ equals $\chi(\Sigma_j)$, for all $j=1,\dots,m$.\
Note that the vector field $\phi^*\xi-\xi$ is supported in ${\tilde{N}}$, where it is given by $(0,-f'(q))$ with respect to the local model. Hence, there exists a smooth path $(\xi_t)_{t\in{\mathbb{R}}}$ of vector fields such that $\xi_{t+1}=\phi^*\xi_t$, $\xi_t=\xi$ on $\Sigma\setminus{\tilde{N}}$ and $\xi_t^{-1}(0)=\xi^{-1}(0)$. Let the section $s:T_\phi{\rightarrow}V_\phi$ be defined by $s([t,x]):=[t,\xi_t(x)]$; recall that $$V_\phi = {\mathbb{R}}\times T\Sigma/(t+1,\xi_x)\sim(t,{\mathrm{d}}\phi_x\xi_x).$$ By our choice of the vector field $\xi$, $s$ is transverse to the zero-section and thus $[s^{-1}(0)]$ is Poincaré-dual to $c_\phi$. Moreover, $$\begin{aligned} [s^{-1}(0)]\cdot[\Gamma] &=& -\sum_{j=1}^m k_j \big(s^{-1}(0)\cdot\Sigma_j\big) \\ &=& -\sum_{j=1}^m k_j\cdot\chi(\Sigma_j).\end{aligned}$$ The numbers $s^{-1}(0)\cdot\Sigma_j$ are well defined because $s^{-1}(0)$ intersects $\Sigma_j$ transversally and in the interior of $\Sigma_j$. Note that the sign of an intersection point equals the index of $\xi$ at that point. This proves claim 2.
From claims 1 and 2 and the first condition imposed on $\omega$ above, we conclude that $$\langle[{\omega_{\phi}}],[\Gamma]\rangle=-\langle c_\phi,[\Gamma]\rangle,$$ and hence that $\phi\in\operatorname{Symp}^m(\Sigma,\omega)$. We end the proof of the proposition with the following observation. Let $\phi$ be a diffeomorphism of finite type; apply the above construction to $\phi^\ell$ and let $\omega$ be an area form which satisfies the conditions above. It follows that $$\omega':=\frac{1}{\ell}\sum_{i=1}^\ell(\phi^i)^*\omega$$ also satisfies them, and hence that $\phi^\ell\in\operatorname{Symp}^m(\Sigma,\omega')$. On the other hand, $\phi^*\omega'=\omega'$, and therefore $\phi\in\operatorname{Symp}^m(\Sigma,\omega')$, by Lemma \[lemma:monotone2\]. To summarize, the idea of the proof is the following. As above, we first replace $\phi$ by $\phi^\ell$. Then we construct a $\phi$-invariant area form $\omega$ with $\operatorname{area}_\omega(\Sigma)=-\chi(\Sigma)$. The main step is the following: for every integral class $[\gamma]\in\ker(\operatorname{id}-\phi_*)$ we construct a class $[\Gamma]\in H_2({T_{\phi}};{\mathbb{Z}})$ such that $$\hat{{\partial}}[\Gamma]=[\gamma] \quad\text{and}\quad \langle[{\omega_{\phi}}],[\Gamma]\rangle=-\langle c_\phi,[\Gamma]\rangle.$$ This proves the proposition for $\phi$. Finally, we show that $\omega$ can be chosen such that it is invariant under the original $\phi$. By Lemma \[lemma:monotone2\] this proves the proposition. Next we consider the symplectic action $\alpha_\omega$ on the twisted loop space ${\Omega_{\phi}}$ and prove that it is exact; see the earlier definition of $\alpha_\omega$. This result is crucial for the computation of the Floer homology, in particular for the use of the connecting orbits proposition proved in the next section. We need the following lemma, which holds for any $\phi\in\operatorname{Symp}(\Sigma,\omega)$.
First note that a loop in ${\Omega_{\phi}}$ is represented by a map $u:S^1\times{[0,1]}{\rightarrow}\Sigma$ with $$u(s,t)=\phi(u(s,t+1)) \quad\text{for all }(s,t)\in S^1\times{[0,1]}.$$ By $[u]$ we denote the homology class of the loop $u$ in ${\Omega_{\phi}}$. \[lemma:mu\] Let $u$ and $v$ be two loops in ${\Omega_{\phi}}$. If $u(.,0)$ and $v(.,0)$ are freely homotopic loops in $\Sigma$, then $\langle[\alpha_\omega],[u]\rangle=\langle[\alpha_\omega],[v]\rangle$. Let $w:S^1\times{[0,1]}{\rightarrow}\Sigma$ be a free homotopy with $w(.,0)=u(.,0)$ and $w(.,1)=v(.,0)$. Define the map $u':=w^{-1}\#u\#(\phi^{-1}\circ w):S^1\times{[0,1]}{\rightarrow}\Sigma$ by $$u'(s,t)=\begin{cases} w(s,1-3t) & \text{if $t\in[0,1/3]$} \\ u(s,3t-1) & \text{if $t\in[1/3,2/3]$} \\ \phi^{-1}(w(s,3t-2)) & \text{if $t\in[2/3,1]$}. \end{cases}$$ Since for all $t\in[0,1/3],s\in S^1$, $u'(s,1-t)=\phi^{-1}(w(s,1-3t))=\phi^{-1}(u'(s,t))$, it follows that $u'$ and $u$ are homotopic loops in ${\Omega_{\phi}}$; in particular $[u]=[u']$. Note that since $u'(s,0)=v(s,0)$ and $u'(s,1)=v(s,1)$ for all $s\in S^1$, the map $u'\#v^{-1}$ descends to a map $S^1\times S^1{\rightarrow}\Sigma$. Now, for an arbitrary loop $z$ in ${\Omega_{\phi}}$, we have $$\langle[\alpha_\omega],[z]\rangle=-\int_{S^1\times{[0,1]}}z^*\omega.$$ Therefore $$\langle[\alpha_\omega],[u']\rangle-\langle[\alpha_\omega],[v]\rangle = -\int_{S^1\times S^1}(u'\#v^{-1})^*\omega = 0.$$ In the last equality we use the fact that the mapping degree of $u'\#v^{-1}$ vanishes; indeed, any map from the torus to a surface of genus $\geq2$ has degree zero. This proves the lemma. We return to the situation where $\phi$ is a diffeomorphism of finite type. The proof of the following proposition relies on the results discussed in Appendix \[ap:dehn\]. \[prop:action\] If $\omega$ is a $\phi$-invariant area form, then $\alpha_\omega$ has vanishing periods.
For any loop $u$ in ${\Omega_{\phi}}$, define $v:S^1\times[0,\ell]{\rightarrow}\Sigma$ by $$v(s,t) = \phi^{-j}(u(s,t-j)) \quad\text{for}\quad (s,t)\in S^1\times[j,j+1],\;j<\ell.$$ Since $v(s,0)=\phi^\ell(v(s,\ell))$ for all $s\in S^1$, $v$ can be considered as a loop in $\Omega_{\phi^\ell}$. Note that $$-\int_{S^1\times[0,\ell]}v^*\omega = \ell\cdot\langle[\alpha_\omega],[u]\rangle.$$ Now observe that $v(.,0)$ is freely homotopic to $\phi^{-1}(v(.,0))=v(.,1)$. By Corollary \[cor:dehn\] in Appendix \[ap:dehn\], we therefore know that $v(.,0)$ is freely homotopic to a loop $\gamma:S^1{\rightarrow}\operatorname{Fix}(\phi^\ell)$. Hence, it follows from the previous lemma, with $\phi$ replaced by $\phi^\ell$, that $$\int_{S^1\times[0,\ell]}v^*\omega \;=\; \int_{S^1\times[0,\ell]}\tilde{\gamma}^*\omega \;=\; 0,$$ where $\tilde{\gamma}(s,t):=\gamma(s)$ denotes the constant extension of $\gamma$; the last integral vanishes because $\tilde{\gamma}$ is independent of $t$. Therefore $\langle[\alpha_\omega],[u]\rangle=0$, which ends the proof of the proposition. Connecting orbits {#se:corbits} ================= The main result of this section is a separation mechanism for Floer connecting orbits. Together with the topological separation of fixed points discussed in Proposition \[prop:fclass\], it allows us to compute the Floer homology of diffeomorphisms of finite type. We expect, however, that the results of this section are applicable to a larger class of surface diffeomorphisms. We start by introducing the setup. The notation is reminiscent of the situation encountered in Section \[se:diff\]. However, it is only in the next section that we return our attention to diffeomorphisms of finite type. Let $\Sigma_0\subset\Sigma$ be a compact submanifold, not necessarily connected. Let $N_0\subset\Sigma_0$ be a collar neighborhood of ${\partial}\Sigma_0$. On every connected component of $N_0$, we choose coordinates $(q,p)\in[0,1]\times S^1$ such that ${\partial}\Sigma_0\cong\bigcup\{1\}\times S^1$. Let $\omega$ be an area form on $\Sigma$ which is given by ${\mathrm{d}}q\wedge{\mathrm{d}}p$ on $N_0$.
Let $\Phi\in\operatorname{Symp}(\Sigma,\omega)$, $H:\Sigma{\rightarrow}{\mathbb{R}}$ a smooth function and $J=(J_t)_{t\in{\mathbb{R}}}$ a path of complex structures such that the following holds:\
(H1) $\Sigma_0$ is $\Phi$-invariant. Moreover, $\Phi(x)=\psi_1(x)$ for all $x\in\Sigma_0$, where $(\psi_t)_{t\in{\mathbb{R}}}$ denotes the Hamiltonian flow generated by $H$. This means that ${\partial}_t\psi_t=X\circ\psi_t$, where the vector field $X$ is defined by ${\mathrm{d}}H=\omega(X,\cdot)$.\
(H2) There exists a constant $0<\delta<1/4$ such that on each connected component of $N_0$, we have $\Phi(q,p)=(q,p\mp\delta)$. The sign may depend on the component.\
(H3) $\operatorname{Fix}(\Phi)\cap\Sigma_0=\operatorname{Crit}(H)\cap\Sigma_0$.\
(H4) If $\omega'$ is a $\Phi$-invariant area form such that $\omega'=\omega$ on $\Sigma\setminus N_0$, then $\alpha_{\omega'}$ has vanishing periods on $\Omega_\Phi$.\
(H5) For all $t\in{\mathbb{R}}$, $J_t$ is an $\omega$-compatible complex structure which restricts to the standard complex structure on $N_0$ with respect to the $(q,p)$-coordinates. Moreover, $J_{t+1}=\Phi^*J_t$.\
Assuming (H1)–(H5), we prove the following. \[prop:corbits\] Let $x^-,x^+\in\operatorname{Fix}(\Phi)\cap\Sigma_0$ be in the same connected component of $\Sigma_0$. If $u\in{\mathcal{M}}(x^-,x^+;J,\Phi)$, then $\operatorname{im}u \subset \Sigma_\delta$, where $\Sigma_\delta$ denotes the $\delta$-neighborhood of $\Sigma_0\setminus N_0$ with respect to any of the metrics $\omega(.,J_t.)$. To prove this proposition we vary the symplectic form. Fix ${\varepsilon}>0$ sufficiently small and set $N_{\varepsilon}:=\bigcup\,[{\varepsilon},1-{\varepsilon}]\times S^1\subset N_0$. For every $R>0$, let $\lambda_R:\Sigma{\rightarrow}{\mathbb{R}}_{>0}$ be a $\Phi$-invariant smooth function such that $$\lambda_R\equiv\begin{cases} R & \text{on $N_{\varepsilon}$}, \\ 1 & \text{on $\Sigma\setminus N_0$}.
\end{cases}$$ Set $$\omega_R:=\lambda_R^2\cdot\omega \quad\text{and}\quad g_{R,t}:=\omega_R(\cdot,J_t\cdot).$$ Note that $\omega_R$ is $\Phi$-invariant and $\omega_R=\omega$ on $\Sigma\setminus N_0$. By (H4), we can define an action functional ${\mathcal{A}}_R:\Omega_\Phi{\rightarrow}{\mathbb{R}}$ such that $$\begin{aligned} {\mathcal{A}}_R(y')-{\mathcal{A}}_R(y) &=& \int_0^1\alpha_{\omega_R}(u(s,\cdot)){\partial}_su(s,\cdot)\,{\mathrm{d}}s \\ &=& \int_0^1\int_0^1\omega_R({\partial}_tu(s,t),{\partial}_su(s,t))\,{\mathrm{d}}t{\mathrm{d}}s\end{aligned}$$ for all $y,y'\in\Omega_\Phi$ and $u:{[0,1]}\times{\mathbb{R}}{\rightarrow}\Sigma$ with $u(s,t)=\Phi(u(s,1+t))$ and $u(0,\cdot)=y,u(1,\cdot)=y'$. The following observation is crucial for the proof of the proposition. \[le:ar\] Let $x^-,x^+\in\operatorname{Fix}(\Phi)\cap\Sigma_0$ be in the same connected component of $\Sigma_0$. Then $${\mathcal{A}}_R(x^+) - {\mathcal{A}}_R(x^-) = H(x^-)- H(x^+),$$ for every $R>0$. Choose a path $\gamma:{[0,1]}{\rightarrow}\Sigma_0\setminus N_0$ from $x^-$ to $x^+$ and define $h:{[0,1]}\times{\mathbb{R}}{\rightarrow}\Sigma$ by $$h(s,t):=\psi_{-t}(\gamma(s)).$$ By (H1), we have $h(s,t)=\Phi(h(s,t+1))$ and furthermore, by (H3), $h(0,.)=x^-,h(1,.)=x^+$. Hence, $$\begin{aligned} {\mathcal{A}}_R(x^+)-{\mathcal{A}}_R(x^-) &=& \int_0^1\!\int_0^1\!\omega_R\big({\partial}_t h(s,t),{\partial}_s h(s,t)\big)\,{\mathrm{d}}s{\mathrm{d}}t \\ &=& -\int_0^1\!\Big(\int_0^1\!{\mathrm{d}}H\big(h(s,t)\big){\partial}_sh(s,t)\,{\mathrm{d}}s\Big){\mathrm{d}}t \\ &=& \int_0^1\!\big(H(h(0,t))-H(h(1,t))\big){\mathrm{d}}t \\ &=& H(x^-) - H(x^+).\end{aligned}$$ In the second line we use that $$\omega_R(\,.\,,{\partial}_t h(s,t))=\omega(\,.\,,{\partial}_t h(s,t))={\mathrm{d}}H(h(s,t)),$$ since $\operatorname{im}(h)\subset\Sigma_0\setminus N_0$. Let $u\in{\mathcal{M}}(x^-,x^+;J,\Phi)$, i.e. 
$u:{\mathbb{R}}^2{\rightarrow}\Sigma$ is a smooth function satisfying $$\label{eq:floer} \left\{\begin{array}{l} u(s,t) = \Phi(u(s,t+1)), \\ {\partial}_s u + J_t(u){\partial}_t u = 0, \\ \lim_{s{\rightarrow}\pm\infty}u(s,t) = x^\pm \end{array}\right.$$ and $$\int_{{\mathbb{R}}}\int_0^1 \omega\big({\partial}_tu(s,t),J_t{\partial}_tu(s,t)\big)\,{\mathrm{d}}t{\mathrm{d}}s <\infty.$$ We prove that $$\operatorname{im}(u)\cap N'_{\varepsilon}=\emptyset,$$ where $N'_{{\varepsilon}}=\bigcup\,({\varepsilon}+\delta,1/2]\times S^1\subset N_0$. Assume, for contradiction, that there exist $s_1<s_2$ and $t'$ such that $u(s,t')\in N'_{\varepsilon}$ for all $s\in[s_1,s_2]$.\
Denote by $g$ the standard Euclidean metric on $N_0$. Note that if $x\in N'_{\varepsilon}$, then $B_g(x,\delta)\subset N_{\varepsilon}$. Here $B_g(x,\delta)$ denotes the $g$-disk of radius $\delta$ around $x$. Moreover, if $x=(q,p)$, then $\Phi^{-1}(x)=(q,p\pm\delta)\in{\partial}B_g(x,\delta)$.\
Hence, it follows from the first of the equations above that $u(s,t'+1)\in{\partial}B_g(u(s,t'),\delta)$ for all $s\in[s_1,s_2]$. Fix $s\in[s_1,s_2]$ and let $r\in(0,1]$ be such that $$u(\{s\}\times[t',t'+r])\subset B_g(u(s,t'),\delta),\quad u(s,t'+r)\in{\partial}B_g(u(s,t'),\delta).$$ This implies that $$\delta \leq \int_{t'}^{t'+r} \,|{\partial}_tu(s,t)|_g \,{\mathrm{d}}t.$$ By our choice of $J_t$, we know that $g_{R,t}=R^2g$ on $N_{\varepsilon}$, and hence that $R\cdot|{\partial}_tu(s,t)|_g=|{\partial}_tu(s,t)|_{g_{R,t}}$ for all $t\in[t',t'+r]$. Therefore, $$\begin{split} R\cdot\delta &\leq \int_{t'}^{t'+r} \,|{\partial}_tu(s,t)|_{g_{R,t}} \,{\mathrm{d}}t \\ &\leq \int_{t'}^{t'+1} \,|{\partial}_tu(s,t)|_{g_{R,t}} \,{\mathrm{d}}t \,=\, \int_0^1 \,|{\partial}_tu(s,t)|_{g_{R,t}}\,{\mathrm{d}}t. \end{split}$$ In the last step we use that $|{\partial}_tu(s,t)|_{g_{R,t}}$ is a 1-periodic function in $t$.
By Hölder’s inequality, we get that $$R^2\cdot\delta^2 \leq \int_0^1 \,|{\partial}_tu(s,t)|_{g_{R,t}}^2 \,{\mathrm{d}}t,$$ for all $R>0$ and $s\in[s_1,s_2]$. Now integrate over $[s_1,s_2]$: $$\label{eq:ineq} \begin{split} R^2\cdot\delta^2\cdot(s_2-s_1) &\leq \int_{s_1}^{s_2}\!\int_0^1\!|{\partial}_tu(s,t)|_{g_{R,t}}^2\,{\mathrm{d}}s{\mathrm{d}}t \\ &\leq \int_{\mathbb{R}}\int_0^1\!|{\partial}_tu(s,t)|_{g_{R,t}}^2\,{\mathrm{d}}s{\mathrm{d}}t. \end{split}$$ Finally, we use the energy identity $$|{\partial}_tu(s,t)|_{g_{R,t}}^2=\omega_R({\partial}_su(s,t),{\partial}_tu(s,t)),$$ which follows from the second of the equations above. Note that since $u$ has finite energy with respect to $\omega_R$, the energy identity implies that $$\int_{\mathbb{R}}\int_0^1\!|{\partial}_tu(s,t)|_{g_{R,t}}^2\,{\mathrm{d}}s{\mathrm{d}}t = {\mathcal{A}}_R(x^-)-{\mathcal{A}}_R(x^+).$$ Combining this with Lemma \[le:ar\], we therefore get $$R^2\cdot\delta^2\cdot(s_2-s_1) \leq H(x^+)-H(x^-),$$ for all $R>0$. For large $R$ this is a contradiction and proves that $\operatorname{im}(u)$ is disjoint from $N'_{\varepsilon}$. Since ${\varepsilon}$ can be chosen arbitrarily small by an appropriate choice of the functions $\lambda_R$, $\operatorname{im}(u)$ is disjoint from $\bigcup\,(\delta,1/2]\times S^1$. This proves the proposition. The advantage of our approach to the connecting orbits proposition, compared to Seidel’s original approach in [@S1], is that we do not have to make the function $H$ depend on $R$. Moreover, the bubbling argument disappears completely. This is relevant in the case where $\Phi$ twists with different signs at different “ends” of $\Sigma_0$. In that case, when the ends are stretched as in [@S1 Lemma 4], the energy difference of certain fixed points may go to infinity and the bubbling argument fails. Floer homology of finite type diffeomorphisms {#se:main} ============================================= In this section we prove Theorem \[thm:main1\].
We return to the notation of Section \[se:diff\], i.e. $\phi$ is a diffeomorphism of finite type and $\Sigma_0$ denotes the union of the connected components of $\Sigma\setminus\text{int}(N)$ on which $\phi$ restricts to the identity.\
(Monotonicity) By Propositions \[prop:monotone3\] and \[prop:monotone4\] we can choose an area form $\omega$ such that $\phi\in\operatorname{Symp}^m(\Sigma,\omega)$. Note that every Hamiltonian perturbation of $\phi$ is also in $\operatorname{Symp}^m(\Sigma,\omega)$. We impose an additional condition on $\omega$ in the next paragraph.\
(Hamiltonian perturbation) As a preparation, let $f_1,f_2:[0,3]{\rightarrow}{\mathbb{R}}$ be two functions which are constant on $[0,1]$ and such that $f_1(q)=f_2(q)$ for all $q\in[2,3]$. Define the function $$h(q) =\int_q^3\big(f_1(r)-f_2(r)\big){\mathrm{d}}r,$$ for $q\in[0,3]$. It follows that $$h(q)=\begin{cases} \delta\cdot q + c & \text{if $q\in[0,1]$}, \\ 0 & \text{if $q\in[2,3]$}, \end{cases}$$ where $\delta=f_2(0)-f_1(0)$ and $c=\int_0^3(f_1(r)-f_2(r)){\mathrm{d}}r$. Now consider $h(q)$ as a function on $[0,3]\times S^1$ with coordinates $(q,p)$. The Hamiltonian vector field of $h$ with respect to ${\mathrm{d}}q\wedge{\mathrm{d}}p$ is simply $(f_1(q)-f_2(q))\cdot{\partial}/{\partial}p$ at the point $(q,p)$. The time-1 map of the flow is thus the twist map $(q,p)\mapsto(q,p+f_1(q)-f_2(q))$. In particular, $(q,p)\mapsto(q,p-\delta)$ if $q\in[0,1]$. This has the following application. Let ${\tilde{N}}\subset\Sigma$ be a $\phi$-invariant closed tubular neighborhood of ${\partial}\Sigma_0$. On every connected component of ${\tilde{N}}$, we choose coordinates $(q,p)\in[0,3]\times S^1$ such that $\phi$ is given by $(q,p)\mapsto(q,p\mp f(q))$, for some monotone increasing $f:[0,3]{\rightarrow}[0,1)$. Moreover, we assume that $N_0:=\Sigma_0\cap{\tilde{N}}\cong\bigcup[0,1]\times S^1$. Note that $f|[0,1]\equiv0$.
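The elementary computation of $h$ above is easy to check numerically. The following sketch is illustrative only; the piecewise-linear profiles `f1`, `f2` and the trapezoid quadrature are our own sample choices, not taken from the text.

```python
# Check that h(q) = \int_q^3 (f1 - f2) dr equals delta*q + c on [0,1]
# (with delta = f2(0) - f1(0) and c = \int_0^3 (f1 - f2) dr) and that
# h vanishes on [2,3], as claimed in the text.

def f1(q):
    # constant 0 on [0,1], linear ramp to 1 on [1,2], constant 1 on [2,3]
    return min(max(q - 1.0, 0.0), 1.0)

def f2(q):
    # constant 0.5 on [0,1], linear ramp on [1,2], agrees with f1 on [2,3]
    return min(max(0.5 + 0.5 * (q - 1.0), 0.5), 1.0)

def integral(g, a, b, n=30000):
    # plain trapezoid rule; accurate enough for piecewise-linear integrands
    step = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * step) for i in range(1, n))
    return s * step

def h(q):
    return integral(lambda r: f1(r) - f2(r), q, 3.0)

delta = f2(0.0) - f1(0.0)                        # here: 0.5
c = integral(lambda r: f1(r) - f2(r), 0.0, 3.0)  # here: -0.75

for q in (0.0, 0.25, 0.5, 1.0):
    assert abs(h(q) - (delta * q + c)) < 1e-3    # h is affine on [0,1]
for q in (2.0, 2.5, 3.0):
    assert abs(h(q)) < 1e-3                      # h vanishes on [2,3]
```

Any other profiles with the stated properties (constant on $[0,1]$, equal on $[2,3]$) give the same affine-plus-zero structure, since only $f_1-f_2$ enters.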
Furthermore, note that we can assume that $\omega={\mathrm{d}}q\wedge{\mathrm{d}}p$ on ${\tilde{N}}$. It now follows from the preliminary remarks that for every $0<\delta<1/4$ there exists $h:{\tilde{N}}{\rightarrow}{\mathbb{R}}$ such that\
(i) On a connected component of $N_0$, $h(q,p)=\pm\delta\cdot q+c$. Here, the sign and the constant $c$ may depend on the component.\
(ii) $h\equiv0$ on $\bigcup[2,3]\times S^1$.\
(iii) Let $\psi$ denote the time-1 map of the Hamiltonian flow generated by $h$ with respect to ${\mathrm{d}}q\wedge{\mathrm{d}}p$. For every connected component of $N_0$, there exists a monotone increasing function $g:[0,3]{\rightarrow}[\delta,1)$ such that $\phi\circ\psi(q,p)=(q,p\mp g(q))$.\
As a consequence of (i) and (ii), there exists a function $H:\Sigma{\rightarrow}{\mathbb{R}}$ with $$H(x)= \begin{cases} h(x) & \text{if $x\in{\tilde{N}}$}, \\ 0 & \text{if $x\in\Sigma\setminus(\Sigma_0\cup{\tilde{N}})$}, \end{cases}$$ and such that $H|\text{int}(\Sigma_0)$ is a Morse function, meaning that all its critical points are non-degenerate. We refer to [@Sc Lemma 4.15] for the extension of Morse functions. Let $(\psi_t)_{t\in{\mathbb{R}}}$ denote the Hamiltonian flow generated by $H$ with respect to the fixed area form $\omega$ and set $$\Phi:=\phi\circ\psi_1.$$ By construction, $\omega,\Phi,H$ and $N_0$ satisfy hypotheses (H1) and (H2) of the last section. Moreover, by choosing $H$ and $\delta$ such that the $C^2$-norm of $H$ is sufficiently small, we also guarantee (H3).\
(Fixed points) By (iii), $\Phi|{\tilde{N}}$ has no fixed points.
Since $\Phi=\phi$ on $\Sigma\setminus(\Sigma_0\cup{\tilde{N}})$, we therefore have $$\operatorname{Fix}(\Phi) = \big(\operatorname{Crit}(H)\cap\Sigma_0\big)\cup\big(\operatorname{Fix}(\phi)\setminus\Sigma_0\big).$$ In particular, $\Phi$ has only non-degenerate fixed points and the ${\mathbb{Z}}_2$-degree of a fixed point is given by $$\deg(y)=\operatorname{ind}_H(y)\bmod2 \quad\forall\; y\in\operatorname{Crit}(H)\cap\Sigma_0,$$ and $$\deg(y)=0\bmod2 \quad\forall\; y\in\operatorname{Fix}(\phi)\setminus\Sigma_0.$$ The first equality follows from [@SZ Lemma 7.2], the second from Proposition \[prop:fclass\]. Moreover, Proposition \[prop:fclass\] implies that every $y\in\operatorname{Fix}(\phi)\setminus\Sigma_0$ forms its own fixed point class of $\Phi$. This has an immediate consequence for the Floer complex $({CF_*}(\Phi),{\partial}_{J})$ with respect to a generic $J=(J_t)_{t\in{\mathbb{R}}}$. \[prop1\] $({CF_*}(\Phi),{\partial}_{J})$ splits into the subcomplexes $({\mathcal{C}}_1,{\partial}_1)$ and $({\mathcal{C}}_2,{\partial}_2)$, where ${\mathcal{C}}_1$ is generated by $\operatorname{Crit}(H)\cap\Sigma_0$ and ${\mathcal{C}}_2$ by $\operatorname{Fix}(\phi)\setminus\Sigma_0$. Moreover, ${\mathcal{C}}_2$ is graded by $0$ and ${\partial}_2=0$. The splitting is respected by the quantum cap action. If $y^\pm\in\operatorname{Fix}(\Phi)$ are in different fixed point classes, then ${\mathcal{M}}(y^-,y^+;J,\Phi)=\emptyset$. This follows from the first of the Floer equations and proves the lemma. Next we show that hypothesis (H4) holds. \[le:action2\] If $\omega'$ is a $\Phi$-invariant area form such that $\omega'=\omega$ on $\Sigma\setminus N_0$, then $\alpha_{\omega'}$ has vanishing periods. The proof is an extension of the proof of Proposition \[prop:action\]. Let $u$ be a loop in $\Omega_{\Phi}$. We claim that $\langle[\alpha_{\omega'}],[u]\rangle=0$.
Since $\Phi^\ell$ is isotopic to $\phi^\ell$, we can assume that $u(.,0)$ is contained either in $\Sigma\setminus(\Sigma_0\cup {\tilde{N}})$ or in $\Sigma_0\setminus N_0$. The argument is similar to the one in the above-mentioned proof and relies on Lemma \[lemma:mu\] and Corollary \[cor:dehn\]. The first case reduces to the case considered in Proposition \[prop:action\]. In the second case, define $h:S^1\times{\mathbb{R}}{\rightarrow}\Sigma_0\setminus N_0$ by $h(s,t):=\psi_{-t}(u(s,0))$; then $h$ is a loop in $\Omega_\Phi$. Since $\omega'=\omega$ on $\operatorname{im}(h)$, it follows that $\int h^*\omega'=\int h^*\omega=0$ and hence, from Lemma \[lemma:mu\], that $\int u^*\omega'=0$. (Path of complex structures) Let $J_0$ be an $\omega$-compatible complex structure on $\Sigma$ which restricts to the standard complex structure on $N_0$. Let $J=(J_t)_{t\in{\mathbb{R}}}$ be a smooth path of $\omega$-compatible complex structures such that $J_{t+1}=\Phi^*J_t$ and $J_t(x)=(\psi_t^*J_0)(x)$ for all $t\in{\mathbb{R}}$ and $x\in\Sigma_0$. The existence of such a $J$ relies on the contractibility of the space of $\omega$-compatible complex structures on $\Sigma$. Note that $J$ satisfies (H5). Below, we impose an additional regularity condition on $J_0$. We are now in a position to apply Proposition \[prop:corbits\] and compute the homology of $({\mathcal{C}}_1,{\partial}_1)$. Let ${\partial}_{\pm}\Sigma_0$ denote the union of the components of ${\partial}\Sigma_0$ where, in a neighborhood, $\Phi$ is given by $(q,p)\mapsto(q,p\mp\delta)$. \[prop2\] The homology of $({\mathcal{C}}_1,{\partial}_1)$ is isomorphic to $H_*(\Sigma_0,{\partial}_+\Sigma_0;{\mathbb{Z}}_2)$. The quantum cap product is given by the ordinary cap product $$H^*(\Sigma;{\mathbb{Z}}_2)\otimes H_*(\Sigma_0,{\partial}_+\Sigma_0;{\mathbb{Z}}_2) {\longrightarrow}H_*(\Sigma_0,{\partial}_+\Sigma_0;{\mathbb{Z}}_2).$$ The proof uses the same technique as in [@S1].
By modifying $J_0$ in a neighborhood of $\operatorname{Crit}(H)\cap\Sigma_0$, we can assume that $\nabla H$ is a Morse-Smale vector field on $\Sigma_0$, where the gradient is taken with respect to the metric $\omega(.,J_0.)$. This means that the stable and unstable manifolds are transverse to each other and to ${\partial}\Sigma_0$. Note that by Proposition \[prop:fclass\], $({\mathcal{C}}_1,{\partial}_1)$ splits into subcomplexes generated by fixed points of $\Phi$ which lie in the same connected component of $\Sigma_0$. Let $x^\pm$ be a pair of such fixed points and set ${\mathcal{M}}:={\mathcal{M}}(x^-,x^+;J,\Phi)$. For every $u\in{\mathcal{M}}$, it follows from Proposition \[prop:corbits\] that $\operatorname{im}(u)\subset\Sigma_0$. Define the map $\tilde{u}:{\mathbb{R}}^2{\rightarrow}\Sigma_0,(s,t)\mapsto\psi_t(u(s,t))$. A straightforward calculation, using that $u(s,t)=\psi_1(u(s,t+1))$, $\psi_t\circ\psi_1=\psi_{t+1}$ and $J_t=\psi_t^*J_0$ on $\operatorname{im}(u)$, shows that $$\label{eq:morse} \left\{\begin{array}{l} \tilde{u}(s,t) = \tilde{u}(s,t+1), \\ {\partial}_s\tilde{u}+J_0(\tilde{u})\big({\partial}_t\tilde{u}-X_H(\tilde{u})\big) = 0,\\ \lim_{s{\rightarrow}\pm\infty}\tilde{u}(s,t) = x^\pm, \end{array}\right.$$ where $X_H$ denotes the Hamiltonian vector field of $H$. This system was studied in [@SZ Theorem 7.3]. There it is shown that if $H|\Sigma_0$ is replaced by ${\varepsilon}\cdot H|\Sigma_0$ with ${\varepsilon}>0$ sufficiently small, then every solution of the system is independent of the $t$-variable and is therefore a solution of $${\mathrm{d}}\tilde{u}/{\mathrm{d}}s = {\varepsilon}\cdot\nabla H(\tilde{u}).$$ Moreover, $\tilde{u}$ is regular in the sense that the operator defined by linearizing the equations is surjective. Going back to the definition of $H$, we may assume from now on that ${\varepsilon}=1$.
That the ambient space $\Sigma_0$ has non-empty boundary does not affect the argument in [@SZ Theorem 7.3]. It is essential, however, that $\pi_2(\Sigma_0)=0$. It follows that every $u\in{\mathcal{M}}$ is regular and that the map $u\mapsto\tilde{u}$ induces a diffeomorphism between ${\mathcal{M}}$ and the space of (parameterized) flow lines of $\nabla H$ which are contained in $\Sigma_0$ and connect the critical points $x^\pm$. Furthermore, these diffeomorphisms, one for each pair $x^\pm$, induce an isomorphism $({\mathcal{C}}_1,{\partial}_1)\cong({CM_*}(H|\Sigma_0),{\partial}_{\nabla\!H})$ of chain complexes. Here, ${CM_*}(H|\Sigma_0)$ is freely generated by $\operatorname{Crit}(H)\cap\Sigma_0$ and ${\partial}_{\nabla\!H}$ is defined by counting index-1 flow lines of $\nabla H$ which are contained in $\Sigma_0$. Note that $\nabla H$ points outwards/inwards at a component of ${\partial}_+/{\partial}_-\Sigma_0$. The homology of $({CM_*}(H|\Sigma_0),{\partial}_{\nabla\!H})$ is therefore isomorphic to $H_*(\Sigma_0,{\partial}_+\Sigma_0;{\mathbb{Z}}_2)$. See [@Sc] for details on relative Morse homology. Similarly, we can identify the quantum cap product defined earlier. Note that the image of the evaluation map ${\mathcal{M}}{\rightarrow}\Sigma,u\mapsto u(0,0)$, is $W^u(\nabla H,x^-)\cap W^s(\nabla H,x^+)$. Choose a Morse function $f:\Sigma{\rightarrow}{\mathbb{R}}$ such that the evaluation map is transverse to $W^u(\nabla f,x)$ for all $x\in\operatorname{Crit}(f)$. For $x\in\operatorname{Crit}(f)$ and $x^\pm\in\operatorname{Crit}(H)\cap\Sigma_0$ with $\operatorname{ind}_H(x^+)=\operatorname{ind}_H(x^-)+\operatorname{ind}_f(x)$, let $q(x;x^-,x^+)\in{\mathbb{Z}}_2$ be the cardinality mod 2 of $W^u(\nabla f,x)\cap W^u(\nabla H,x^-)\cap W^s(\nabla H,x^+)$.
The map $${CM^*}(f)\otimes {CM_*}(H|\Sigma_0){\longrightarrow}{CM_*}(H|\Sigma_0),\quad x\otimes y{\longmapsto}\sum_{z}q(x;y,z)z,$$ which induces the quantum cap product on homology, is therefore given in purely Morse-theoretic terms. On the level of homology, it is the ordinary cap product. This finishes the proof of the lemma. From Lemmas \[prop1\] and \[prop2\], it follows that $${HF_*}(\phi) \cong H_*(\Sigma_0,{\partial}_+\Sigma_0;{\mathbb{Z}}_2)\oplus {\mathbb{Z}}_2^{\#\operatorname{Fix}(\phi|\Sigma\setminus\Sigma_0)}.$$ Moreover, $H^*(\Sigma;{\mathbb{Z}}_2)$ acts on the first summand by the ordinary cap product. Since every fixed point of $\phi|\Sigma\setminus\Sigma_0$ has fixed point index 1, the Lefschetz fixed point formula implies that $$\#(\operatorname{Fix}(\phi)\setminus\Sigma_0) = \Lambda(\phi|\Sigma\setminus\Sigma_0).$$ It remains to show that $1\in H^0(\Sigma;{\mathbb{Z}}_2)$ acts on ${\mathbb{Z}}_2^{\Lambda(\phi|\Sigma\setminus\Sigma_0)}$ by the identity and that every element of $H^1(\Sigma;{\mathbb{Z}}_2)\oplus H^2(\Sigma;{\mathbb{Z}}_2)$ acts by the zero map. From the proof of Lemma \[prop1\], we know that if ${\mathcal{M}}(y^-,y^+;J,\Phi)\ne\emptyset$ for some $y^-,y^+\in\operatorname{Fix}(\phi)\setminus\Sigma_0$, then $y^-=y^+$. Since the action $\alpha_\omega$ has vanishing periods on $\Omega_\Phi$, by Lemma \[le:action2\], it follows that every $u\in{\mathcal{M}}(y,y;J,\Phi)$ has energy zero. Hence, ${\mathcal{M}}(y,y;J,\Phi)={\mathcal{M}}_0(y,y;J,\Phi)$ consists only of the constant map. Now choose a Morse function $f:\Sigma{\rightarrow}{\mathbb{R}}$ with only one critical point $x_0$ of index 0 and such that $\operatorname{Fix}(\phi|\Sigma\setminus\Sigma_0)\subset W^u(\nabla f,x_0)$. It follows that $q(x;y^-,y^+)\ne0$ if and only if $x=x_0$ and $y^-=y^+$. This ends the proof.
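To illustrate the formula just obtained, here is a worked example of our own (not part of the original argument), using the sign conventions above. Take $\phi$ to be a single positive Dehn twist along a separating curve $C$ on a closed surface $\Sigma$ of genus 2. Then $\Sigma_0$ consists of two one-holed tori $T_1,T_2$, both boundary circles lie in ${\partial}_+\Sigma_0$, and $\operatorname{Fix}(\phi)\setminus\Sigma_0=\emptyset$, so the second summand vanishes:

```latex
H_*(T_i,\partial T_i;\mathbb{Z}_2) \cong
  \begin{cases}
    \mathbb{Z}_2   & *=2,\\
    \mathbb{Z}_2^2 & *=1,\\
    0              & *=0,
  \end{cases}
\qquad
{HF_*}(\phi) \cong \bigoplus_{i=1}^2 H_*(T_i,\partial T_i;\mathbb{Z}_2).
```

As a consistency check, $[C]=0$ in $H_1(\Sigma;{\mathbb{Z}})$, so $\phi_*=\operatorname{id}$ and the Lefschetz number is $2-4=-2$, which matches the Euler characteristic $(1+1)-(2+2)=-2$ of the right-hand side.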
Using Theorem \[thm:main1\], we can verify Seidel’s result [@S2 Theorem 1] in the special case of an algebraically finite mapping class: if $g$ is a non-trivial mapping class, then the quantum cap product is trivial on $H^2(\Sigma;{\mathbb{Z}}_2)\otimes{HF_*}(g)$. To see this, assume that $g$ is algebraically finite. By Theorem \[thm:main1\], the only possibly non-trivial part of the quantum cap product is given by the cap product on $H^*(\Sigma;{\mathbb{Z}}_2)\otimes H_*(\Sigma_0,{\partial}_+\Sigma_0;{\mathbb{Z}}_2)$. The submanifold $\Sigma_0\subset\Sigma$ has non-trivial boundary since $g$ is non-trivial. Now the cap product on $H^*(\Sigma;{\mathbb{Z}}_2)\otimes H_*(\Sigma_0,{\partial}_+\Sigma_0;{\mathbb{Z}}_2)$ factors through the homomorphism $\iota^*:H^*(\Sigma;{\mathbb{Z}}_2){\rightarrow}H^*(\Sigma_0;{\mathbb{Z}}_2)$ which is induced by the inclusion $\iota:\Sigma_0{\hookrightarrow}\Sigma$. Since $H^2(\Sigma_0;{\mathbb{Z}}_2)=0$, this proves the claim. Similarly, it follows from Theorem \[thm:main1\] that if $\alpha\in H^1(\Sigma;{\mathbb{Z}}_2)$ acts non-trivially on ${HF_*}(g)$, then there exists a map $\gamma:S^1{\rightarrow}\Sigma_0$ such that $\langle\alpha,[\gamma]\rangle=1$. This is a special case of [@S2 Theorem 2]. Isolated plane curve singularities {#se:singularity} ================================== In this section we prove Theorem \[thm:main2\] and Corollary \[cor:dehn1\]. We begin with a brief summary of the basic facts on isolated plane curve singularities. The standard reference is Milnor’s book [@Mi1]. An [**isolated plane curve singularity**]{} is a germ $[f]$ of holomorphic functions $f:(U,0){\rightarrow}({\mathbb{C}},0)$, where $U\subset{\mathbb{C}}^2$ is a neighborhood of $0$, with $({\mathrm{d}}f)^{-1}(0)=\{0\}$. Let $f:U{\rightarrow}{\mathbb{C}}$ be such a function.
For ${\varepsilon}>0$ sufficiently small, the singular fiber $f^{-1}(0)$ intersects the 3-sphere $S_{\varepsilon}:=\{(x,y)\in{\mathbb{C}}^2:|x|^2+|y|^2={\varepsilon}\}$ transversally. The intersection $L:=f^{-1}(0)\cap S_{\varepsilon}\subset S_{\varepsilon}$ is a compact oriented 1-manifold, i.e. a link. A link obtained in this way is called an [**algebraic link**]{}. An algebraic link is a fibred link: the map $$\pi:S_{\varepsilon}\setminus L{\longrightarrow}\{z\in{\mathbb{C}}:|z|=1\}, \quad z{\longmapsto}f(z)/|f(z)|,$$ is a fibration, the Milnor fibration, and the [**Milnor fiber**]{} $M:=\pi^{-1}(1)\cup L$ is a Seifert surface of $L$. This means that $M\subset S_{\varepsilon}$ is a compact connected oriented embedded 2-manifold with ${\partial}M=L$. The [**geometric monodromy**]{} is an isotopy class of the group $\operatorname{Diff}^+_c(M)$ of orientation preserving diffeomorphisms which are the identity near ${\partial}M$ and is defined as follows. Given a connection on $S_{\varepsilon}\setminus L$, i.e. a rank-1 subbundle of the tangent bundle of $S_{\varepsilon}\setminus L$ which is transversal to $\ker({\mathrm{d}}\pi)$, parallel transport induces an orientation preserving diffeomorphism of $\pi^{-1}(1)$; a so-called characteristic diffeomorphism. To extend this diffeomorphism to $M$, we specify the connection in a neighborhood of $L$. For this observe the following. Let $L'$ be a connected component of $L$ and $T\subset S_{\varepsilon}$ be a tubular neighborhood of $L'$. A standard meridian of $(T,L')$ is an embedded circle in $T\setminus L'$ which is homologically trivial in $T$ and has linking number $1$ with $L'$ in $S_{\varepsilon}$. There is a fibration of $T\setminus L'$ such that every fiber is a standard meridian of $(T,L')$. This fibration is unique up to isotopy and, if $T$ is sufficiently small, induces a connection on $T\setminus L'$. In this way, we get a standard connection in a neighborhood of $L$.
A connection on $S_{\varepsilon}\setminus L$ which restricts to this standard connection induces a characteristic diffeomorphism which is compactly supported and hence extends trivially to $M$. Moreover, any two such diffeomorphisms are isotopic in $\operatorname{Diff}^+_c(M)$. The isotopy class obtained in this way is therefore an invariant of $\pi$. The proof of Theorem \[thm:main2\] relies on the following result, which is a refinement of the classical result of A’Campo [@AC2] and L[ê]{} [@L] that the Lefschetz number of the geometric monodromy of an isolated plane curve singularity vanishes. We use the following notation: we denote by $\iota:\operatorname{Diff}^+_c(M){\rightarrow}\operatorname{Diff}^+(M,{\partial}M)$ the inclusion, where $\operatorname{Diff}^+(M,{\partial}M)$ denotes the group of orientation preserving diffeomorphisms which are the identity on ${\partial}M$. \[prop:monodromy\] Let $M$ be the Milnor fiber and $g$ be the geometric monodromy of an isolated plane curve singularity. There is a representative $\phi\in\iota_*g$ which is a diffeomorphism of finite type (same definition as for closed surfaces) and such that $\operatorname{Fix}(\phi)={\partial}M$. Moreover, $\phi$ only has positive twists. As already mentioned in the introduction, this proposition follows from the work of A’Campo [@AC1], [@AC4] on the geometric monodromy. Our proof, given in Appendix \[ap:monodromy\], relies on the work of Eisenbud and Neumann [@EN] on the monodromy of plane curve singularities. The relevant results from [@EN] are summarized in Appendix \[ap:monodromy\]. We remark that the vanishing of the Lefschetz number of a diffeomorphism of finite type is not enough to exclude the existence of pointwise fixed annuli. We recall that $\Sigma$ denotes a closed oriented surface of genus $\geq2$ and $M\subset\Sigma$ the Milnor fiber of an isolated plane curve singularity.
Moreover, $g$ denotes the mapping class of $\Sigma$ which is obtained by extending the geometric monodromy of the singularity trivially to $\Sigma$. Assume for the moment that no component of $\Sigma\setminus\text{int}(M)$ is a disk. From Proposition \[prop:monodromy\], it follows that there exists a representative $\phi\in g$ which is of finite type and such that $\Sigma_0=\Sigma\setminus\text{int}(M)$, ${\partial}_+\Sigma_0={\partial}M$ and $\Lambda(\phi|\text{int}(M))=0$. Theorem \[thm:main1\] therefore implies that $${HF_*}(\phi)\cong H_*(\Sigma\setminus\text{int}(M),{\partial}M;{\mathbb{Z}}_2).$$ By excision, it follows that ${HF_*}(\phi)\cong H_*(\Sigma,M;{\mathbb{Z}}_2)$. Now assume that $D_1,\dots,D_n$ are disk components of $\Sigma\setminus\text{int}(M)$. Set $M':=M\cup D_1\cup\cdots\cup D_n$. We claim that there exists a representative $\phi\in g$ which is of finite type and such that $\Sigma_0=\Sigma\setminus\text{int}(M')$, ${\partial}_+\Sigma_0={\partial}M'$ and $\Lambda(\phi|\mathrm{int}(M'))=n$. Theorem \[thm:main1\] then implies that $${HF_*}(\phi) \cong H_*(\Sigma\setminus\text{int}(M'),{\partial}M';{\mathbb{Z}}_2)\oplus{\mathbb{Z}}_2^n \cong H_*(\Sigma,M';{\mathbb{Z}}_2)\oplus{\mathbb{Z}}_2^n.$$ Since $H_*(\Sigma,M';{\mathbb{Z}}_2)\oplus{\mathbb{Z}}_2^n\cong H_*(\Sigma,M;{\mathbb{Z}}_2)$, this proves Theorem \[thm:main2\] up to the claim above. The claim follows from Proposition \[prop:monodromy\] by “collapsing” each disk component of $\Sigma\setminus\text{int}(M)$ to a point. Next we prove Corollary \[cor:dehn1\]. First, we recall some terminology. Let $k\in{\mathbb{N}}_{>0}$. An $A_k$[**-configuration**]{} in $\Sigma$ is a $k$-tuple $(C_1,\dots,C_k)$ of embedded circles in $\Sigma$ such that $$\#(C_{i}\pitchfork C_{i+1})=1 \quad\text{for } i=1,\dots,k-1, \quad \#(C_i\pitchfork C_{j})=0 \quad\text{if } |i-j|>1.$$ The $A_k$[**-singularity**]{} is the germ of the function $f(x,y)=x^2+y^{k+1}$. 
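As a quick sanity check (an added illustration, not from the original text), one can verify numerically that $f(x,y)=x^2+y^{k+1}$ has Milnor number $k$, consistent with an $A_k$-configuration of $k$ vanishing cycles: for even $k$ the branch is parameterized by $x=t^{k+1}$, $y=-t^2$, Milnor's formula for $x^a+y^b$ gives $\mu=(a-1)(b-1)=k$, and a chain of $k$ circles meeting consecutively in single points has Euler characteristic $1-k$, matching $\chi(M)=1-\mu$ for a fiber that retracts onto such a spine.

```python
# Sanity checks for the A_k-singularity f(x, y) = x^2 + y^{k+1}.
# Added illustration; the parametrization below assumes k is even,
# so that k+1 is odd and (-t^2)^{k+1} = -t^{2(k+1)}.

def f(x, y, k):
    return x**2 + y**(k + 1)

for k in (2, 4, 6):
    # Puiseux parametrization of the branch: x = t^{k+1}, y = -t^2.
    for t in range(-5, 6):
        assert f(t**(k + 1), -t**2, k) == 0

    # Milnor number of x^a + y^b is (a-1)(b-1); here a = 2, b = k+1.
    milnor = (2 - 1) * ((k + 1) - 1)
    assert milnor == k

    # A chain of k circles meeting consecutively in k-1 points (the
    # A_k-configuration) has Euler characteristic -(k-1) = 1 - milnor.
    chi_spine = -(k - 1)
    assert chi_spine == 1 - milnor

print("A_k checks passed")
```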
It is a classical result in the theory of singularities that the Milnor fiber $M$ of this singularity contains an $A_k$-configuration, which is (i) a spine of $M$ and (ii) a distinguished basis of vanishing cycles. See [@Ar Section 2.9], [@AC3]. Let $(C_1,\dots,C_k)$ be an $A_k$-configuration in $\Sigma$. From the remark (i) above, it follows that a tubular neighborhood $N$ of $C_1\cup\cdots\cup C_k$ can be identified with the Milnor fiber of the $A_k$-singularity such that, by (ii), $C_1\cup\cdots\cup C_k$ is a distinguished set of vanishing cycles. This implies that the class $g$ of the product $\tau_1\circ\cdots\circ\tau_k$ of right Dehn twists can also be obtained by extending the geometric monodromy of the $A_k$-singularity trivially to $\Sigma$. It therefore follows from Theorem \[thm:main2\] that $${HF_*}(g) \cong H_*(\Sigma,N;{\mathbb{Z}}_2) \cong H_*(\Sigma,C_1\cup\cdots\cup C_k;{\mathbb{Z}}_2).$$ This, together with naturality of Floer homology, proves the corollary. Let $M$ be the Milnor fiber and $g$ be the geometric monodromy of an isolated plane curve singularity. Let $\phi\in\iota_*g$ be as in Proposition \[prop:monodromy\]. Since $\phi$ only has positive twists, we can perturb it near the boundary to a diffeomorphism $\phi_+$ such that $\operatorname{Fix}(\phi_+)=\emptyset$. Furthermore, it follows as in the proof of Proposition \[prop:monotone3\] that if $\omega$ is a $\phi$-invariant area form on $M$, then $[\omega_\phi]=0$. Hence, $\phi$ is monotone in the sense defined in Appendix \[ap:open\]. Therefore, ${HF_*}(g,+)={HF_*}(\phi,+)=0$. Products of disjoint Dehn twists {#ap:dehn} ================================ The goal of this appendix is to prove Proposition \[prop:dehn\], which was also stated in [@S1 Lemma 3]. This result is used for the proof of the fixed point proposition as well as the action proposition in Section \[se:diff\]. Let $I$ denote either ${[0,1]}$ or $S^1$.
For every continuous map $\gamma:I{\rightarrow}\Sigma$ and compact 1-dimensional submanifold $C\subset\Sigma$, there is the geometric intersection number $$i(\gamma,C) = \min\{\#\beta^{-1}(C)\,|\,\beta \text{ is homotopic rel endpoints to } \gamma\}.$$ If $I=S^1$, homotopic rel endpoints means freely homotopic. \[prop:dehn\] Let $C\subset\Sigma$ be a finite union of non-contractible disjoint circles. Let $\phi$ be a product of Dehn twists along $C$ which twists with the same sign along parallel components of $C$. Let $\gamma:I{\rightarrow}\Sigma$ be such that $\gamma({\partial}I)\cap\operatorname{supp}(\phi)=\emptyset$. If $\gamma$ is homotopic rel endpoints to $\phi\circ\gamma$, then $i(\gamma,C)=0$. An immediate consequence is the following \[cor:dehn\] Let $\phi$ be of finite type and let $\ell>0$ be such that $\phi^\ell|\Sigma\setminus N=\operatorname{id}$. Let $\gamma:I{\rightarrow}\Sigma$ be such that $\gamma({\partial}I)\cap\operatorname{supp}(\phi^\ell)=\emptyset$. If $\gamma$ is homotopic rel endpoints to $\phi^\ell\circ\gamma$, then there exists $\gamma':I{\rightarrow}\operatorname{Fix}(\phi^\ell)$ which is homotopic rel endpoints to $\gamma$. We begin with the case $I={[0,1]}$. For every component of $C$, let $N'$ be the closed tubular neighborhood in which the Dehn twist is supported. For different components of $C$, these tubular neighborhoods are disjoint. Let $N$ be the union of the $N'$ and ${\tilde{C}}:={\partial}N$. We prove in several steps that $i(\gamma,{\tilde{C}})=0$ if $\gamma$ is homotopic rel endpoints to $\phi\circ\gamma$. Let $\beta:{[0,1]}{\rightarrow}\Sigma$ and $u:{[0,1]}^2{\rightarrow}\Sigma$ be such that $\#\beta^{-1}({\tilde{C}})=i(\gamma,{\tilde{C}})$ and $$u(s,0)=\beta(s),\quad u(s,1)=\phi(\beta(s)),\quad u(0,t)=\beta(0), \quad u(1,t)=\beta(1)$$ for all $s,t\in{[0,1]}$. We assume that $\beta(0),\beta(1)\in\Sigma\setminus N$. Without loss of generality we further assume that $u$ is transverse to ${\tilde{C}}$.
Hence $B:=u^{-1}({\tilde{C}})\subset {[0,1]}^2$ is a compact 1-dimensional submanifold with boundary ${\partial}B=\beta^{-1}({\tilde{C}})\cup (\phi\circ\beta)^{-1}({\tilde{C}})$. Every component of $B$ is either a circle or an arc. [1]{} We may assume that no component of $B$ is a circle. Let $S$ be a circle component of $B$. The interior of $S$, i.e. the region of ${[0,1]}^2$ bounded by $S$, is a disk. The restriction of $u$ to this disk induces an element of $\pi_2(\Sigma,C')$, where $C'$ is the component of ${\tilde{C}}$ where $S$ is mapped to. Since $C'$ is non-contractible, $\pi_2(\Sigma,C')=0$, and hence $u$ can be deformed in a neighborhood of the interior of $S$ in such a way that $B$ has fewer circle components. Repeating this argument finitely many times proves claim 1. From now on, we assume that every component of $B$ is an arc. [2]{} There is no component $B'$ of $B$ with ${\partial}B'\subset{[0,1]}\times0$ or ${\partial}B'\subset{[0,1]}\times1$. Assume that $B'$ is such that ${\partial}B'\subset{[0,1]}\times0$. Note that the boundary points of $B'$ are intersection points of $\beta$ with ${\tilde{C}}$. Since $B'$ is mapped to some component of ${\tilde{C}}$ under $u$, it follows that these intersection points can be removed by a homotopy of $\beta$. This, however, contradicts the definition of $\beta$. Hence every $B'$ with one boundary point on ${[0,1]}\times0$ has the other boundary point on ${[0,1]}\times1$. On the other hand, $B$ has the same number of boundary points on ${[0,1]}\times0$ as on ${[0,1]}\times1$, since $\beta$ and $\phi\circ\beta$ intersect ${\tilde{C}}$ in the same number of points. This proves claim 2. The rest of the proof is devoted to [3]{} $B$ is empty, i.e. $\#(\operatorname{im}\beta\cap{\tilde{C}})=0$. The proof is by contradiction; assume that $B\ne\emptyset$. First we define an integer-valued function on $\pi_0(B)$. Let $B'$ be a component of $B$.
By claims 1 and 2, $B'$ is an arc and ${\partial}B'=\{(b,0),(b,1)\}$, for some $b\in(0,1)$. Let $C'$ be the component of ${\tilde{C}}$ where $B'$ is mapped to under $u$. Since $u(b,1)=\phi(u(b,0))=u(b,0)$, the restriction $u|B'$ has a well-defined mapping degree $\deg u|B'$, once orientations of $B'$ and $C'$ are fixed. We orient $B'$ “from the bottom to the top” and choose an orientation of ${\tilde{C}}$ such that the orientations of two homotopic components match. We thus have the map $${\mathrm{d}}:\pi_0(B){\longrightarrow}{\mathbb{Z}},\quad B'{\longmapsto}\deg u|B'.$$ Let $\pi_0(B)=\{B_1,\dots,B_{2n}\}$ be ordered such that $$b_i<b_j \quad{\Longleftrightarrow}\quad i<j,$$ where ${\partial}B_i=\{(b_i,0),(b_i,1)\}$. Note that the cardinality of $\pi_0(B)$ is even, since $\beta(0)$ and $\beta(1)$ are both in the complement of $N$. Observe that the loops $u|B_1$ and $u|B_{2n}$ are both homotopic to the constant loop and hence $${\mathrm{d}}(B_1) = {\mathrm{d}}(B_{2n}) = 0.$$ To prove claim 3, we will now show that ${\mathrm{d}}(B_{2n})\ne0$, which is a contradiction. In fact, we will prove by induction that $$\label{eq:ind}\tag{$\star_k$} {\mathrm{d}}(B_{2k})\ne0, \quad \operatorname{sign}(b_{2k}) = \operatorname{sign}(b_{2k-2}) \quad\mathrm{and}\quad C_{2k}\sim C_{2k-2},$$ for all $1\leq k\leq n$. Here $\operatorname{sign}(b_i)$ denotes the sign of $b_i$ as an intersection point of $\beta$ and ${\tilde{C}}$ and $C_i$ is the component of ${\tilde{C}}$ where $B_i$ is mapped to under $u$. We will use the following formula for the function ${\mathrm{d}}$, which we prove later on: $$\label{eq:de} {\mathrm{d}}(B_{2k}) = -\sum_{i=1}^k\operatorname{sign}(b_{2i})\cdot{\varepsilon}_{2i},$$ for all $1\leq k\leq n$. Here ${\varepsilon}_i=\pm$ is the sign of the twist of $\phi$ along $C_i$. For $k=1$, formula (\[eq:de\]) gives ${\mathrm{d}}(B_2)=-\operatorname{sign}(b_2)\cdot{\varepsilon}_2\ne0$.
This proves the induction hypothesis $(\star_1)$, since the other two statements are empty in this case. Assume that $(\star_i)$ holds for all $1\leq i\leq k<n$. To show that it also holds for $k+1$, first note that if $$\label{eq:ind1} \operatorname{sign}(b_{2k})=\operatorname{sign}(b_{2k+2}) \quad\mathrm{and}\quad C_{2k}\sim C_{2k+2},$$ then by formula (\[eq:de\]), we get that $${\mathrm{d}}(B_{2k+2}) \,=\, -(k+1)\cdot\operatorname{sign}(b_{2k+2})\cdot{\varepsilon}_{2k+2} \,\ne\, 0.$$ To prove (\[eq:ind1\]), we consider the restriction of $u$ to the region $D\subset {[0,1]}^2$ that is bounded by the arcs $B_{2k},B_{2k+1}$ and $[b_{2k},b_{2k+1}]\times0,[b_{2k},b_{2k+1}]\times1$. Note that since $u(D)\subset\Sigma\setminus\text{int}(N)$, $u(s,1)=u(s,0)$ for all $s\in[b_{2k},b_{2k+1}]$. Hence $u|D$ gives a homotopy between the loops ${\mathrm{d}}(B_{2k})\cdot C_{2k}$ and ${\mathrm{d}}(B_{2k+1})\cdot C_{2k+1}$. This implies that $C_{2k}\sim C_{2k+1}$ and that ${\mathrm{d}}(B_{2k})={\mathrm{d}}(B_{2k+1})$. Since $C_{2k+1}$ and $C_{2k+2}$ are both contained in the boundary of one component of $N$, they are clearly homotopic and hence $C_{2k}\sim C_{2k+2}$. It thus remains to show that $\operatorname{sign}(b_{2k})=\operatorname{sign}(b_{2k+2})$. For this we need [4]{} $C_{2k}\ne C_{2k+1}$ and $C_{2k+1}\ne C_{2k+2}$. Assume that $C_{2k}=C_{2k+1}$ and consider the path $s\mapsto u(s,0),s\in[b_{2k},b_{2k+1}]$. We claim that since ${\mathrm{d}}(B_{2k})={\mathrm{d}}(B_{2k+1})\ne0$, the path can be deformed into $C_{2k}$, which contradicts minimality of $\beta$. For the proof of the claim, we refer to [@Ga1 Corollary 5]. Similarly, $C_{2k+1}=C_{2k+2}$ is also a contradiction to minimality of $\beta$. [5]{} $\operatorname{sign}(b_{2k})=\operatorname{sign}(b_{2k+1})=\operatorname{sign}(b_{2k+2})$. Since $C_{2k}$ and $C_{2k+1}$ are homotopic and disjoint, there exists an embedded annulus $A\subset\Sigma$ which bounds $C_{2k}$ and $C_{2k+1}$.
Since ${\mathrm{d}}(B_{2k})={\mathrm{d}}(B_{2k+1})\ne0$, it follows that $u(D)=A$, where $D$ is as above. Otherwise, we could find a map from the torus to $\Sigma$ with nonzero degree. Such a map does not exist since the genus of $\Sigma$ is $>1$. Thus we know that the path $\beta$ enters $A$ at $b_{2k}$ and leaves at $b_{2k+1}$. The intersection points therefore have the same sign, $\operatorname{sign}(b_{2k})=\operatorname{sign}(b_{2k+1})$. The proof that $\operatorname{sign}(b_{2k+1})=\operatorname{sign}(b_{2k+2})$ is similar. In this case, however, we know that the region $D'$ of ${[0,1]}^2$ between $B_{2k+1}$ and $B_{2k+2}$ is mapped to a component $N'$ of $N$ under $u$. Since ${\mathrm{d}}(B_{2k+1})\ne0$, it follows that $u(D')=N'$ and hence that $b_{2k+1}$ and $b_{2k+2}$ have the same sign, as above. This proves claim 5 and ends the proof of (\[eq:ind1\]). It remains to prove formula (\[eq:de\]). \ Formula (\[eq:de\]) is a consequence of the following properties of the function ${\mathrm{d}}$. First of all, as was shown above, we have that $$\label{eq:de1} {\mathrm{d}}(B_{2k})={\mathrm{d}}(B_{2k+1}),$$ for all $1\leq k\leq n-1$. We now show that $$\label{eq:de2} {\mathrm{d}}(B_{2k})={\mathrm{d}}(B_{2k-1})-\operatorname{sign}(b_{2k})\cdot{\varepsilon}_{2k},$$ for all $1\leq k\leq n$. The idea is to look again at the region $D'\subset{[0,1]}^2$ which is bounded by the arcs $B_{2k-1},B_{2k}$ and $[b_{2k-1},b_{2k}]\times0,[b_{2k-1},b_{2k}]\times1$. Let $N'$ be the component of $N$ where $D'$ is mapped to. As in claim 4, we conclude that ${\partial}N'=C_{2k-1}\cup C_{2k}$. Choose an orientation preserving diffeomorphism $[0,1]\times S^1\cong N'$ such that $C_{2k-1}\cong 0\times S^1$ and $C_{2k}\cong 1\times S^1$. Assume for the moment that the orientations of $C_{2k-1}$ and $C_{2k}$ are compatible with these diffeomorphisms. This is equivalent to saying that $\operatorname{sign}(b_{2k-1})=1$. Let $\operatorname{pr}:[0,1]\times S^1{\rightarrow}S^1$ denote the projection onto the second factor.
The map $\operatorname{pr}\circ u:{\partial}D'{\rightarrow}S^1$ is the composition of four loops, denoted by $v$, $u|B_{2k}$, the inverse of $w$ and the inverse of $u|B_{2k-1}$. Here $v,w$ are the loops $s\mapsto\operatorname{pr}(u(s,0)),s\mapsto\operatorname{pr}(u(s,1))$ for $s\in[b_{2k-1},b_{2k}]$. Since the mapping degree of $\operatorname{pr}\circ u|{\partial}D'$ vanishes, it follows that $$\deg v + {\mathrm{d}}(B_{2k}) - \deg w - {\mathrm{d}}(B_{2k-1}) \,=\, 0.$$ However, $\deg v-\deg w={\varepsilon}_{2k}$, and this proves equation (\[eq:de2\]) in the case that $\operatorname{sign}(b_{2k})=1$. If $\operatorname{sign}(b_{2k})=-1$, $\operatorname{pr}$ induces orientation reversing maps on $C_{2k-1}$ and $C_{2k}$. Hence, the same argument as above gives $$\deg v - {\mathrm{d}}(B_{2k}) - \deg w + {\mathrm{d}}(B_{2k-1}) \,=\, 0,$$ which again implies (\[eq:de2\]). Formula (\[eq:de\]) is now an immediate consequence of (\[eq:de1\]), (\[eq:de2\]) and the fact that ${\mathrm{d}}(B_1)=0$. This ends the proof of the proposition in the case $I={[0,1]}$. The proof in the case $I=S^1$ follows the same line of arguments as above. In this case, however, $B=u^{-1}({\tilde{C}})$ is a subset of $S^1\times{[0,1]}$ instead of ${[0,1]}^2$. This does not affect the proofs of claims 1 and 2 above and hence, we can assume that every component $B'$ of $B$ is an arc with ${\partial}B'=\{(b,0),(b',1)\}$. Note that $b,b'\in S^1$ are not necessarily equal. We show how the proof of claim 3 extends to this case. The idea is to consider the $\ell$-fold concatenation $v:S^1\times[0,\ell]{\rightarrow}\Sigma$, defined by $$v(s,t):=\phi^j(u(s,t-j)) \quad\text{for}\quad (s,t)\in S^1\times[j,j+1],j<\ell,$$ where $\ell>0$ is chosen such that every component $A'$ of $A:=v^{-1}({\tilde{C}})$ satisfies ${\partial}A'=\{(a,0),(a,\ell)\}$ for some $a\in S^1$. The rest of the argument is analogous to the above.
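The bookkeeping that turns (\[eq:de1\]) and (\[eq:de2\]) into the closed formula (\[eq:de\]) can be checked mechanically. The following sketch (an added illustration; the helper `d_values` is ours) iterates the two recursions from ${\mathrm{d}}(B_1)=0$ over all sign patterns and compares the result with the closed sum; with constant signs, as forced by the induction, $|{\mathrm{d}}(B_{2n})|=n\ne0$, the desired contradiction.

```python
# Added sketch: the recursions d(B_{2k}) = d(B_{2k-1}) - s_{2k} * e_{2k}
# and d(B_{2k+1}) = d(B_{2k}), starting from d(B_1) = 0, give the closed
# formula d(B_{2k}) = -sum_{i<=k} s_{2i} * e_{2i}.
from itertools import product

def d_values(signs, eps):
    """signs[i], eps[i] stand for sign(b_{2(i+1)}), epsilon_{2(i+1)}."""
    n = len(signs)
    d = {1: 0}                              # d(B_1) = 0
    for k in range(1, n + 1):
        d[2 * k] = d[2 * k - 1] - signs[k - 1] * eps[k - 1]
        if k < n:
            d[2 * k + 1] = d[2 * k]         # odd index matches even index
    return d

for n in (1, 2, 3):
    for signs in product((1, -1), repeat=n):
        for eps in product((1, -1), repeat=n):
            d = d_values(list(signs), list(eps))
            for k in range(1, n + 1):
                closed = -sum(signs[i] * eps[i] for i in range(k))
                assert d[2 * k] == closed

# With constant sign and twist (the situation forced by the induction),
# |d(B_{2n})| = n, contradicting d(B_{2n}) = 0 for n > 0.
assert abs(d_values([1] * 3, [1] * 3)[6]) == 3
print("recursion check passed")
```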
Define the function ${\mathrm{d}}:\pi_0(A){\rightarrow}{\mathbb{Z}}$ and prove that it satisfies $$\label{eq:de3} {\mathrm{d}}(A_{2k}) = {\mathrm{d}}(A_1) - k\cdot\ell\cdot\operatorname{sign}(a_{2k})\cdot{\varepsilon}_{2k}.$$ Here, the signs $\operatorname{sign}(a_i)$ and ${\varepsilon}_i$ are defined as above. The additional factor $\ell$ appears since $\phi^\ell$ twists with multiplicity $\ell$ along every component of $C$. The order on $\pi_0(A)$ is a cyclic order induced from the boundary points as above. Formula (\[eq:de3\]) therefore gives a contradiction if $\pi_0(A)\ne\emptyset$. This ends the proof of the proposition. Decomposition of the monodromy {#ap:monodromy} ============================== In this appendix we prove Proposition \[prop:monodromy\]. For this we use the theory of splice diagrams, which was developed by Eisenbud and Neumann [@EN], [@Ne] to compute invariants of plane curve singularities. In the following we summarize their results on the geometric monodromy. We would like to emphasize here that our discussion is far from being self-contained. Instead, we only concentrate on the facts which are useful for our purpose. For details, proofs, as well as the general picture, we refer to the excellent monograph [@EN]. Let us introduce some terminology. We denote by $M$ a compact oriented 2-manifold with boundary.\ (i) Let $\phi:M{\rightarrow}M$ be a diffeomorphism and $C\subset M$ be a $\phi$-invariant union of disjoint non-contractible circles. A $\phi$[**-component**]{} of $(M,C)$ is a union of connected components of $M\setminus C$ which are cyclically permuted by $\phi$.
The [**topological type**]{} of a $\phi$-component $M'$ is the triple $(\chi,d,h)$, where $\chi$ is the Euler characteristic, $d$ the number of connected components and $h$ the number of ends of $\text{int}(M')$.\ (ii) An orientation preserving diffeomorphism $\phi:M{\rightarrow}M$ is called an [**admissible twist map**]{} if $M$ is a union of annuli and:\ (1) $\phi$ cyclically permutes the connected components of $M$.\ (2) If $n>0$ denotes the number of connected components of $M$, then $\phi^n$ is given by the local model $$[0,1]\times S^1\ni(q,p){\longmapsto}\big(q,p-f(q)\big),$$ where $f:[0,1]{\rightarrow}{\mathbb{R}}$ is monotone.\ (3) There exists $q>0$ such that $\phi^q|{\partial}M=\operatorname{id}$.\ If $\phi:M{\rightarrow}M$ is an admissible twist map, then its [**twist number**]{} is defined by $$\ell:=\frac{1}{q}\cdot\big(\text{var}(\phi^q)\xi\cdot\xi\big) \in{\mathbb{Q}}.$$ Here $\xi$ is a generator of $H_*(M',{\partial}M')$, with $M'$ a connected component of $M$, $\text{var}(\phi^q):H_*(M',{\partial}M'){\rightarrow}H_*(M')$ is the variation homomorphism of $\phi^q$ and the dot stands for the intersection pairing.\ (iii) An [**admissible triple**]{} $(\phi,M,C)$ consists of an orientation preserving diffeomorphism $\phi:M{\rightarrow}M$ and a $\phi$-invariant finite union $C\subset M$ of disjoint non-contractible circles, such that the following holds. Let $M'$ be a $\phi$-component of $(M,C)$ and $\chi$ denote its Euler characteristic.\ (1) If $\chi=0$, then $\phi|\text{cl}(M')$ is an admissible twist map.\ (2) If $\chi<0$, then $\phi|\text{cl}(M')$ is a periodic map.\ By the [**period**]{} of a periodic diffeomorphism $\phi$, we mean the smallest integer $\ell>0$ such that $\phi^\ell=\operatorname{id}$.\ (iv) Set $${\mathcal{T}}:= \{(\chi,d,h;\ell)\in{\mathbb{Z}}^3\times{\mathbb{Q}}:d,h>0;\chi\leq0;\chi<0{\Rightarrow}\ell\in{\mathbb{N}}_{>0}\}.$$ Let $(\phi,M,C)$ be an admissible triple. 
For every $\phi$-component $M'$ of $(M,C)$, set $t(M'):=(\chi,d,h;\ell)\in{\mathcal{T}}$, where $(\chi,d,h)$ denotes the topological type of $M'$ and $\ell$ either the period of $\phi|M'$ if $\chi<0$ or the twist number of $\phi|M'$ if $\chi=0$. The [**characteristic set**]{} of $(\phi,M,C)$ is defined by $${\mathfrak{t}}(\phi,M,C):=\big\{t(M'):M'\text{ is a $\phi$-component of }(M,C)\big\}\subset{\mathcal{T}}.$$ Recall that an isolated plane curve singularity, or simply [**isolated singularity**]{}, is a germ $[f]$ of holomorphic functions $f:(U,0){\rightarrow}({\mathbb{C}},0)$, where $U\subset{\mathbb{C}}^2$ is a neighborhood of $0$, with $({\mathrm{d}}f)^{-1}(0)=\{0\}$. The Milnor fiber of $[f]$ is a compact connected oriented 2-manifold $M$ with boundary. The geometric monodromy $g$ of $[f]$ is an isotopy class of $\operatorname{Diff}_c^+(M)$. We now proceed as follows: we explain in \[subse:splice\] how to associate a diagram $\Gamma$, called splice diagram, to $[f]$ and in \[subse:set\], how to associate a set ${\mathfrak{t}}_\Gamma\subset{\mathcal{T}}$ to such a diagram $\Gamma$. The significance of these constructions is expressed by the following result from [@EN], which we state precisely in Theorem \[thm:EN\]: There exists an admissible triple $(\phi,M,C)$ with characteristic set ${\mathfrak{t}}_\Gamma$ and such that $\phi\in\iota_*g$ [^1]. Moreover, we have a formula for the twist map components of $\phi$. This is used in \[subse:proof\] to prove Proposition \[prop:monodromy\]. As the title of the monograph [@EN] indicates, Eisenbud and Neumann’s point of view is that of link theory. Our discussion of splice diagrams, however, is without any reference to link theory. We simply consider them as a tool for encoding some algebraic data. This has the advantage that the reader is not assumed to be familiar with 3-manifold theory. 
The downside is that these diagrams seem to lack intrinsic geometric relevance and that it is not clear how the stated results are actually proven. For this we refer to the original literature [@EN], [@Ne]. Puiseux data and splice diagrams {#subse:splice} -------------------------------- Let $[f]$ be an isolated singularity. In the following, we define the diagram $\Gamma[f]$ as shown in [@EN Appendix 1]. The construction can be divided into 4 steps. [1]{} The first step uses the so-called Newton method for solving the equation $f(x,y)=0$ for $y$ in terms of $x$ in a neighborhood of $0$. We only state the result without going into detail and refer the reader to [@BK Chapter 8.3]. Recall that a [**fractional power series**]{} is a pair $(P,d)$, where $$P(x)=\sum_{i=1}^ra_ix^{n_i}, r\in{\mathbb{N}}\cup\{\infty\},a_i\in{\mathbb{C}},a_i\ne0,n_i\in{\mathbb{N}}_{>0},n_i<n_{i+1},$$ is a power series converging in a neighborhood of $0$, and $d$ is a positive integer which is relatively prime to the set $\{n_i:i\in{\mathbb{N}},i\leq r\}$. Two fractional power series $(P,d),(\tilde{P},\tilde{d})$ are called equivalent, we write $(P,d)\sim (\tilde{P},\tilde{d})$, if $d=\tilde{d}$ and there exists $\theta\in{\mathbb{C}}$ such that $\theta^d=1$ and $\tilde{P}(x)=P(\theta\cdot x)$. Now if $[f]$ is an isolated singularity, there is a collection $\{(P_1,d_1),\dots,(P_\kappa,d_\kappa)\}$ of pairwise non-equivalent fractional power series, called [**Puiseux series**]{}, such that $$f(x,y)=0 \quad{\Longleftrightarrow}\quad \exists\; z\in{\mathbb{C}},\exists\; j\in\{1,\dots,\kappa\}:x=z^{d_j},y=P_j(z).$$ The Puiseux series are uniquely determined up to equivalence; $\kappa$ is the number of branches of $[f]$. [2]{} Given a collection $\{(P_1,d_1),\dots,(P_\kappa,d_\kappa)\}$ of pairwise non-equivalent fractional power series, we now define another such collection $\{(P'_1,d_1),\dots,(P'_\kappa,d_\kappa)\}$ with the property that each $P'_j$ is a finite series.
For this we need the following notation. Let $\Pi=(P,d)$ be a fractional power series. For $s\in{\mathbb{N}}$, set $$d_s(\Pi) := \min\{d'\in{\mathbb{N}}:1\leq i\leq\min(s,r){\Rightarrow}d'\cdot n_i\in d\cdot{\mathbb{N}}\}$$ and $$P^{(s)}(x) := \textstyle\sum_{i=1}^{\min(s,r)} a_ix^{m_i}, \quad m_i := n_id_s(\Pi)/d.$$ Note that $\Pi^{(s)}:=(P^{(s)},d_s(\Pi))$ is a fractional power series and that $d_s(\Pi)$ is increasing in $s$ and eventually equals $d$. For every $j=1,\dots,\kappa$, set $\Pi_j:=(P_j,d_j)$ and define $$r_j:=\min\{s\in{\mathbb{N}}:d_{s}(\Pi_j)=d_j \text{ and } j\ne j'{\Rightarrow}\Pi_j^{(s)}\not\sim\Pi_{j'}^{(s)} \}.$$ Note that $\Pi_j^{(r_j)}\not\sim\Pi_{i}^{(r_i)}$ if $j\ne i$. Now let $\{\Pi_1,\dots,\Pi_\kappa\}$ be the Puiseux series of $[f]$. We call $\{\Pi_1^{(r_1)},\dots,\Pi_\kappa^{(r_\kappa)}\}$ the [**Puiseux data**]{} of $[f]$. [3]{} Using the Puiseux data of $[f]$, one can now define the diagram $\tilde{\Gamma}[f]$ from which the diagram $\Gamma[f]$ is obtained in the next step. For the sake of simplicity, we give the precise definition of $\tilde{\Gamma}[f]$ only for the cases $\kappa=1,2$. Assume that $\kappa=1$ and let $(P,d)$ denote the Puiseux data. Define the integers $q_1,\dots,q_r,p_1,\dots,p_r>0$ such that $\gcd(q_i,p_i)=1$ and $$P(x^{1/d}) = x^{\frac{q_1}{p_1}}(a_1+x^{\frac{q_2}{p_1p_2}} (\dots(a_{r-1}+a_rx^{\frac{q_r}{p_1\cdots p_r}})\dots)).$$ Define the integers $\alpha_1,\dots,\alpha_r$ recursively by $$\alpha_1=q_1,\quad \alpha_{i+1}=p_ip_{i+1}\alpha_i+q_{i+1}.$$ Note that $\gcd(\alpha_i,p_i)=1$ for all $i=1,\dots,r$.
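The passage from a Puiseux series to the integers $q_i$, $p_i$, $\alpha_i$ is algorithmic: since $P(x^{1/d})=x^{q_1/p_1}(a_1+x^{q_2/(p_1p_2)}(\dots))$, the exponent of the $i$-th term is $n_i/d=\sum_{j\leq i}q_j/(p_1\cdots p_j)$, so successive differences of the exponents determine $q_i/p_i$ in lowest terms. A small sketch (an added illustration; the helper `puiseux_pairs` and the sample series are ours, and we use the weight recursion $\alpha_{i+1}=p_ip_{i+1}\alpha_i+q_{i+1}$):

```python
# Added sketch: extract (q_i, p_i) and alpha_i from the exponents n_i and
# the denominator d of a Puiseux series y = sum_i a_i x^{n_i / d}.
from fractions import Fraction
from math import gcd

def puiseux_pairs(exponents, d):
    pairs, alphas = [], []
    prev_e = Fraction(0)
    prod_p = 1                              # running product p_1 ... p_{i-1}
    for n in exponents:
        e = Fraction(n, d)
        # n_i/d - n_{i-1}/d = q_i / (p_1 ... p_i), so q_i/p_i in lowest
        # terms is this difference multiplied by p_1 ... p_{i-1}.
        frac = (e - prev_e) * prod_p
        q, p = frac.numerator, frac.denominator
        assert gcd(q, p) == 1               # Fraction reduces automatically
        pairs.append((q, p))
        if not alphas:
            alphas.append(q)                # alpha_1 = q_1
        else:                               # alpha_{i+1} = p_i p_{i+1} alpha_i + q_{i+1}
            alphas.append(pairs[-2][1] * p * alphas[-1] + q)
        prod_p *= p
        prev_e = e
    return pairs, alphas

# The cusp x^2 + y^3: Puiseux series P(z) = -z^2 with d = 3, i.e. y = -x^{2/3}.
assert puiseux_pairs([2], 3) == ([(2, 3)], [2])

# A two-pair example, y = x^{3/2} + x^{7/4}: exponents (6, 7) with d = 4.
assert puiseux_pairs([6, 7], 4) == ([(3, 2), (1, 2)], [3, 13])
print("puiseux checks passed")
```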
The graph $\tilde{\Gamma}[f]$ is given by the following weighted tree:

![Splice diagram $\tilde{\Gamma}[f]$ in the case $\kappa=1$](puiseux1.pstex)

The coefficients $a_i$ only enter the description of $\tilde{\Gamma}[f]$ if $\kappa>1$. Let $\{\Pi,\tilde{\Pi}\}$ be the Puiseux data of $[f]$. To give the diagram $\tilde{\Gamma}[f]$ one distinguishes three cases. Set $$t := \min\big\{s\geq0:\Pi^{(s+1)}\not\sim\tilde{\Pi}^{(s+1)}\big\}$$ and let $\tilde{r},\tilde{\alpha}_1,\dots,\tilde{\alpha}_{\tilde{r}}$, $\tilde{p}_1,\dots,\tilde{p}_{\tilde{r}}$ denote the integers associated to $\tilde{\Pi}$.\ (i) Assume that $t<r,\tilde{r}$ and $q_{t+1}=\tilde{q}_{t+1},p_{t+1}=\tilde{p}_{t+1}$.

![Splice diagram $\tilde{\Gamma}[f]$ in case (i)](puiseux2.pstex)

\(ii) Assume that $t<r,\tilde{r}$ and $\frac{q_{t+1}}{p_{t+1}}<\frac{\tilde{q}_{t+1}}{\tilde{p}_{t+1}}$.
![image](puiseux3.pstex)

\(iii) Assume that $t=\tilde{r}<r$. In this case, the diagram is obtained from the diagram in case (ii) by terminating the edge with weight $p_{t+1}$ by an arrowhead.\
By interchanging $\Pi$ and $\tilde{\Pi}$ if necessary, (i–iii) define $\tilde{\Gamma}[f]$ in the case $\kappa=2$. The general case is obtained by induction on $\kappa$. The induction step involves operations of the kind (i–iii). Instead of giving the precise definition, we give a list of properties of the diagram $\tilde{\Gamma}=\tilde{\Gamma}[f]$ for an arbitrary singularity $[f]$. In the case $\kappa=1,2$, these properties are easily verified from the definitions above.\
(A1) $\tilde{\Gamma}$ has the structure of a weighted tree. All weights are positive integers.\
(A2) $\tilde{\Gamma}$ has three kinds of vertices: arrowhead, knob and box vertices. The number of arrowhead vertices equals the number of branches of $[f]$. The arrowhead and knob vertices have $1$ incoming edge, the boxed ones at least $3$. A box vertex has at most $2$ neighboring knob vertices.\
(A3) An edge of $\tilde{\Gamma}$ carries a weight at each ending box vertex.
The edge-weights at a box vertex are pairwise relatively prime.\
(A4) Let $b$ denote a box vertex of $\tilde{\Gamma}$. Let $E$ be the set of edges which connect $b$ to its neighboring box vertices. The graph $\tilde{\Gamma}\setminus E$ has $\# E+1$ connected components. There is at most one component which contains neither $b$ nor any arrowhead vertex. Assume that there is such a component. Let $e$ be the edge which connects $b$ to that component and let $b'$ denote the other vertex of $e$. Then $e$ has weight $1$ at $b'$.\
(A5) Let $b,b'$ denote connected box vertices of $\tilde{\Gamma}$. Let $a_1,\dots,a_k$ be the weights at $b$ of the edges connecting $b$ and its neighboring vertices. Similarly, let $a'_1,\dots,a'_{k'}$ be the weights at $b'$. Assume that $a_1,a'_1$ are the weights of the edge connecting $b$ and $b'$. Then $$a_1a'_1-a_2\cdots a_ka'_2\cdots a'_{k'}>0.$$ [4]{} The diagram $\Gamma=\Gamma[f]$ is obtained from $\tilde{\Gamma}=\tilde{\Gamma}[f]$ by the following algorithm. Set $\Gamma_0:=\tilde{\Gamma}$. [Step]{}: Let $e$ be an edge of $\Gamma_i$ connecting a box vertex $b$ and a knob vertex $v$ and having weight $1$. If $e$ does not exist, set $\Gamma:=\Gamma_i$ and [Stop]{}. Otherwise, let $k\geq3$ denote the number of incoming edges at $b$ and $n\leq2$ the number of neighboring box vertices of $b$. If $k=3$ and $n=0$, set $\Gamma:=\Gamma_*:=\includegraphics{gamma1.pstex}$ and [Stop]{}. Otherwise, apply the operation $\Gamma_i{\rightarrow}\Gamma_{i+1}$ defined as follows:\
if $k=3$ and $n=1,2$,

![image](operation12n.pstex)

if $k>3$,

![image](operation3.pstex)

Finally repeat the [Step]{}.
Among all the possible diagrams $\Gamma$ which are obtained by this algorithm, $\Gamma_*$ is exceptional in the sense that it does not have box vertices. It is important to note that in all other cases the properties (A1–5) still hold if $\tilde{\Gamma}$ is replaced by $\Gamma$. Additionally, we have\
(A6) Every edge of $\Gamma$ connecting a box and a knob vertex has weight $>1$. \
We end the discussion of splice diagrams with the remark that the diagram associated to the quadratic singularity $(x,y)\mapsto x^2+y^2$ is $\Gamma_*$.

Characteristic set {#subse:set}
------------------

Let $\Gamma$ be the splice diagram of an isolated singularity. In this section, we define the set ${\mathfrak{t}}_\Gamma\subset{\mathcal{T}}$. In the exceptional case $\Gamma=\Gamma_*$, this is simply the set $\{(0,1,2;1)\}$, which is the characteristic set of a positive Dehn twist. Assume from now on that $\Gamma\ne\Gamma_*$. Denote by ${\mathcal{A}}$ and ${\mathcal{B}}$ the sets of arrowhead and box vertices of $\Gamma$, respectively. Moreover, denote by ${\mathcal{E}}$ the set of edges of $\Gamma$ which connect ${\mathcal{B}}$ to ${\mathcal{A}}\cup{\mathcal{B}}$. In the following, we define [**(i)**]{} for each $b\in{\mathcal{B}}$ an element $t_b\in{\mathcal{T}}$ and [**(ii)**]{} for each $e\in{\mathcal{E}}$ an element $t_e\in{\mathcal{T}}$. We then set $${\mathfrak{t}}_\Gamma:=\{t_x:x\in{\mathcal{B}}\cup{\mathcal{E}}\}.$$ Finally we define [**(iii)**]{} for each $e\in{\mathcal{E}}$ an admissible twist map $\phi_e$. Let ${\mathcal{V}}$ denote the set of ordered pairs of connected vertices of $\Gamma$. We start by introducing the function $m:{\mathcal{V}}{\rightarrow}{\mathbb{N}}$. Let $(v,v')\in{\mathcal{V}}$ and let $e$ be the edge connecting $v$ and $v'$. The graph $\Gamma\setminus\{e\}$ has two components. Denote by $\Gamma'$ that component which contains $v'$. Let ${\mathcal{A}}'$ denote the set of arrowhead vertices of $\Gamma'$.
For each $a\in{\mathcal{A}}'$, there is a unique path in $\Gamma'$, denoted by $\gamma_a$, which connects $v'$ and $a$. Define $\sigma_a>0$ to be the product of all edge-weights adjacent to $\gamma_a$, but not on $\gamma_a$. For examples see [@EN page 84]. If $\gamma_a$ is the constant path, set $\sigma_a:=1$. Define $$m(v,v'):= \sum_{a\in{\mathcal{A}}'}\sigma_a.$$ Note that $m(v,v')=0$ if and only if ${\mathcal{A}}'=\emptyset$. Together with (A4), this has the following consequences:\
(B1) If $m(v,v')=0$, then $m(v',v)\ne0$.\
(B2) If $b,b'\in{\mathcal{B}}$ and $m(b,b')=0$, then the edge connecting $b$ and $b'$ has weight $1$ at $b'$.\
(B3) For each $b\in{\mathcal{B}}$, there exists at most one neighboring vertex $b'\in{\mathcal{B}}$ such that $m(b,b')=0$.\
(B4) If $b\in{\mathcal{B}}$ and $a\in{\mathcal{A}}$, then $m(b,a)=1$.\
[**(i)**]{} Let $b\in{\mathcal{B}}$ and denote by $\{v_1,\dots,v_n,v_{n+1},\dots,v_k\},1\leq n\leq k,$ the set of vertices which are connected to $b$, ordered such that $v_i\in{\mathcal{A}}\cup{\mathcal{B}}$ if and only if $i\leq n$. Note that $k\geq 3$ and that $k-n\in\{0,1,2\}$. Further denote by $a_i>0,i=1,\dots,k,$ the weight at $b$ of the edge connecting $b$ and $v_i$. Define the numbers $$d_b:=\gcd\big(m(b,v_1),\dots,m(b,v_n)\big),\quad h_b:=\sum_{i=1}^n\gcd\big(m(b,v_i),m(v_i,b)\big)$$ and $$\label{eq:b} \ell_b:=\sum_{i=1}^nm(b,v_i)\cdot a_1\cdots\widehat{a_i}\cdots a_k,\quad \chi_b:=\ell_b\cdot\big(2-k+\sum_{i=n+1}^k\frac{1}{a_i}\big),$$ where the hat means that the underlying factor is omitted. Note that (B3) and (B4) imply that $d_b$ is well defined, and (B1) implies that $h_b$ is well defined; both are therefore $>0$. Similarly, it follows from (B3), (B4) that the integer $\ell_b$ is $>0$. We furthermore claim that $\chi_b<0$. This is because $k\geq3$ and $a_{n+1},\dots,a_k>1$ are pairwise relatively prime.
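These numbers are mechanical to evaluate once the $m(b,v_i)$, $m(v_i,b)$ and the weights $a_i$ are known. The following Python sketch (input conventions are ours) computes them with exact fractions; for the box vertex of the trefoil singularity $x^2+y^3$ — one arrowhead neighbor with $m(b,v_1)=1$, $m(v_1,b)=0$ and edge weights $1,2,3$ — it returns $(\chi_b,d_b,h_b;\ell_b)=(-1,1,1;6)$, in line with the order-$6$ periodic monodromy of the trefoil on its once-punctured torus fiber.

```python
from fractions import Fraction
from functools import reduce
from math import gcd, prod

def box_data(m_out, m_in, weights, n):
    """Data (chi_b, d_b, h_b; ell_b) at a box vertex b.

    m_out[i] = m(b, v_i), m_in[i] = m(v_i, b) for the n neighbors
    v_1..v_n in A u B; weights[i] = a_i for all k neighbors, the
    trailing k - n entries being the knob-edge weights.
    """
    k = len(weights)
    d_b = reduce(gcd, m_out)                      # > 0 by (B3), (B4)
    h_b = sum(gcd(p, q) for p, q in zip(m_out, m_in))
    ell_b = sum(m_out[i] * prod(weights[:i] + weights[i + 1:])
                for i in range(n))
    chi_b = ell_b * (2 - k + sum(Fraction(1, a) for a in weights[n:]))
    return chi_b, d_b, h_b, ell_b

# trefoil x^2 + y^3: one arrowhead neighbor, knob weights 2 and 3
print(box_data([1], [0], [1, 2, 3], 1))  # (Fraction(-1, 1), 1, 1, 6)
```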
Hence, we can set $$t_b:=(\chi_b,d_b,h_b;\ell_b).$$ Finally note that, from the definition of $m$, it follows that for each $i=1,\dots,n$, $$\label{eq:ellb} \ell_b = m(b,v_i)\cdot a_1\cdots\widehat{a_i}\cdots a_k + m(v_i,b)\cdot a_i.$$ [**(ii)**]{} Let $e\in{\mathcal{E}}$ be an edge which connects the vertices $b,b'\in{\mathcal{B}}$. Let $a_1,\dots,a_k$ be the weights at $b$ of the edges connecting $b$ and its neighboring vertices. Similarly, let $a'_1,\dots,a'_{k'}$ be the weights at $b'$. Assume that $a_1,a'_1$ are the weights of $e$. Define the numbers $$\label{eq:deltae} d_e:=\gcd\big(m(b,b'),m(b',b)\big),\quad \Delta_e:=a_1a'_1-a_2\cdots a_ka'_2\cdots a'_{k'}$$ and $$\ell_e:=\frac{d_e\cdot\Delta_e}{\ell_b\cdot\ell_{b'}}.$$ By (B1), $d_e$ is well defined and therefore $>0$. Set $$t_e:=(0,d_e,2d_e;\ell_e)\in{\mathcal{T}}.$$ Now let $e\in{\mathcal{E}}$ be an edge which connects the vertex $b\in{\mathcal{B}}$ to an arrowhead vertex. In this case set $$t_e := (0,d_e,2d_e;\ell_e) := (0,1,2;1/\ell_b)\in{\mathcal{T}}.$$ [**(iii)**]{} Let $e$ be an edge which connects $b\in{\mathcal{B}}$ and $b'\in{\mathcal{A}}\cup{\mathcal{B}}$. By (B1), (B4) we can assume without loss of generality that $m(b,b')\ne0$. Now choose integers $n,n'$ with $m(b,b')\cdot n'+m(b',b)\cdot n=d_e$. Denote by $a$ the weight of $e$ at $b$ and set $m:=m(b,b')$. Define $\phi_e$ to be the admissible twist map which cyclically permutes $d_e$ annuli and such that $$\label{eq:e} \phi_e^{d_e}(q,p) = \Big(q,p-q\cdot d_e\ell_e-\frac{d_e}{m}\big(n-\frac{d_e\cdot a}{\ell_b}\big)\Big),$$ for all $(q,p)\in[0,1]\times S^1$.

Proof of Proposition \[prop:monodromy\] {#subse:proof}
---------------------------------------

We first state the precise results that are needed for the proof. \[thm:EN\] Let $[f]$ be an isolated plane curve singularity with Milnor fiber $M$ and geometric monodromy $g$.
There exists an admissible triple $(\phi,M,C)$ such that $\phi\in\iota_*g$ and ${\mathfrak{t}}(\phi,M,C)={\mathfrak{t}}_{\Gamma[f]}$. Moreover, the twist map components of $\phi$ are given by the model . \(i) This theorem is a summary of results from Sections 9, 10, 11 and 13 of [@EN]. The periodic components of the monodromy are described in Lemma 11.4 and the twist map components in Theorems 13.1, 13.5. The positive twist property (A5) is contained in Theorem 9.4. We remark that our notation differs at some points from that of [@EN].\
(ii) Eisenbud and Neumann give a detailed description of the periodic components of the monodromy in terms of cyclic branched coverings in [@EN Lemma 11.4].\
(iii) The graph $\Gamma[f]$ contains more information than the characteristic set ${\mathfrak{t}}_{\Gamma[f]}$. It also shows how the $\phi$-components are pieced together to give the Milnor fiber. The second main ingredient for our proof is the following result of [@AC2] and [@L]. We would like to point out that this result holds in much greater generality than we use it here, namely for holomorphic hypersurface singularities, isolated or non-isolated, in any dimension. \[thm:AL\] Let $g$ be the geometric monodromy of an isolated plane curve singularity. Then $\Lambda(g)=0$, where $\Lambda$ denotes the Lefschetz number. Let $[f]$ be an isolated plane curve singularity. Let $M$ denote the Milnor fiber and $g$ the geometric monodromy of $[f]$. By Theorem \[thm:EN\], there exists a representative $\phi\in\iota_*g$ and a finite $\phi$-invariant union of circles $C\subset M$, such that if $M'$ is a $\phi$-component of $M\setminus C$, then either $M'$ has negative Euler characteristic and $\phi|M'$ is periodic, or $M'$ is a union of annuli and $\phi|M'$ is an admissible twist map. If $(\chi,d,h)$ denotes the topological type of $M'$ and $\ell$ the order/twist number of $\phi|M'$, then $(\chi,d,h;\ell)\in{\mathfrak{t}}_{\Gamma[f]}$.
Moreover, if $M'$ is a union of annuli, then is a model for $\phi|M'$. The strategy of the proof is now as follows. We will prove below that [1]{} If $\chi=0$, then $\operatorname{Fix}(\phi)\cap\text{int}(M')=\emptyset$. [2]{} If $\chi<0$, then $\ell>1$. [3]{} $\phi$ only has positive twists. \
Claim 1 implies that $\phi$ is a diffeomorphism of finite type. From Proposition \[prop:fclass\] about the fixed point classes of a diffeomorphism of finite type and claim 2, it follows that $\operatorname{Fix}(\phi)\cap\text{int}(M)$ is a discrete set of fixed points with fixed point index $1$. The Lefschetz fixed point theorem therefore implies that $$\Lambda(\phi)=\#\big(\operatorname{Fix}(\phi)\cap\text{int}(M)\big).$$ Note that ${\partial}M$ has fixed point index $0$. From Theorem \[thm:AL\], it hence follows that $\operatorname{Fix}(\phi)\cap\text{int}(M)=\emptyset$. Together with claim 3, this proves Proposition \[prop:monodromy\], up to claims 1, 2 and 3. Note that if $\Gamma[f]=\Gamma_*$, then $M$ is an annulus and $\phi$ is a positive Dehn twist. Claims 1, 2 and 3 obviously hold in this case and we assume from now on that $\Gamma[f]\ne\Gamma_*$. We begin by proving claim 2. Recall that ${\mathfrak{t}}_{\Gamma[f]}=\{t_x:x\in{\mathcal{B}}\cup{\mathcal{E}}\}$ and that if $\chi<0$, there exists $b\in{\mathcal{B}}$ such that $(\chi,d,h;\ell)=(\chi_b,d_b,h_b;\ell_b)$. Consider equation  and assume that $\ell_b=1$. In this case, only one summand of $\ell_b$ is non-zero, which, by (B3), is only possible if $n\leq2$. Since $k\geq3$, however, it follows from (A6) that $a_k>1$ and hence that $1=\ell_b\geq a_k>1$, a contradiction. To prove claim 1, let $e\in{\mathcal{E}}$ be such that $(\chi,d,h;\ell)=(0,d_e,2d_e;\ell_e)$. We can assume that $d_e=1$, otherwise the claim is obviously true. Hence $M'$ is an annulus and $\phi|M'=\phi_e$. Consider equation  and assume that $\phi_e(q,p)=(q,p)$ for some $(q,p)\in(0,1)\times S^1$.
This is only possible if $$-q\cdot\ell_e-\frac{1}{m}\big(n-\frac{a}{\ell_b}\big) \in{\mathbb{Z}},$$ where we use the same notation as in . This in turn implies that $$\label{eq:q} a-qm\ell_b\ell_e\in\ell_b{\mathbb{Z}}.$$ Assume for the moment that $0<m\ell_b\ell_e\leq a$. Since $0<q<1$, it follows from that $0<\ell_b<a$. To show that this is a contradiction, recall from that $\ell_b=ma_2\cdots a_k+m'a$, where $m':=m(b',b)$. If $m'\ne0$, it follows that $\ell_b\geq a$ and hence $m'=0$. By (B2) however, this implies that $a=1>\ell_b$, which is a contradiction. It remains to prove that $0<m\ell_b\ell_e\leq a$. First note that $m\ell_b\ell_e>0$ iff $\ell_e>0$. If $b'\in{\mathcal{A}}$, then $\ell_e>0$ by definition. If $b'\in{\mathcal{B}}$, then (A5) is exactly the statement that $\ell_e>0$. This in fact proves claim 3. To prove that $m\ell_b\ell_e\leq a$, first assume that $b'\in{\mathcal{A}}$. Then $m=1,a=1,\ell_b\ell_e=1$ and we are finished. If $b'\in{\mathcal{B}}$, then it follows from and that $$\ell_{b'} \geq ma', \quad \Delta_e \leq aa',$$ where $a'$ denotes the weight of $e$ at $b'$. This implies that $$\ell_e\leq\frac{a}{\ell_bm},$$ which proves the required inequality. This ends the proof of the proposition.

Floer homology on surfaces with boundary {#ap:open}
========================================

Let $M$ be a compact connected oriented 2-manifold with boundary ${\partial}M\ne\emptyset$. Recall that $\operatorname{Diff}_c^+(M)$ denotes the group of orientation preserving diffeomorphisms which are the identity near ${\partial}M$. This appendix addresses Floer homology theory for elements of $\operatorname{Diff}_c^+(M)$. In higher dimensions, this is known as Floer homology theory for exact symplectomorphisms of exact symplectic manifolds with contact type boundary and was used in [@S5 Section 4], see also [@CFH]. Due to the dimensional restriction, the theory exhibits additional structure, namely isotopy invariance, as in the closed case.
The central notion around this issue is that of monotonicity. We start by defining monotonicity and show that it has naturality, isotopy and inclusion properties similar to the ones discussed on page  for the closed case. Let $\omega$ be an area form on $M$ and denote by $\operatorname{Symp}_c(M,\omega)$ the group of $\omega$-preserving diffeomorphisms which are the identity near ${\partial}M$. If $\phi\in\operatorname{Symp}_c(M,\omega)$, $\omega$ induces a closed 2-form $\omega_\phi$ on the mapping torus $T_\phi$. \[def:open\] $\phi\in\operatorname{Symp}_c(M,\omega)$ is called [**monotone**]{} if $[\omega_\phi]=0$ in $H^2(T_\phi;{\mathbb{R}})$. $\operatorname{Symp}_c^m(M,\omega)$ denotes the set of monotone symplectomorphisms. As in the closed case it is useful to look at the short exact sequence $$0 {\longrightarrow}\frac{H^1(M;{\mathbb{R}})}{\operatorname{im}(\operatorname{id}-\phi^*)} \stackrel{\delta}{{\longrightarrow}} H^2({T_{\phi}};{\mathbb{R}}) \stackrel{\iota^*}{{\longrightarrow}} H^2(M;{\mathbb{R}})=0 {\longrightarrow}0$$ and define the class $m(\phi)\in H^1(M;{\mathbb{R}})/\operatorname{im}(\operatorname{id}-\phi^*)$ satisfying $\delta m(\phi)=[\omega_\phi]$. The naturality, isotopy and inclusion properties discussed on page  in the closed case carry over word for word to the current situation with the addition of a subscript $c$ to all diffeomorphism groups. For the first two properties this is straightforward to check. The inclusion property needs separate consideration. Recall:\
(Inclusion) The inclusion $\operatorname{Symp}_c^m(M,\omega){\hookrightarrow}\operatorname{Diff}_c^+(M)$ is a homotopy equivalence. In particular, every connected component of $\operatorname{Symp}_c^m(M,\omega)$ is contractible.\
The proof is analogous to the closed case and uses the following three facts. Firstly, the inclusion $\operatorname{Symp}_c(M,\omega){\hookrightarrow}\operatorname{Diff}_c^+(M)$ is a homotopy equivalence.
This follows from an extension of Moser’s theorem, see [@MS Exercise 3.18]. Secondly, every connected component of $\operatorname{Diff}_c^+(M)$ is contractible. If the genus of $M$ is $\ne0$, this is shown using the Earle–Eells theorem [@EE]. If the genus is $0$, it follows from the corresponding result for the disk, which is due to Smale [@Sm1 Theorem B]. Thirdly, we have: If $\phi\in\operatorname{Symp}_c(M,\omega)$, there exists a closed 1-form $\theta\in m(\phi)$ such that $\operatorname{supp}(\theta)\subset\mathrm{int}(M)$. The flow $(\psi_t)_{t\in{\mathbb{R}}}$ of the vector field $X$ which is uniquely defined by $\omega(X,\cdot)=-\theta$, satisfies $\psi_t\in\operatorname{Symp}_c(M,\omega)$ and $m(\phi\circ\psi_1)=0$. The second part of the statement follows immediately from the first one and the isotopy property: $$m(\phi\circ\psi_1)=m(\phi)+[\operatorname{Flux}(\psi_t)]$$ in $H^1(M;{\mathbb{R}})/\operatorname{im}(\operatorname{id}-\phi^*)$. To prove the first statement, let $\beta\in m(\phi)$ be a closed 1-form. Let $S_1,\dots,S_n$ denote the connected components of ${\partial}M$ and choose for each $i$ a collar neighborhood $N_i\subset M$ of $S_i$ and a closed 1-form $\theta_i$ on $N_i$ such that $\langle[\theta_i],[S_i]\rangle=1$. There exist $f:M{\rightarrow}{\mathbb{R}}$ smooth and $t_1,\dots,t_n\in{\mathbb{R}}$ such that $$\label{eq:open1} (\beta+{\mathrm{d}}f)|N_i = t_i\cdot\theta_i, \quad \forall\, i=1,\dots,n.$$ We claim that $\theta:=\beta+{\mathrm{d}}f$ is the required 1-form. Recall from page  that $\delta:H^1(M;{\mathbb{R}}){\rightarrow}H^2({T_{\phi}};{\mathbb{R}})$ is given by $\delta[\theta]=[\rho\cdot\theta\wedge{\mathrm{d}}t]$, with $\rho:{[0,1]}{\rightarrow}{\mathbb{R}}$ a smooth function vanishing near 0 and 1, and satisfying $\int_0^1\!\rho\,{\mathrm{d}}t=1$. Furthermore note that $S^1\times S_i\subset T_\phi$ is an embedded 2-torus for each $i=1,\dots,n$.
Using , it follows that $$\begin{aligned} \big\langle[\omega_\phi],[S^1\times S_i]\big\rangle &=& \big\langle\delta[\theta],[S^1\times S_i]\big\rangle \\ &=& -\big\langle[\rho\cdot{\mathrm{d}}t],[S^1]\big\rangle\cdot \big\langle[\theta],[S_i]\big\rangle \\ &=& -t_i\cdot\big\langle[\theta_i],[S_i]\big\rangle \;=\; -t_i.\end{aligned}$$ On the other hand, $$\langle[\omega_\phi],[S^1\times S_i]\rangle=0,$$ since $\omega_\phi$ has no ${\mathrm{d}}t$-component. Hence, $t_i=0$ for all $i=1,\dots,n$, which proves the claim. Recall that in the closed case monotonicity (i) guarantees compactness of the space of Floer connecting orbits and (ii) is used to prove invariance. The same holds in the current situation.\
(Floer homology) To every $\phi\in\operatorname{Symp}_c^m(M,\omega)$ symplectic Floer homology theory assigns a pair of ${\mathbb{Z}}_2$-graded vector spaces ${HF_*}(\phi,\pm)$ over ${\mathbb{Z}}_2$, with multiplicative structures $$H^*(M;{\mathbb{Z}}_2)\otimes{HF_*}(\phi,\pm){\longrightarrow}{HF_*}(\phi,\pm).$$ Floer homology is natural in the sense that ${HF_*}(\phi,\pm)$ and ${HF_*}(\psi^*\phi,\pm)$ are naturally isomorphic as modules over $H^*(M;{\mathbb{Z}}_2)$, for all $\psi\in\operatorname{Diff}_c^+(M)$.\
(Invariance) If $\phi,\phi'\in\operatorname{Symp}_c^m(M,\omega)$ are isotopic, then ${HF_*}(\phi,\pm)$ and ${HF_*}(\phi',\pm)$ are naturally isomorphic as modules over $H^*(M;{\mathbb{Z}}_2)$.\
The appearance of the sign in the Floer homology corresponds to two ways of perturbing $\phi\in\operatorname{Symp}_c^m(M,\omega)$ near ${\partial}M$. To be more precise, let $\jmath:\bigcup(-{\varepsilon},0]\times S^1{\rightarrow}M$ be a collar neighborhood of ${\partial}M$, such that $\jmath^*\omega={\mathrm{d}}q\wedge{\mathrm{d}}p$ with $(q,p)\in(-{\varepsilon},0]\times S^1$. Choose $H:M{\rightarrow}{\mathbb{R}}$ with support near ${\partial}M$ and such that $\jmath^*H(q,p)=-q$.
Let $\psi_t$ denote the Hamiltonian flow generated by $H$, choose $0<\delta<1$ and set $$\phi_+ := \phi\circ\psi_\delta, \quad \phi_- := \phi\circ\psi_{-\delta}.$$ The definition of the Floer complex for $\phi_\pm$ is along the same line as that in the closed case [@S2], with the usual modifications that are needed in the presence of a contact type boundary. The modifications include a condition on the path $J=(J_t)_{t\in{\mathbb{R}}}$ of $\omega$-compatible complex structures that is used to define the Floer connecting orbits; namely that $\jmath^*J_t$ is the standard complex structure on $\bigcup(-{\varepsilon},0]\times S^1$, for all $t\in{\mathbb{R}}$. We briefly recall the use of this condition. Assume without loss of generality that $\phi|\operatorname{im}\jmath=\operatorname{id}$. Now let $u:{\mathbb{R}}^2{\rightarrow}M$ be a smooth map satisfying $$\label{eq:open2} \left\{\begin{array}{l} u(s,t) = \phi_+(u(s,t+1)), \\ {\partial}_s u + J_t(u){\partial}_t u = 0, \\ \lim_{s{\rightarrow}\pm\infty}u(s,t) \in\operatorname{Fix}(\phi_+). \end{array}\right.$$ We claim that $\operatorname{im}u\subset M\setminus\operatorname{im}\jmath$. Assume by contradiction that $u^{-1}(\operatorname{im}\jmath)$ is non-empty and let $u_q:u^{-1}(\operatorname{im}\jmath){\rightarrow}{\mathbb{R}}$ denote the $q$-component of $\jmath^{-1}\circ u$. By construction, $u_q$ is smooth and not locally constant. Using the first and third equation in , one can now show that $u_q$ has a global maximum. From the second equation in together with the above assumption on $J_t$, it furthermore follows that $u_q$ is a harmonic function. This contradicts the maximum principle and hence proves the claim, which assures that the Floer connecting orbits are contained in a compact subset of $\text{int}(M)$. We close this section by remarking that ${HF_*}(\phi,+),{HF_*}(\phi,-)$ are independent of the choices of the local chart $\jmath$ and perturbation data $H,\delta$. 
They are invariants of the isotopy class of $\phi$ in $\operatorname{Diff}_c^+(M)$. [10]{} N. A’Campo. La fonction zêta d’une monodromie. , 50:233–248, 1975. N. A’Campo. On monodromy maps of hypersurface singularities. In [*Manifolds-Tokyo 1973*]{}, pages 151–152. University of Tokyo Press, 1973. N. A’Campo. Real deformations and complex topology of plane curve singularities. , 8(1):5–23, 1999. N. A’Campo. Sur la monodromie des singularités isolées d’hypersurfaces complexes. , 20:147–170, 1973. V. I. Arnold, S. M. Gusein-Zade, and A. N. Varchenko. , volume 83 of [*Monographs in Mathematics*]{}. Birkhäuser, 1988. E. Brieskorn and H. Knörrer. . Birkhäuser, 1986. K. Cieliebak, A. Floer, and H. Hofer. Symplectic homology II. A general construction. , 218(1):103–122, 1995. C. J. Earle and J. Eells. The diffeomorphism group of a compact Riemann surface. , 73:557–559, 1967. D. Eisenbud and W. Neumann. , volume 110 of [*Annals of Mathematics Studies*]{}. Princeton University Press, 1985. A. Floer. Symplectic fixed points and holomorphic spheres. , 120(2):575–611, 1989. D. Fried. Monodromy and dynamical systems. , 25(4):443–453, 1986. R. Gautschi. Freiheitssatz for surface groups. , 2002. Submitted to J. Reine Angew. Mathematik. B. Jiang. Fixed point classes from a differentiable viewpoint. In [*Fixed point theory*]{}, volume 886 of [*Lecture Notes in Math.*]{}, pages 163–170. Springer, 1981. B. Jiang and J. Guo. Fixed points of surface diffeomorphisms. , 160(1):67–89, 1993. D. T. Lê. La monodromie n’a pas de points fixes. , 22(3):409–427, 1975. J. Milnor. , volume 61 of [*Annals of Mathematics Studies*]{}. Princeton University Press, 1968. D. McDuff and D. A. Salamon. . Oxford Mathematical Monographs. Oxford Science Publications, 1998. J. Moser. On the volume elements on a manifold. , 120:286–294, 1965. W. Neumann. Splicing algebraic links. In [*Complex analytic singularities*]{}, volume 8 of [*Adv. Stud. Pure Math.*]{}, pages 349–361.
North Holland, 1987. J. Nielsen. Surface transformations of algebraically finite type. , 21, 1944. D. A. Salamon and E. Zehnder. Morse theory for periodic solutions of Hamiltonian systems and the Maslov index. , 45(10):1303–1360, 1992. M. Schwarz. , volume 111 of [*Progress in Mathematics*]{}. Birkhäuser, 1993. M. Poźniak. Floer homology, Novikov rings and clean intersections. In [*Northern California Symplectic Geometry Seminar*]{}, volume 196 of [*Amer. Math. Soc. Transl. Ser. 2*]{}, pages 119–181. American Mathematical Society, 1999. P. Seidel. Floer homology of a Dehn twist. , 3(6):829–834, 1997. P. Seidel. More on vanishing cycles and mutation. , October 2000. math.SG/0010032. P. Seidel. Symplectic Floer homology and the mapping class group. , March 2001. math.SG/0010301. S. Smale. Diffeomorphisms of the 2-sphere. , 10:621–626, 1959. W. P. Thurston. On the geometry and dynamics of diffeomorphisms of surfaces. , 19(2):417–431, October 1988. [^1]: Here, $\iota:\operatorname{Diff}^+_c(M){\rightarrow}\operatorname{Diff}^+(M,{\partial}M)$ is the inclusion.
--- abstract: 'An epoch of Higgs relaxation may occur in the early universe during or immediately following postinflationary reheating. It has recently been pointed out that leptogenesis may occur in minimal extensions of the Standard Model during this epoch [@Kusenko:2014lra]. We analyse Higgs relaxation taking into account the effects of perturbative and non-perturbative decays of the Higgs condensate, and we present a detailed derivation of the relevant kinetic equations and particle interaction cross sections. We identify the parameter space in which a sufficiently large asymmetry is generated.' author: - Louis Yang - Lauren Pearce - Alexander Kusenko bibliography: - 'Reference.bib' title: Leptogenesis via Higgs Relaxation ---

Introduction
============

During the inflationary era, the Higgs field may develop a stochastic distribution of vacuum expectation values (VEVs) due to the flatness of its potential [@Bunch:1978yq; @Hawking:1981fz; @Linde:1982uu; @Starobinsky:1982ee; @Vilenkin:1982wt; @Starobinsky:1994bd; @Enqvist:2013kaa; @Enqvist:2014bua], or it may be trapped in a quasi-stable minimum. In both cases, the Higgs field relaxes to its vacuum state via a coherent motion, during which time the Sakharov conditions [@Sakharov:1967dj], necessary for baryogenesis, are satisfied by the time-dependent Higgs condensate and the lepton-number-violating Majorana masses in the neutrino sector. At large VEVs, the Higgs field may be sensitive to physics beyond the Standard Model, which can generate an effective chemical potential that increases the energy of antileptons in comparison to leptons. In Ref. [@Kusenko:2014lra], we used an $\mathcal{O}_6$ operator familiar from spontaneous baryogenesis models (e.g., [@Dine:1990fj]) to produce a baryon asymmetry matching cosmological observations. In this work, we build on our previous analysis.
In particular, we replace the estimate of the Higgs-neutrino cross section with a tree-level calculation which includes resonant effects. Additionally, we include the effects of Higgs condensate decay, with both perturbative and non-perturbative contributions. We also present a detailed derivation of the relevant Boltzmann equation. Additionally, the effective $\mathcal{O}_6$ operator can be generated through fermionic loops; therefore, its scale can be set either by some heavy mass scale or by the temperature of the plasma. We consider additional combinations of these scales along with mechanisms to generate the large Higgs VEV during inflation, and in particular, we also present an analysis of the relevant parameter space. We note that as in Ref. [@Kusenko:2014lra], we consider an asymmetry produced via the scattering of neutrinos and Higgs bosons in the plasma produced by the decays of the inflaton, and therefore this scenario requires a relatively fast reheating. This is in contrast to Ref. [@Pearce:2015nga], which similarly considered the same $\mathcal{O}_6$ operator but produced the matter asymmetry via the decay of the Higgs condensate. While we focus here on the relaxation of the Higgs field, it has also been observed that the axion field can undergo a similar post-inflationary relaxation [@Chiba:2003vp; @Kusenko:2014uta], and our analysis can easily be extended to this scenario. The structure of this paper is as follows. In the next section, we consider two specific mechanisms by which the Higgs field can acquire a large vacuum expectation value during inflation. In the scenarios considered here, the subsequent evolution of the Higgs VEV produces an effective chemical potential, which influences the interactions of leptons in the thermal plasma produced via reheating. The presence of the plasma, however, also influences the evolution of the Higgs VEV through finite temperature corrections to the effective potential. 
Therefore, we discuss reheating in section \[sec:Reheat\] before we consider the evolution of the Higgs condensate in section \[sec:Relaxation\]. Next, we introduce a higher-dimensional operator, involving only Standard Model fields, which represents new physics at some high energy scale. In section \[sec:chemical\_potential\], we demonstrate that, while the Higgs VEV is in motion, this operator induces an effective chemical potential which distinguishes leptons from antileptons. We derive the resulting Boltzmann equation for lepton number in section \[sec:boltzmann\_equation\_ahhhh!\]. In section \[sec:asymmetry\_produced\], we present a numerical analysis covering a variety of initial conditions and scales for new physics, and we identify the allowed parameter space for successful leptogenesis.

Initial Conditions for the Higgs VEV {#sec:Higgs_IC}
====================================

We begin by motivating our project with the observation that the Higgs field can acquire a large vacuum expectation value (VEV) for a variety of reasons during inflation; therefore, an epoch of post-inflationary Higgs relaxation is a general feature of many cosmological scenarios. In this work we are interested in generating an excess of leptons over antileptons during this epoch. We will find that the resulting asymmetry depends on the initial value of the VEV, denoted $\sqrt{\left<\phi^{2}\right>}=\phi_{0}$. During inflation, quantum fluctuations of the Higgs field were ongoing, and therefore different patches of the Universe had slightly different VEVs at the end of inflation. Regions that begin with slightly different $\phi_0$ values consequently develop different baryon asymmetries. This produces unacceptably large baryonic isocurvature perturbations [@Peebles1987; @*1987ApJ...315L..73P; @*Enqvist:1998pf; @*Enqvist:1999hv; @*Harigaya:2014tla], which are constrained by CMB observations [@Ade:2013uln].
Therefore, in order to suppress isocurvature perturbations in the late universe, it is necessary that the variation in these initial values be small. In this section, we discuss two ways of generating the requisite large VEVs while suppressing the variation between different spacetime regions: through quantum fluctuations, which are suppressed by a Higgs-inflaton coupling until the end of inflation, and by trapping the Higgs field in a false vacuum. The Standard Model Higgs boson has a tree-level potential $$V(\Phi) = m^2 \Phi^\dagger \Phi + \lambda (\Phi^\dagger \Phi)^2,$$ where $\Phi$ is an $\mathrm{SU}(2)$ doublet. The classical field may be written as $$\Phi = \dfrac{1}{\sqrt{2}} \begin{pmatrix} e^{i \theta} \phi \\ 0 \end{pmatrix},$$ where $\phi(x)$ is a real scalar field. The parameters $m$ and $\lambda$, although constant at tree-level, are modified by both loop and finite temperature corrections. For the experimentally preferred top quark mass and Higgs boson mass, loop corrections result in a negative running coupling $\lambda$ at sufficiently large VEVs, with the result that the $\phi = v_\mathrm{EW} = 246 \; \mathrm{GeV}$ minimum is metastable at zero temperature [@Degrassi:2012ry]. We note, however, that a stable vacuum is possible within current experimental uncertainties [@Degrassi:2012ry]. The running of the quartic coupling produces a shallow potential, with the consequence that a large VEV develops during inflation due to quantum fluctuations, at least in the regime in which the Standard Model vacuum is stable [@Enqvist:2013kaa]. We consider this sort of scenario in subsection IC-2 below. Alternatively, a metastable electroweak vacuum is frequently viable within the inflationary paradigm [@Kusenko:1996xt; @*Kusenko:1996jn; @*Kobakhidze:2013tn], and the Higgs potential may be sensitive to higher-dimensional operators which lift the second minimum. We consider this scenario in the next subsection (IC-1).
IC-1: Metastable Vacuum at Large VEVs {#subsec:IC1}
-------------------------------------

At large VEVs, the Higgs potential may be sensitive to the effects of higher-dimensional operators, which can lift the second minimum and consequently stabilize the electroweak vacuum. The Higgs VEV may take a large initial value during inflation, similar to the initial VEV of the inflaton field itself in chaotic inflation models. During inflation, such a VEV evolves towards the false vacuum from above, and then remains trapped in this false vacuum until it is destabilized by thermal corrections during reheating. Subsequently, the field rolls to the global minimum at $\phi = 0$, where it remains until electroweak symmetry is broken at a significantly later time. In order to lift the second minimum, we consider terms of the form $$\mathcal{L}_\mathrm{lift} = \dfrac{\phi^{10}}{\Lambda_{\text{lift}}^{6}}. \label{eq:lift}$$ This non-renormalizable operator may be viewed as an effective operator arising from integrating out heavy states in loops. During inflation, thermal corrections in the supercooled universe are insufficient to destabilize the metastable vacuum. We also ensure that the quantum fluctuations (discussed in detail in the next subsection) do not destabilize the vacuum by requiring that the potential barrier height $\Delta V \gg H_I^4$. In order to suppress the above-mentioned isocurvature perturbations, we will ensure that fluctuations about the false minimum are able to relax back to the minimum, for which it is sufficient that $m_\mathrm{eff} \sim \sqrt{ d^2 V \slash d\phi^2} > H_I$ in the region probed by quantum fluctuations. As a specific example, we consider the Higgs potential with one loop corrections [@Degrassi:2012ry] with the experimentally preferred values $m_h = 126$ GeV and $m_t = 173.07$ GeV.
Taking $\Lambda_{\text{lift}} = 6.52 \times 10^{15}$ GeV gives a metastable minimum near $\phi=10^{15}$ GeV, with a potential barrier height of $\Delta V \approx 10^{53} \; \mathrm{GeV}^4$. We will consider $H_I \sim 10^{11} \; \mathrm{GeV}$; in addition to being insufficient to probe the region beyond the barrier, this is less than the effective mass $m_\mathrm{eff} \sim 10^{13} \; \mathrm{GeV}$ in the region probed by quantum fluctuations. Provided that the maximum reheat temperature is greater than $\sim 5 \times 10^{13} \; \mathrm{GeV}$, thermal corrections during reheating are sufficient to destabilize this vacuum. IC-2: Quantum Fluctuations {#subsec:IC2} -------------------------- The running coupling constant $\lambda$ results in a shallow potential, and during inflation, scalar fields with slowly rising potentials generically develop large VEVs. Qualitatively, the scalar field in a de Sitter space can develop a large VEV via quantum effects, such as Hawking-Moss instantons [@Bunch:1978yq; @Hawking:1981fz] or stochastic growth [@Linde:1982uu; @Starobinsky:1982ee; @Vilenkin:1982wt]. The field then relaxes to its equilibrium value via a classical motion, which requires a time $$\tau_{\phi}\sim m_{\mathrm{eff}}^{-1}\sim\left( \sqrt{d^{2}V\slash d\phi^{2}}\right) ^{-1}.$$ If the universe expands sufficiently quickly during inflation, then relaxation is too slow and quantum jumps occur frequently enough to maintain a large VEV. Specifically, large VEVs occur if the Hubble parameter $H_I = \sqrt{8\pi \slash 3} \Lambda_I^2 \slash M_\mathrm{Pl} \gg \tau_\phi^{-1}$. For field values $\phi$ that satisfy this relation, Hubble friction is sufficient to prevent the system from relaxing to its equilibrium value $\phi = 0$. Averaged over superhorizon scales, the mean Higgs VEV is such that $V(\phi_I) \sim H_I^4$ [@Bunch:1978yq; @Hawking:1981fz; @Enqvist:2013kaa], provided that this VEV does not probe the second vacuum in the case that the electroweak vacuum is quasistable. 
Although the average vacuum expectation value is $\phi_I$, there is variation between the VEVs of different horizon-sized patches. Consequently, different patches of the observable universe began with different $\phi_0$ values, and, as discussed above, this generically results in unacceptably large isocurvature perturbations. However, as also mentioned, the Higgs potential is sensitive to the effects of higher-dimensional operators at large VEVs; here we use such operators to limit the growth of the Higgs VEV to the last several e-folds of inflation. As a result, the isocurvature perturbations are limited to angular scales smaller than those that have been experimentally probed. Specifically, we introduce one or more couplings between the Higgs and inflaton field of the form $$\mathcal{L}_\mathrm{\phi I} = c \dfrac{(\Phi^\dagger \Phi)^{m/2} (I^\dagger I)^{n \slash 2}}{M_\mathrm{Pl}^{m+n-2}}, \label{eq:inflaton_coupling}$$ which increases the effective mass of the Higgs field during the early stages of inflation, when $\langle I \rangle $ is large (superplanckian, in the case of chaotic inflation). As explained above, when $\tau_\phi^{-1} \sim m_\mathrm{eff}(\phi_I) \sim H_I$ the expansion of the universe is not sufficiently rapid to trap the field at large VEVs. At the end of slow-roll inflation, $\left< I \right>$ decreases; consequently this term becomes negligible and the Higgs acquires a large vacuum expectation value. If the Higgs VEV grows during the last $N_\mathrm{last}$ e-folds of inflation, it reaches the average value $$\phi_0=\min [\phi_I, \sqrt{N_\mathrm{last}} H_I \slash 2 \pi]. \label{eq:v0_IC2}$$ Provided $N_\mathrm{last}\approx 5-8$, the baryonic isocurvature perturbations develop only on the smallest angular scales, which are not yet constrained. We emphasize that operators of this form may be viewed as effective operators arising from integrating out heavy states in loops.
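For orientation, the estimate above is easy to evaluate numerically; the sketch below (our own script; the benchmark $\Lambda_I = 10^{17}$ GeV is the value adopted for the IC-2 example in the numerical results section, and $N_\mathrm{last} = 8$) assumes the stochastic value is below $\phi_I$:

```python
import math

M_PL = 1.22e19      # Planck mass in GeV
LAMBDA_I = 1.0e17   # inflationary scale in GeV (IC-2 benchmark used later in the text)
N_LAST = 8          # e-folds during which the Higgs VEV is allowed to grow

# Hubble parameter during inflation, H_I = sqrt(8*pi/3) * Lambda_I^2 / M_Pl
H_I = math.sqrt(8.0 * math.pi / 3.0) * LAMBDA_I**2 / M_PL

# Stochastic growth over the last N_last e-folds (assuming phi_I is larger)
phi_0 = math.sqrt(N_LAST) * H_I / (2.0 * math.pi)

print(f"H_I   = {H_I:.2e} GeV")    # ~2.4e15 GeV
print(f"phi_0 = {phi_0:.2e} GeV")  # ~1.1e15 GeV
```

These numbers are consistent with the $H_I = 2 \times 10^{15}$ GeV and $\phi_0 \sim 10^{15}$ GeV quoted for the IC-2 benchmark later in the text.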
We note that the change in $\langle I \rangle$ during the slow-roll phase of inflation is model-dependent, and consequently the allowed range of parameters $c$, $m$, and $n$ differs from model to model. This range may be quite narrow, and so this scenario may require some fine-tuning. As a concrete example, we consider the single term $$V_\mathrm{mix} = \dfrac{1}{2} \dfrac{I^{2n}}{M^{2 n-2}} \phi^2,$$ which induces an effective mass $m_\mathrm{eff}(\left< I \right>) = \left< I \right>^n \slash M^{n-1}$ for the Higgs field. We define $I_1$ as the inflaton VEV at the end of slow-roll inflation, and $I_2$ as the inflaton VEV 8 e-folds before the end of slow-roll inflation. To ensure that the Higgs VEV grows only during the last e-folds, we must choose parameters such that $m_\mathrm{eff}(I_2) \approx H_I$. We illustrate this approach with quartic inflation (although this is disfavored observationally; see Ref. [@Ade:2013uln]). With the inflaton potential $V_I = \lambda_I I^4$, slow-roll inflation ends when the inflaton has a vacuum expectation value of $I_1 = M_\mathrm{Pl} \slash \sqrt{2\pi}$. The number of e-folds during the time in which the inflaton evolves from $\left<I \right>$ to $I_1$ is $$\begin{aligned} N(\left< I \right> \rightarrow I_1) = \pi \left( \dfrac{\left< I \right>}{M_\mathrm{Pl}} \right)^2 - \dfrac{1}{2},\end{aligned}$$ which gives $$I_2 = \sqrt{ \dfrac{17}{2\pi} } M_\mathrm{Pl}.$$ (Although this field value is superplanckian, this is a feature of quartic inflation which does not necessarily apply to other inflationary models.) The Hubble parameter at this field value is given by $$H_I^2(I_2) = \dfrac{8\pi}{3 M_\mathrm{Pl}^2} \lambda_I I_2^4 = \dfrac{8\pi}{3} \left( \dfrac{17}{2\pi} \right)^2 \lambda_I M_\mathrm{Pl}^2.$$ The quartic coupling $\lambda_I$ must be $\lesssim 10^{-13}$ in order to avoid large CMB temperature anisotropies, which gives $M \sim 10^6 M_\mathrm{Pl}$ for both $n=2$ and $n=4$.
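The arithmetic of this example can be checked directly; a minimal sketch (our own script, reproducing $I_2$, $H_I(I_2)$, and the $n = 2$ coupling scale):

```python
import math

M_PL = 1.22e19    # Planck mass in GeV
lam_I = 1.0e-13   # quartic inflaton coupling (CMB bound quoted above)

# End of slow roll for V_I = lam_I * I^4 (convention of the text)
I1 = M_PL / math.sqrt(2.0 * math.pi)

# Invert N(<I> -> I1) = pi*(<I>/M_Pl)^2 - 1/2 for N = 8 e-folds
I2 = math.sqrt((8.0 + 0.5) / math.pi) * M_PL   # = sqrt(17/(2*pi)) M_Pl

# Hubble parameter at I2
H_I2 = math.sqrt(8.0 * math.pi / (3.0 * M_PL**2) * lam_I * I2**4)

# n = 2 coupling scale M, from m_eff(I2) = I2^2 / M ~ H_I(I2)
M_n2 = I2**2 / H_I2

print(I2 / M_PL)    # ~1.645 (superplanckian, as noted)
print(M_n2 / M_PL)  # ~1e6, matching the quoted n = 2 scale
```

The $n = 4$ case follows the same steps with $M^3 = I_2^4 / H_I(I_2)$; only the $n = 2$ value is verified here.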
In this way, a coupling between the Higgs field and the inflaton field can prevent the Higgs VEV from growing until the last several e-folds of inflation, suppressing the scale of isocurvature perturbations. As this example illustrates, the constraints on the Higgs-inflaton coupling depend on the shape of the inflaton potential. Although we have demonstrated an explicit calculation using a quartic inflationary potential, a similar calculation can be done with other potentials. Our analysis of the final asymmetry will depend only on the VEV of the Higgs field at the end of inflation, which, as noted above, is $\phi_0=\min [\phi_I, \sqrt{N_\mathrm{last}} H_I \slash 2 \pi]$, provided that the parameters of the chosen inflationary model are such that the Higgs VEV does not begin to grow until the last $N_\mathrm{last}$ e-folds of inflation. Therefore, we do not commit to a specific inflationary model in our analysis, and we take the inflationary scale $\Lambda_I$ and the inflaton decay rate $\Gamma_I$ to be free parameters.

Reheating {#sec:Reheat}
=========

Now that we have established that the Higgs field can develop a large VEV during inflation, we are interested in its subsequent evolution to its equilibrium value. Relaxation begins when the Hubble parameter is comparable to the effective mass of the Higgs field, $m_\mathrm{eff}(\phi) \approx H(t)$, which occurs within the reheating epoch. This relaxation is therefore sensitive to finite temperature effects due to the plasma, and so we first discuss reheating. In both scenarios, IC-1 and IC-2, we ensure that the energy density is never dominated by the Higgs field. Inflaton oscillations dominate until the transition to the radiation-dominated era, which occurs when the inflaton decay width becomes comparable to the Hubble parameter, $\Gamma_I \sim H_\mathrm{RH}$; this typically happens after the Higgs field has lost a significant portion of its energy.
Consequently, the reheat temperature $T_\mathrm{RH} \sim \sqrt{ \Gamma_I M_\mathrm{Pl}}$ is generally only weakly constrained [@Dai:2014jja]. For simplicity, we assume coherent oscillations begin immediately at the end of the inflationary epoch, and as a simple model, we assume that the inflaton decays entirely to radiation at a constant rate, $$\dot{\rho}_r + 4 H(t) \rho_r = \Gamma_I \rho_I,$$ where $$\rho_I = \dfrac{\Lambda_I^4 e^{-\Gamma_I t}}{a(t)^3}$$ is the energy density of the inflaton field. The evolution of the Hubble parameter is given by $$H(t) \equiv \dfrac{\dot{a}}{a} = \sqrt{ \dfrac{8 \pi}{3 M_\mathrm{Pl}^2} (\rho_r + \rho_I) }.$$ This is a complete system of equations that may be solved independently of the evolution of the Higgs condensate. Throughout this work, we take $t=0$ to be the beginning of the coherent oscillation of the inflaton field; during the coherent oscillation epoch, the universe evolves as if it were matter dominated, until the radiation from reheating dominates. During reheating, the effective temperature of the plasma is defined using the radiation density as $$\rho_{r}=\frac{g_{*}\pi^{2}}{30}T^{4}. \label{eq:rho_R}$$ For $t \gg t_i = (2 \slash 3) \sqrt{ 3 \slash 8 \pi} M_{{\rm Pl}} \slash \Lambda_I^2$, the temperature evolves as $$T=\left( \frac{3}{g_*\pi^3}\frac{\Gamma_I M_\mathrm{Pl}^2}{t}\right)^{1/4}, \label{eq:time1}$$ until it reaches the reheat temperature $T_\mathrm{RH}\sim \sqrt{\Gamma_I M_\mathrm{Pl}}$. Subsequently, radiation dominates the energy density and the temperature evolves as $$T=\left( \frac{45}{16\pi^3 g_*}\right)^{1/4} \sqrt{M_\mathrm{Pl}/t} .
\label{eq:time2}$$

Evolution of the Higgs VEV {#sec:Relaxation}
==========================

We now turn our attention to the relaxation of the Higgs VEV, which evolves as [@Kolb:1990vq] $$\ddot{\phi} + 3 H(t) \dot{\phi} + V_\phi^\prime (\phi,T(t)) + \Gamma_{H}\dot{\phi} = 0, \label{eq:Higgs_VEV_eqn_motion}$$ where $V_\phi(\phi,T)$ is the Higgs effective potential, including modifications from the decays of the condensate [@Enqvist:2013kaa], and $\Gamma_{H}$ describes the effect of the perturbative decay of the condensate. In the first subsection below, we discuss the one-loop corrected potential, including one-loop corrections to the RG equations. Subsequently, we consider the non-perturbative decay of the Higgs condensate, followed by perturbative decay. Finally, we present a numerical analysis of the evolution of the Higgs condensate, before proceeding to the next section, which introduces the relevant higher-dimensional operator used to produce the nonzero lepton asymmetry.

Effective Potential
-------------------

The Standard Model Higgs potential computed to a fixed order in perturbation theory is generally gauge-dependent, although the values of the potential at its extrema are not (see, for example, [@Andreassen:2014gha; @Andreassen:2014eha]). One can ensure gauge-invariant results by removing the gauge-dependence of the potential using Nielsen identities [@Nielsen:1975fs; @Fukuda:1975di; @Aitchison:1983ns]. Here we use the Landau gauge, which is in good numerical agreement with the gauge-corrected potential [@Andreassen:2014gha; @DiLuzio:2014bua]. In our analysis, we have used the one-loop corrected potential [@Casas:1994qy], with running couplings (including one-loop corrections to the renormalization group (RG) equations, as given in [@Degrassi:2012ry]).
The one-loop potential is $$\begin{aligned} V_\phi^\mathrm{1-loop} &= \dfrac{1}{2} m_\phi^2 \phi^2 + \dfrac{\lambda}{4} \phi^4 + \dfrac{1}{(4\pi)^2} \left[ \dfrac{m_H(\phi)^4}{4} \left( \ln \left( \dfrac{m_H(\phi)^2}{\mu^2} \right) - \dfrac{3}{2} \right) + \dfrac{3 m_G(\phi)^4}{4} \left( \ln \left( \dfrac{m_G(\phi)^2}{\mu^2} \right) - \dfrac{3}{2} \right) \right. \nonumber \\ &\left. + \dfrac{3 m_W(\phi)^4}{2} \left( \ln \left( \dfrac{m_W(\phi)^2}{\mu^2} \right) - \dfrac{5}{6} \right) + \dfrac{3 m_Z(\phi)^4}{4} \left( \ln \left( \dfrac{m_Z(\phi)^2}{\mu^2} \right) - \dfrac{5}{6} \right) - 3 m_t(\phi)^4 \left( \ln \left( \dfrac{m_t(\phi)^2}{\mu^2} \right) - \dfrac{3}{2} \right) \right],\end{aligned}$$ where $\mu$ is the renormalization scale and the tree-level masses for the Higgs boson, Goldstone mode, $W$ bosons, $Z$ boson, and top quark are $$\begin{aligned} m_W^2 &= \dfrac{g^{2} \phi^2}{4}, &\quad m_Z^2 &= \dfrac{(g^2 + g^{\prime \, 2}) \phi^2}{4}, \quad & m_t &= \dfrac{y_t \phi}{\sqrt{2}}, \nonumber \\ m_H^2 &= m_\phi^2 + 3 \lambda \phi^2, &\quad m_G^2 &= m_\phi^2 + \lambda \phi^2.\label{eq:Tree-level mass}\end{aligned}$$ We have also included the finite temperature corrections [@Anderson:1991zb; @Kapusta:2006pm], $$\begin{aligned} V_T(\phi,T) &= -\dfrac{T^2}{2 \pi^2} \left[ 6 m_W^2 J_B \left( \dfrac{m_W}{T} \right) + 3 m_Z^2 J_B \left( \dfrac{m_Z}{T} \right) \right. \nonumber \\ & \left. \qquad + 12 m_t^2 J_F \left( \dfrac{m_t}{T} \right) \right],\end{aligned}$$ where $$\begin{aligned} J_{B}(y) & =\sum_{n=1}^{\infty}\frac{1}{n^{2}}K_{2}(ny),\\ J_{F}(y) & =\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^{2}}K_{2}(ny),\end{aligned}$$ and we have ignored the contributions from the Higgs boson and Goldstone modes, which are important only when $\phi\lesssim v_{\mathrm{EW}}$. We emphasize that we do not use the high temperature expansion, as during reheating the condition $T(t) \gg \phi(t)$ is not satisfied at all times.
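The Bessel-function sums defining $J_B$ and $J_F$ converge rapidly, since $K_2(ny)$ falls off exponentially; a minimal numerical sketch (function names are ours):

```python
import numpy as np
from scipy.special import kn  # modified Bessel function K_n of integer order

def J_B(y, nmax=200):
    """J_B(y) = sum_{n>=1} K_2(n*y)/n^2, truncated at nmax terms."""
    n = np.arange(1, nmax + 1)
    return np.sum(kn(2, n * y) / n**2)

def J_F(y, nmax=200):
    """J_F(y) = sum_{n>=1} (-1)^(n+1) K_2(n*y)/n^2, truncated at nmax terms."""
    n = np.arange(1, nmax + 1)
    return np.sum((-1.0)**(n + 1) * kn(2, n * y) / n**2)

# Small-y limit: K_2(z) -> 2/z^2, so y^2 J_B(y) -> 2*zeta(4) = pi^4/45
print(0.01**2 * J_B(0.01), np.pi**4 / 45)  # both ~2.165
# and the alternating sum gives J_F/J_B -> 7/8 in the same limit
print(J_F(0.01) / J_B(0.01))               # ~0.875
# Large y = m/T: both reduce to the n = 1 Boltzmann tail K_2(y)
print(J_B(8.0) / kn(2, 8.0))               # ~1
```

The checks recover the familiar high-temperature ($y \to 0$) and Boltzmann-suppressed ($y \gg 1$) limits, which is why the truncated sums are adequate throughout reheating.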
The renormalization scale $\mu$ is taken to be $\sqrt{\phi^2 + T^2}$. We note that two-loop corrections may be significant at the boundary of the metastability region [@Degrassi:2012ry]; however, a self-consistent analysis at two-loop order would include finite temperature effects in the RG equations, which is beyond the scope of this work. After the Higgs VEV passes through zero, it generally oscillates around $\phi=0$, which remains a minimum of the thermally corrected potential for $T\gg v_{\mathrm{EW}}$. During this oscillation, the Higgs condensate can decay both perturbatively and non-perturbatively into Standard Model particles. The non-perturbative decay happens much faster than the perturbative decay and is the dominant channel, as pointed out in Ref. [@Enqvist:2013kaa]. We now proceed to discuss the effect of these decays.

Non-Perturbative Decay
----------------------

First, we consider non-perturbative decay. The oscillation of the Higgs field provides a time-dependent mass term for all coupled particles, which can cause resonant particle production. The produced particles in turn induce an effective mass term for the Higgs condensate as a backreaction; this attenuates the oscillation of the Higgs field until resonant production shuts off [@Enqvist:2013kaa; @Enqvist:2014tta]. The non-perturbative decay of the Higgs condensate is dominated by $h\rightarrow WW,\: ZZ$.
The Lagrangian containing the Standard Model weak gauge fields and the Higgs sector is $$\begin{aligned} \mathcal{L} & =\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V_{\phi}(\phi,T)+g^{\mu\nu}\left[\frac{1}{4}g^{2}W_{\mu}^{+}W_{\nu}^{-}\right.\nonumber \\ & \;\left.+\frac{1}{8}\left(g^{2}+g'^{2}\right)Z_{\mu}Z_{\nu}\right]\phi^{2}+\mathcal{L}_{A\text{, kin}},\end{aligned}$$ where the kinetic terms of the gauge fields can be expanded as $$\begin{aligned} \mathcal{L}_{A\text{, kin}} & =-\frac{1}{2}\left(\nabla_{\mu}W_{\nu}^{+}-\nabla_{\nu}W_{\mu}^{+}\right)\left(\nabla^{\mu}W^{\nu-}-\nabla^{\nu}W^{\mu-}\right)\nonumber \\ & -\frac{1}{4}\left(\nabla_{\mu}Z_{\nu}-\nabla_{\nu}Z_{\mu}\right)^{2}+O(g)(\text{non-Abelian terms}).\end{aligned}$$ Since the non-Abelian contributions are small at the beginning of the resonant production of $W$ and $Z$ bosons, we ignore these terms [@Enqvist:2013kaa; @Enqvist:2014tta]. We also work specifically in flat FLRW spacetime, $g_{\mu\nu}=a^{2}\left(\tau\right)\eta_{\mu\nu}$, with conformal time $\tau=\int a^{-1}dt$. The resonant production of the weak gauge fields, $A_{\mu}=W_{\mu}^{\pm}\:\text{or}\: Z_{\mu}$, in momentum space is then described by $$A_{0}\left(\vec{k},\tau\right)=\frac{-ikA'_{L}\left(\vec{k},\tau\right)}{k^{2}+a^{2}m_{A}^{2}\left(\phi\right)}, \label{eq:A0}$$ $$A''_{T,i}+\omega_{k}^{2}\left(\phi\right)A_{T,i}=0, \label{eq:AT}$$ and $$A''_{L}+\omega_{k}^{2}\left(\phi\right)A_{L}+\frac{2k^{2}}{\omega_{k}^{2}\left(\phi\right)}\partial_{\tau}\ln\left(am_{A}\right)A'_{L}=0, \label{eq:AL}$$ where $\omega_{k}=\sqrt{k^{2}+a^{2}m_{A}^{2}\left(\phi\right)}$ and a prime denotes differentiation with respect to conformal time $\tau$. $\vec{A}_{T}\left(\vec{k},t\right)$ and $A_{L}\left(\vec{k},t\right)$ are the transverse and longitudinal components of the spatial component $\vec{A}\left(\vec{k},t\right)$, respectively. The mass term $m_{A}^{2}\left(\phi\right)$ is given in Eq. 
and can include the thermal correction by replacing $\phi^{2}\rightarrow\phi^{2}+C_{A}T^{2}$ where we use $C_{W}=2/3$, and $C_Z < 1$ is determined by diagonalizing the mass matrix [@Elmfors:1993re]. Due to the extra friction term in Eq.  for the longitudinal component $A_{L}$, we expect the resonant production of this mode to be suppressed. $A_{0}$, which depends only on $A_{L}$ through Eq. , should also be suppressed [@Enqvist:2014tta]. Hence, we focus on the transverse mode $A_{T}$ only. Resonant production of particles can be understood as the amplification of vacuum fluctuations. The number of particles in each mode produced from the vacuum is $$n_{k}=\frac{1}{2\omega_{k}}\left(\left|A'_{\mu}(\vec{k},\tau)\right|^{2}+\omega_{k}^{2}\left|A_{\mu}(\vec{k},\tau)\right|^{2}\right)-\frac{1}{2}.$$ The initial conditions are taken to be the WKB approximation of the vacuum solution, $$A_{T}(k,0)=\frac{1}{\sqrt{2\omega_{k}}};\quad A'_{T}(k,0)=-i\sqrt{\frac{\omega_{k}}{2}},$$ which satisfy $n_{k}(0)=0$ and the Wronskian condition $AA'^{*}-A^{*}A'=i$. Fig. \[fig:W\_prod\] shows the amplification of the $W$ field, which increases each time the Higgs VEV passes through zero. The number density $n_{k}$ is shown in Fig. \[fig:nk0\]. It has a sequence of flat steps, which are separated by peaks. These peaks occur when $\phi=0$; due to the rapidly changing mass, the number of particles is not well defined at these points. Particle number is well defined only when $\phi$ reaches a local maximum or minimum. We approximate the particle number of $A_{\mu}$ quanta within each oscillation of $\phi$ by its value when $\dot{\phi}=0$, which is supported by the flatness of the steps in Fig. \[fig:nk0\]. The resonant production begins once the Higgs VEV starts to oscillate at $\tau\sim800/\phi_{0}$. The decrease of $n_{k}$ at $\tau\sim2500/\phi_{0}$ indicates that the system is in the stochastic resonance regime, which is a distinctive feature of parametric resonance in an expanding universe [@Kofman:1997yn].
The resonant production then ceases at $\tau\sim3300/\phi_{0}$ because the amplitude of $\phi$ has decreased to the order of $T$. ![Real and imaginary parts (blue and purple lines) of $W_{T}(\tau)$ for $k=0$ for IC-1, with the parameters $\Lambda_{I}=10^{15}\;\mathrm{GeV}$ and $\Gamma_{I}=10^{9}\;\mathrm{GeV}$. The vertical lines designate the first time the Higgs VEV crosses zero, and the time of maximum reheating, from left to right.[]{data-label="fig:W_prod"}](W){width="1\columnwidth"} ![ $\text{log}\left[n_{k}(\tau)\right]$ for $k=0$, with the same parameters as Fig. \[fig:W\_prod\]. Note that $n_{k}\left(\tau\right)$ stops increasing at $\tau\sim3300/\phi_{0}$ because the amplitude of $\phi$ has decreased to the order of $T$. The effective mass of $W$ is then dominated by $T$ instead of $\phi$. \[fig:nk0\]](nk0){width="1\columnwidth"} If we approximate the oscillation of the Higgs VEV by $\phi(\tau)=\phi_{m}\cos(\omega_{\phi}\tau)$, we can write Eq.  as a Mathieu equation of the form $$\frac{d^{2}A_{T}}{dz^{2}}+\left(m^{2}+b^{2}\cos^{2}z\right)A_{T}=0$$ where $z=\omega_{\phi}\tau$, $m^{2}\approx\left[k^{2}+a^{2}m_{A}^{2}\left(\phi=0,T\right)\right]/\omega_{\phi}^{2}$ and $b^{2}\approx a^{2}m_{A}^{2}\left(\phi_{m},T=0\right)/\omega_{\phi}^{2}$. The Mathieu equation has an instability only when $b\gtrsim m^{2}$.
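This behavior is straightforward to reproduce directly from the Mathieu form; the toy sketch below (with illustrative parameters of our own choosing, not the physical mode functions of Fig. \[fig:nk0\]) integrates it from the WKB vacuum initial conditions and evaluates $n_k$ at the field extrema $z = j\pi$, where it is well defined:

```python
import numpy as np
from scipy.integrate import solve_ivp

def occupation_numbers(m, b, n_half_periods=20):
    """Integrate A'' + (m^2 + b^2 cos^2 z) A = 0 from WKB vacuum initial data
    and return n_k at the field extrema z = j*pi, where omega = sqrt(m^2 + b^2)."""
    w0 = np.sqrt(m**2 + b**2)

    def rhs(z, y):
        re, im, dre, dim = y          # real/imag parts of A and A'
        w2 = m**2 + b**2 * np.cos(z)**2
        return [dre, dim, -w2 * re, -w2 * im]

    # A(0) = 1/sqrt(2 w0), A'(0) = -i sqrt(w0/2), so that n_k(0) = 0
    y0 = [1.0 / np.sqrt(2.0 * w0), 0.0, 0.0, -np.sqrt(w0 / 2.0)]
    z_ext = np.pi * np.arange(n_half_periods + 1)
    sol = solve_ivp(rhs, (0.0, z_ext[-1]), y0, t_eval=z_ext,
                    rtol=1e-9, atol=1e-12, max_step=0.05)
    A = sol.y[0] + 1j * sol.y[1]
    dA = sol.y[2] + 1j * sol.y[3]
    return (np.abs(dA)**2 + w0**2 * np.abs(A)**2) / (2.0 * w0) - 0.5

n_broad = occupation_numbers(m=0.1, b=10.0)   # b >> m^2: resonant production
n_stable = occupation_numbers(m=1.45, b=0.1)  # b << m^2: no significant production
print(n_broad[:4])     # jumps to O(1) occupation after the first zero crossing
print(n_stable.max())  # stays near zero
```

In the first case adiabaticity is strongly violated each time the effective mass passes through its minimum, and the occupation number jumps in steps; in the second case the mode evolves adiabatically and remains essentially in vacuum, illustrating the instability criterion quoted above.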
Thus, resonant production is suppressed for $$k>k_\mathrm{max}\approx\sqrt{a\omega_{\phi}m_{A}\left(\phi_{m},0\right)-a^{2}m_{A}^{2}\left(0,T\right)}.$$ The produced $W$ and $Z$ fields induce an effective mass for the Higgs field as a backreaction, $$\begin{aligned} m_{\phi,W}^{2} & =-\frac{1}{2}g^{2}\left\langle W_{\mu}^{+}W^{\mu-}\right\rangle \label{eq:mHw}\\ m_{\phi,Z}^{2} & =-\frac{1}{4}\left(g^{2}+g'^{2}\right)\left\langle Z_{\mu}Z^{\mu}\right\rangle \label{eq:mHz}\end{aligned}$$ where the expectation value of $A=W,\: Z$ can be approximated as [@Kofman:1997yn; @Enqvist:2013kaa] $$g^{\mu\nu}\left\langle A_{\mu}A_{\nu}\right\rangle \cong\frac{-2}{a^{2}}\left\langle A_{T}^{2}\right\rangle \approx\frac{-1}{\pi^{2}a^{2}}\int_{0}^{\infty}\frac{k^{2}dk}{\omega_{k}}n_{k}.\label{eq:AAexpect}$$ In general, the integral in Eq.  will need to be regularized. However, in our case there is no significant contribution from modes with $k\gtrsim k_{max}$, and so the integral is finite. The upper limit can be approximated by $k_{max}$. One can then include the non-perturbative decay of the Higgs condensate by adding the induced mass terms, Eqs.  and , to the Higgs potential in Eq. . Fig. \[Higgs VEV with non-perturbative decay\] shows an example of the Higgs evolution with the non-perturbative decay for the IC-1 scenario. The increasing effective masses from $W$ and $Z$ affect the oscillation of the Higgs field when $m_{\phi,A}\gtrsim T$; they decrease the amplitude of the Higgs oscillation. When the Higgs VEV decreases to $\phi\lesssim T$, the resonant production of $W$ and $Z$ ends, because the non-perturbative decay channel is blocked by the large $W$ and $Z$ thermal masses. In this case, only the perturbative decay channels, discussed in subsection \[sub:Perturbative-Decay-Thermalization\], need to be considered. ![Non-perturbative decay of the Higgs condensate for IC-1, with the parameters $\Lambda_{I}=10^{15}\;\mathrm{GeV}$ and $\Gamma_{I}=10^{9}\;\mathrm{GeV}$.
The purple (blue) line corresponds to the evolution of the Higgs VEV with (without) non-perturbative decay. The brown line corresponds to the temperature of the plasma. The vertical dashed line indicates the time of maximum reheating.[]{data-label="Higgs VEV with non-perturbative decay"}](Higgs_Evolution_with_Non-Perturbative_Decay){width="1\columnwidth"} Note that the produced $W$ and $Z$ bosons can decay perturbatively into fermions. This decay could in principle obstruct the resonant production of $W$ and $Z$ in the usual Standard Model case [@GarciaBellido:2008ab; @Figueroa:2015rqa]. However, in the parameter space that we are interested in, the average decay times of the $W$ and $Z$ bosons, $\left\langle \Gamma_{W,\, Z}\right\rangle ^{-1}$, are longer than the half-period of the Higgs oscillation. Thus, we have ignored the decay of $W$ and $Z$ in our analysis. The analysis in this subsection can be improved by using lattice gauge theory simulations [@GarciaBellido:2003wd; @Figueroa:2015rqa]. However, as Fig. \[Higgs VEV with non-perturbative decay\] demonstrates, the non-perturbative decay of the Higgs condensate is relevant only after several oscillations, whereas in our scenario the asymmetry will be generated primarily during the initial oscillation of the Higgs VEV.

Perturbative Decay - Thermalization\[sub:Perturbative-Decay-Thermalization\]
----------------------------------------------------------------------------

The perturbative decay of the Higgs condensate is described by the friction term $\Gamma_{H}\dot{\phi}$ in the equation of motion . The decay width can be computed through the imaginary part of the self-energy operator $$\Gamma_{H}=\frac{\text{Im}\Pi}{m_{\mathrm{eff}}},$$ where $m_{\mathrm{eff}}=\text{Re}\sqrt{\partial^{2}V_{\phi}\left(\phi,T\right) \slash \partial \phi^2}$ is the effective mass of the Higgs boson. In a finite-temperature thermal background, $\Gamma_{H}$ corresponds to the thermalization rate of the Higgs condensate.
Here we consider the fermionic decay channels, motivated by the large top Yukawa coupling. (The dominant bosonic channels, $WW$ and $ZZ$, are included in the non-perturbative calculation, which dominates over their perturbative contribution.) In a thermal bath of fermions, there are additional excitations, holes, which correspond to the removal of antiparticles from the Fermi sea. The dispersion relations for particles and holes are [@Weldon:1989ys; @Elmfors:1993re; @Enqvist:2004pr] $$\begin{aligned} \hat{\omega}_{p}-\hat{k}-\frac{g_{T}^{2}}{\hat{k}}-\frac{g_{T}^{2}}{2\hat{k}}\left(1-\frac{\hat{\omega}_{p}}{\hat{k}}\right)\ln\left|\frac{\hat{\omega}_{p}+\hat{k}}{\hat{\omega}_{p}-\hat{k}}\right| & =0,\label{eq:Ep}\\ \hat{\omega}_{h}+\hat{k}+\frac{g_{T}^{2}}{\hat{k}}-\frac{g_{T}^{2}}{2\hat{k}}\left(1+\frac{\hat{\omega}_{h}}{\hat{k}}\right)\ln\left|\frac{\hat{\omega}_{h}+\hat{k}}{\hat{\omega}_{h}-\hat{k}}\right| & =0,\label{eq:Eh}\end{aligned}$$ where $\hat{\omega}\left(k\right)=\omega/T$, $\hat{k}=k/T$, and the subscripts $p$ and $h$ refer to particles and holes respectively. Eq.  can also be expressed as $$\hat{\omega}_{h}=\hat{k}\coth\left(\frac{\hat{k}^{2}}{g_{T}^{2}}+\frac{\hat{k}}{\hat{\omega}_{h}+\hat{k}}\right),\label{eq:Eh coth}$$ which is a convenient form for numerical purposes. We will specify the necessary coefficient $g_T$ below. In these equations, we have made the approximation that the left- and right-handed fermions have the same thermal mass $m\left(T\right)=g_{T}T$; generically, this is not true, because they are in different representations of the Standard Model gauge group. However, this difference, which is much smaller than the difference between the particle and hole contributions, is negligible [@Elmfors:1993re]. The dominant fermionic contribution to the thermalization of the Higgs condensate is from the top quark, due to the large top-Higgs Yukawa coupling.
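The coth form of the hole dispersion relation lends itself to a damped fixed-point iteration; a sketch (the representative value $g_T = 0.6$, of the order of the top-quark thermal coupling specified next, is our own illustrative choice):

```python
import math

def omega_hole(khat, gT, tol=1e-13):
    """Solve the coth form of the hole dispersion relation,
    w = k*coth(k^2/g^2 + k/(w+k)), by damped fixed-point iteration.
    All frequencies and momenta are in units of T."""
    w = khat + gT  # starting guess on the hole branch (above the light cone)
    for _ in range(500):
        u = khat**2 / gT**2 + khat / (w + khat)
        w_new = khat / math.tanh(u)  # coth(u) = 1/tanh(u) > 1, so w_new > khat
        if abs(w_new - w) < tol:
            return w_new
        w = 0.5 * (w + w_new)        # damping for robustness
    return w

def eh_residual(w, khat, gT):
    """Left-hand side of the original hole dispersion relation."""
    log = math.log(abs((w + khat) / (w - khat)))
    return w + khat + gT**2 / khat - (gT**2 / (2.0 * khat)) * (1.0 + w / khat) * log

gT = 0.6  # illustrative, of the order of the top-quark thermal coupling
for khat in (0.2, 0.5, 1.0):
    w = omega_hole(khat, gT)
    print(khat, w, eh_residual(w, khat, gT))  # residuals are tiny: the two forms agree
```

The vanishing residual confirms numerically that the coth form is equivalent to the original logarithmic equation, and the iteration converges in a handful of steps for all momenta shown.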
The thermal mass of the left-handed top quark is [@Elmfors:1993re] $$g_{T,t}=\sqrt{\frac{1}{6}g_{s}^{2}+\frac{3M_{W}^{2}+\frac{1}{9}\left(M_{Z}^{2}-M_{W}^{2}\right)+M_{t}^{2}+M_{b}^{2}}{8v_{\mathrm{EW}}^{2}}},\label{eq:gT mass}$$ where $M_{i}$ are the physical masses at $T=0$, and the strong coupling $g_{s}\cong1.220$. The presence of particles and holes in the fermionic plasma provides two thermalization processes for the Higgs condensate. A Higgs boson can decay into a pair of particles or a pair of holes if $$m_{\text{eff}}=2\omega_{i}\left(k_{i}\right);\quad i=p,h$$ is satisfied. The contribution of each process to the decay width is $$\frac{\text{Im}\Pi_{\text{dec}}}{T^{2}}=\frac{y_t^{2}}{4\pi g_{T}^{4}}\sum_{i=p,h}\hat{k}_{i}^{2}\left(\hat{\omega}_{i}^{2}-\hat{k}_{i}^{2}\right)^{2}\left(1-2n_{i}\right) ,$$ where $y_t$ is the top Yukawa coupling, and $$n_{h,p}=\frac{1}{\exp\left(\hat{\omega}_{h,p}\right)+1}$$ are the fermion distribution functions. Although this decay channel is blocked when $m_{\text{eff}}<2\min\left[\omega_{h}\left(k\right)\right]$, a Higgs boson can also be absorbed by a hole to produce a particle. The contribution of this absorption channel to the width is $$\frac{\text{Im}\Pi_{\text{abs}}}{T^{2}}=\frac{y_t^{2}}{2\pi g_{T}^{4}}\sum_{i}\hat{k}_{i}^{2}\left(\hat{\omega}_{p}^{2}-\hat{k}_{i}^{2}\right)\left(\hat{\omega}_{h}^{2}-\hat{k}_{i}^{2}\right)\left(n_{h}-n_{p}\right)$$ where the index $i$ sums over the solutions of $$m_{\text{eff}}+\omega_{h}\left(k_{i}\right)=\omega_{p}\left(k_{i}\right).$$ The total thermalization rate is then the sum of the two channels, $\text{Im}\Pi=\text{Im}\Pi_{\text{abs}}+\text{Im}\Pi_{\text{dec}}$. For IC-1, Fig. \[Higgs thermalization rate vs Hubble\] shows the thermalization rate of the Higgs condensate through the top quark compared with the Hubble parameter. We see that the thermalization rate becomes comparable to the Hubble parameter only after maximum reheating has been reached.
Therefore, the evolution of the Higgs VEV is affected only at late times during reheating, as shown in Fig. \[Higgs thermalization rate vs Hubble\]. ![Higgs thermalization rate through the top quark compared with the Hubble parameter for IC-1, with the parameters $\Lambda_{I}=10^{15}\;\mathrm{GeV}$ and $\Gamma_{I}=10^{9}\;\mathrm{GeV}$. The vertical lines designate the first time the Higgs VEV crosses zero, and the time of maximum reheating, from left to right.[]{data-label="Higgs thermalization rate vs Hubble"}](Higgs_Thermalization_rate_vs_H_for_IC1){width="1\columnwidth"} We have repeated the above analysis with the bottom quark in place of the top quark and verified numerically that its contribution is negligible; we also note that plasma effects can delay thermalization [@Drewes:2013iaa]. We also remark that, particularly for IC-2, the thermalization rate is frequently much smaller than the Hubble parameter. Since thermalization occurs on such long time scales, it has no effect on our analysis.

Numerical Results
-----------------

![The evolution of the Higgs VEV for IC-1 (blue line) and temperature (purple line) as a function of time, with the parameters $\Lambda_{I}=10^{15}\;\mathrm{GeV}$ and $\Gamma_{I}=10^{9}\;\mathrm{GeV}$. This plot includes both the effect of non-perturbative decay and thermalization. The vertical line designates the time of maximum reheating.[]{data-label="fig:Higgs_Evolution_IC1"}](Higgs_Evolution_with_P_and_NP_Decay_IC1_t){width="1\columnwidth"} ![The evolution of the Higgs VEV for IC-2 (blue line) and temperature (purple line) as a function of time, with the parameters $\Lambda_{I}=10^{17}\;\mathrm{GeV}$, $\Gamma_{I}=10^{8}\;\mathrm{GeV}$, and $N_{\mathrm{last}}=8$.
This plot includes both the effect of non-perturbative decay and thermalization, although the effect of condensate decay is not appreciable in this case.[]{data-label="fig:Higgs_Evolution_IC2"}](Higgs_Evolution_with_NP_Decay_IC2_t){width="1\columnwidth"} Figures \[fig:Higgs\_Evolution\_IC1\] and \[fig:Higgs\_Evolution\_IC2\] illustrate the evolution of the Higgs VEV (and temperature) as functions of time. For IC-2, the relevant inflaton parameters are $\Lambda_I = 10^{17} \; \mathrm{GeV}$ and $\Gamma_I = 10^8 \; \mathrm{GeV}$, and we have assumed $N_\mathrm{last} = 8$ to determine $\phi_0$, which does not probe the quasistable vacuum. For IC-1, we have used the operator (with the numerical parameters) discussed in section \[subsec:IC1\] to lift the second minimum, along with the inflationary parameters $\Lambda_{I}=10^{15}\;\mathrm{GeV}$ and $\Gamma_{I}=10^{9}\;\mathrm{GeV}$. For both plots, we use 126 GeV and 173.07 GeV for the masses of the Higgs boson and top quark respectively. We briefly remark on the qualitative features of these plots. Although in IC-2 the Higgs VEV is constrained to grow only in the last 8 $N$-folds of inflation, it may still reach a large value if $H_I$ is large; for these inflationary parameters, $H_I = 2 \times 10^{15}$ GeV. For other choices of $\Lambda_I$, the initial VEV $\phi_0$ for the IC-2 scenarios is significantly smaller. Conversely, in our IC-1 scenario, the initial VEV $\phi_0 \approx 10^{15}$ GeV is set by the parameters chosen in section \[subsec:IC1\]. In scenario IC-1, the Higgs VEV remains approximately constant at short times, until reheating is sufficient to destabilize the second minimum, whereas in IC-2 the field relaxes as soon as the Hubble parameter becomes sufficiently small. Subsequent oscillations have a larger amplitude in the IC-1 scenario; this is due both to the difference in $\Lambda_I$ values, which results in less Hubble friction, and to the additional term, which contributes to the velocity of the VEV.
A notable feature of Fig. \[fig:Higgs\_Evolution\_IC1\] is that shortly after maximum reheating, the Higgs condensate begins to oscillate more rapidly. This is due to the non-perturbative decay of the Higgs condensate, as illustrated in Fig. \[Higgs VEV with non-perturbative decay\] above. (In the IC-2 scenario, such features are not relevant due to the rapid decay of the amplitude of oscillation.) We see that both the thermal decay of the condensate and the non-perturbative decay of the condensate have little effect on the first approach of the VEV to zero; in the scenario we outline below, the lepton asymmetry is generated primarily during this swing, and therefore, these processes have little effect on the total asymmetry generated. Effective Chemical Potential {#sec:chemical_potential} ============================ In the Introduction, we observed that the Higgs potential may be sensitive to the effects of higher dimensional operators, which are normally suppressed by powers of a high scale. In section \[sec:Higgs\_IC\], we have seen how such operators can be used to make a quasistable minimum in the Higgs potential or to suppress the growth of the Higgs VEV until the end stages of inflation. Now, we consider an operator, involving only Standard Model fields, which generates an effective external chemical potential for leptons (and also baryons). This operator is $$\mathcal{O}_6 = - \dfrac{1}{\Lambda_n^2} \phi^2 \partial_\mu j^\mu, \label{eq:O6_1}$$ where $j^\mu$ is the fermion current of all fermions which carry $\mathrm{SU}_\mathrm{L}(2) \times \mathrm{U}_\mathrm{Y}(1)$ charge. We observe that the zeroth component of $j^\mu$ is the $B+L$ charge density. We now consider how an operator of this form can be generated.
Within the Standard Model itself, one can use quark loops and the CP-violating phase of the CKM matrix [@Shaposhnikov:1987tw; @Shaposhnikov:1987pf] to generate an effective operator of the form $$\mathcal{O}_6 = - \dfrac{1}{\Lambda_n^2} \phi^2 \left( g^2 W \tilde{W} - g^{\prime 2} A \tilde{A} \right), \label{eq:O6_Operator}$$ where $W$ and $A$ are the $\mathrm{SU}_\mathrm{L}(2)$ and $\mathrm{U}_\mathrm{Y}(1)$ gauge fields respectively. This term is small due to the small Yukawa couplings and small CP-violating phase. However, a term of the same form can be generated by replacing some or all of the quarks with heavier fermions, which may have larger Yukawa couplings and/or CP-violating phases. The scale in the denominator may be $T$, due to thermal loops, or the mass scale of these fermions, $M_n$ [@Shaposhnikov:1987tw; @Shaposhnikov:1987pf; @Smit:2004kh; @Brauner:2012gu]. In the latter case, it is important that the fermions not acquire masses through the Higgs mechanism; otherwise, the Higgs VEV dependence in this term cancels out. Such fermions may acquire soft masses similarly to higgsinos and gauginos in supersymmetric models. This operator could be generated in a UV-complete model. As a concrete example, we mention the fully renormalizable Lagrangian $$\begin{aligned} \mathcal{L}_{hd} &= g \bar{\psi}_{1i} \gamma^\mu \psi_{1i} W_\mu + g^\prime \bar{\psi}_{1i} \gamma^\mu \psi_{1i} A_\mu + y_{i} e^{i \delta_{i}} \phi \bar{\psi}_{1i} \psi_{2} \nonumber \\ & + M_{ij} \bar{\psi}_{1i} \psi_{1j} + m \bar{\psi}_2 \psi_2 + h.c., \label{eq:Fermion_Ex}\end{aligned}$$ where $\psi_{1i}$ are a set of $\mathrm{SU}(2)$ doublets, while $\psi_2$ is a singlet under both $\mathrm{SU}(2)$ and $\mathrm{U}(1)$. Despite the explicit mass terms, this Lagrangian is invariant under $\mathrm{SU}(2)$ rotations provided that both right and left components of the $\psi_{1i}$ doublets couple vectorially to gauge bosons. 
The phases of the $\psi_{1i}$ doublets may be fixed by eliminating the phases in the mass matrix. Provided that there are at least three doublets ($i \geq 3$), there are more phases $\delta_i$ than can be eliminated by rotating the Higgs field $\phi$ and the singlet $\psi_2$. Fermionic loops such as those in [@Shaposhnikov:1987tw; @Shaposhnikov:1987pf], which involve sums of the Yukawa couplings $y_i e^{i \delta_i}$ due to insertions of the Higgs VEV $\left< \phi \right>$, generate an effective operator of the form . In this case, the scale in the $\mathcal{O}_6$ operator is $\Lambda_n \sim M \sim m$. Once an effective operator of the form is generated, it may be transformed into through the electroweak anomaly equation [@Dine:1990fj]. However, this is only justified if the electroweak sphalerons are in thermal equilibrium [@Ibe:2015nfa; @Daido:2015gqa]. Otherwise, the operator involves the Chern-Simons number density, which is not changed by Higgs relaxation unless the phase of the Higgs VEV evolves. At least for slowly evolving Higgs VEVs, the sphaleron transition rate per unit volume at finite temperature is $$\Gamma_\mathrm{sp} = k \alpha_W^5 T^4 \exp(-v \slash 2 T),$$ where the exponential factor accounts for the suppression due to being in the broken phase. As both the Higgs VEV and the temperature are quickly evolving in the scenario considered here, it may be difficult to arrange for the electroweak sphalerons to be in thermal equilibrium. However, additional gauge groups which couple to fermions can contribute to the anomaly and generate the requisite term, as discussed in Appendix A in [@Pearce:2015nga]. For our purposes, we simply consider a scenario with operator without specifying the mechanism by which it is generated.
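A rough numerical check of the equilibrium condition discussed above, using the quoted sphaleron rate with an assumed $\mathcal{O}(10)$ prefactor $k$, an assumed $\alpha_W \approx 0.033$, and a radiation-era Hubble rate with $g_* = 106.75$ (these inputs are illustrative assumptions, not values from the text):

```python
import math

# Compare the sphaleron rate Gamma_sp = k * alpha_W^5 * T^4 * exp(-v/2T)
# (as quoted above) against the Hubble rate. k, alpha_W, g_* are assumed.
alpha_W = 0.033      # weak fine-structure constant (assumed value)
k = 20.0             # O(10) prefactor (assumption)
g_star = 106.75
M_Pl = 1.22e19       # non-reduced Planck mass in GeV

def sphaleron_in_equilibrium(T, v):
    """Compare the sphaleron rate per T^3 with the Hubble rate H(T)."""
    rate = k * alpha_W**5 * T * math.exp(-v / (2 * T))  # Gamma_sp / T^3
    H = 1.66 * math.sqrt(g_star) * T**2 / M_Pl          # radiation era
    return rate > H

# In the symmetric phase (v = 0) the rate wins only at low enough T:
print(sphaleron_in_equilibrium(1e11, 0.0))  # True
print(sphaleron_in_equilibrium(1e13, 0.0))  # False
```

With these assumptions, sphalerons equilibrate only for $T \lesssim 10^{12}$ GeV even at $v=0$, and a nonzero VEV suppresses the rate further, illustrating why equilibrium is difficult to arrange during Higgs relaxation.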
Returning to equation , we observe that integrating by parts and dropping an unimportant boundary term gives $$\mathcal{O}_6 = - \partial_\mu \left( \dfrac{\phi^2}{\Lambda_n^2}\right) j^\mu.$$ In the case where $\Lambda_n = M_n$ is a constant (for example, the mass scale of a fermionic loop, as outlined around Eq. ), this becomes $$\mathcal{O}_{6,M_n} = - \dfrac{1}{M_n^2} (\partial_\mu \phi^2) j^\mu. \label{eq:O6_with_mass}$$ If thermal loops generate this term instead, then this becomes $$\mathcal{O}_{6,T} = - \partial_\mu \left( \dfrac{\phi^2}{T^2} \right) j^\mu \approx - \dfrac{1}{T^2} (\partial_\mu \phi^2) j^\mu, \label{eq:O6_with_T}$$ provided that the temperature is slowly varying on the time scales of the Higgs oscillation. In the IC-1 scenario specifically, the Higgs VEV remains trapped until there is sufficient reheating, which generally ensures that the temperature will be slowly varying during the evolution of the Higgs condensate. Since the Higgs VEV varies only in time, these equations become $$\begin{aligned} \mathcal{O}_{6,\Lambda_{n}} &= - \dfrac{1}{\Lambda_n^2} (\partial_0 \phi^2) j_{B+L}^0.\end{aligned}$$ For each fermionic species, its contribution to this term can be combined with its kinetic energy term, $\bar{\psi}(i \slashed{\partial}) \psi$, which is equivalent to the replacement $$i \partial_0 \rightarrow i \partial_0 - (\partial_0 \phi^2) \slash \Lambda_n^2.$$ This effectively raises the energy of antiparticles, $E \rightarrow E + (\partial_0 \phi^2) \slash \Lambda_n^2$, while lowering it for particles, $E \rightarrow E - (\partial_0 \phi^2) \slash \Lambda_n^2$. This can be interpreted as an external chemical potential; further remarks along these lines are discussed in Appendix \[sec:chem\_potential\_apndx\]. In the presence of a lepton-number-violating interaction, the system will relax to its equilibrium state, in which the number of particles exceeds the number of antiparticles.
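To make the time dependence of this energy shift concrete, a toy sketch (natural units; the values of $\phi_0$, $\omega$, and $\Lambda_n$ below are purely illustrative, not fitted parameters) evaluates $E_0 = \partial_0 \phi^2 / \Lambda_n^2$ for an oscillating condensate and checks the numerical derivative against the analytic one:

```python
import math

# Toy oscillating condensate phi(t) = phi0 * cos(omega t), natural units
# (t in GeV^-1). phi0, omega, Lambda_n are illustrative numbers only.
phi0, omega, Lambda_n = 1e15, 1e12, 1e14   # GeV

def E0(t, dt=1e-18):
    """E_0 = d(phi^2)/dt / Lambda_n^2 via a central difference."""
    phi2 = lambda s: (phi0 * math.cos(omega * s))**2
    return (phi2(t + dt) - phi2(t - dt)) / (2 * dt) / Lambda_n**2

# Analytically, d(phi^2)/dt = -phi0^2 * omega * sin(2 omega t):
t = 3e-13
analytic = -phi0**2 * omega * math.sin(2 * omega * t) / Lambda_n**2
print(E0(t), analytic)   # the two agree; E_0 flips sign every half-oscillation
```

The sign flip of $E_0$ during each half-oscillation is the origin of the potential washout during subsequent oscillations discussed later in the text.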
For future reference, we define the effective external chemical potential, $$E_0 = \dfrac{\partial_0 \phi^2}{\Lambda_n^2}.$$ As an effective chemical potential, this operator spontaneously breaks not only $CP$, but in fact, $CPT$ [@Cohen:1987vi]. This operator has been used previously in spontaneous baryogenesis models utilizing gauge [@Dine:1990fj; @GarciaBellido:1999sv; @GarciaBellido:1999px; @GarciaBellido:2003wd; @Tranberg:2003gi] or gravitational [@Davoudiasl:2004gf] interactions. Lepton Number Violating Processes {#sec:lepton_number_violating} ================================= The universe can relax to its equilibrium state with nonzero lepton and baryon number only if there exists some lepton-number or baryon-number violating process. In order to induce such processes, we consider a minimal extension of the Standard Model with the usual seesaw mass matrix in the neutrino sector [@Yanagid1979; @*Yanagida:1980xy; @*Gell-Mann1979]. In theories with a nonzero Majorana mass, the effective lepton number $L$ is the sum of the lepton numbers of the charged leptons and the helicities of the light neutrinos. This is conserved in the limits $M_R \rightarrow \infty$ and $M_R \rightarrow 0$, but it is not conserved for a finite $M_R$. Insertions of the Majorana mass induce lepton-number-violating processes such as those shown in Fig. \[fig:lepton\_violation\]. ![Some diagrams that contribute to lepton number violation via exchange of a heavy Majorana neutrino.[]{data-label="fig:lepton_violation"}](Neutrino_Diagrams){width="1\columnwidth"} We further require that the Majorana mass $M_R$ be significantly greater than both the maximum reheat temperature and the initial mass of the Higgs bosons within the condensate ($m_\mathrm{eff}(\phi_0)$), which suppresses the production of right-handed neutrinos both from thermal production and the decay of the Higgs condensate. 
Consequently, the contribution from the typical leptogenesis scenario [@Fukugita:1986hr] is strongly suppressed. The lepton-number-violating diagrams shown in Fig. \[fig:lepton\_violation\] necessarily involve the exchange of the heavy right-handed neutrino in order to violate lepton number, and therefore are comparatively suppressed, leading to a naturally small value for the asymmetry. In order to calculate the thermally averaged cross section $\left< \sigma v \right>$ for these processes in the early universe, we need to know the number densities of neutrinos and Higgs bosons. These can be produced directly through the decay of the inflaton, or in the thermal plasma through weak interactions, involving weak bosons with masses $m_W \propto \phi(t)$. Generically these weak interactions may be in or out of equilibrium in the plasma created by inflaton decay; however, when $\phi(t) \sim 0$, these interactions will be in equilibrium and equilibrate the distributions of charged and neutral leptons. To be concrete, we will use a thermal number density of each of these species. The calculation of the cross section and reaction rate are given in Appendix \[sec:cross\_section\_reaction\_rate\_apndx\]. We note that $y^2 \slash M_R$ is set by the mass scale of the left-handed neutrinos, such that $$\begin{aligned} \dfrac{y^2 v_{\mathrm{EW}}^{2}}{2 M_R} & = 0.1 \; \mathrm{eV}. \label{eq:neutrino_mass}\end{aligned}$$ The cross section found in Appendix \[sec:cross\_section\_reaction\_rate\_apndx\] is to a good approximation a function of $y^2 \slash M_R$ only. There is a resonance in the $s$-channel contribution to the cross section; however, we found numerically that this resonance does not change the result appreciably. This is not unexpected, as the energy scale, which is set by the temperature, remains significantly below the right-handed Majorana mass scale at all times.
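The seesaw relation above fixes the Yukawa coupling once $M_R$ is chosen; a minimal sketch, assuming the normalization $v_{\mathrm{EW}} = 246$ GeV:

```python
import math

# Invert the seesaw relation y^2 v_EW^2 / (2 M_R) = 0.1 eV for y.
# v_EW = 246 GeV is our assumed normalization.
v_EW = 246.0            # GeV
m_nu = 0.1e-9           # 0.1 eV expressed in GeV

def yukawa(M_R):
    """Neutrino Yukawa coupling for a given Majorana mass M_R (GeV)."""
    return math.sqrt(2 * m_nu * M_R / v_EW**2)

y = yukawa(1e15)        # illustrative choice M_R = 10^15 GeV
print(y, y**2 / (4 * math.pi))   # coupling and perturbativity measure
```

For $M_R = 10^{15}$ GeV this gives $y \approx 1.8$, of the same order as the couplings quoted in the numerical examples below; the precise value depends on the assumed normalization of $v_{\mathrm{EW}}$.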
As we will discuss below, sphaleron processes later convert this lepton charge asymmetry into a baryon asymmetry, as in the typical leptogenesis scenario [@Fukugita:1986hr]. Boltzmann Transport Equation {#sec:boltzmann_equation_ahhhh!} ============================ The reactions discussed in Section \[sec:lepton\_number\_violating\] are generally not sufficient to establish equilibrium, due to the suppression from the large Majorana mass. (Recall that we have assumed $\phi_0 \ll M_R$ and $T_\mathrm{max} \ll M_R$ in order to suppress the typical leptogenesis mechanism.) The relaxation of the system towards equilibrium can be described by a system of Boltzmann equations, based on detailed balance. The rate of change in the neutrino number density is [@Giudice:2003jh] $$\begin{aligned} & \dot{n}_{\nu_{L}}+3Hn_{\nu_{L}}=-\sum_{\ell=e,\mu,\tau}\left[\dfrac{n_{\nu_{L}}n_{h^{0}}}{n_{\nu_{L}}^{eq}n_{h^{0}}^{eq}}\gamma^{eq}(\nu_{L}h^{0}\rightarrow\bar{\nu}_{\ell}h^{0})-\dfrac{n_{\bar{\nu}_{\ell}}n_{h^{0}}}{n_{\bar{\nu}_{\ell}}^{eq}n_{h^{0}}^{eq}}\gamma^{eq}(\bar{\nu}_{\ell}h^{0}\rightarrow\nu_{L}h^{0})\right.\nonumber \\ & \left.+\dfrac{n_{\nu_{L}}n_{\nu_{\ell}}}{n_{\nu_{L}}^{eq}n_{\nu_{\ell}}^{eq}}\gamma^{eq}(\nu_{L}\nu_{\ell}\rightarrow h^{0}h^{0})-\dfrac{n_{h^{0}}^{2}}{n_{h^{0}}^{eq\,2}}\gamma^{eq}(h^{0}h^{0}\rightarrow\nu_{L}\nu_{\ell})+\dfrac{n_{\nu_{L}}n_{\bar{\nu}_{\ell}}}{n_{\nu_{L}}^{eq}n_{\bar{\nu}_{\ell}}^{eq}}\gamma^{eq}(\nu_{L}\bar{\nu}_{\ell}\rightarrow h^{0}h^{0})-\dfrac{n_{h^{0}}^{2}}{n_{h^{0}}^{eq\,2}}\gamma^{eq}(h^{0}h^{0}\rightarrow\nu_{L}\bar{\nu}_{\ell})\right],\end{aligned}$$ where $\gamma^{eq}(A \rightarrow B)$ is the equilibrium spacetime rate for the process $A \rightarrow B$. We will assume that interactions are sufficiently fast that the Higgs bosons have their equilibrium density, and in equilibrium, the rate for the process $A \rightarrow B$ is equal to the rate of $B \rightarrow A$. 
Therefore this simplifies to $$\begin{aligned} \dot{n}_{\nu_{L}}+3Hn_{\nu_{L}} & =-\sum_{\ell=e,\mu,\tau}\left[\left(\dfrac{n_{\nu_{L}}}{n_{\nu_{L}}^{eq}}-\dfrac{n_{\bar{\nu}_{\ell}}}{n_{\bar{\nu}_{\ell}}^{eq}}\right)\gamma^{eq}(\nu_{L}h^{0}\leftrightarrow\bar{\nu}_{\ell}h^{0})+\left(\dfrac{n_{\nu_{L}}n_{\nu_{\ell}}}{n_{\nu_{L}}^{eq}n_{\nu_{\ell}}^{eq}}-1\right)\gamma^{eq}(\nu_{L}\nu_{\ell}\leftrightarrow h^{0}h^{0})\right.\nonumber \\ & \quad\left.+\left(\dfrac{n_{\nu_{L}}n_{\bar{\nu}_{\ell}}}{n_{\nu_{L}}^{eq}n_{\bar{\nu}_{\ell}}^{eq}}-1\right)\gamma^{eq}(h^{0}h^{0}\leftrightarrow\nu_{L}\bar{\nu}_{\ell})\right],\label{eq:n_nu_L}\end{aligned}$$ while for antineutrinos we find the similar equation $$\begin{aligned} \dot{n}_{\bar{\nu}_L} + 3 H n_{\bar{\nu}_L} &= -\sum_{\ell = e,\mu,\tau} \left[\left( \dfrac{n_{\bar{\nu}_L} }{n_{\bar{\nu}_L}^{eq} } - \dfrac{n_{\nu_\ell}}{n_{\nu_\ell}^{eq} } \right) \gamma^{eq}(\bar{\nu}_L h^0 \leftrightarrow \nu_\ell h^0) + \left( \dfrac{n_{\bar{\nu}_L} n_{\bar{\nu}_\ell}}{n_{\bar{\nu}_L}^{eq} n_{\bar{\nu}_\ell}^{eq}} - 1 \right) \gamma^{eq}(\bar{\nu}_L \bar{\nu}_\ell \leftrightarrow h^0 h^0) \right. \nonumber \\ & \quad\left. + \left( \dfrac{n_{\bar{\nu}_L} n_{\nu_\ell}}{n_{\bar{\nu}_L}^{eq} n_{\nu_\ell}^{eq}} - 1 \right) \gamma^{eq}(h^0 h^0 \leftrightarrow \bar{\nu}_L \nu_\ell) \right].\label{eq:n_nu_bar_L}\end{aligned}$$ Since we are interested in the order of magnitude of the final asymmetry, we simplify to the case in which there is only a single neutrino species. Subtracting Eq.  from Eq.  
gives a Boltzmann-type equation for the difference $n_{L}=n_{\nu_{L}}-n_{\bar{\nu}_{L}}$, $$\begin{aligned} \dot{n}_{L}+3Hn_{L} & =-2\left(\dfrac{n_{\nu_{L}}}{n_{\nu_{L}}^{eq}}-\dfrac{n_{\bar{\nu}_{L}}}{n_{\bar{\nu}_{L}}^{eq}}\right)\gamma^{eq}(\nu_{L}h^{0}\leftrightarrow\bar{\nu}_{L}h^{0})-\left(\dfrac{n_{\nu_{L}}^{2}}{n_{\nu_{L}}^{eq\,2}}-1\right)\gamma^{eq}(\nu_{L}\nu_{L}\leftrightarrow h^{0}h^{0})\nonumber \\ & \quad+\left(\dfrac{n_{\bar{\nu}_{L}}^{2}}{n_{\bar{\nu}_{L}}^{eq\,2}}-1\right)\gamma^{eq}(\bar{\nu}_{L}\bar{\nu}_{L}\leftrightarrow h^{0}h^{0}).\end{aligned}$$ The rates $\gamma^{eq}(A \leftrightarrow B)$ refer to the process $A \leftrightarrow B$ in equilibrium, but in the presence of the $\mathcal{O}_6$ operator, which alters the energy of particles and antiparticles. Consequently, these reaction rates are not generally equal to the rates one would find in the absence of the $\mathcal{O}_6$ operator; however, the difference appears at a higher order in $E_0 \slash T$ [@Ibe:2015nfa] and so we will neglect it. This has the consequence that the rates for $h^0 h^0 \leftrightarrow \nu_L \nu_L$ and $h^0 h^0 \leftrightarrow \bar{\nu}_L \bar{\nu}_L$ are equal. We will use the subscript 0 to denote reaction rates calculated without the $\mathcal{O}_6$ operator. We next substitute $n_{\nu_{L}}^{eq}=e^{E_{0}\slash T}n_{0}^{eq}$ and $n_{\bar{\nu}_{L}}^{eq}=e^{-E_{0}\slash T}n_{0}^{eq}$, where $n^{eq}_0 = T^3 \slash \pi^2$ is the equilibrium number of left-handed neutrinos (or antineutrinos), when $E_0 =0$. 
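The substitution above implies an equilibrium asymmetry $n_{L,eq} \approx 2 (E_0/T)\, n_0^{eq}$ at lowest order in $E_0/T$; a short numerical check of this linearization (the temperature value is illustrative):

```python
import math

# Check n_L,eq = n0*(exp(E0/T) - exp(-E0/T)) ~ 2*(E0/T)*n0 for small E0/T,
# with n0_eq = T^3 / pi^2 as in the text. T is an illustrative value.
T = 1e12                 # GeV
n0 = T**3 / math.pi**2   # equilibrium (anti)neutrino density at E0 = 0

def n_L_exact(E0):
    return n0 * (math.exp(E0 / T) - math.exp(-E0 / T))

def n_L_linear(E0):
    return 2 * (E0 / T) * n0

E0 = 1e9   # E0/T = 1e-3, well inside the small-chemical-potential regime
print(n_L_exact(E0) / n_L_linear(E0))  # ~1 up to O((E0/T)^2) corrections
```

The exact ratio is $\sinh(E_0/T)/(E_0/T)$, so the neglected corrections enter only at second order in $E_0/T$, consistent with the expansion performed in the text.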
Expanding the resulting equation to lowest order in $E_0 \slash T$ gives $$\begin{gathered} \dot{n}_{L}+3Hn_{L}=-\dfrac{2}{n_{0}^{eq}}\left(n_{L}-\dfrac{E_{0}}{T}n_{L}^{\mathrm{tot}}\right)\gamma_{0}^{eq}(\bar{\nu}_{L}h^{0}\leftrightarrow\nu_{L}h^{0})\\ -\dfrac{1}{n_{0}^{eq\,2}}\left(n_{L}^{\mathrm{tot}}n_{L}-\dfrac{E_{0}}{T}n_{L}^{\mathrm{tot}\,2}\right)\gamma_{0}^{eq}(\nu_{L}\nu_{L}\leftrightarrow h^{0}h^{0}),\end{gathered}$$ where we have introduced the notation $n_{L}^{\mathrm{tot}}=n_{\nu_{L}}+n_{\bar{\nu}_{L}}$, and we have dropped terms quadratic in the asymmetry (e.g., $n_{L}^{2}$). Approximating $n_{L}^{\mathrm{tot}}\approx2n_{0}^{eq}$, the equation becomes $$\begin{aligned} \dot{n}_{L}+3Hn_{L} & =-\dfrac{2}{n_{0}^{eq}}\left(n_{L}-\dfrac{2E_{0}}{T}n_{0}^{eq}\right)\left[\gamma_{0}^{eq}(\bar{\nu}_{L}h^{0}\leftrightarrow\nu_{L}h^{0})\right.\nonumber \\ & \quad\left.+\gamma_{0}^{eq}(\nu_{L}\nu_{L}\leftrightarrow h^{0}h^{0})\right]. \label{eq:nL}\end{aligned}$$ The reaction rates are calculated in Appendix \[sec:cross\_section\_reaction\_rate\_apndx\]. From this equation, we observe that the equilibrium asymmetry is $$n_{L,eq} = \dfrac{2 E_0}{T} n_0^{eq} = \dfrac{2 T^2}{\pi^2} \dfrac{\partial_0 \phi^2}{\Lambda_n^2}.$$ ![The comoving density of equilibrium lepton asymmetry for IC-1, with the parameters $\Lambda_{I}=10^{15}\;\mathrm{GeV}$ and $\Gamma_{I}=10^{9}\;\mathrm{GeV}$. Purple (Blue) line corresponds to the result with (without) the thermalization through the top quark. The top diagram corresponds to times before maximum reheating, whereas the bottom diagram corresponds to times after maximum reheating.[]{data-label="equilibrium lepton asymmetry for thermalization"}](Neq_with_Thermalization_1.pdf "fig:"){width="1\columnwidth"}\ ![The comoving density of equilibrium lepton asymmetry for IC-1, with the parameters $\Lambda_{I}=10^{15}\;\mathrm{GeV}$ and $\Gamma_{I}=10^{9}\;\mathrm{GeV}$. 
Purple (Blue) line corresponds to the result with (without) the thermalization through the top quark. The top diagram corresponds to times before maximum reheating, whereas the bottom diagram corresponds to times after maximum reheating.[]{data-label="equilibrium lepton asymmetry for thermalization"}](Neq_with_Thermalization_2.pdf "fig:"){width="1\columnwidth"} During subsequent oscillations of the Higgs VEV, the chemical potential changes sign. However, due to the large suppression in the cross section, significant washout can be avoided if the Higgs oscillation amplitude decreases rapidly. This is in contrast to Ref. [@GarciaBellido:1999px], in which washout was avoided by using coherent oscillations of the inflaton field to modify the sphaleron transition rate. We note that this is modified by the decay of the Higgs condensate; however, as discussed above, the Higgs condensate does not typically thermalize until after reheating. Fig. \[equilibrium lepton asymmetry for thermalization\] demonstrates the effect of thermalization on the equilibrium density. However, since the lepton asymmetry will be generated primarily during the first oscillation of the Higgs VEV, the effect of the thermalization of the Higgs condensate is negligible. Resulting Asymmetry {#sec:asymmetry_produced} =================== In this section, we consider the lepton asymmetry produced by these Higgs-neutrino interactions during the relaxation of the Higgs VEV, as outlined above. We present four numeric examples, covering both IC-1 and IC-2, along with the scale of the $\mathcal{O}_6$ operator $\Lambda_n$ set to the temperature $T$ (motivated by thermal loops) and a constant $M_n$ (motivated by loops of heavy fermions). This expands the analysis of [@Kusenko:2014lra], which only considered two such scenarios. 
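The qualitative freeze-out behavior of the linearized Boltzmann equation derived above can be illustrated with a toy model in which the reaction rate shuts off while the equilibrium value is held fixed. All numbers are illustrative, and the exponentially decaying rate is chosen so that the relic value has a closed form to check against:

```python
import math

# Toy relaxation equation dY/dt = -Gamma(t) * (Y - A) in comoving variables,
# with Gamma(t) = Gamma0 * exp(-t) mimicking the rapid shutoff of the
# heavy-neutrino-mediated reactions. The exact relic is
# Y(inf) = A * (1 - exp(-Gamma0)): incomplete relaxation toward A.
Gamma0, A = 2.0, 1.0

def integrate(t1=30.0, steps=300000):
    dt = t1 / steps
    Y = 0.0
    for i in range(steps):
        t = i * dt
        Y += dt * (-Gamma0 * math.exp(-t) * (Y - A))  # forward Euler step
    return Y

Y_num = integrate()
Y_exact = A * (1 - math.exp(-Gamma0))
print(Y_num, Y_exact)   # the asymmetry freezes out before reaching A
```

Just as in the full calculation, the asymmetry tracks its equilibrium value only while the rate is large, and the relic value is set by when the reactions freeze out rather than by the late-time equilibrium.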
In all scenarios, we use the improved Boltzmann equation , with the cross sections calculated in Appendix \[sec:cross\_section\_reaction\_rate\_apndx\], and the improved calculation of the Higgs condensate equation of motion. We show the time-evolution of the lepton asymmetry in all four scenarios; subsequently, we present an analysis of the parameter space in which a sufficiently large late-time lepton asymmetry can be generated. An analytic approximation for the asymmetry calculated here numerically can be found in [@Kusenko:2014lra], which we summarize here. The Boltzmann equation can be analyzed in two regimes: during the relaxation of the Higgs vacuum expectation value, during which $\partial_t \phi^2$ is significant and $E_0(t) \neq 0$, and the subsequent cooling of the universe, during which $E_0 = 0$. The reactions shown in Fig. \[fig:lepton\_violation\] are typically out of thermal equilibrium by the end of Higgs relaxation, due to the exchange of the heavy right-handed neutrino, which suppresses washout. One may approximate the potential as $V \sim \lambda \phi^4$, with an effective running coupling $\lambda$ as in [@Degrassi:2012ry]. The final asymmetry $\eta = n_L/(2 \pi^2 g_*T^3 \slash 45)$ is approximately $$\begin{aligned} \eta & = \dfrac{45}{2\pi^2} \frac{\sqrt{\lambda}\phi_{0}^{3} \Lambda_I}{M_{n}^{2}T_R^2}\ t_{\textrm{rlx}}^2 \Gamma_I^2 \times \min \left \{1, T_{\textrm{rlx}}^{3} t_{\textrm{rlx}} \sigma_{R} \right\} \nonumber \\ & \quad\times \exp\left[-\left( \frac{24 + 3\sqrt{15}}{\sqrt{ 3 g_* \pi^7}} \right) \sigma_R M_{\mathrm{Pl}} T_R\right], \label{eq:eta_analytical}\end{aligned}$$ where $t_\mathrm{rlx}$ and $T_\mathrm{rlx}$ are the time and temperature at the end of Higgs relaxation, and $\sigma_R \approx 10^{-31} \; \mathrm{GeV}^{-2}$ approximates the cross section given by equation . This estimate includes the dilution due to entropy production during the ongoing reheating process.
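The exponential washout factor in this estimate can be evaluated directly. A sketch assuming $g_* = 106.75$, $M_{\mathrm{Pl}} = 1.22 \times 10^{19}$ GeV, and illustrative reheat temperatures (only $\sigma_R \approx 10^{-31}\;\mathrm{GeV}^{-2}$ is taken from the text):

```python
import math

# Washout suppression factor from the analytic estimate above:
# exp[ -(24 + 3*sqrt(15)) / sqrt(3 g_* pi^7) * sigma_R * M_Pl * T_R ].
# g_* and M_Pl are assumed standard values; T_R is a hypothetical input.
g_star, M_Pl, sigma_R = 106.75, 1.22e19, 1e-31

def washout_factor(T_R):
    coeff = (24 + 3 * math.sqrt(15)) / math.sqrt(3 * g_star * math.pi**7)
    return math.exp(-coeff * sigma_R * M_Pl * T_R)

for T_R in (1e12, 1e13, 1e14):
    print(T_R, washout_factor(T_R))
```

The suppression is mild for $T_R \sim 10^{12}$ GeV but severe by $T_R \sim 10^{14}$ GeV, consistent with the preference for smaller $\Gamma_I$ (slower reheating) found in the parameter scans below.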
Four Numerical Examples ----------------------- ![Plot of the resulting asymmetry for IC-1, for $\Lambda_n = T$ (blue, solid) and $\Lambda_n = M_n = 10^{14}$ GeV (red, dashed). Both scenarios have $\Lambda_I = 10^{15}$ GeV, and $\Gamma_I = 10^9$ GeV. The vertical lines designate the first time the Higgs VEV crosses zero, the time of maximum reheating, and the beginning of the radiation dominated era, from left to right. $t=0$ corresponds to the beginning of inflaton oscillations.[]{data-label="fig:IC_1_plots"}](IC_1_plots.pdf) In this subsection, we present the lepton asymmetry as a function of time for the four scenarios mentioned above. First, we consider two scenarios for IC-1 in Fig. \[fig:IC\_1\_plots\], one with $\Lambda_n = T$ (blue, solid) and one with $\Lambda_n = M_n = 10^{14}$ GeV (red, dashed) for the relevant scales in the $\mathcal{O}_6$ operator. Both scenarios have a maximum temperature of $ 6 \times 10^{13}$ GeV, since they share the inflationary parameters $\Lambda_I = 10^{15}$ GeV and $\Gamma_I = 10^9$ GeV. As in Fig. \[fig:Higgs\_Evolution\_IC1\], the initial Higgs VEV is $10^{15}$ GeV in both cases, which is set by the location of the second minimum in the Higgs potential. Although the asymmetry $\eta$ oscillates during the first few oscillations of the Higgs VEV, it relatively quickly settles into a steady state, and approaches a constant value around the beginning of the radiation dominated era. Note that the Higgs field begins to oscillate before the time of maximum reheating. As mentioned above, the cross section depends primarily on $y^2 \slash M_R$, which is fixed by the light neutrino masses. We also require $T \ll M_R$ in order to suppress the thermal production of right-handed neutrinos; we found that it was sufficient to set $M_R = 9 \times 10^{15}$ GeV, which results in $y \sim 1.7$ using equation . (This gives $y^2 \slash 4 \pi \sim 0.2$, within the perturbative regime.)
The late time asymptotic asymmetry is $\eta \sim 10^{-7}$ for $\Lambda_n = T$ and $\eta \sim 10^{-8}$ for $\Lambda_n = M_n = 10^{14} \; \mathrm{GeV}$; this is expected as the temperature is lower than $M_n$. We discuss the variation of the final asymmetry over parameter space below. ![Plot of the resulting asymmetry for IC-2, for $\Lambda_n = T$ (blue, solid) and $\Lambda_n = M_n = 5 \times 10^{12}$ GeV (red, dashed). Both scenarios have $\Lambda_I = 10^{16}$ GeV and $\Gamma_I = 10^8$ GeV. From left to right, the dotted lines correspond to the time of maximum reheating, the first time the Higgs VEV crosses zero, and the beginning of the radiation dominated era.[]{data-label="fig:IC_2_plots"}](IC_2_Plots.pdf) Before doing so, however, we present similar results for the IC-2 scenario, again for the two cases $\Lambda_n=T$ (blue, solid) and $\Lambda_n = M_n = 5 \times 10^{12}$ GeV (red, dashed). Both plots have the inflationary parameters $\Lambda_I = 10^{16}$ GeV and $\Gamma_I = 10^8$ GeV, which results in a maximum temperature of $10^{14} \; \mathrm{GeV}$ during reheating. We again take $N_\mathrm{last} = 8$ to determine the Higgs VEV at the end of inflation; this results in $\phi_0 = 10^{13}$ GeV for the Higgs VEV at the start of Higgs relaxation. (We emphasize that this choice, with $M_n < \phi_0$ and $M_n < T$, raises questions regarding the use of effective field theory, which we address below.) In order to suppress the thermal production of right-handed neutrinos, we have taken $M_R = 10 T_\mathrm{max} = 10^{15} \; \mathrm{GeV}$; in order to produce left-handed neutrino masses on the scale of $0.1 \; \mathrm{eV}$, the neutrino Yukawa coupling must be $\sim 1.9$. (This gives $y^2 \slash 4 \pi \approx 0.3$.) The final asymmetries here are of order $10^{-14}$ (for $\Lambda_n = T$) and $10^{-12}$ (for $\Lambda_n = M_n = 5 \times 10^{12}$ GeV). As $M_n$ is generally smaller than the temperature, it is not surprising that this results in a larger asymmetry.
These values are insufficient to account for the observed matter-antimatter asymmetry; this motivates a search of the available parameter space. Parameter Space --------------- In two of the four scenarios above, the resulting lepton asymmetry is $\mathcal{O}(10^{-8})$ or larger, which is sufficient to explain the observed baryon asymmetry. However, it is interesting to explore the resulting asymmetry as a function of parameter space; results are shown in Figures \[fig:Parameter space IC1 Mn\], \[fig:Parameter space IC2 Mn\], \[fig:Parameter space IC1 T\], and \[fig:Parameter space IC2 T\]. ![The resulting asymmetry ($\log\left|\eta\right|$) at the end of reheating for IC-1, for $\Lambda_{n}=M_{n}$, with $\Lambda_{I}=10^{15}$ GeV. \[fig:Parameter space IC1 Mn\]](Leptogenesis_IC1_Mn_Parameter_space){width="1\columnwidth"} ![The resulting asymmetry ($\log\left|\eta\right|$) at the end of reheating for IC-2, for $\Lambda_{n}=M_{n}$ with $\Lambda_{I}=5\times10^{16}$ GeV, which gives $\phi_{0}=2.7\times10^{14}$ GeV.\[fig:Parameter space IC2 Mn\]](Leptogenesis_IC2_Mn_Parameter_space){width="1\columnwidth"} As above, we handle the initial conditions with the operator and scale given in \[subsec:IC1\] for the IC-1 plots, and as discussed in \[subsec:IC2\] with $N_\mathrm{last} = 8$ for the IC-2 plots. We emphasize again that the resulting asymmetry is sensitive to $y^2 \slash M_R$, which is set by the left-handed neutrino mass scale, and not to the specific value of $M_R$. However, to suppress thermal production of right-handed neutrinos, we have chosen $M_R = 10 T_\mathrm{max}$ (for IC-1) and $M_R = 20 T_\mathrm{max}$ (for IC-2). We have then set the neutrino Yukawa coupling $y$ by the scale of the left-handed neutrino masses (equation ). We have noted in gray the regions in which the perturbativity condition $y^2 \slash 4\pi < 1$ fails.
For the IC-1 plots, the post-inflationary Higgs VEV $\phi_0$ is determined entirely by the operator which lifts the second minimum to generate the quasistable vacuum; for the operator and scale in \[subsec:IC1\], the Higgs VEV relaxes from $\phi_0 = 10^{15} \; \mathrm{GeV}$. For IC-2, $\phi_0$ is determined by the Hubble parameter during inflation, which is in turn fixed by the energy density in the inflaton field (see equation ). First, we remark on some general features. The asymmetries generated in the IC-2 scenario are smaller than those generated in the IC-1 scenario. This is because in IC-1, the Higgs VEV does not evolve until the temperature is sufficiently large to destabilize the false vacuum; consequently, the initial evolution of the VEV to zero occurs at higher temperatures. (Compare the vertical lines marking the first Higgs VEV crossing and maximum reheating in Figures \[fig:IC\_1\_plots\] and \[fig:IC\_2\_plots\].) As a result of the higher temperature, the system is driven towards equilibrium at a faster rate (through the Boltzmann equation ); furthermore, in the $\Lambda_n = T$ scenario, the larger temperature also means that the equilibrium charge density is larger. Figures \[fig:Parameter space IC1 Mn\] and \[fig:Parameter space IC2 Mn\] show the lepton asymmetry $\eta$ as a function of parameter space, in the case in which the scale of the $\mathcal{O}_6$ operator is a constant $M_n$. To reach comparable asymmetries in the IC-2 scenario, we must decrease the scale $M_n$ significantly, such that throughout this plot, $M_n < \phi_0$ and $M_n < T_\mathrm{max}$. In the IC-1 plot, these conditions fail below the red dashed line and blue solid line respectively. In these regions, the use of effective field theory in generating the operator is questionable. An ultraviolet completion of the model is necessary to obtain a reliable description of the dynamics in the regime where the temperature exceeds the scale $M_{n}$.
We leave such a completion, which would also elucidate the nature of the new physics leading to the appearance of the $\mathcal{O}_{6}$ operator, for a future work. We focus on the region of Fig. \[fig:Parameter space IC1 Mn\] for which the asymmetry $\eta$ is larger than $10^{-10}$ and $M_n > 0.1 T_\mathrm{max}$. We see that this favors smaller values of $\Gamma_I$. However, for a given $\Lambda_I$, there is a minimum $\Gamma_I$, below which the maximum temperature is insufficient to destabilize the second vacuum. For the parameters considered here ($\Lambda_{I} = 10^{15} \; \mathrm{GeV}$ and the lift operator given in \[subsec:IC1\]), this occurs for $\Gamma_{I} = 6.3\times10^{8}\;\mathrm{GeV}$. Next, we consider the case in which the scale of the $\mathcal{O}_6$ operator is set by the temperature, in Figures \[fig:Parameter space IC1 T\] and \[fig:Parameter space IC2 T\]. This parameter space has one fewer parameter, and so we allow $\Lambda_I$ to also vary, which changes the Hubble parameter during inflation. For IC-2, increasing $H_I$ results in a larger value of $\phi_0$, as described by , which increases the resulting asymmetry. This also increases the temperature scale, resulting in a larger asymmetry, as is evident in both figures. (We also note that for IC-1, we must take care that quantum fluctuations during inflation do not destabilize the second vacuum; this is shown in orange in Fig. \[fig:Parameter space IC1 T\].) As mentioned above, if the reheat temperature is sufficiently small, thermal corrections are unable to destabilize the second vacuum, and therefore there is no relaxation of the Higgs VEV. This region is denoted in white in Fig. \[fig:Parameter space IC1 T\]. Furthermore, in the region in which $M_R < \phi_0$, right-handed neutrinos can be copiously produced by the decay of the Higgs condensate, which is not desirable (as concerns the lepton asymmetry production scenario presented here); this region is denoted in yellow.
Furthermore, if $\Lambda_{I}$ is too small, there is insufficient inflation to account for the observed flatness and uniformity of the universe; this region is shown in blue on both figures. In IC-2, there is a further concern that the Higgs VEV can probe the second, deeper minimum at large VEVs. This may not be a phenomenological problem [@Kearney:2015vba], but would require a refinement of the analysis presented here. (Alternatively, $N_\mathrm{last}$ could be decreased, such that $\phi_0$ remains below the instability scale.) This region is shown in purple in Fig. \[fig:Parameter space IC2 T\]. ![The resulting asymmetry ($\log\left|\eta\right|$) at the end of reheating for IC-1, for $\Lambda_{n}=T$.\[fig:Parameter space IC1 T\]](Leptogenesis_IC1_T_Parameter_space){width="1\columnwidth"} ![The resulting asymmetry ($\log\left|\eta\right|$) at the end of reheating for IC-2, for $\Lambda_{n}=T$. \[fig:Parameter space IC2 T\]](Leptogenesis_IC2_T_Parameter_space){width="1\columnwidth"} We see that for IC-1, it is possible to find parameter space in which a sufficiently large asymmetry is generated, but this is not possible for IC-2. For IC-1, smaller $\Gamma_I$ values are favored (and consequently, slower reheating), as for constant $\Lambda_n$. Converting the Lepton Asymmetry Into a Baryon Asymmetry ------------------------------------------------------- Thus far, we have analyzed the production of an excess of leptons over antileptons; here we discuss how this is converted into a baryon asymmetry. First, as the universe continues to cool, the Standard Model degrees of freedom go out of thermal equilibrium; the resulting entropy production reduces the asymmetry by about two orders of magnitude. The lepton-number-violating processes have produced a net density of $(B-L)$ charge, which is unchanged once these processes are negligible.
However, the $(B+L)$ $U(1)$ symmetry is anomalous, and electroweak sphalerons will redistribute the excess between leptons and baryons as in standard leptogenesis [@Fukugita:1986hr], at a rate per unit volume $$\Gamma_{\mathrm{sp}}\sim(\alpha_{W}T)^{4}\exp\left[-g_{W}\phi(t)/T\right].$$ At small vacuum expectation values, the $B$ and $L$ densities approach their equilibrium values, $n_{B}=(28/79)n_{B-L}$. This produces a baryon asymmetry of about the same order of magnitude as the lepton asymmetry found above. Consequently, the regions of parameter space that generate $\eta \sim 10^{-8}$ in the analysis above give a final baryon asymmetry matching the observed value of $\mathcal{O}(10^{-10})$. Conclusions =========== In this paper, we have extended the analysis of Ref. [@Kusenko:2014lra], which introduced a novel leptogenesis possibility in which the lepton asymmetry arises as a consequence of an effective chemical potential induced by the post-inflationary relaxation of the Higgs field. Although right-handed neutrinos participate in the lepton-number-violating interactions as mediators, this is different from the typical leptogenesis scenario in which the asymmetry is produced via the decay of right-handed neutrinos. Even though the heavy right-handed neutrino suppresses the cross section which produces the asymmetry, we have shown parameters for which a sufficiently large asymmetry is generated. We have analyzed the evolution of the Higgs condensate in detail, including both non-perturbative and perturbative decay. We have derived the relevant Boltzmann equation which governs lepton number, and we have replaced the order-of-magnitude estimate with a tree-level scattering cross section between Higgs bosons and neutrinos in the thermal plasma.
Furthermore, we have considered the evolution of the lepton asymmetry for four combinations of the mechanism producing the large Higgs VEV during inflation (IC-1 and IC-2) and the scale of the $\mathcal{O}_6$ operator (a fermion mass scale $M_n$ and the temperature $T$); we then presented an analysis of the asymmetry as a function of parameter space. We demonstrated regions which produce a baryon asymmetry that meets or exceeds observational limits. Acknowledgements {#acknowledgements .unnumbered} ================ The authors would like to thank K. Harigaya, M. Ibe, M. Kawasaki, M. Peloso, K. Schmitz, F. Takahashi, and T.T. Yanagida for helpful discussions. The work of A.K. was supported by the U.S. Department of Energy Grant DE-SC0009937, as well as by the World Premier International Research Center Initiative (WPI), MEXT, Japan. Interpreting the $\mathcal{O}_6$ Operator as an External Chemical Potential {#sec:chem_potential_apndx} ============================================================================ In section \[sec:chemical\_potential\], we remarked that the $\mathcal{O}_6$ operator in equation acts like an external chemical potential. In this appendix, we explain why this is so and how this leads to a number density asymmetry in chemical equilibrium. This $\mathcal{O}_6$ operator induces a term proportional to $(\partial_0 \phi^2) \slash \Lambda_n^2 j_{B+L}^0$ in the Lagrangian. If $\phi$ is treated as an external field (which we discuss further below), then this produces a term of the form $- (\partial_0 \phi^2) \slash \Lambda_n^2 j_{B+L}^0$ in the Hamiltonian, which has the appropriate form $- \mu_\mathrm{eff} j_{B+L}^0$.
A similar term, in which the phase $\theta$ of the Higgs VEV is used instead of its magnitude, frequently appears in spontaneous baryogenesis scenarios (e.g., [@Cohen:1990it]), $$\mathcal{O}_6^\prime = (\partial_t \theta) j_{B+L}^0.$$ However, in such scenarios, the asymmetry is produced via the decay of the Higgs condensate, and therefore, it is not appropriate to treat $\theta$ as an external degree of freedom. When the Hamiltonian is determined using $$\mathcal{H} = \sum_i \dfrac{\partial \mathcal{L}}{\partial \dot{\phi}_i } \dot{\phi}_i - \mathcal{L},$$ there is no contribution from $\mathcal{O}_6^\prime$. Although an asymmetry may be produced in such cases [@Dolgov:1994zq; @Dolgov:1996qq; @Dolgov:1997qr], it is not appropriate to interpret $\dot{\theta}$ as a chemical potential. In the scenario we consider in this work, the time scale for the reactions which maintain the thermal distribution of the plasma is smaller than that of the evolution of the Higgs VEV. Therefore, for purposes of asymmetry generation, it is reasonable to consider the Higgs VEV as a background field, in which case it is appropriate to consider this as a chemical-potential-like term [@Dolgov:1997qr], as we explain below. The $\mathcal{O}_6$ operator shifts $i \partial_0 \rightarrow i \partial_0 - (\partial_0 \phi^2) \slash \Lambda_n^2$ in the Lagrangian. Consequently, the asymptotically free eigenfunctions are $\sim \exp(\mp i (E \mp (\partial_0 \phi^2) \slash \Lambda_n^2) t)$, which justifies our comment that this is equivalent to decreasing the energy of particles by $E_0 = (\partial_0 \phi^2)\slash\Lambda_{n}^{2}$ and increasing the energy of antiparticles by the same amount.
If we use the ideal gas approximation, then the phase space densities are $$\begin{aligned} f_p &= \exp(-(E-E_0-\mu_p)\slash T) \nonumber \\ f_{\bar{p}} &= \exp(-(E+E_0-\mu_{\bar{p}})\slash T) \end{aligned}$$ The number densities of particles and antiparticles can be found in the normal manner, using $$\begin{aligned} n_p &= \int \dfrac{d^3p}{(2\pi)^3} \exp(-(E-E_0-\mu_p)\slash T) \nonumber \\ n_{\bar{p}} &= \int \dfrac{d^3p}{(2\pi)^3} \exp(-(E+E_0-\mu_{\bar{p}})\slash T) \end{aligned}$$ If we use the non-relativistic relation $E = p^2 \slash 2 m$, then we find $$\begin{aligned} \mu_p &= - E_0 + T \ln(\lambda^3n_p) \nonumber \\ \mu_{\bar{p}} &= E_0 + T \ln(\lambda^3 n_{\bar{p}}),\end{aligned}$$ where $\lambda = \sqrt{ 2\pi \slash (m T)}$ is the thermal de Broglie wavelength. In the above relation, the first term can be interpreted as an external chemical potential (due to the “driving" effect of the $\mathcal{O}_6$ operator), while the $T \ln(\lambda^3 n_p)$ term is the usual chemical potential of an ideal gas. If a lepton-number-violating or baryon-number-violating process establishes chemical equilibrium between the species, then the chemical potentials will be equal, $\mu_p = \mu_{\bar{p}}$. This gives the expected result $$\dfrac{n_p}{n_{\bar{p}}} = e^{2 E_0 \slash T}.$$ A similar result can be derived using the relativistic relation $E = p$ instead. Calculation of Lepton-Number-Violating Cross Section and Reaction Rate {#sec:cross_section_reaction_rate_apndx} ====================================================================== In this section, we calculate the cross section and reaction rate for the processes shown in Fig. \[fig:lepton\_violation\], assuming a thermal number density for Higgs bosons and neutrinos, as discussed in Section \[sec:lepton\_number\_violating\]. This improves the order-of-magnitude estimates used in [@Kusenko:2014lra]. As explained in the text, we can use the approximate cross section with the energy shift due to the $\mathcal{O}_6$ operator set equal to zero.
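As a numerical sanity check of the ideal-gas result in the previous appendix, one can integrate the shifted Boltzmann distributions directly; the ratio $n_p \slash n_{\bar p}$ then reproduces $e^{2 E_0 \slash T}$ independently of the common chemical potential. The following is a minimal sketch in natural units; the values of $m$, $T$, $E_0$, and $\mu$ are arbitrary illustrations:

```python
import math

def number_density(shift_sign, m, T, E0, mu, n_steps=20000, p_max_factor=15.0):
    """n = ∫ d^3p/(2π)^3 exp(-(p²/2m - shift_sign*E0 - mu)/T),
    evaluated with a simple Riemann sum (non-relativistic dispersion)."""
    p_max = p_max_factor * math.sqrt(m * T)
    dp = p_max / n_steps
    total = 0.0
    for i in range(1, n_steps + 1):
        p = i * dp
        E = p * p / (2.0 * m)
        total += p * p * math.exp(-(E - shift_sign * E0 - mu) / T)
    return total * dp * 4.0 * math.pi / (2.0 * math.pi) ** 3

m, T, E0, mu = 100.0, 1.0, 0.3, -5.0       # arbitrary illustrative values
n_p = number_density(+1, m, T, E0, mu)     # particle energies shifted down by E0
n_pbar = number_density(-1, m, T, E0, mu)  # antiparticle energies shifted up by E0

ratio = n_p / n_pbar
print(ratio, math.exp(2.0 * E0 / T))       # the two numbers agree
```

Because the energy shift factors out of the momentum integral, the agreement holds for any choice of $m$, $T$, and $\mu$.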
In this approximation, the reaction rates for processes with neutrinos and antineutrinos are equal. The top two diagrams of Fig. \[fig:lepton\_violation\] are the $s$- and $t$-channel diagrams of the process $h^0 \nu \rightarrow h^0 \bar{\nu}$, whereas the bottom diagram describes the process $\nu \nu \rightarrow h^0 h^0$. The $s$-channel has a resonance at $E \sim M_R$; however, the typical energy scale $T$ is far below this. For completeness, we include the resonance, although it will have a negligible effect. In calculating these cross sections, we follow the conventions of [@Dreiner:2008tw] for the Feynman rules of Majorana fermions. The matrix element for the $\nu_\ell h^0 \rightarrow \bar{\nu}_{L} h^0$ process is $$\begin{aligned} - i \mathcal{M} &= i\sum_i \dfrac{Y_{Li} Y_{i\ell}^*}{2} \left[ \dfrac{M_{Ri} - i \Gamma_i \slash 2}{s - M_{Ri}^2 + i \Gamma_i M_{Ri} + \Gamma_i^2 \slash 4} + \dfrac{M_{Ri}- i \Gamma_i \slash 2}{t - M_{Ri}^2 + i \Gamma_i M_{Ri} + \Gamma_i^2 \slash 4} \right] x_{L\alpha}(p_1,s_1) y^{\beta}_\ell(p_4,s_4) \delta_\beta^\alpha,\end{aligned}$$ where $s$ and $t$ are the Mandelstam variables, and $\Gamma_i$ is the width of the right-handed Majorana neutrino. (For a discussion of Breit-Wigner propagators, see [@Nowakowski:1993iu]). The indices 1, 2, 3, and 4 refer to the incoming neutrino, incoming Higgs boson, outgoing Higgs boson, and outgoing antineutrino, in that order. The index $i$ indicates a sum over the heavy right-handed Majorana neutrinos.
Let us define $$\begin{aligned} A_i &= s - M_{Ri}^2 + \Gamma_i^2 \slash 4, \nonumber \\ B_i &= t - M_{Ri}^2 + \Gamma_i^2 \slash 4, \nonumber \\ C_i &= \Gamma_i M_{Ri}.\end{aligned}$$ Then the matrix element squared, summed over both the initial and final spin states (as discussed in [@Giudice:2003jh]), is $$\begin{aligned} &\sum_{s_1,s_2} |\mathcal{M}|^2 =\sum_i 2 p_1 \cdot p_4 \dfrac{|Y_{Li}|^2 |Y_{i\ell}|^2}{4} \left(M_{Ri}^2 + \dfrac{\Gamma_i^2 }{ 4} \right) \times \nonumber \\ & \left[ \dfrac{1}{A_i^2 + C_i^2} + \dfrac{1}{B_i^2 + C_i^2} + \dfrac{2 (A_i B_i + C_i^2)}{(A_i B_i + C_i^2)^2 + C_i^2(A_i - B_i)^2} \right].\end{aligned}$$ In the center of mass reference frame, the cross section is $$\begin{aligned} \sigma_{CM}(s) = \dfrac{1}{16 \pi s^2} \sum_i \dfrac{|Y_{Li}|^2 |Y_{i\ell}|^2}{4} \left( M_{Ri}^2 + \dfrac{\Gamma_i^2}{4} \right) \int_{-s}^0 dt \, (s+t) \left[ \dfrac{1}{A_i^2 + C_i^2} + \dfrac{1}{B_i^2 + C_i^2} + \dfrac{2 (A_i B_i + C_i^2)}{(A_i B_i + C_i^2)^2 + C_i^2(A_i - B_i)^2} \right].\end{aligned}$$ Generically, the thermally averaged cross section is related to the CM cross section by [@Cannoni:2013zya] $$\begin{aligned} &\left< \sigma v \right> = \dfrac{1}{8T \times m_1^2 K_2(m_1 \slash T) \times m_2^2 K_2(m_2 \slash T)} \int_{(m_1 + m_2)^2}^\infty \dfrac{[s - (m_1 - m_2)^2] [s - (m_1 + m_2)^2]}{\sqrt{s}} K_1(\sqrt{s} \slash T) \, \sigma_{CM}(s) \, ds, \label{eq:cross_section}\end{aligned}$$ and so the thermally averaged cross section for $h^0 \nu \rightarrow h^0 \bar{\nu}$ is $$\begin{aligned} &\left< \sigma(h^0 \nu \rightarrow h^0 \bar{\nu}) v \right> = \sum_i \dfrac{ |Y_{Li}|^2 |Y_{i\ell}|^2 }{512 \pi} \left( M_{Ri}^2 + \dfrac{\Gamma_i^2}{4} \right) \int_0^\infty dx \int_0^x dy (x^2 - y^2) K_1(x) \left[ \dfrac{1}{(x^2 T^2 - M_{Ri}^2 + \Gamma_i^2 \slash 4)^2 + \Gamma_i^2 M_{Ri}^2} \right.\nonumber \\ & \left.
+ \dfrac{1}{(y^2 T^2 + M_{Ri}^2 - \Gamma_i^2 \slash 4)^2 + \Gamma_i^2 M_{Ri}^2} - \dfrac{2 ((x^2 T^2 - M_{Ri}^2 + \Gamma_i^2 \slash 4) (y^2 T^2 + M_{Ri}^2 - \Gamma_i^2 \slash 4) -\Gamma_i^2 M_{Ri}^2)}{((x^2 T^2 - M_{Ri}^2 + \Gamma_i^2 \slash 4)(y^2 T^2 + M_{Ri}^2 - \Gamma_i^2 \slash 4) - \Gamma_i^2 M_{Ri}^2)^2 + \Gamma_i^2 M_{Ri}^2 (x^2 +y^2)^2 T^4} \right]\end{aligned}$$ where we have introduced the dimensionless variables $x \equiv \sqrt{s} \slash T$ and $y \equiv \sqrt{-t} \slash T$. Since the temperature evolves in time, the cross section also does; however, when expanded in powers of $T \slash M_{Ri}$, the lowest order contribution is $\sim 1 \slash M_{Ri}^2$, as expected. Repeating the same steps with the $\nu_\ell \nu_L \rightarrow h^0 h^0$ cross section, which does not have a resonance, gives $$\begin{aligned} \left< \sigma(\nu_\ell \nu_L \rightarrow h^0 h^0) v \right> &= \sum_i \dfrac{|Y_{Li}|^2 |Y_{i\ell}|^2}{64 \pi M_{Ri}^2}.\end{aligned}$$ The reaction rates are related to these cross sections by $$\gamma^{eq}(\alpha \beta \rightarrow \gamma \delta) = (n_\alpha^{eq}) (n_\beta^{eq}) \left< \sigma(\alpha \beta \rightarrow \gamma \delta) v \right>, \label{eq:gamma_to_sigma}$$ which holds for any $2 \rightarrow 2$ process. Since we take $E_0 = 0$ in this section, the number densities for Higgs bosons, neutrinos, and antineutrinos are all equal to $T^3 \slash \pi^2$, and so for both processes, $$\begin{aligned} \gamma^0 & = \dfrac{T^6}{\pi^4} \left< \sigma v \right>.\end{aligned}$$ As noted in the text, in order to simplify the calculation, we will consider only the case in which the flavor indices $\ell$ and $L$ are equal, and the contribution of a single right-handed neutrino dominates. Its decay rate is $\Gamma \sim y^2 M_R \slash 16 \pi$, from the only decay $N_R \rightarrow h^0 \nu_L$.
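The statement that the thermal average above scales as $1 \slash M_{Ri}^2$ at lowest order in $T \slash M_{Ri}$ can be checked by evaluating the double integral numerically (with the outer $x$ integration taken over all positive values). The sketch below assumes a single right-handed neutrino with an illustrative Yukawa coupling $Y = 0.1$ and the width $\Gamma = Y^2 M_R \slash 16\pi$ quoted above:

```python
import numpy as np
from scipy.special import k1
from scipy.integrate import quad

def bracket(x, y, M, Gamma, T):
    """The bracketed integrand of the h0 nu -> h0 nubar thermal average."""
    A  = x**2 * T**2 - M**2 + Gamma**2 / 4.0   # s - M^2 + Γ²/4
    Bm = y**2 * T**2 + M**2 - Gamma**2 / 4.0   # -(t - M^2 + Γ²/4)
    C2 = (Gamma * M)**2
    num = 2.0 * (A * Bm - C2)
    den = (A * Bm - C2)**2 + C2 * (x**2 + y**2)**2 * T**4
    return 1.0/(A**2 + C2) + 1.0/(Bm**2 + C2) - num/den

def sigma_v(M, Y=0.1, T=1.0, x_max=40.0):
    Gamma = Y**2 * M / (16.0 * np.pi)          # N_R -> h0 nu width
    inner = lambda x: quad(lambda y: (x**2 - y**2) * bracket(x, y, M, Gamma, T),
                           0.0, x)[0]
    I = quad(lambda x: k1(x) * inner(x), 0.0, x_max)[0]
    return Y**4 / (512.0 * np.pi) * (M**2 + Gamma**2 / 4.0) * I

# For T << M_R the result should scale as 1/M_R^2:
r = sigma_v(200.0) / sigma_v(100.0)
print(r)   # close to (100/200)^2 = 0.25
```

The sub-percent deviation from $0.25$ comes from the $\mathcal{O}(x^2 T^2 \slash M_R^2)$ corrections to the leading term.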
--- abstract: 'The XENON1T collaboration has observed an excess in electronic recoil events below $5~\mathrm{keV}$ over the known background, which could originate from beyond-the-Standard-Model physics. The solar axion is a well-motivated model that has been proposed to explain the excess, though it has tension with astrophysical observations. Axions traveling from the Sun can be absorbed by the electrons in the xenon atoms via the axion-electron coupling. Meanwhile, they can also scatter off the atoms through the inverse Primakoff process via the axion-photon coupling, which emits a photon and mimics the electronic recoil signals. We found that the latter process cannot be neglected. After including the $\rm{keV}$ photon produced via the inverse Primakoff process in the detection, the tension with the astrophysical constraints can be significantly reduced. We also explore scenarios involving additional new physics to further alleviate the tension with the astrophysical bounds.' author: - Christina Gao - Jia Liu - 'Lian-Tao Wang' - 'Xiao-Ping Wang' - Wei Xue - 'Yi-Ming Zhong' bibliography: - 'ref.bib' title: 'Re-examining the Solar Axion Explanation for the XENON1T Excess' --- Axions are pseudo-Goldstone bosons which naturally arise in beyond-the-Standard-Model (BSM) physics scenarios [@Peccei:1977hh; @Weinberg:1977ma; @Wilczek:1977pj]. Due to an approximate shift symmetry, they can be naturally light. Typically, they are very weakly coupled to other particles, which makes them good candidates for dark matter or dark sector particles. The phenomenology of axions is rich, and they give unique signals in cosmology, astrophysics, and particle physics [@Raffelt:1990yz; @Duffy:2009ig; @Kawasaki:2013ae; @Marsh:2015xka; @Graham:2015ouw]. XENON1T, a dual-phase liquid xenon detector, is one of the leading experiments looking for dark matter.
Due to its large volume and low backgrounds, XENON1T is also sensitive to other rare processes potentially related to BSM physics. Recently, the XENON1T collaboration reported their searches for low-energy electronic recoils, with an excess in the range of 1-5$\,\rm{keV}$, which cannot be accounted for by the known backgrounds [@Aprile:2020tmw]. The XENON1T collaboration has also performed a fit to the excess using the solar axion model [@vanBibber:1988ge]. Since the report from the XENON1T collaboration, there have been active speculations about the explanation of the excess [@Takahashi:2020bpq; @OHare:2020wum; @Kannike:2020agf; @Amaral:2020tga; @Alonso-Alvarez:2020cdv; @Fornal:2020npv; @Boehm:2020ltd; @Harigaya:2020ckz; @Bally:2020yid; @Su:2020zny; @Du:2020ybt; @DiLuzio:2020jjp; @Bell:2020bes; @Chen:2020gcl; @AristizabalSierra:2020edu; @Buch:2020mrg; @Choi:2020udy; @Paz:2020pbc; @Dey:2020sai; @Khan:2020vaf; @Cao:2020bwd; @Primulando:2020rdk; @Nakayama:2020ikz; @Lee:2020wmh; @Graciela:1802691; @Yongsoo:1802; @1802727; @1802729]. It is tempting to explain the XENON1T excess using solar axions, since the axion energy spectrum naturally matches the excess. The axions are produced in the Sun from several processes, including the Primakoff process $\gamma \, + \, Ze \, \to \, Ze \, + \, a$; the atomic axion-recombination and de-excitation, bremsstrahlung, and Compton scattering (ABC) processes; and the nuclear transitions. Hence, the axion-photon $g_{a\gamma}$, axion-electron $g_{ae}$, and axion-nucleon $g_{an}$ couplings enter the production. With their tiny coupling to photons, the $\rm{keV}$ axions have a long lifetime and can travel from the Sun to the XENON1T detector. For the processes in the detector which can give the signal, XENON1T [@Aprile:2020tmw] considered only the axion-electron coupling. In this case, the axions could be absorbed by the electrons in xenon atoms.
The relevant axion couplings can be summarized in the following Lagrangian, $$\begin{aligned} \mathcal{L} \supset - g_{ae} \frac{\partial_\mu a}{2 m_e} \bar{e} \gamma^\mu \gamma_5 e - \frac{1}{4} g_{a\gamma} a F_{\mu\nu} \tilde{F}^{\mu\nu} .\end{aligned}$$ ${F}^{\mu\nu}$ is the field strength of the photon, and its dual is $\tilde{F}^{\mu\nu} = \frac{1}{2}\epsilon^{\mu\nu \alpha \beta} F_{\alpha \beta}$. However, the parameter space of the solar axion interpretation of the excess is in tension with the astrophysical observations of stellar evolution, including the White Dwarfs (WD) and the Horizontal Branch (HB) stars in the globular clusters (GC) [@Aprile:2020tmw; @DiLuzio:2020jjp]. ![The solar axion induced photon signal through the inverse Primakoff process. []{data-label="fig:process"}](plots/feynman-diagram.pdf){width="0.9\columnwidth"} In this letter, we take into account the fact that in the $\mathrm{keV}$ energy range, the current XENON1T experiment can hardly distinguish the detector response of photons from that of electronic recoils. Hence, instead of an electronic recoil, the low-energy photons generated through inverse Primakoff scattering between solar axions and the xenon atoms in the detector can mimic the electronic signal, as shown in Fig. \[fig:process\]. Using the inverse Primakoff process to detect axions was proposed for cryogenic experiments via Bragg scattering [@Buchmuller:1989rb; @Paschos:1993yf; @Creswick:1997pg], and has been applied by the SOLAX, COSME, CUORE, CDMS, and EDELWEISS collaborations [@Avignone:1997th; @Morales:2001we; @Arnaboldi:2002du; @Arnaboldi:2003tu; @Ahmed:2009ht; @Armengaud:2013rta]. However, it has not previously been included in liquid time projection chamber type experiments. We show that, after including both the electronic recoil and the inverse Primakoff process, the tension between the solar axion explanation and the astrophysical constraints is significantly reduced.
To further alleviate the astrophysical bounds, we propose two models: (1) a $U(1)$ baryon gauge boson and (2) DM density-dependent interactions. The letter is structured as follows: we first describe the detection using the inverse Primakoff process, and after considering the astrophysical and terrestrial constraints, we present the fit to the data of XENON1T. We then discuss the possible extensions of new physics to further alleviate the tension between the constraints and the XENON1T fit. We conclude in the end.\ ***Detection from inverse Primakoff process.***— In this section, we compute the contribution to the electronic recoil from the inverse Primakoff process $$a + \rm Xe \to \gamma + \rm Xe,$$ where Xe represents the xenon nucleus. The differential cross section is given by [@Buchmuller:1989rb; @Raffelt:1996wa; @Creswick:1997pg]: $$\begin{aligned} \frac{d\sigma^{\rm invPrim}_{a\to\gamma}}{d\Omega} = \frac{\alpha}{16 \pi} g_{a\gamma}^2 \frac{\bm{q}^2}{\bm{k}^2}\left(4 -\bm{q}^2/\bm{k}^2 \right) F_a^2(\bm{q}^2),\end{aligned}$$ where $\alpha$ is the fine structure constant, $\bm{k}$ is the momentum of the incoming axion, and $\bm{q}$ is the momentum transfer. In the limit of small axion mass, $m_a \ll |\bm{k}|$, the energy of the outgoing photon is also approximately $|\bm{k}|$. $F_a$ is the form factor characterizing the screening effect of the electric charge of the nucleus. It can be written as $$F_a (\bm{q}^2)= Z \bm{k}^2/(r_0^{-2} + \bm{q}^2), \label{eq:form}$$ where $Z=54$ is the atomic number of xenon and $r_0$ is the screening length [@Buchmuller:1989rb], which can be determined numerically. We take \[eq:form\] and fit the form factors reported in Ref. [@ITC], obtaining $r_0^{-1} = 4.04 {\rm \ keV} = (49 {\rm \ pm})^{-1}$, which is close to the reciprocal of the xenon atomic radius, $108$ pm [@doi:10.1063/1.1712084]. Next, we calculate the event rate from solar axions with both the inverse Primakoff process and the axioelectric effect.
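For orientation, the differential cross section above can be integrated over the solid angle numerically. The sketch below works in natural units and converts to cm$^2$ at the end; the axion energy and coupling are illustrative assumptions, not fitted values:

```python
import math

ALPHA = 1.0 / 137.036          # fine structure constant
HBARC_CM = 1.973e-8            # keV·cm, converts keV^-2 to cm^2

def sigma_inv_primakoff(E_a_keV, g_agamma_GeV, Z=54, r0_inv_keV=4.04, n=20000):
    """Total a + Xe -> gamma + Xe cross section in cm^2, integrating
    dσ/dΩ = (α/16π) g² (q²/k²)(4 - q²/k²) F_a², F_a = Z k²/(r0⁻² + q²),
    over cosθ (massless axion, so |k| ≈ E_a)."""
    g = g_agamma_GeV * 1e-6                  # GeV^-1 -> keV^-1
    k2 = E_a_keV ** 2
    dc = 2.0 / n
    total = 0.0
    for i in range(n):
        c = -1.0 + (i + 0.5) * dc            # midpoint rule in cosθ
        q2 = 2.0 * k2 * (1.0 - c)            # momentum transfer squared
        F = Z * k2 / (r0_inv_keV ** 2 + q2)  # screened form factor
        total += (q2 / k2) * (4.0 - q2 / k2) * F ** 2 * dc
    sigma_keV2 = 2.0 * math.pi * (ALPHA / (16.0 * math.pi)) * g ** 2 * total
    return sigma_keV2 * HBARC_CM ** 2

sigma = sigma_inv_primakoff(E_a_keV=3.0, g_agamma_GeV=1e-10)
print(sigma)   # a tiny cross section, of order 10^-48 cm^2
```

Per atom the rate is minuscule, which is why the enormous exposure of a tonne-scale xenon target is needed; the cross section also scales exactly as $g_{a\gamma}^2$, as expected.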
The cross section of the latter process is given by [@Pospelov:2008jk; @Alessandria:2012mt] $$\begin{aligned} \sigma_{\rm ae} = \sigma_{\rm pe} \frac{g_{\rm ae}^2}{\beta_a} \frac{3 E_a^2}{16\pi \alpha m_e^2} \left(1-\frac{\beta_a^{2/3}}{3}\right),\end{aligned}$$ where $\sigma_{\rm pe}$ is the photoelectric cross-section [@Arisaka:2012pb] and $\beta_a$ is the axion velocity. We will focus on the low energy excess ($\lesssim 5$ keV) throughout this letter; hence we only consider the contributions to the solar axion flux from the ABC processes, $\Phi_a^{\rm ABC}$, and the Primakoff process, $\Phi_a^{\rm Prim} $, and neglect that from the nuclear transition of $^{57}$Fe. The ABC flux originates from the axion-electron coupling and is given by $\Phi_a^{\rm ABC}\propto g_{ae}^2$ [@Redondo:2013wwa]. The Primakoff flux is given by [@book1] $$\begin{aligned} & \frac{d\Phi_a^{\rm Prim}}{d E_a} = 6\times 10^{10} \text{cm}^{-2} \text{s}^{-1} \text{keV}^{-1} \times \nonumber \\ &\left(\frac{g_{a\gamma}}{10^{-10}\text{GeV}^{-1}} \right)^2 \left(\frac{E_a}{\text{keV}} \right)^{2.481} e^{-E_a/(1.205 \text{keV})} .\end{aligned}$$ Given the solar axion flux $\Phi_a$, the differential event rate after including both the axioelectric and inverse Primakoff processes in the detection is given by $$\begin{aligned} \frac{dR}{d E_r}= \frac{N_A}{A}&\left(\frac{d \Phi^{\rm ABC}_a}{dE}(E_r) +\frac{d\Phi^{\rm Prim}_a}{dE}(E_r) \right)\nonumber\\ &\times\left(\sigma^{\rm invPrim}_{a\to\gamma}(E_r)+\sigma_{ae}(E_r)\right) ,\end{aligned}$$ where $N_A$ is the Avogadro constant, and $E_r$ represents the electronic recoil energy, which is mimicked by photons in the inverse Primakoff process. To compare with the results reported by the XENON1T collaboration, we further smear the differential event rate with a Gaussian whose width satisfies $\sigma/E_r=a/{\sqrt{E_r}}+b$. A numerical fit to the data of the XENON1T energy resolution [@XENON:2019dti] yields $a=35.9929$ keV$^{1/2}$ and $b=-0.2084$.
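As a cross-check of the normalization, integrating the Primakoff flux above over energy gives a total solar axion flux of roughly $3.7\times10^{11}\,\text{cm}^{-2}\,\text{s}^{-1}$ for $g_{a\gamma} = 10^{-10}\,\text{GeV}^{-1}$, consistent with standard helioscope estimates. A minimal sketch:

```python
import math
from scipy.integrate import quad

def primakoff_flux(E_keV, g10=1.0):
    """Solar Primakoff flux in cm^-2 s^-1 keV^-1; g10 = g_agamma/(1e-10 GeV^-1)."""
    return 6e10 * g10 ** 2 * E_keV ** 2.481 * math.exp(-E_keV / 1.205)

total, _ = quad(primakoff_flux, 0.0, 50.0)  # the spectrum is negligible above ~50 keV
print(total)  # ≈ 3.7e11 cm^-2 s^-1 for g10 = 1
```

The closed-form result, $6\times10^{10}\,\Gamma(3.481)\,(1.205)^{3.481}$, gives the same number.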
After the smearing, we apply the detector efficiency [@Aprile:2020tmw]. Fig. \[fig:benchmark\] shows two examples of the differential event rate of the electronic recoils given different values of $g_{ae}$ and $g_{a\gamma}$. In the case that $g_{ae}=0$, the spectrum is determined only by the detection of $\Phi_a^{\rm Prim}$ through the inverse Primakoff process. It is clear that with $g_{ae}$ switched off, solar axions can still account for the low energy excess, although the fit is not as good as that allowing both $g_{ae}$ and $g_{a\gamma}$ to be non-zero.\ ![Fit to electronic recoil energy spectrum with $g_{a\gamma}$ only (top) and both $g_{a\gamma}$ and $g_{ae}$ allowed (bottom). []{data-label="fig:benchmark"}](plots/XENON-1t-fit-with-only-gagamma-with-smear-effct "fig:"){width="0.9\columnwidth"} ![Fit to electronic recoil energy spectrum with $g_{a\gamma}$ only (top) and both $g_{a\gamma}$ and $g_{ae}$ allowed (bottom). []{data-label="fig:benchmark"}](plots/XENON-1t-fit-with-gagamma-and-gae-with-smear-effect "fig:"){width="0.9\columnwidth"} ***Constraints from astrophysics and terrestrial experiments.***— The most severe constraints on the solar axion explanation of the XENON1T excess come from the astrophysical observations of the stellar cooling in the HB and red-giant branch (RGB) stars, which we review below. Axions with sizable $g_{a\gamma}$ and $g_{ae}$ couplings speed up the burning of the H-core for the RGB and that of the He-core for the HB. The lifetime of the stars in the two phases is proportional to their observed numbers. Therefore, one can use the $R$-parameter, the ratio of the number of HB stars to that of RGB stars, $R\equiv N_\text{HB}/N_\text{RGB}$, to constrain the stellar cooling due to axions. Ref. [@Ayala:2014pea] reported the averaged $R$-parameter over 39 globular clusters to be $R_\text{av} = 1.39 \pm 0.03$. Assuming $g_{ae}=0$, $g_{a\gamma}$ is constrained to be $g_{a\gamma} < 6.6\times 10^{-11}~\text{GeV}^{-1}$ at 95% C.L.
For non-zero $g_{ae}$, Ref. [@Giannotti:2015kwo] presented two theoretical models which give slightly different predictions for the $R$-parameter. In Fig. \[fig:fit\], we adopt the resulting 95% C.L. constraints on the $g_{ae}-g_{a\gamma}$ plane for both models from Fig. 4 of [@Giannotti:2015kwo]. We further discuss the dependence of the bound on the He mass fraction of the globular clusters in the Appendix. The bremsstrahlung energy loss from the axion-electron coupling affects the white dwarf luminosity function (WDLF) and constrains $g_{ae} \lesssim2.8 \times 10^{-13}$ [@Bertolami:2014wua]. The same cooling argument applied to the RGB yields a constraint of $g_{ae} \lesssim 4.3 \times 10^{-13}$ [@Viaux:2013lha]. The global fit of the solar data constrains $g_{a\gamma} < 4.1\times 10^{-10} ~\text{GeV}^{-1}$ [@Vinyoles:2015aba]. In Fig. \[fig:fit\], we also show the favored $g_{ae}-g_{a\gamma}$ parameter region to explain the exotic stellar cooling that hints at a new cooling mechanism beyond the neutrino emission [@Giannotti:2017hny; @DiLuzio:2020jjp]. On the terrestrial experiment side, the axion searches from LUX [@Akerib:2017uem] using the axioelectric effect give $g_{\rm ae} < 3.5\times 10^{-12}$. A similar constraint is also reported by PandaX [@Fu:2017lfc]. The CAST experiment [@Barth:2013sma] constrains light axions with $g_{a\gamma}< 6.6\times 10^{-11} {{\ \rm GeV}}^{-1}$, but this bound can be significantly weakened if the axion mass is $\gtrsim1~\text{eV}$.\ ***Results.***— In this section, we first present our fit to the XENON1T excess and compare it with the astrophysical constraints, as shown in Fig. \[fig:fit\]. We scan the two parameters $g_{ae}$ and $g_{a\gamma}$, and apply the method of least squares to the XENON1T data to find the 90% C.L. contours with (solid red) and without (dashed red) including the inverse Primakoff process. In comparison, we also show the constraints (95% C.L.)
from astrophysical observables including the WDLF, the tip of the RGB, and the $R$-parameter (with two models), as well as the constraints from the global fit of the solar data and the direct search at LUX. ![The two-dimensional fit in the axion coupling parameter space for the XENON1T excess after including the inverse Primakoff process. Our best fit (90% C.L.) to the XENON1T excess is shown in the red shaded region with the solid boundary. In comparison, a “XENON-like" analysis with only the electron recoil included as the signal yields a fit shown in the region with the dashed boundary. The main difference is that the inclusion of the inverse Primakoff process allows for a region in which $g_{a \gamma} $ is relatively large while $g_{ae}$ can be very small, reducing the tension with the astrophysical data. Also included are the constraints (95% C.L.) from astrophysical observables including the WDLF [@Bertolami:2014wua], the tip of the RGB [@Viaux:2013lha] and the $R$-parameter (with two models) [@Giannotti:2015kwo], as well as the constraints from the global fit of the solar data [@Vinyoles:2015aba], LUX [@Akerib:2017uem], and PandaX [@Fu:2017lfc], with arrows denoting excluded regions. The shaded green region contains the 1 $\sigma$ to 4 $\sigma$ contours favored by the anomalous stellar cooling [@Giannotti:2017hny; @DiLuzio:2020jjp]. []{data-label="fig:fit"}](plots/XENON-1t-fit-gae-gagamme-parameter-space){width="0.9\columnwidth"} From Fig. \[fig:fit\], we see that the inclusion of the inverse Primakoff process has a significant impact on the parameter region preferred by the XENON1T data. In particular, it opens up a parameter region in which $g_{a\gamma} \gg g_{ae}$ and the inverse Primakoff process gives rise to the observed signal.
Moreover, it prefers a $g_{a\gamma}$ in the region of a few $\times 10^{-10}~\text{GeV}^{-1}$, one order of magnitude smaller than the preferred $g_{a\gamma}$ without the inclusion of the inverse Primakoff process, satisfying the constraints from the global fit of the solar data and significantly reducing the tension with the stellar cooling bound.\ ***Possible extensions.***— From the previous discussion, we see that even though the inclusion of the inverse Primakoff process can significantly improve the prospect of explaining the XENON1T excess with the solar axion, it is still in tension with the stellar cooling bound. If the excess is indeed completely due to new physics, there remain three possibilities. It could certainly come from other new physics instead of the solar axion, in which case a new explanation of the keV scale needs to be found. It is also possible that there is additional uncertainty in the stellar cooling bound which has not yet been appreciated. Instead of pursuing these avenues, we will explore a third possibility. Namely, we introduce new physics in addition to the solar axion to help relax the tension between the XENON1T excess and the stellar cooling bound. \(I) One way to alleviate the astrophysical bound is to introduce an axion coupling to both the photon and a dark gauge boson $A'$ carrying the $U(1)_B$ baryon charge, $$\begin{aligned} \mathcal{L} \supset - \frac{1}{2} g_{a\gamma A'} a F'_{\mu\nu} \tilde{F}^{\mu\nu} + g_{B} A'_{\mu} J_{\rm B}^\mu \ .\end{aligned}$$ The $U(1)_B$ gauge boson $A'$ couples to the $\rm{Xe}$ nucleus, but not to the electrons, such that the form factor suppression from the screening effect of the electric charge of the nucleus is removed, and there is an extra enhancement factor of $A^2/Z^2$ from coupling to both protons and neutrons.
The inverse Primakoff process for $a + N \to \gamma+N$ is mediated by a t-channel $A'$, and its cross-section is $$\begin{aligned} \sigma_{a\to\gamma}^{A'} = \frac{g_{a\gamma A'}^2 \alpha_B A^2 }{8} \, \frac{(2\eta^2+1)\log(4 \eta^2+1) -4 \eta^2}{\eta^2} \, F_n^2 \ , \end{aligned}$$ where $\eta = \bm{k}/m_{A'}$, $\bm{k}$ is the momentum of the axion, $\alpha_B = {g_B^2}/{4\pi}$, $F_n$ is the nuclear form factor, which is almost 1 for momentum transfer at the keV scale, and $A$ is the number of nucleons in the nucleus. This cross-section also applies to the Primakoff production of the axion flux from the HB stars and the Sun. It is proportional to $A^2$ with an $A'$ mass suppression but no nuclear form factor suppression. Recall that the inverse Primakoff via $g_{a\gamma}$ is proportional to $Z^2$; after accounting for the suppression from the screening length $r_0^{-1}$, the screened charge of xenon is reduced to $Z_{sc} = 5.3$ at $q = 3$ keV. Given the large $A=131$ for xenon, we expect this $A^2$ enhancement to greatly benefit the detection using heavy elements. We find that when the $A'$ mass is about $r_0^{-1}$, the enhancement in the detector is close to its expected value $A^2/Z^2 \simeq 6$. When $m_{A'} < r_0^{-1} $, the enhancement factor is proportional to $A^2/Z_{sc}^2 $, which is quite large. However, when $m_{A'} > r_0^{-1} $ there is no enhancement for the detection. The other reason for not considering larger $m_{A'}$ is that the energy loss in a star from the Primakoff process will be proportional to $(\frac{T}{m_{A'}})^4$ for $m_{A'} \gg T$. The central region of the Sun is cooler than the cores of HB and RGB stars; therefore, we obtain stronger bounds from the stars in the case of a heavy $A'$. We follow Ref. [@Raffelt:1996wa] to calculate the Primakoff-induced flux and take a light $A'$, $m_{A'} = 0.1~ (1) \, \rm{keV}$, as examples.
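These regimes follow from the bracketed function of $\eta$ in the cross section above: for a heavy $A'$ ($\eta \ll 1$) it is strongly suppressed, falling as $\eta^4$, while for a light $A'$ ($\eta \gg 1$) it grows only logarithmically. A short numerical sketch (the small-$\eta$ coefficient $16/3$ is our own expansion, not stated in the text):

```python
import math

def f(eta):
    """The bracketed function ((2η²+1)ln(4η²+1) − 4η²)/η² from the
    A'-mediated cross section, with η = |k|/m_A'."""
    e2 = eta * eta
    return ((2.0 * e2 + 1.0) * math.log(4.0 * e2 + 1.0) - 4.0 * e2) / e2

# Heavy A' (small η): f(η) ≈ (16/3) η⁴, a strong suppression.
print(f(0.05) / ((16.0 / 3.0) * 0.05 ** 4))   # close to 1

# Light A' (large η): only logarithmic growth.
print(f(100.0) / f(10.0))                     # ~2, not ~10^4
```

This is why the detection enhancement saturates once $m_{A'}$ drops below the relevant momentum scale.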
The energy loss or flux from HB stars and the Sun is rescaled by $15.6$ ($8.0$) and $16.9$ ($4.3$) times $\alpha_B g_{a\gamma A'}^2/(\alpha g_{a\gamma}^2)$, respectively, compared with that from the $g_{a\gamma}$ coupling. Both HB stars and the Sun are dominated by hydrogen and helium, so the difference between $Z^2$ and $A^2$ is not significant there. For detection at XENON1T, the cross-section can be enhanced by about $400$ ($90$) times $\alpha_B g_{a\gamma A'}^2/(\alpha g_{a\gamma}^2)$. Therefore, when the solar axion flux is fixed to explain the XENON1T excess, the stellar energy loss rate can be reduced to $19 \, \%$ ($40 \, \%$). This alleviates the tension between astrophysics and the XENON1T excess if $m_{A'} \lesssim 3 \, \rm{keV}$. Besides $U(1)_B$, one can also consider $U(1)_{B-L}$, with the enhancement coming from the neutron number $(A-Z)$. \(II) In this second scenario, we consider the possibility that the axion interactions are all assisted by an ultralight dark matter field $\phi$, in which case the bounds can be weakened. The ultralight-dark-matter-assisted interactions are $$\begin{aligned} \mathcal{L} \supset - \frac{\phi}{\Lambda_e} \frac{\partial_\mu a}{2 m_e} \bar{e} \gamma^\mu \gamma_5 e - \frac{1}{4} \frac{\phi}{\Lambda_\gamma^2} a F_{\mu\nu} \tilde{F}^{\mu\nu}.\end{aligned}$$ The ultralight dark matter has a very large occupation number in the solar system because its mass is very small, e.g. $m_\phi =10^{-21}$ eV. Given the relation to the local DM density, $\rho_{\phi} = m_\phi^2 \phi^2/2$, one obtains the classical value of the $\phi$ field, which behaves as a vev wherever there is DM density. Hence the axion-electron and axion-photon couplings are respectively given by $g_{ae} = \langle \phi \rangle /\Lambda_e \propto \sqrt{\rho_\phi}/\Lambda_e$ and $g_{a\gamma} = \langle \phi \rangle /\Lambda_\gamma^2 \propto \sqrt{\rho_\phi}/\Lambda_\gamma^2$.
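The quoted reductions of the stellar loss rate follow from simple bookkeeping. The following sketch (ours; we write $x \equiv \alpha_B g_{a\gamma A'}^2/(\alpha g_{a\gamma}^2)$) fixes the XENON1T rate, which scales as the solar flux times the detection cross-section, to the pure-$g_{a\gamma}$ benchmark and evaluates the residual HB energy loss:

```python
def hb_loss_fraction(flux_factor, det_factor, loss_factor):
    """Residual HB-star energy loss, relative to the pure g_{a gamma}
    benchmark, once the XENON1T rate (flux * cross-section) is held fixed.
    The inputs are the rescaling coefficients quoted in the text, each
    multiplying x = alpha_B g_{a gamma A'}^2 / (alpha g_{a gamma}^2)."""
    # rate ~ (flux_factor * x) * (det_factor * x) = 1  =>  x = 1/sqrt(...)
    x = (flux_factor * det_factor) ** -0.5
    return loss_factor * x

print(hb_loss_fraction(16.9, 400.0, 15.6))  # m_A' = 0.1 keV: ~0.19
print(hb_loss_fraction(4.3, 90.0, 8.0))     # m_A' = 1 keV:   ~0.41
```

The outputs reproduce the $19\%$ and (to rounding) $40\%$ figures quoted in the text.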
While the Solar system resides in the Milky Way (MW) galaxy, which is rich in DM ($\rho_\text{DM}^\text{local}\sim 0.3\,\text{GeV}/\text{cm}^3$, with a DM mass fraction of 84%), globular clusters (GCs) typically have a much lower dark matter mass fraction (e.g. $f_\text{DM}\lesssim 6\%$ for NGC 2419 [@Ibata:2012eq]). Of course, to determine the local DM density around the HB and RGB stars in the 39 GCs that set the $R$-parameter constraint, one needs the DM profile of each GC and the location of the stellar populations within them, which is beyond the scope of this paper. But assuming $\rho_\text{DM} = 0.1\rho_\text{DM}^\text{local}$ around HB stars in GCs implies that the effective couplings $g_{a\gamma}$ and $g_{ae}$ there decrease by a factor of $\sqrt{10}$. Using the two theoretical models of the $R$-parameter described in the Appendix, this in turn relaxes the constraint on $g_{a\gamma}$ (assuming $g_{ae} \lesssim 10^{-13}$) from $g_{a\gamma} < 6.6 \times 10^{-11} {{\ \rm GeV}}^{-1}$ to $g_{a\gamma} < (2-3)\times 10^{-10} {{\ \rm GeV}}^{-1}$ when adopting the suggested averaged $R$ value ($R_\text{av} = 1.39 \pm 0.03$) and He abundance ($Y_\text{He} = 0.254\pm 0.003$) from [@Ayala:2014pea]. The parameter space favored to explain the XENON1T excess remains unchanged.\ ***Conclusions.***— The solar axion is an appealing explanation for the XENON1T excess, with its energy naturally in the keV range. In this letter, we have emphasized the importance of including the inverse Primakoff process, which produces a photon with a similar recoil spectrum, as part of the XENON1T signal. In particular, it can significantly reduce the tension between the solar axion explanation and the astrophysical data, most notably the stellar cooling bound. Introducing additional new physics can further alleviate the remaining tension. We conclude by briefly discussing future prospects. Further sharpening of the stellar cooling bound would certainly help clarify the situation.
If there is indeed additional new physics that helps to relieve the tension with the astrophysical bound, it would be interesting to explore other possible signals of this new physics. For example, a more sensitive search for the $U(1)_B$ gauge boson could shed new light on this scenario. We also note that it is possible to have new physics models in which the photon comes from a completely different source. For example, it can come from a different dark matter scattering process [@Paz:2020pbc] or from the decay of an excited dark matter state [@Bell:2020bes; @1802727]. In these cases, the spectrum of the photon would differ from that of the inverse Primakoff process, and future data can be used to distinguish these scenarios.\ ***Acknowledgements.***— We would like to thank Luca Grandi and Evan Shockley for discussing in detail the response of the XENON detector to electrons and photons, and Fei Gao and Jingqiang Ye for the details of the fit and the analysis. CG is supported by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359, JL acknowledges support by an Oehme Fellowship, LTW is supported by the DOE grant DE-SC0013642, XPW is supported by the DOE grant DE-AC02-06CH11357, WX is supported by the DOE grant DE-SC0010296, and YZ is supported by the Kavli Institute for Cosmological Physics at the University of Chicago through an endowment from the Kavli Foundation and its founder Fred Kavli.\ ***Appendix: Dependence of $R$-parameter constraints on the He abundance.***— The accelerated core-burning of HB and RGB stars due to axion cooling can be compensated by a larger He abundance. This leads to a degeneracy between the He mass fraction, $Y_\text{He}$, and the axion couplings, $g_{ae}$ and $g_{a\gamma}$, when setting constraints with the observed $R$ parameter, and it weakens the coupling constraints when the uncertainty on the He abundance is large.
The determination of $Y_\text{He}$ is particularly challenging for GCs due to the absence of a suitable spectroscopic window for direct measurement and the difficulties in stellar simulation. Given the similar O/H composition of the selected GCs and low-metallicity HII regions, Ref. [@Ayala:2014pea] uses the $Y_\text{He}$ of the latter environment to approximate that of the former and adopts $Y_\text{He} = 0.254\pm 0.003$. Ref. [@Ayala:2014pea] also adopts the He abundance from Big-Bang nucleosynthesis and that of the early Solar system as the lower and upper bounds for $Y_\text{He}$ in GCs, and updates the theoretical predictions of the $R$-parameter by including both the $g_{ae}$ and $g_{a\gamma}$ couplings. The two models (labeled A and B) are given by $$\begin{aligned} R_\text{th}^{A} ={}& 6.26 Y_\text{He} - 0.41 \left(\frac{g_{a\gamma}}{10^{-10} {{\ \rm GeV}}^{-1}}\right)^2 -0.12 \nonumber\\ &- 0.053 \left(\frac{g_{ae}}{10^{-13}}\right)^2 - 1.61 \delta \mathcal{M}_c \ , \label{eq:rth1}\end{aligned}$$ or $$\begin{aligned} R_\text{th}^{B} ={}& 7.33 Y_\text{He} - 0.095 \sqrt{ 21.86+21.08\left(\frac{g_{a\gamma}}{10^{-10} {{\ \rm GeV}}^{-1}}\right)} \nonumber\\ &+0.02 - 0.053\left(\frac{g_{ae}}{10^{-13}}\right)^2 - 1.61 \delta \mathcal{M}_c \ , \label{eq:rth2}\end{aligned}$$ where $$\begin{aligned} \delta \mathcal{M}_c=0.024 \left[\left(\left(\frac{g_{ae}}{10^{-13}}\right)^2+1.23^2\right)^{\frac{1}{2}} -1.23-0.138 \left(\frac{g_{ae}}{10^{-13}}\right)^{\frac{3}{2}}\right].\end{aligned}$$ In Fig. \[fig:Rpara\], we show the resulting 95% C.L. constraints in the $g_{ae}-g_{a\gamma}$ plane with the suggested value $Y_\text{He}=0.254\pm 0.003$ from the low-metallicity regions [@Ayala:2014pea].
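For concreteness, the two models can be evaluated directly. The following sketch (our own implementation of the equations above; couplings are in the units indicated in the comments) reproduces the axion-free predictions that are then compared against the observed $R_\text{av} = 1.39 \pm 0.03$:

```python
import math

def delta_Mc(g_ae):
    """He-core mass shift from axion-electron cooling; g_ae in units of 1e-13."""
    return 0.024 * ((g_ae**2 + 1.23**2) ** 0.5 - 1.23 - 0.138 * g_ae**1.5)

def R_th_A(Y_He, g_ag, g_ae):
    """Model A; g_ag in units of 1e-10 GeV^-1, g_ae in units of 1e-13."""
    return (6.26 * Y_He - 0.41 * g_ag**2 - 0.12
            - 0.053 * g_ae**2 - 1.61 * delta_Mc(g_ae))

def R_th_B(Y_He, g_ag, g_ae):
    """Model B; same units as model A."""
    return (7.33 * Y_He - 0.095 * math.sqrt(21.86 + 21.08 * g_ag)
            + 0.02 - 0.053 * g_ae**2 - 1.61 * delta_Mc(g_ae))

# Axion-free predictions at the adopted Y_He = 0.254:
print(R_th_A(0.254, 0.0, 0.0))  # ~1.47
print(R_th_B(0.254, 0.0, 0.0))  # ~1.44
# Larger couplings lower R_th, so requiring R_th to stay compatible with
# R_av = 1.39 +/- 0.03 excludes the large-coupling region of the plane.
```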
To highlight the consequence of the $Y_\text{He}$ uncertainty, we also set $Y_\text{He}$ of the GCs to the primordial He abundance, $Y_\text{He}=0.245\pm0.003$ [@Tanabashi:2018oca], and to that of the early Solar system [@Serenelli_2010], $Y_\text{He}=0.278\pm 0.006$. Note that by approximating $Y_\text{He}$ by the early Solar system value, we assume that no chemical evolution occurred during the 8 Gyr between the formation of the GCs and that of the Solar system, which is very unlikely. ![95% C.L. excluded parameter space (shaded blue) from the ratio of the number of HB stars to that of RGB stars in GCs. Here we adopt the averaged $R_\text{av} = 1.39 \pm 0.03$ and consider the theoretical models of [Eq. (\[eq:rth1\])]{} (left) and [Eq. (\[eq:rth2\])]{} (right). We approximate the $Y_\text{He}$ value by the primordial He abundance (upper), the low-metallicity regions (middle), and the early Solar system (lower).[]{data-label="fig:Rpara"}](plots/R-param.pdf){width="48.00000%"}
--- abstract: 'Magnetic systems are an exciting realm of study that is being explored on smaller and smaller scales. One extremely interesting magnetic state that has gained momentum in recent years is the skyrmionic state. It is characterized by a vortex in which the edge magnetic moments point opposite to the core. Although skyrmions have many possible realizations, in practice, creating them in a lab is a difficult task to accomplish. In this work, new methods for skyrmion generation and customization are suggested. Skyrmionic behavior was numerically observed in minimally customized simulations of spheres, hemispheres, ellipsoids, and hemi-ellipsoids, for typical Cobalt parameters, in a range from approximately $40 \: nm$ to $120 \: nm$ in diameter, simply by applying a field.' author: - Patrick Johnson - 'A. K. Gangopadhyay' - Ramki Kalyanaraman - Zohar Nussinov bibliography: - 'ScalingDraftBib.bib' title: Demagnetization Borne Microscale Skyrmions --- Introduction {#Introduction} ============ A skyrmion, first theorized by Skyrme in 1962 [@THR1962556], is a state with a vectorial order parameter which is aligned at the system boundary in a direction opposite to that assumed by the order parameter at the origin. Skyrmions may appear in diverse arenas, such as elementary particles [@THR1962556; @Atiyah1989438; @Houghton1998507; @PhysRevLett.79.363; @2002hep.ph....2250W], liquid crystals [@RevModPhys.61.385], Bose-Einstein condensates [@Khawaja2001; @PhysRevA.62.013602; @PhysRevA.68.043602], thin magnetic films [@0022-3727-44-39-392001], quantum Hall systems [@PhysRevB.47.16419; @PhysRevLett.75.2562; @PhysRevLett.75.4290; @PhysRevLett.74.5112], and potentially vortex lattices in type II superconductors [@RevModPhys.76.975; @2011arXiv1108.3562B].
Being able to experimentally observe or generate skyrmions is a current research thrust [@THR1962556; @Atiyah1989438; @Houghton1998507; @PhysRevLett.79.363; @2002hep.ph....2250W; @RevModPhys.61.385; @Khawaja2001; @PhysRevA.62.013602; @PhysRevA.68.043602; @0022-3727-44-39-392001; @PhysRevB.47.16419; @PhysRevLett.75.2562; @PhysRevLett.75.4290; @PhysRevLett.74.5112; @RevModPhys.76.975; @2011arXiv1108.3562B; @PhysRevLett.103.250401; @Schulz2012; @Kirakosyan2006413]. In this work we demonstrate via micromagnetic simulations that achieving a skyrmion is as simple as creating a nanoparticle, of one of many possible geometries, which is large enough to support a single vortex but small enough to prevent multiple vortices. The demagnetization energy allows for the formation of a vortex at zero field. We find that as the field increases in a direction opposite to the core, the magnetization at the edges realigns itself parallel to the field direction more readily than the magnetization next to the core. Immediately prior to the annihilation of the vortex (i.e., the flipping of the magnetization at the system core to become parallel to the applied field direction), the skyrmionic state is most notable. We observed this relatively ubiquitous effect in systems with disparate geometries: spheres, hemispheres, ellipsoids, and hemi-ellipsoids. It may be possible to generalize this process so as to experimentally synthesize a skyrmion lattice by simply creating an array of nanoparticles with tunable size and spacing, such as by self-organization [@krishna:073902; @Krishna2011356]. Preliminary simulations of a two-by-two grid of Cobalt hemispheres of radius $20 \: nm$ with varying inter-hemisphere separation indicate that beyond a threshold distance of twice the radius, an array of skyrmions is formed. As the center to center separation is steadily increased, the skyrmionic state becomes more pronounced.
For small separations, interactions partially thwart the creation of the individual skyrmions. As is well known, we can quantify a skyrmionic state by calculating the Pontryagin index (also known as a winding number), given by [@eduardo1999field] $$\begin{aligned} Q=\frac{1}{8\pi}\int d^{2}x\epsilon_{ij}\hat{M}\cdot(\partial_{i}\hat{M}\times\partial_{j}\hat{M}). \label{PontryaginIndex}\end{aligned}$$ In this expression, $\epsilon_{ij}$ is the two dimensional anti-symmetric tensor and $\hat{M}$ is the normalized magnetization. For a single skyrmion, this winding number (or topological charge) is equal to unity. Skyrmions are characterized by the non-trivial homotopy class $\pi_{2}(S^{2})$, which is labeled by an integer that, in this case, is the Pontryagin index. States with different integer skyrmion numbers (Pontryagin indices) cannot be continuously deformed into one another. In the current context, the skyrmionic state resides on a two dimensional plane. At each spatial point of the plane, there is a three dimensional order parameter which, in our case, is the magnetization $\vec{M}$. Topologically, a skyrmion is a magnetic state that, when mapped onto a sphere (via stereographic projection), resembles a monopole or hairy ball: on mapping from the flat plane to the surface of a sphere, the individual magnetic moments always point perpendicular to the surface of the sphere, much like the field of a magnetic monopole. The above topological classification is valid for an “ideal” skyrmion on an infinite two-dimensional plane or disk with the condition that the local moments $\vec{M}(\vec{r})$ at spatial infinity (irrespective of the direction of $\vec{r}$ in the infinite plane) all orient in the same direction: $\lim_{r\to\infty}\hat{M}({\vec{r}})=\hat{M}_{0}$. In such a case $\hat{M}_{0}$ corresponds to the magnetization at the “point at infinity”.
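The approach of $Q$ to an integer with system size can be made concrete numerically. The following sketch (ours, independent of the micromagnetic runs) evaluates Eq. \[PontryaginIndex\] by finite differences for the standard idealized test profile $\theta(r)=2\arctan(r/\lambda)$ with unit in-plane winding, showing $Q$ approaching unity as the integration domain grows:

```python
import numpy as np

def skyrmion_Q(L=40.0, n=800, lam=1.0):
    """Evaluate Q = (1/8pi) int eps_ij m.(d_i m x d_j m) d^2x by finite
    differences for the test profile theta(r) = 2*arctan(r/lam)."""
    x = np.linspace(-L, L, n)        # even n: the grid avoids r = 0 exactly
    X, Y = np.meshgrid(x, x, indexing="ij")
    r = np.hypot(X, Y)
    theta = 2.0 * np.arctan2(r, lam)     # 0 at the core, -> pi far away
    m = np.stack([np.sin(theta) * X / r,
                  np.sin(theta) * Y / r,
                  np.cos(theta)])
    dxm = np.gradient(m, x, axis=1)
    dym = np.gradient(m, x, axis=2)
    dens = np.einsum("kij,kij->ij", m, np.cross(dxm, dym, axis=0))
    h = x[1] - x[0]
    # eps_ij doubles the (x,y) term, so (1/8pi)*eps_ij(...) = (1/4pi)*dens
    return dens.sum() * h * h / (4.0 * np.pi)

print(abs(skyrmion_Q(L=40.0)))   # close to 1: nearly ideal skyrmion
print(abs(skyrmion_Q(L=4.0)))    # smaller: finite-size deficit
```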
On applying a stereographic projection of the infinite plane onto a unit sphere, $\hat{M}_{0}$ maps onto the magnetization at the north pole of the unit sphere while the oppositely oriented $\hat{M}$ at the origin corresponds to the magnetization at the south pole. In such a case, the winding number is identically equal (in absolute value) to unity. In many physically pertinent geometries, including the systems simulated in this work, finite-size limits only allow the magnetization $\vec{M}$ to exhibit the trend of approaching a uniform value $\vec{M}_{0}$ as one moves away from the center of the system. In this case, the integral in Eq. \[PontryaginIndex\] is not an integer. However, it is clear that, in the limit of infinite planar size, these states would become ideal skyrmions and the winding number $Q$ would approach an integer value. The remainder of this article is organized as follows. In Section \[Theory\], we provide the necessary background. We briefly describe the simulations employed in this work and discuss energetic considerations. Section \[ResultsandDiscussion\] reports on our central result: the numerical observation of skyrmions. We discuss a higher dimensional generalization and the possibility of generating skyrmion lattices. We conclude in Section \[Conclusion\] with a summary of our results. Theory {#Theory} ====== Simulation Theory {#Simulation Theory} ----------------- In this work of simulating magnetic states of nanoparticles, the Object Oriented Micromagnetic Framework (OOMMF) 1.2a distribution, as provided by NIST, was utilized [@Donahue1999].
The OOMMF code numerically solves the Landau-Lifshitz ordinary differential equation, $$\begin{aligned} \frac{d\vec{M}}{dt}=-|\bar{\gamma}|\vec{M}\times\vec{H}_{eff}-\frac{|\bar{\gamma}|\tilde{\alpha}}{M_{s}}\vec{M}\times\left(\vec{M}\times\vec{H}_{eff}\right)\end{aligned}$$ where $\vec{M}$ is the magnetization, $\bar{\gamma}$ is the Landau-Lifshitz gyromagnetic ratio, $M_{s}$ is the saturation magnetization, $\tilde{\alpha}$ is the damping coefficient, and $\vec{H}_{eff}$ is the effective field given by derivatives of the Gibbs free energy. The Gibbs free energy, in this case, is given by [@Brown1978], $$\begin{aligned} G=\int(\frac{1}{2}C\left[\left(\vec{\nabla}\alpha\right)^{2}+\left(\vec{\nabla}\beta\right)^{2}+\left(\vec{\nabla}\gamma\right)^{2}\right]+w_{a}\nonumber \\ -\frac{1}{2}\vec{M}\cdot\vec{H}'-\vec{M}\cdot\vec{H}_{0})d\tau\end{aligned}$$ where $\alpha$, $\beta$, and $\gamma$ are the directional cosines, $C$ is proportional to the exchange stiffness constant and depends on the crystal structure, $w_{a}$ is the crystalline anisotropy term, $\vec{H}'$ is the demagnetization field, and $\vec{H}_{0}$ is the external magnetic field. The crystalline anisotropy term can be expressed in terms of the anisotropy constants, $K_{1}$ and $K_{2}$, and the directional cosines as $$\begin{aligned} w_{a}=K_{1}\left(\alpha^{2}\beta^{2}+\beta^{2}\gamma^{2}+\gamma^{2}\alpha^{2}\right)+K_{2}\alpha^{2}\beta^{2}\gamma^{2}.\end{aligned}$$ In the simulations, a metastable state was determined to have been reached when the maximum torque experienced by any one magnetic moment, measured in $\frac{degrees}{ns}$, dropped below $0.2$. Once this level of torque was reached, the magnetic state data were saved to a file along with the other properties of the system, including but not limited to, the energies associated with each contribution, the overall magnetization, and the number of iterations.
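The damped relaxation that this stopping criterion monitors can be illustrated with a minimal single-macrospin integration of the Landau-Lifshitz equation above (our sketch; OOMMF itself solves the full spatially resolved problem, and the field value here is an arbitrary illustrative choice):

```python
import numpy as np

# Single-macrospin sketch: for the unit vector m = M/Ms the equation reads
#   dm/dt = -gamma m x H_eff - gamma*alpha m x (m x H_eff),
# since the 1/Ms in the damping term is absorbed by normalizing M.
gamma = 2.21e5             # Landau-Lifshitz gyromagnetic ratio, m/(A s)
alpha = 0.5                # damping constant
H = np.array([0.0, 0.0, 4.0e5])   # static effective field along z, A/m

m = np.array([1.0, 0.0, 0.1])
m /= np.linalg.norm(m)
dt = 1.0e-13               # s; small compared with the precession period
for _ in range(20000):     # 2 ns of simulated time
    mxH = np.cross(m, H)
    dm = -gamma * mxH - gamma * alpha * np.cross(m, mxH)
    m = m + dt * dm
    m /= np.linalg.norm(m) # renormalize: |M| = Ms is conserved by the ODE
print(m)                   # relaxed to ~[0, 0, 1], i.e. parallel to H_eff
```

The moment spirals down toward the field direction, which is exactly the relaxation that the maximum-torque criterion detects.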
The magnetic field was then changed to the next value and the iterations continued until saturation of the magnetization was obtained. The magnetic field steps were chosen such that half the steps (typically, a few hundred) were during the increasing field portion and the other half during the decreasing field portion. The data stored in the file were used later to generate the hysteresis plots, track the energy changes associated with the field variations, and visualize the spatial orientations of the magnetic moments. Unless specified otherwise, the parameters chosen in the simulations correspond to those for Cobalt, as shown in Table \[CobaltParameters\].

  parameter                                             value used in this work
  ----------------------------------------------------- ---------------------------------------
  Exchange Stiffness Constant ($A$)                     $2.5\times10^{-11} \frac{J}{m}$
  Saturation Magnetization ($M_{s}$)                    $1.4\times10^{6} \frac{A}{m}$
  Crystalline Anisotropy Constant ($K_{1}$)             $5.20\times10^{5} \frac{J}{m^{3}}$
  Damping Constant ($\tilde{\alpha}$)                   $0.5$
  Landau-Lifshitz Gyromagnetic Ratio ($\bar{\gamma}$)   $2.21\times10^{5} \frac{m}{A\cdot s}$
  Stopping Torque ($\frac{dm}{dt}$)                     $0.19 \frac{deg}{ns}$

  : Table of parameters used in the simulations of particles in this work. The exchange stiffness constant, saturation magnetization, and crystalline anisotropy constant are material specific and are chosen for Cobalt. The damping constant, Landau-Lifshitz gyromagnetic ratio, and stopping torque are material independent parameters.[]{data-label="CobaltParameters"}

Energy Considerations {#Energy Considerations}
---------------------

In our simulations, we considered field, demagnetization, and exchange energies. For simplicity, we neglected crystalline anisotropy effects. The field tries to align the local magnetic moments parallel to it while exchange effects favor an alignment of the magnetic moments with their nearest neighbors.
The (universally geometry borne) demagnetization energy directly relates to dipole-dipole interactions [@Brown1978]. The demagnetization energy is often the dominant term for long range behaviors while exchange effects tend to dominate at short spatial scales. As is well known, the competition between the long range and the short range energy contributions leads to the creation of domain walls: demagnetization favors oppositely oriented moments at the expense of exchange effects that favor slow variations amongst neighbors, and this tradeoff ultimately gives rise to domain walls in micromagnetic systems. The potential energy from demagnetization of a system is given by $$\begin{aligned} \mathcal{E}_{M}=-\frac{1}{2}\sum_{i}\vec{m}_{i}\cdot\vec{h}'_{i},\end{aligned}$$ where $\vec{h}'_{i}$ is the effective field at position $i$ that originates from all other dipoles. This field can be written as $$\begin{aligned} \vec{h}'_{i}=\vec{H}'+\frac{4}{3}\pi\vec{M}+\vec{h}''_{i},\end{aligned}$$ where $\vec{H}'$ is the macroscopic field from the poles due to $\vec{M}$ outside of a physically small sphere around site $i$. The second term subtracts the effective field inside an arbitrarily small region (or sphere) centered about point $i$, and $\vec{h}''_{i}$ is the field at site $i$ created by dipoles inside this region. In general, $\vec{h}''_{i}$ depends on the crystal lattice structure. In the continuum limit, the sum becomes an integral of the form $$\begin{aligned} \mathcal{E}_{M}=-\frac{1}{2}\int\vec{M}\cdot(\vec{H}'+\frac{4}{3}\pi\vec{M}+\Lambda\cdot\vec{M})dV.\end{aligned}$$ The tensor $\Lambda$ in the third term depends only on the crystal structure and local magnetization and can be grouped with the crystalline anisotropy. The second term in this expression is a constant proportional to $M_{s}^{2}$ and can be ignored.
The $\Lambda$ tensor also vanishes identically for cubic crystals, leaving $$\begin{aligned} \mathcal{E}_{M}=-\frac{1}{2}\int\vec{M}\cdot\vec{H}'dV \label{EnergyEquation}.\end{aligned}$$ The demagnetization field, $\vec{H}'$, can equivalently be derived from Maxwell’s equations. It can be expressed as the negative gradient of a potential $U$ that satisfies the equations $$\begin{aligned} \nabla^{2}U_{in}=\gamma_{B}\vec{\nabla}\cdot\vec{M}\\ \nabla^{2}U_{out}=0,\end{aligned}$$ with the surface boundary conditions $$\begin{aligned} U_{in}=U_{out}\\ \frac{\partial U_{in}}{\partial n}-\frac{\partial U_{out}}{\partial n}=\gamma_{B}\vec{M}\cdot\vec{n},\end{aligned}$$ where the constant $\gamma_{B}$ is, in our units, $4 \pi$. Lastly, the potential needs to be regular at infinity, such that $|rU|$ and $|r^{2}\vec{\nabla}U|$ are bounded as $r\rightarrow\infty$. Our simulations directly capture the demagnetization field effects. From the standpoint of energy, for a skyrmion to be possible, the dimensions of the particle must be larger than the critical dimensions at which vortices can nucleate in a given system. For example, for the hemispherical geometry, with the typical values of Table \[CobaltParameters\], the critical radius was found to be $19 \: nm$. For larger radii, vortices are the preferred state before the field reaches zero. The vortex nucleates such that the core is parallel to the field and the remainder of the vortex lies in the plane perpendicular to the field. Once the field begins to oppose the direction of the moments at the core, the energy cost of eliminating the core is significantly higher than that of allowing the outer magnetic moments to align more with the field. When the exchange energy cost of the skyrmionic state becomes greater than the demagnetization energy for a uniform magnetization, the core flips, annihilating the skyrmion, and the magnetization saturates. Immediately prior to this, though, a skyrmionic state can be achieved.
Ezawa [@PhysRevLett.105.197202] raised the possibility of a skyrmionic state in thin films via the computation of the energy of assumed variational states within the field theoretic framework of a non-linear sigma model. Dipole-dipole interactions may stabilize such a state below a critical field. Our exact numerical calculations for the evolution of the magnetic states demonstrate that skyrmionic states are not only viable structures, but are actually the lowest energy states for slices of hemispheres and other general structures. Results and Discussion {#ResultsandDiscussion} ====================== Observation of a Skyrmion {#Observation of a Skyrmion} ------------------------- As our numerical simulations vividly illustrate, just prior to the annihilation of the vortex, the magnetic moments at the edge of the system start to orient themselves in a direction opposite to that in the core. On increasing the radius of the simulated hemispheres and spheres, the configurations next to the basal plane better conformed to the full skyrmion topology (i.e., that on an infinite plane). It should be noted here that as the radius of a hemisphere increases, the crossover to a double vortex state will eventually occur, but if one vortex is maintained, in the limit of large radii, a full skyrmion would be expected. This may be possible in materials with a large exchange constant and small saturation magnetization. In what follows, we will employ the typical values appearing in Table \[CobaltParameters\]. The skyrmion state for the bottom layer (basal plane) of a hemisphere of radius $24 \: nm$ is shown in Figure \[Hemisphere24nmSkyrmionBottomLayerNoAxes\]. ![Vector plot of the skyrmion state for the bottom slice of a hemisphere of radius $24 \: nm$.
Not all local magnetic moments are shown for the sake of clarity.[]{data-label="Hemisphere24nmSkyrmionBottomLayerNoAxes"}](Hemisphere24nmSkyrmionBottomLayerNoAxes) A similar configuration was observed in simulation runs for nanospheres. For a sphere, symmetry does not favor any particular direction, but that symmetry is broken once a field is applied. Skyrmions were observed in runs of spheres large enough to support a vortex, which corresponds to a radius of $\approx15 \: nm$. As the radius of the sphere increases, the edge magnetic moments and the core magnetic moments become more antiparallel. A skyrmion in a sphere of radius $59 \: nm$ is shown in Figure \[SphereSkyrmion\]. ![Vector plot of the skyrmion state in a sphere of radius $59 \: nm$. The slice is along the equator of the sphere. Only a subset of local magnetic moments is shown for clarity.[]{data-label="SphereSkyrmion"}](SphereSkyrmion) Once skyrmions were observed in these systems, it raised the question, “Do these occur in ellipsoids and hemi-ellipsoids?" Upon examining this, indeed skyrmions can be observed in oblate ellipsoids and hemi-ellipsoids, as shown in Figures \[EllipsoidSkyrmion\] and \[HemiellipsoidSkyrmion\]. ![Vector plot of the skyrmion state in an ellipsoid with major axis of $20 \: nm$ and minor axis of $15 \: nm$. The slice is along the equator of the ellipsoid. Only a subset of local magnetic moments is shown for clarity.[]{data-label="EllipsoidSkyrmion"}](Ellipsoid20nm15nmSkyrmionField122) ![Vector plot of the skyrmion state in a hemi-ellipsoid with major axis of $20 \: nm$ and minor axis of $15 \: nm$. The slice is along the base of the hemi-ellipsoid.
Only a subset of local magnetic moments is shown for clarity.[]{data-label="HemiellipsoidSkyrmion"}](Hemiellipsoid20nm15nmSkyrmion) To verify that these structures approach those of skyrmions and to quantitatively monitor their deviations from an ideal skyrmionic state (for which the Pontryagin index is unity), we computed the Pontryagin index at different cross sections of the hemisphere. These cross sections were those of the hemisphere with planes parallel to the basal plane (i.e., the plane at the base of the hemisphere). For a hemisphere with radius $30 \: nm$, we calculated the skyrmion number Q for thirty individual parallel layers vertically separated by $1 \: nm$. We numerically evaluated the integral of Eq. \[PontryaginIndex\] for all of these layers and examined how it changes as the field increases from $0$ to $0.8\: T$. These data are shown in Figure \[PontryaginPlot\]. ![Plot of the Pontryagin index versus the z-coordinate of the slice taken from the hemisphere of radius $30 \: nm$. These are shown for increasing field from zero field (dark blue dot-dash line), $0.2 \: T$ (green dotted line), $0.4 \: T$ (red dashed line), and $0.6 \: T$ (teal solid line).[]{data-label="PontryaginPlot"}](PontryaginPlot) Visualizing this in the geometry of the hemisphere specifically, one can look at how the Pontryagin index varies along various planes of the hemisphere, starting from the equator and moving to the pole. It can be clearly seen that the skyrmionic behavior exists for most of the height of the hemisphere and only the cap deviates from the rest of the system. The size of this cap depends on the given field strength, as can be seen in the case of zero field (Figure \[PontryaginHemisphere50\]) and with a field of $0.6 \: T$ (Figure \[PontryaginHemisphere80\]). At higher fields, prior to the annihilation of the vortex, the Pontryagin index approaches an integer value, as expected for an ideal skyrmionic state.
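The field dependence of the layer-resolved index has a simple analytic counterpart for an axisymmetric texture with unit in-plane winding and $\theta = 0$ at the core (our sketch; the closed form follows from integrating Eq. \[PontryaginIndex\] for such a profile): $Q$ depends only on the tilt $\theta_\text{edge}$ of the outermost moments, so a plain vortex carries half a unit of charge and $Q \to 1$ only once the edge moments become antiparallel to the core:

```python
from math import cos, pi

def Q_axisymmetric(theta_edge):
    """Q for a winding-1 axisymmetric texture with theta(0) = 0 at the core:
    integrating the Pontryagin density gives Q = (1 - cos(theta_edge)) / 2."""
    return 0.5 * (1.0 - cos(theta_edge))

print(Q_axisymmetric(pi / 2))    # 0.5: a plain vortex (in-plane edge moments)
print(Q_axisymmetric(0.9 * pi))  # ~0.98: edge moments nearly reversed
print(Q_axisymmetric(pi))        # 1.0: ideal skyrmion limit
```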
Performing a similar analysis on the hemi-ellipsoids and visualizing the Pontryagin index and its variation with height, it can be seen that the same behavior exists, in a less extreme way than in the hemispheres. This behavior can be seen in Figure \[PontryaginHemiellipsoid\] for hemi-ellipsoids of fixed $30 \: nm$ major axis and varying minor axis. In examining the hysteresis behavior of the hemi-ellipsoids, one can see a trend as the z-dimension goes from the hemisphere radius ($20 \: nm$) to the minimum simulated size of $5 \: nm$: extensive vortex and skyrmionic behavior in the more hemispheric geometries, and less vortex and skyrmionic behavior in the more ellipsoidal geometries. Although it will not be considered in this work, crystalline anisotropy could influence the formation of skyrmions in a number of ways. In the case of a single crystal, the vortex state would be more difficult to nucleate and thus the skyrmionic state is less energetically favorable. When many crystalline grains are present, the results discussed here remain valid, as the large number of randomly oriented crystallites will, on average, not favor any one direction. Generalization to a Hedgehog {#Generalization to a Hedgehog} ---------------------------- These results lead to the question of whether this can be generalized to more than two dimensions. The natural generalization from the two-dimensional skyrmion to a three-dimensional magnetic state would be the hedgehog. The hedgehog resides in three spatial dimensions coupled with a three dimensional order parameter. The canonical example of a hedgehog is $\vec{M}=M_{s}\hat{r}$, where the magnetization always points outwards.
A skyrmion is related to a hedgehog via a stereographic projection from the sphere onto a plane, where the south pole of the hedgehog projects to the core of the skyrmion on the plane and the north pole of the hedgehog projects to the point at infinity on the plane. Calculating the demagnetizing field for this state in a sphere gives rise to a potential and field equal to $$\begin{aligned} U(r)=\gamma_{B}M_{s}(r-R),\\ \vec{H}=-\gamma_{B}M_{s}\hat{r}.\end{aligned}$$ Plugging this into Equation \[EnergyEquation\], one finds the energy of the hedgehog to be $2\pi M_{s}^{2}(4\pi/3)R^{3}$. Comparing this to the energy of the uniformly magnetized state, $(1/2)(4\pi/3)^{2}M_{s}^{2}R^{3}$, it can easily be seen that the hedgehog has three times the energy of the uniform state. Since, in addition, the exchange energy and the field energy favor the uniform state, the hedgehog state will not be possible in a sphere. If one were to continuously deform the hedgehog by rotating the local magnetic moments by $\pi/2$ such that $\vec{M}=M_{s}f(z)\hat{\phi}$, where $f(z)$ is a function that goes to 0 as $z\rightarrow0$ such that the exchange energy does not diverge, one would find the demagnetization energy of that state to be identically 0. The field energy in this system is also 0 for a field applied along the z-axis. The exchange energy is given by $(4\pi/3)RC$, where $C$ is the exchange stiffness constant. The total energy of this state equals the exchange energy, and comparing this to the uniform state, a hedgehog of this form is favorable for $$\begin{aligned} R\ge\sqrt{\frac{C}{\frac{2\pi\mu_{0}M^{2}}{3}-MH_{0}}}.\end{aligned}$$ For $C=2.5\times10^{-11}J/m$ and $M_{s}=1.4\times10^{6}A/m$, as for Cobalt, at zero field this radius works out to be $\approx 3.5 \: \mu m$. Skyrmion Array {#Skyrmion Array} -------------- It is illuminating to consider the possibility of an array of skyrmions.
As briefly discussed below, we find that effective particle interactions may thwart the creation of a skyrmion lattice when the particles are not far separated. However, for sufficiently large center-to-center separations, a skyrmion lattice may be achieved. In preliminary simulations of nanoparticle arrays, a two-by-two grid of hemispheres of radius $20 \: nm$ with a variable separation shows that a center-to-center separation of four times the radius is close enough that the nanoparticles still interact magnetically and prevent the formation of an array of skyrmions. As expected, further separation should approach the single-particle skyrmion result, as we briefly discuss next. The transition from the array of particles which support individual vortices to the array of particles that are clearly interacting with each other can be seen in Figure \[4ArrayFields94And95\]. In this figure, the annihilation of the vortices can be seen as the particles realign their magnetization to form a state where the local magnetization orients in the counterclockwise direction from particle to particle, yet within each particle, when moving in the counterclockwise direction, the local magnetization changes from oriented in the negative z-direction to the positive z-direction. In repeating these simulations for a 3x3 array of hemispherical nanoparticles, the same behavior was observed. This array was similar to the 2x2 array in that it had nanoparticles with diameters of $40 \: nm$ and center-to-center separation of $80 \: nm$. The annihilation of the vortices occurred at a slightly smaller field (0.08 T rather than 0.1 T), as shown in Fig. \[9ArrayFields95And96\]. Conclusion {#Conclusion} ========== We conclude with a brief synopsis of our findings. We carried out a systematic numerical study of the magnetization of small nanoparticles in the presence of an external magnetic field.
These systems were simulated for different sizes and geometries (spheres, hemispheres, ellipsoids). Our analysis ignored anisotropy (crystalline, shape, strain, etc.) effects. We find that, as has been widely reported in the literature [@Shinjo11082000; @hubert1998magnetic], beyond a critical diameter the particles enter into a single vortex state under zero external field; multiple vortices are possible for much larger particles. Our key new result concerns [*the creation of skyrmions in the single vortex state*]{}. As the field is increased, vortex annihilation is accompanied by the formation of a skyrmionic state wherein the magnetization of the vortex core points in a direction opposite to that at the edge of the nanoparticle. Our results illustrate how geometry plays a pivotal role: spheres and hemispheres achieve skyrmionic states more readily than higher-eccentricity ellipsoids. Our preliminary results suggest that for center-to-center separations larger than twice the particle diameters, an array of skyrmions may be realized. More detailed studies of skyrmion lattices for such particle arrays are planned for the future. [**Acknowledgements.**]{} Work at Washington University was partially supported by NSF grants DMR-1106293 and DMR-0856707, and by the Center for Materials Innovation (CMI) of Washington University. Work at the University of Tennessee was partially supported by NSF DMR-0856707.
--- abstract: | A fluid flow in a multiply connected domain generated by an arbitrary number of point vortices is considered. A stream function for this flow is constructed as a limit of a certain functional sequence using the method of images. The convergence of this sequence is discussed, and the speed of convergence is determined explicitly. The presented formulas allow for the easy computation of the values of the stream function with arbitrary precision in the case of well-separated cylinders. The considered problem is important for applications such as eddy flows in the oceans. Moreover, since finding the stream function of the flow is essentially identical to finding the modified Green’s function for Laplace’s equation, the presented method can be applied to a more general class of applied problems which involve solving the Dirichlet problem for Laplace’s equation. Keywords: Vortex flow, multiply connected domain, vortex dynamics, stream function, complex potential. AMS MSC: 76B47, 76M40, 76M23. author: - | Anna Zemlyanova, Ian Manly, and Demond Handley\ Department of Mathematics, Kansas State University,\ 138 Cardwell Hall, Manhattan KS 66506\ Tel.: +1-785-532-6750, Fax: +1-785-532-0546 title: Vortex generated fluid flows in multiply connected domains --- Introduction ============ A problem of fluid motion in the presence of vortices has important applications in geophysics, namely in the study of eddy flows in oceans. Ocean vortices may propagate large distances and are likely to encounter geographic obstacles such as islands, ocean ridges, and coastal lines. Vortex flows can be important vehicles for mass, momentum, heat, and salinity transfer in the oceans. Thus, the study of the vortex flows in multiply connected domains is important for accurate modeling and prediction of ocean flows. Motion of vortices in simply connected flow domains is relatively well-studied. 
The stream functions for these flows can be obtained by using the celebrated method of images [@MilneThomson1968] in combination with an appropriate conformal mapping. The simplest example of the application of the method of images is the study of a single vortex flow around one cylinder or an infinite straight wall. The resulting flow can be obtained by placing an image vortex with the opposite circulation at the symmetric point with respect to the cylinder or the straight wall. Reviews of recent results on vortex flows in simply connected domains are available in [@Arefetal2002; @Newton2002]. The scientific literature on vortex flows in multiply connected domains is considerably more limited. Notable is the work by Johnson and McDonald [@JohnsonMcDonald2004], which is dedicated to vortex flows in doubly connected domains. The solution is obtained by first conformally mapping the flow domain onto an annulus, then onto a periodically repeated rectangle in the complex plane, and exploiting the properties of elliptic theta functions. Vortex motion near walls with gaps is considered in [@JohnsonMcDonald2004b; @JohnsonMcDonald2005]. Again, only simply and doubly connected domains are considered. Vortex flows in domains of arbitrary connectivity have been studied by Crowdy and Marshall in a series of papers [@CrowdyMarshall2005; @CrowdyMarshall2005b; @CrowdyMarshall2006]. The solutions in these papers have been obtained for multiply connected circular and slit domains in terms of the transcendental Schottky-Klein prime function [@Baker1995]. The numerical computation of the Schottky-Klein prime function is based upon computing an infinite product which does not converge in all cases. The convergence and its speed depend on the well-separatedness of the cylinders.
Alternatively, the Schottky-Klein prime function can be computed by using power series approximations centered at the centers of the cylinders [@CrowdyMarshall2007b], in a way similar to the computation of the first-type Green’s function for Laplace’s equation in circular domains [@Trefethen2005]. In the present paper, a fluid flow generated by an arbitrary number of vortices around an arbitrary number of cylinders with specified circulation around each cylinder is studied. The stream function of the flow is obtained by the application of the method of images. To the authors’ knowledge, the construction presented in this paper has not been attempted before. The main difficulty with applying the method of images to multiply connected flow domains lies in the fact that the set of image vortices becomes infinite. This problem has been successfully overcome in the present paper. The stream function of the flow is obtained as the limit of a certain functional sequence. The condition under which this sequence converges is investigated; it depends on the mutual location of and distance between the cylinders (the so-called well-separatedness of the cylinders). The speed of the convergence is investigated as well. In particular, it is established that the functional sequence converges with the speed of a geometric series. The presented solution is easy to implement numerically, and the results have been compared, for the example of doubly connected domains, to those obtained by using the method of elliptic functions in [@JohnsonMcDonald2004]. Finally, it is necessary to note that finding the stream function for the vortex flow in question is essentially equivalent to finding the modified Green’s function for a multiply connected flow domain [@CrowdyMarshall2007a]. Hence, the presented technique can be applied to a much broader range of problems which can be reduced to solving the Dirichlet problem for Laplace’s equation in multiply connected domains.
In particular, the applications of this method include such areas as electrostatics, potential theory, gravitation, numerical analysis, and approximation theory. Some alternative methods of construction of the Green’s function, using the theory of functional equations or the Schwarz-Christoffel mappings, are presented in [@EmbreeTrefethen1999; @MityushevRogosin2000]. A vortex flow in a multiply connected domain ============================================ ![Vortex flow around $K$ islands in an unbounded domain.[]{data-label="fig1"}](kislands.ps){width="50.00000%"} Consider a flow of an ideal fluid in an unbounded region $\tilde{D}$ exterior to $K$ islands $L_k$ of arbitrary smooth shape (fig. \[fig1\]). The fluid flow in the domain $\tilde{D}$ is generated by $N$ point vortices located at the points $z_j$ with circulations $\Gamma_j$, $j=1,\ldots, N$. It is well known that a steady irrotational flow in the two-dimensional domain $\tilde{D}$ can be described by a complex potential $w(z)=\varphi(x,y)+i\psi(x,y)$ which is an analytic function in $\tilde{D}$ except at the points $z_j$, where the complex potential $w(z)$ has singularities of the logarithmic type: $$w(z)=\frac{\Gamma_j}{2\pi i}\log (z-z_j)+\tilde{w}_j(z), \label{2_1}$$ where $\tilde{w}_j(z)$ is an analytic function in the neighborhood of the point $z_j$, $j=1,2,\ldots, N$. The real part $\varphi(x,y)$ and the imaginary part $\psi(x,y)$ of the complex potential $w(z)$ are called, respectively, the velocity potential and the stream function of the flow. Both the velocity potential and the stream function are harmonic functions in $\tilde{D}\setminus \cup_{j=1}^N \{z_j\}$. Additionally, on the solid boundaries of the domain $\tilde{D}$ the stream function has to assume constant values: $${\mathop{\rm Im}\nolimits}w(z)=\mbox{Const},\,\,\, z\in L_j. \label{2_2}$$ Physically, the last condition means that the solid boundaries are streamlines of the flow.
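As a concrete check of the singularity structure (\[2\_1\]), the circulation $\oint {\bf u}\cdot d{\bf s}$ around a closed contour can be recovered as the real part of the contour integral of $dw/dz$. The sketch below is our own illustration; the function names are ours, not from the paper.

```python
import cmath
import math

def vortex_velocity(z, z0, gamma):
    # Complex velocity u - i*v = dw/dz for w(z) = (gamma / (2*pi*i)) * log(z - z0).
    return gamma / (2j * math.pi * (z - z0))

def circulation(z0, gamma, center, radius, n=1000):
    # Midpoint rule for the contour integral of (dw/dz) dz around a circle;
    # its real part equals the circulation along the contour.
    total = 0.0 + 0.0j
    for k in range(n):
        t0 = 2.0 * math.pi * k / n
        t1 = 2.0 * math.pi * (k + 1) / n
        zm = center + radius * cmath.exp(1j * 0.5 * (t0 + t1))
        dz = radius * (cmath.exp(1j * t1) - cmath.exp(1j * t0))
        total += vortex_velocity(zm, z0, gamma) * dz
    return total.real
```

Integrating around a circle that encloses the vortex returns $\Gamma_j$; around a circle that does not, it returns zero.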
Vortex trajectories in the region $\tilde{D}$ can be obtained by using the Kirchhoff-Routh function $H(x_1,y_1,\ldots,x_N,y_N)$. If $N$ vortices with circulations $\Gamma_j$, $j=1,\ldots,N$, are present in an incompressible fluid at the locations $(x_j(t),y_j(t))$ which depend on time $t$, then the trajectories of the vortices can be found from the following Hamiltonian equations [@Newton2002]: $$\Gamma_j\frac{dx_j}{dt}=\frac{\partial H}{\partial y_j},\,\,\, \Gamma_j\frac{dy_j}{dt}=-\frac{\partial H}{\partial x_j}.$$ The existence and the uniqueness of the Kirchhoff-Routh function $H(x_1,y_1,\ldots,x_N,y_N)$ has been established in [@Lin1941a]. The relationship between the Kirchhoff-Routh function, the first-type Green’s function and the complex potential of the flow has been described in detail in [@CrowdyMarshall2005; @Lin1941a; @Lin1941b; @Newton2002]. It should be noted that the complex potential and the stream function of the flow generated by several vortices in $\tilde{D}$ can be obtained by superposition of the flows generated by a single vortex in $\tilde{D}$. Thus, it is sufficient to consider the flow in the domain $\tilde{D}$ generated by only one vortex at the point $z_0$ with the unit circulation around this vortex, $\Gamma_0=1$. In this case the stream function $\psi(z)$ of the flow with zero circulations around each of the cylinders $L_j$ coincides with the modified Green’s function [@CrowdyMarshall2005]. Observe that the shape of the islands $L_k$ can be restricted to circular without loss of generality. 
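For vortices in an unbounded plane with no islands, the Hamiltonian system above reduces to each vortex being advected by the velocity induced by the others, and it can be integrated directly. The sketch below is our own illustration, using a standard fourth-order Runge-Kutta step; it is not the method developed in this paper.

```python
import math

def induced_velocities(pos, gammas):
    # Velocity (u + i*v) of each vortex due to all the others in the
    # unbounded plane: conjugate of sum_k Gamma_k / (2*pi*i*(z_j - z_k)).
    vel = []
    for j, zj in enumerate(pos):
        w = 0.0 + 0.0j
        for k, zk in enumerate(pos):
            if k != j:
                w += gammas[k] / (2j * math.pi * (zj - zk))
        vel.append(w.conjugate())
    return vel

def rk4_step(pos, gammas, dt):
    # One classical fourth-order Runge-Kutta step for the vortex positions.
    def f(p):
        return induced_velocities(p, gammas)
    k1 = f(pos)
    k2 = f([p + 0.5 * dt * v for p, v in zip(pos, k1)])
    k3 = f([p + 0.5 * dt * v for p, v in zip(pos, k2)])
    k4 = f([p + dt * v for p, v in zip(pos, k3)])
    return [p + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for p, a, b, c, d in zip(pos, k1, k2, k3, k4)]
```

For two equal vortices the pair rotates about its midpoint, and both the midpoint and the separation are invariants of the motion, which gives a convenient accuracy check.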
By the generalization of the Riemann mapping theorem to multiply connected domains [@Nehari1952; @Goluzin1969], there is a unique conformal mapping $f(z)$ of the $K$-connected domain $\tilde{D}$ onto some domain $D$ which is the exterior of $K$ circles in the extended complex plane $\overline{\mathbb{C}}$ (such a domain $D$ will be called from now on a circular domain) with the following expansion at infinity: $$f(z)=z+O\left(1/z\right).$$ The circular domain $D$ is completely determined by the initial domain $\tilde{D}$ and the condition at infinity, and cannot be chosen arbitrarily. Since the conformal mapping $\omega=f(z)$ preserves the properties of the complex potential (\[2\_1\]), (\[2\_2\]), it can be assumed from now on that the flow of liquid is observed in the circular domain $D$. The corresponding flow in the original domain $\tilde{D}$ can then be found by taking the composition of the complex potential of the flow in $D$ with the conformal mapping $\omega=f(z)$ from the original domain $\tilde{D}$ onto the circular domain $D$. While finding an exact analytic expression for the conformal mapping $\omega=f(z)$ is not feasible in most cases, very efficient numerical algorithms have been developed which allow this mapping to be found approximately [@Delilloetal1999; @Henrici1986]. ![A single vortex in a $K$-connected circular domain.[]{data-label="fig2"}](kcircles.ps){width="50.00000%"} From now on, consider the flow in the circular domain $D$ generated by a single vortex with a unit circulation $\Gamma_0=1$ located at the given point $z_0$ of the domain $D$ (fig. \[fig2\]). Denote the stream function for this flow as $\psi^s(z,z_0)$. Then a flow in the circular domain $D$ generated by $N$ vortices located at the points $z_j$, $j=1,2,\ldots,N$, with circulations $\Gamma_j$, can be obtained by the superposition of the individual vortex flows for each of the points $z_j$: $$\psi(z)=\sum_{j=1}^N \Gamma_j \psi^s(z,z_j).
\label{2_3}$$ Method of images ================ The stream function for the vortex flow shown in fig. \[fig2\] will be derived here by the application of the method of images. The main idea behind the method of images is to replace the original flow in the domain $D$ with impenetrable walls by a flow in the extended complex plane $\overline{\mathbb{C}}$ with additional “image" vortices placed at specially selected points of $\overline{\mathbb{C}}$ in such a way that the impenetrable walls of the original flow domain become streamlines of the flow. ![Method of images for one cylinder.[]{data-label="fig3"}](onecyl.ps){width="30.00000%"} The method of images can be easily illustrated with the simple example of a vortex flow around one cylinder (fig. \[fig3\]). Consider one vortex with a unit circulation $\Gamma_0=1$ located at the point $z_0$ of the complex plane, and one cylinder with impenetrable walls with center $c_1$ and radius $R_1$. To build a stream function for the flow around the cylinder, an image vortex needs to be placed at the point $z_0^*$ obtained by applying to $z_0$ the inversion map with respect to the circle $L_1:\, |z-c_1|=R_1$: $$T_1(z)=c_1+\frac{R_1^2}{\bar{z}-\bar{c_1}}.$$ Observe that $z_0^*=T_1(z_0)$ and $T_1(z_0^*)=z_0$. The circulation of the image vortex is taken to be $-\Gamma_0$, opposite to that of the original vortex. Since the construction obtained in this way is symmetric with respect to the circle $L_1$, the circle $L_1$ becomes a streamline of the flow. This fact can be easily verified algebraically. The resulting stream function for the vortex flow around one cylinder has the form: $$\psi^s(z,z_0)=-\frac{1}{2\pi}\log |z-z_0|+\frac{1}{2\pi}\log|z-z_0^*|.$$ The method of images in combination with a conformal mapping is easy to apply for flows in simply connected domains in $\overline{\mathbb{C}}$.
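The one-cylinder construction is easy to verify numerically: on the circle $L_1$ the inversion identity $|z-z_0^*|=(R_1/|z_0-c_1|)\,|z-z_0|$ holds, so the stream function above is constant there, equal to $-\frac{1}{2\pi}\log\left(|z_0-c_1|/R_1\right)$. A minimal sketch (our own illustration):

```python
import math

def invert(z, c, R):
    # Inversion (symmetry) map with respect to the circle |z - c| = R.
    return c + R * R / (z - c).conjugate()

def psi_one_cylinder(z, z0, c1, R1):
    # Stream function of a unit vortex at z0 outside the cylinder |z - c1| = R1,
    # built from the opposite-circulation image vortex at T1(z0).
    z_img = invert(z0, c1, R1)
    return (-math.log(abs(z - z0)) + math.log(abs(z - z_img))) / (2.0 * math.pi)
```

Sampling the boundary circle at many points confirms that all values agree to machine precision.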
Using the method of images becomes more complicated in the case of multiple boundaries due to the fact that the image vortices, in general, constitute an infinite set. Consider, in particular, the vortex flow depicted in fig. \[fig2\]. In this case there are $K$ inversion maps with respect to the $K$ circles $L_j$: $$T_j(z)=c_j+\frac{R_j^2}{\bar{z}-\bar{c}_j},\,\,\,j=1,\ldots,K. \label{3_1}$$ Again, observe that $T_j^2=I$, $j=1,2,\ldots,K$, where $I$ is the identity map. The goal of the method of images is to produce the set of “image" vortices with respect to all of the rigid boundaries of the flow domain $D$. To obtain the image vortices, first take the inversion maps of the point $z_0$ with respect to all $K$ circles. This produces the level-1 symmetry points $T_j(z_0)$ shown in fig. \[fig4\]. ![Level-1 symmetry points.[]{data-label="fig4"}](kcircleslevel1.ps){width="50.00000%"} However, unlike in the case of the single cylinder, this construction is not symmetric with respect to the circles $L_j$ since the points $T_j(z_0)$ do not have symmetric images inside the circles $L_k$, $k\neq j$. Thus, it is necessary to apply the inversion maps $T_j$ to the level-1 points, which leads to the level-2 points $T_{i_1}T_{i_2}(z_0)$, $i_1\neq i_2$. Obviously, this process needs to be continued infinitely. A level-$N$ point can be written in the form $T_{i_1}T_{i_2}\ldots T_{i_N}(z_0)$ where $i_k\neq i_{k+1}$, $k=1,\ldots, N-1$. It is easy to count the symmetry points of each level obtained in this way: there are $K$ level-1 points, $K(K-1)$ level-2 points, and, in general, $K(K-1)^{N-1}$ level-$N$ points. Given that the original vortex has a unit circulation, the circulation at each of the symmetry points of level $N$ is equal to $(-1)^N$.
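The recursive generation of the symmetry points can be sketched as follows (our own illustration). Excluding an immediate repetition of the same inversion reproduces the level counts $K$, $K(K-1)$, $K(K-1)^{2},\ldots$ stated above.

```python
def invert(z, c, R):
    # Inversion map T_j with respect to the circle |z - c| = R.
    return c + R * R / (z - c).conjugate()

def symmetry_levels(z0, circles, max_level):
    # circles: list of (center, radius).  levels[m] holds the level-m points
    # T_{i1}...T_{im}(z0); consecutive indices are required to differ.
    levels = [[(z0, -1)]]          # (point, index of the last inversion used)
    for _ in range(max_level):
        nxt = []
        for z, last in levels[-1]:
            for j, (c, R) in enumerate(circles):
                if j != last:
                    nxt.append((invert(z, c, R), j))
        levels.append(nxt)
    return [[z for z, _ in lvl] for lvl in levels]
```

For $K=3$ cylinders the level sizes are $1, 3, 6, 12, \ldots$, and each inversion is an involution, $T_j^2=I$.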
Following the method of images, it is possible to formally write the stream function for the flow in the following way: $$\psi^s(z,z_0)=-\frac{1}{2\pi}\log|z-z_0|-\sum_{\zeta\in \mbox{level}\, 1}\frac{(-1)^1}{2\pi}\log|z-\zeta|-\ldots$$ $$-\sum_{\zeta\in \mbox{level}\, N}\frac{(-1)^N}{2\pi}\log|z-\zeta|-\ldots \label{3_2}$$ Observe that without making additional assumptions about the summation, the formula (\[3\_2\]) does not make mathematical sense since it contains, in general, a divergent sum over the infinite set of the symmetry points $\zeta$. The sense in which the convergence of the formula (\[3\_2\]) is understood will be made precise below. Towards this purpose, consider the level-$N$ approximation to the sum (\[3\_2\]): $$\psi_N^s(z,z_0)=-\frac{1}{2\pi}\log|z-z_0|-\sum_{\zeta\in \mbox{level}\, 1}\frac{(-1)^1}{2\pi}\log|z-\zeta|-\ldots$$ $$-\sum_{\zeta\in \mbox{level}\, N}\frac{(-1)^N}{2\pi}\log|z-\zeta|. \label{3_3}$$ Observe that even though the functions (\[3\_3\]) are harmonic in $D\setminus \{z_0\}$ and satisfy the condition (\[2\_1\]) at the point $z=z_0$, the condition (\[2\_2\]) will not be satisfied in the limit $N\to\infty$. Instead of $\psi_N^s(z,z_0)$, consider next the following function: $$\psi^{s*}_N(z,z_0)=\frac{(K-1)\psi^s_N(z,z_0)+\psi^s_{N+1}(z,z_0)}{K}. \label{3_4}$$ Our goal is to prove that the functional sequence $\psi^{s*}_N(z,z_0)$ converges to a harmonic function $\psi^s(z,z_0)$ in the domain $D\setminus \{z_0\}$. The resulting function $\psi^s(z,z_0)$ has a singularity of the logarithmic type (\[2\_1\]) at the point $z_0$ and is constant on the circles $L_j$, $j=1,\ldots, K$, which constitute the boundary of the domain $D$. In that case, the function $\psi^s(z,z_0)$ is the sought-after stream function for the vortex flow in the circular domain $D$.
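The approximations (\[3\_3\]) and (\[3\_4\]) are straightforward to implement. The sketch below (our own illustration) evaluates $\psi^{s*}_N$ for two well-separated unit cylinders; already at moderate $N$ the values on each boundary circle agree to high accuracy.

```python
import math

def invert(z, c, R):
    return c + R * R / (z - c).conjugate()

def image_levels(z0, circles, max_level):
    # Level-m image points of z0; consecutive inversions must differ.
    levels = [[(z0, -1)]]
    for _ in range(max_level):
        nxt = []
        for z, last in levels[-1]:
            for j, (c, R) in enumerate(circles):
                if j != last:
                    nxt.append((invert(z, c, R), j))
        levels.append(nxt)
    return [[z for z, _ in lvl] for lvl in levels]

def psi_partial(z, z0, levels, n_levels):
    # psi_N of (3_3): the generating vortex plus images up to level N.
    s = math.log(abs(z - z0))
    for m in range(1, n_levels + 1):
        sign = (-1.0) ** m
        for zeta in levels[m]:
            s += sign * math.log(abs(z - zeta))
    return -s / (2.0 * math.pi)

def psi_star(z, z0, circles, n_levels):
    # Averaged approximation (3_4): ((K - 1) psi_N + psi_{N+1}) / K.
    K = len(circles)
    levels = image_levels(z0, circles, n_levels + 1)
    return ((K - 1) * psi_partial(z, z0, levels, n_levels)
            + psi_partial(z, z0, levels, n_levels + 1)) / K
```

For two unit circles centered at $0$ and $5$ and a vortex at $2+2i$, the boundary values of $\psi^{s*}_{10}$ are constant to better than $10^{-6}$, in line with the geometric convergence established in the next section.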
Convergence of the functional sequence $\{\psi_N^{s*}(z,z_0) \}_{N=1}^{\infty}$ in the domain $D\setminus \{z_0\}$ ================================================================================================================== Observe that the function $\psi^{s*}_N(z,z_0)$ can be rewritten in the following form: $$-2\pi\psi^{s*}_N(z,z_0)=\frac{1}{K}\log|z-z_0|+ \label{4_1}$$ $$\frac{1}{K}\left((K-1)\log|z-z_0|+\sum_{\zeta\in\mbox{level}\, 1}(-1)^1\log|z-\zeta| \right)+$$ $$\frac{1}{K}\left((K-1)\sum_{\zeta\in\mbox{level}\, 1}(-1)^1\log|z-\zeta|+\sum_{\zeta\in\mbox{level}\, 2}(-1)^2\log|z-\zeta| \right)+\ldots$$ $$\frac{1}{K}\left((K-1)\sum_{\zeta\in\mbox{level}\, N}(-1)^N\log|z-\zeta|+\sum_{\zeta\in\mbox{level}\, N+1}(-1)^{N+1}\log|z-\zeta| \right).$$ The last formula allows us to split the expression for the function $\psi^{s*}_N(z,z_0)$ into the following “layers": $$(K-1)\sum_{\zeta\in\mbox{level}\, M}(-1)^M\log|z-\zeta|+\sum_{\zeta\in\mbox{level}\, M+1}(-1)^{M+1}\log|z-\zeta|. \label{4_2}$$ Since the number of points of level $(M+1)$ is $K-1$ times larger than the number of points of level $M$, each layer (\[4\_2\]) contains the same number of terms corresponding to the levels $M$ and $M+1$. Any point of the level-$M$ can be written in the form ${\cal L}_M(z_0)=T_{i_1}T_{i_2}\ldots T_{i_M}(z_0)$ for some indices $i_1,i_2,\ldots,i_M\in \{1,\ldots, K \}$, $i_k\neq i_{k+1}$, $k=1,\ldots, (M-1)$. To each point of the level-$M$ correspond $K-1$ points of the level-$(M+1)$ which can be written in the form ${\cal L}_MT_{i_{M+1}}(z_0)$, $i_M\neq i_{M+1}$, $i_{M+1}\in \{1,\ldots, K \}$. Thus, each of the layers (\[4\_2\]) can be further split into the terms of the following type: $$\log|z-{\cal L}_M(z_0)|-\log|z-{\cal L}_MT_{i_{M+1}}(z_0)|. \label{4_3}$$ Let us estimate each of the terms (\[4\_3\]).
By applying the inequality $$\log|1+x|\leq x\,\,\mbox{ for }\,\, x>-1,$$ to the formula (\[4\_3\]), we obtain: $$\log\left|\frac{z-{\cal L}_M(z_0)}{z-{\cal L}_MT_{i_{M+1}}(z_0)} \right|\leq \left|\frac{{\cal L}_MT_{i_{M+1}}(z_0)-{\cal L}_M(z_0)}{z-{\cal L}_MT_{i_{M+1}}(z_0)} \right|.$$ The map ${\cal L}_M$ is a composition of $M$ inversions $T_j$, $j=i_1,i_2,\ldots,i_M \in \{1,\ldots, K \}$. Observe that at each inversion $T_j$, a pair of symmetry points $z_1$, $z_2$ is mapped from the exterior of the given circle $L_j$ into the interior of this circle. Using the expression (\[3\_1\]), it is possible to write: $$|T_j(z_1)-T_j(z_2)|=\frac{R_j^2|z_1-z_2|}{|z_1-c_j||z_2-c_j|}. \label{4_4}$$ Due to the procedure by which the symmetry points $z_1$, $z_2$ were generated, each of these points either lies in the interior of some other circle $L_k$ or is the point $z_0$. Simple geometric considerations show that: $$R_j/|z_1-c_j|<P_j,\,\,\,R_j/|z_2-c_j|<P_j,$$ where $$P_j=\max\left\{\frac{R_j}{|z_0-c_j|}, \,\,\,\frac{R_j}{|c_j-c_l|-R_l},\,l=1,\ldots,K,\,l\neq j \right\}. \label{4_5}$$ Then it is possible to conclude: $$|{\cal L}_MT_{i_{M+1}}(z_0)-{\cal L}_M(z_0)|<P_{i_1}^2P_{i_2}^2\ldots P_{i_M}^2\frac{D}{R(z)}<P^{2M}\frac{D}{R(z)}, \label{4_6}$$ where $$P=\max\{P_1,\,P_2,\ldots,\,P_K \}, \label{4_7}$$ $$D=\max_{j}|T_j(z_0)-z_0|,$$ $$R(z)=\min_j\mbox{dist}(z,L_j),$$ and $\mbox{dist}(z,L_j)$ denotes the shortest distance from the point $z$ to the circle $L_j$. Observe that for any compact set $K_0$ lying completely in the interior of the domain $D$ it is possible to find a number $R_0>0$ such that $R(z)>R_0$ for all points $z\in K_0$. Then the inequality (\[4\_6\]) can be rewritten as: $$|{\cal L}_MT_{i_{M+1}}(z_0)-{\cal L}_M(z_0)|<P^{2M}\frac{D}{R_0}\,\,\, \mbox{for}\,\,\, \forall z\in K_0.
\label{4_8}$$ Substituting (\[4\_6\]) and (\[4\_8\]) into the formula (\[4\_1\]), obtain that for any two positive integers $N_1$, $N_2$, such that $N_1<N_2$: $$2\pi|\psi_{N_1}^{s*}(z,z_0)-\psi_{N_2}^{s*}(z,z_0)|<\frac{((K-1)P^2)^{N_1+1}D}{R(z)(1-(K-1)P^2)}, \,\,\,z\in D, \label{4_9}$$ or $$2\pi|\psi_{N_1}^{s*}(z,z_0)-\psi_{N_2}^{s*}(z,z_0)|<\frac{((K-1)P^2)^{N_1+1}D}{R_0(1-(K-1)P^2)} \,\,\,\mbox{for}\,\,\, \forall z\in K_0, \label{4_10}$$ where $K_0\subset D$ is a compact set. It follows from the inequality (\[4\_9\]) that the functional sequence $\psi_N^{s*}(z,z_0)$ is a Cauchy sequence in $D$ pointwise if the condition $(K-1)P^2<1$ holds. Hence, $\psi_N^{s*}(z,z_0)$ converges in $D$ pointwise to some function which we denote as $\psi^s(z,z_0)$. It follows from the inequality (\[4\_10\]) that the convergence is uniform on any compact set $K_0$. Hence, since the functions $\psi_N^{s*}(z,z_0)$ are harmonic in variable $z$ in $D\setminus \{z_0\}$ for all $N$ by construction, it follows that the limit function $\psi^s(z,z_0)$ is also harmonic in $D\setminus \{z_0\}$. The limit function $\psi^s(z,z_0)$ has a logarithmic singularity of the type (\[2\_1\]) at the point $z_0$ because all the functions $\psi_N^{s*}(z,z_0)$ have a singularity of this type at the point $z_0$. The last property which needs to be proved is that the function $\psi^s(z,z_0)$ is constant on the circles $L_j$. Assume that $z\in L_j$ for some $j=1,2,\ldots,K$. Then from the properties of the inversion map $T_j$, it follows that $T_j(z)=z$. 
It can be seen that the points of the level-$(M+1)$ located inside any circle $L_j$ are obtained by taking an inversion map $T_j$ of the points of the level-$M$ located outside this circle: $$\{\zeta\in \mbox{level}\, (M+1),\,\,\zeta\in \mbox{int}\, L_j \}=T_j\{ \zeta\in \mbox{level}\, M,\,\,\zeta\notin \mbox{int}\, L_j \}.$$ A simple algebraic computation then shows: $$\log|z-\zeta|-\log|z-T_j(\zeta)|=\log\left|\frac{\zeta-c_j}{R_j}\right|, \forall z\in L_j,$$ where the right-hand side is constant on $L_j$. Then we can rewrite the function $\psi_N^{s*}(z,z_0)$ as: $$-2\pi\psi_N^{s*}(z,z_0)=\log\left|\frac{z_0-c_j}{R_j}\right|+\sum_{\substack{\zeta\in\,\mbox{level}\, j,\\ j=1,\ldots,N-1,\\ \zeta\notin \,\mbox{int}\,L_j}}(-1)^j\log\left|\frac{\zeta-c_j}{R_j}\right|+$$ $$\frac{1}{K}\sum_{\substack{\zeta\in\,\mbox{level}\, N,\\ \zeta\notin \mbox{int}\,L_j}}(-1)^N\log\left|\frac{\zeta-c_j}{R_j}\right|+ \label{4_11}$$ $$\frac{K-1}{K}\sum_{\substack{\zeta\in\,\mbox{level}\, N,\\ \zeta\notin \mbox{int}\,L_j}}(-1)^N\log\left|z-\zeta\right|+\frac{1}{K}\sum_{\substack{\zeta\in\,\mbox{level}\, (N+1),\\ \zeta\notin \mbox{int}\,L_j}}(-1)^{N+1}\log\left|z-\zeta\right|.$$ Observe that the first three terms of the formula (\[4\_11\]) are independent of the point $z\in L_j$, and the last two terms can be estimated similarly to (\[4\_9\]): $$\left|\frac{K-1}{K}\sum_{\substack{\zeta\in\,\mbox{level}\, N,\\ \zeta\notin \mbox{int}\,L_j}}(-1)^N\log\left|z-\zeta\right|+\frac{1}{K}\sum_{\substack{\zeta\in\,\mbox{level}\, (N+1),\\ \zeta\notin \mbox{int}\,L_j}}(-1)^{N+1}\log\left|z-\zeta\right|\right|<$$ $$\frac{K-1}{K}\frac{((K-1)P^{2})^ND}{R_j(z)},$$ where $$R_j(z)={\min}_{\substack{k=1,\ldots,K,\\k\neq j}}\mbox{dist}(z,L_k).$$ Thus, under assumption $(K-1)P^2<1$, it follows that the values of the sequence $\psi_N^{s*}(z,z_0)$ converge to constants on $L_j$ for each $j=1,\ldots,K$. It is possible to show that these constants are finite. 
To do so, we again split the remaining terms in (\[4\_11\]) into the “layers": $$\frac{K-1}{K}\sum_{\substack{\zeta\in\,\mbox{level}\, M,\\ \zeta\notin \mbox{int}\,L_j}}(-1)^M\log\left|\frac{\zeta-c_j}{R_j}\right|+\frac{1}{K}\sum_{\substack{\zeta\in\,\mbox{level}\, (M+1),\\ \zeta\notin \mbox{int}\,L_j}}(-1)^{M+1}\log\left|\frac{\zeta-c_j}{R_j}\right|,$$ which can be further split into the individual terms and estimated: $$\log|{\cal L}_M(z_0)-c_j|-\log|{\cal L}_MT_{i_{M+1}}(z_0)-c_j|<\frac{P^{2M}D}{\min_{\substack{k=1,\ldots,K,\\k\neq j}}(|c_k-c_j|-R_k)}.$$ From the last estimate it is possible to conclude that the values of $\psi^s(z,z_0)$ are finite on each of the circles $L_j$ if the condition $(K-1)P^2<1$ holds. It follows then that the limit function $\psi^s(z,z_0)=\lim_{N\to\infty}\psi_N^{s*}(z,z_0)$ satisfies all of the conditions imposed on the stream function for the considered vortex flow in the $K$-connected circular domain $D$. Circulations around cylinders and at infinity ============================================= The stream function $\psi^s(z,z_0)$ corresponds to the flow in the domain $D$ with a single vortex of a unit circulation $\Gamma_0=1$ located at the point $z_0$. The flow in the domain $D$ with $N$ vortices located at the points $z_j$, $j=1,\ldots,N$, with circulations $\Gamma_j$, can be easily obtained from the stream function $\psi^s(z,z_0)$ by superposition of the stream functions for the individual vortices using the formula (\[2\_3\]). Consider the circulations around the cylinders $L_j$ which are prescribed by the stream function $\psi^s(z,z_0)$. The circulation around a closed contour $C$ in a fluid domain can be computed by the following formula: $$\Gamma_C=\oint_C {\bf u} \cdot d{\bf s},$$ where $\bf u$ is the velocity and $d{\bf s}$ is a line element along the contour. Using this formula it is possible to obtain that the circulation around any cylinder $L_j$ is equal to $-1/K$ for the functions $\psi_N^{s*}(z,z_0)$ for all $N$.
Thus, in the limit $N\to\infty$, the circulation of the flow defined by the stream function $\psi^s(z,z_0)$ is also equal to $-1/K$ around any cylinder $L_j$, $j=1,\ldots,K$. A vortex at the infinity point $z_0=\infty$ of the domain $D$ can be introduced by using a procedure similar to that for a vortex at a finite point $z_0$. In particular, the flow with a vortex at infinity with a given circulation $\Gamma_{\infty}$ can be generated by the formulas: $$\psi_{N,\infty}(z)=-\sum_{\zeta\in \mbox{level}\, 1}\frac{(-1)^1\Gamma_{\infty}}{2\pi}\log|z-\zeta|-\ldots$$ $$-\sum_{\zeta\in \mbox{level}\, N}\frac{(-1)^N\Gamma_{\infty}}{2\pi}\log|z-\zeta|, \label{5_1}$$ $$\psi^*_{N,\infty}(z)=\frac{(K-1)\psi_{N,\infty}(z)+\psi_{N+1,\infty}(z)}{K}, \label{5_2}$$ and letting $N\to \infty$. These formulas are analogous to the formulas (\[3\_3\]), (\[3\_4\]), except that the first “generating term" for the vortex at the point $z_0=\infty$ is omitted. Observe that the level-1 points in this case are the centers $c_j$ of the circles $L_j$. Similarly to the case of a finite point $z_0$, placing a vortex at the point $z_0=\infty$ with the circulation $\Gamma_{\infty}$ induces circulations equal to $-\Gamma_{\infty}/K$ around each of the cylinders $L_j$, $j=1,\ldots,K$. Finally, for some practical applications, it is important to prescribe the circulations around the cylinders $L_j$, $j=1,\ldots,K$. This can be done by placing additional vortices with circulations $\Gamma_j^c$ at the centers $c_j$ of the cylinders. The formulas (\[3\_3\]), (\[3\_4\]) then become: $$\psi_{N,j}(z)=-\frac{\Gamma_j^c}{2\pi}\log|z-c_j|-\sum_{\substack{\zeta\in \mbox{level}\, 1,\\\zeta\neq\infty}}\frac{(-1)^1\Gamma_j^c}{2\pi}\log|z-\zeta|-\ldots$$ $$-\sum_{\zeta\in \mbox{level}\, N}\frac{(-1)^N\Gamma^c_j}{2\pi}\log|z-\zeta|. \label{5_3}$$ $$\psi^*_{N,j}(z)=\frac{(K-1)\psi_{N,j}(z)+\psi_{N+1,j}(z)}{K}.
\label{5_4}$$ In this case $\zeta=\infty$ is a level-1 point, and the corresponding term must be omitted in the formula (\[5\_3\]). The level-2 points contain all the centers $c_k$, $k\neq j$, which are the symmetry points of the infinity point with respect to the cylinders $L_k$, $k\neq j$. Again, placing the vortex with a circulation $\Gamma_j^c$ at the point $c_j$ induces a circulation $\Gamma^c_j(2K-1)/K$ around the cylinder $L_j$, additional circulations of $-\Gamma^c_j/K$ around all other cylinders $L_k$, $k\neq j$, and a circulation $-\Gamma^c_j$ at infinity. Finally, combining the stream functions for the individual vortices, the vortex at infinity, and the vortices at the centers of the cylinders $L_j$, one can obtain a flow in the domain with $N$ vortices and any prescribed circulations around each of the cylinders $L_j$, $j=1,\ldots,K$. If the desired circulations around each of the cylinders $L_j$ are equal to $\gamma_j$, this leads to the following system of linear algebraic equations with respect to the unknowns $\Gamma^c_j$: $$\left[ \begin{array}{cccc} \frac{2K-1}{K} & -\frac{1}{K} & \cdots & -\frac{1}{K}\\ -\frac{1}{K} & \frac{2K-1}{K} & \cdots & -\frac{1}{K}\\ \ldots & \ldots & \ddots & \ldots\\ -\frac{1}{K} & -\frac{1}{K} & \cdots & \frac{2K-1}{K} \end{array} \right] \left[ \begin{array}{c} \Gamma^c_1\\ \Gamma^c_2\\ \ldots\\ \Gamma^c_K \end{array} \right]= \left[ \begin{array}{c} \gamma_1+\frac{1}{K}\sum_{l=1}^N \Gamma_l+\frac{\Gamma_{\infty}}{K}\\ \gamma_2+\frac{1}{K}\sum_{l=1}^N \Gamma_l+\frac{\Gamma_{\infty}}{K}\\ \ldots\\ \gamma_K+\frac{1}{K}\sum_{l=1}^N \Gamma_l+\frac{\Gamma_{\infty}}{K} \end{array} \right] \label{5_5}$$ The system has a diagonally dominant matrix and, hence, is uniquely solvable for any right-hand side. Finally, the circulation at infinity in this case will be equal to $-\sum_{l=1}^K\gamma_l-\sum_{l=1}^N\Gamma_l$.
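The system (\[5\_5\]) can also be solved in closed form: its matrix is $2I-J/K$, where $J$ is the all-ones matrix, and a direct check shows that the inverse is $I/2+J/(2K)$. This observation is ours, offered as a convenience; any standard linear solver works equally well. A sketch:

```python
def solve_center_circulations(desired, vortex_gammas, gamma_inf):
    # Solve (5_5) for the center circulations Gamma_j^c.  The matrix is
    # A = 2I - J/K (J the all-ones matrix), whose inverse is I/2 + J/(2K).
    K = len(desired)
    rhs = [g + (sum(vortex_gammas) + gamma_inf) / K for g in desired]
    rhs_sum = sum(rhs)
    return [b / 2.0 + rhs_sum / (2.0 * K) for b in rhs]
```

The solution can be verified by multiplying back with the matrix of (\[5\_5\]): diagonal entries $(2K-1)/K$, off-diagonal entries $-1/K$.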
Set of the symmetry points and its limit set ============================================ Consider the set of all the symmetry points corresponding to the point $z=z_0$. These points can be described by the formula $T_{i_1}T_{i_2}\ldots T_{i_M}(z_0)$ for some $M\geq 1$ and for some set of indices $i_k\in \{1,\ldots, K \}$, $i_k\neq i_{k+1}$, $k=1,\ldots, (M-1)$. Let us first study the case of a doubly connected domain with only two cylinders $L_1$, $L_2$ present. Observe that in this case all of the symmetry points can be described by four sequences: $$a_j=(T_1T_2)^{j-1}T_1(z_0),\,\,b_j=(T_1T_2)^j(z_0),$$ $$c_j=(T_2T_1)^{j-1}T_2(z_0),\,\,d_j=(T_2T_1)^j(z_0),\,\,\,j=1,2,\ldots.$$ The points of the sequences $\{a_j\}_{j=1}^{\infty}$ and $\{b_j\}_{j=1}^{\infty}$ lie in the interior of the circle $L_1$, while the points of the sequences $\{c_j\}_{j=1}^{\infty}$ and $\{d_j\}_{j=1}^{\infty}$ lie in the interior of the circle $L_2$. It is easy to show that all four sequences converge and that the limit points are the fixed points of the mappings $T_1T_2(z)$ and $T_2T_1(z)$. In particular, $$a_j\to z^{\star}_1,\,\,b_j\to z^{\star}_1,\,\,c_j\to z^{\star}_2,\,\,d_j\to z^{\star}_2,\,\,\mbox{as}\,\, j\to\infty,$$ where $$T_1T_2(z^{\star}_1)=z^{\star}_1,\,\,z^{\star}_1\in \,\mbox{int}\, L_1,\,\,\,T_2T_1(z^{\star}_2)=z^{\star}_2,\,\,z^{\star}_2\in\,\mbox{int}\, L_2.$$ ![Sets of the symmetry points for touching circles.[]{data-label="fig5"}](3cyl.eps "fig:"){width="50.00000%"} ![Sets of the symmetry points for touching circles.[]{data-label="fig5"}](4cylfractal.ps "fig:"){width="80.00000%"} The situation becomes more complicated for domains of connectivity higher than two. Observe that the set of symmetry points will necessarily be self-similar. This follows from the fact that the level-$(N+1)$ points are obtained from the level-$N$ points by applying one of the symmetry maps $T_j$, $j=1,\ldots,K$.
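The convergence of these sequences is easy to verify numerically. The sketch below assumes that $T_j$ denotes the inversion $z\mapsto c_j+R_j^2/\overline{(z-c_j)}$ with respect to the circle $L_j$; the two circles and the point $z_0$ are illustrative choices:

```python
import numpy as np

def reflect(z, c, R):
    # symmetry point of z with respect to the circle |z - c| = R
    return c + R**2 / np.conj(z - c)

def T1(z): return reflect(z, 0.0, 1.0)   # L1: center 0, radius 1
def T2(z): return reflect(z, 3.0, 0.5)   # L2: center 3, radius 1/2

z0 = 2.0j
a = T1(z0)                 # a_1 = T1(z0)
b = T1(T2(z0))             # b_1 = (T1 T2)(z0)
for _ in range(25):        # a_{j+1} = (T1 T2)(a_j), b_{j+1} = (T1 T2)(b_j)
    a, b = T1(T2(a)), T1(T2(b))
z_star1 = a                # numerical limit: fixed point of T1 T2 inside L1
```

Both sequences converge (very rapidly, for well-separated circles) to the same fixed point $z^{\star}_1$ of $T_1T_2$ in the interior of $L_1$.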
Observe also that in the limiting cases when the circles touch, the limit set of the symmetry points can become a circle or even a fractal (fig. \[fig5\]). The sets in fig. \[fig5\] are plotted using symmetry points up to level $10$. More information about the limit sets of symmetry maps and Möbius maps can be found in [@Mumford2015]. Numerical results ================= ![Vortex flow around two cylinders with zero circulations on the boundaries of the cylinders.[]{data-label="fig6"}](2cyl00.eps){width="45.00000%"} Consider a vortex flow around two circular cylinders. The resulting flow domain in this case is doubly connected. Using a conformal mapping, the exterior of two circles can first be mapped onto a concentric circular ring, and then onto a rectangle repeated periodically throughout the whole complex plane [@JohnsonMcDonald2004]. Hence, the solution can be furnished in terms of elliptic functions, namely, the elliptic theta function $\theta_1(\zeta,q)$. A numerical comparison of the results obtained by the methods of [@JohnsonMcDonald2004] with the results of the current paper is given in table \[tab1\]. The computations in the table are made for the stream function of the vortex flow around two cylinders with the parameters $c_1=0$, $R_1=1$, $c_2=3$, $R_2=0.5$, $z_0=2i$, $\Gamma_0=1$, and zero circulations on both cylinders, which is equivalent to placing a vortex with a circulation $\Gamma_{\infty}=-\Gamma_0$ at the infinity point of the plane.

  Point $z$       Johnson and McDonald      Current paper
  ------------- ---------------------- ----------------------
  $-3.5-3.5i$   $-0.174608512540543$   $-0.174608618004631$
  $0.5-1.5i$    $-0.047561219605849$   $-0.047561611378318$
  $2.5+3.5i$    $-0.073398543207433$   $-0.073398504917044$
  $1.5+0.5i$    $-0.020268684918721$   $-0.020268607383453$

  : Comparison of the values of the stream function for a doubly connected domain.
\[tab1\] Observe that for a doubly connected flow domain the condition $(K-1)P^2<1$ is always satisfied and, hence, the method presented in this paper converges irrespective of the mutual location and sizes of the cylinders and vortices. ![Vortex flow around two cylinders with the circulation equal to $1/2$ on the left cylinder and the circulation equal to $-1/2$ on the right cylinder.[]{data-label="fig7"}](2cyl12m12.eps){width="45.00000%"} ![Vortex flow around two cylinders with the circulations equal to $1/2$ on the boundaries of the cylinders.[]{data-label="fig8"}](2cylp12p12.eps){width="45.00000%"} The instantaneous streamlines of the vortex flow around two cylinders are plotted in figs. \[fig6\], \[fig7\], \[fig8\]. The single vortex is located at the point $z_0=2i$ and has the circulation $\Gamma_0=1$; the cylinders have the centers $c_1=-2$, $c_2=2$ and the radii $R_1=R_2=1$. The circulations around both cylinders in fig. \[fig6\] are equal to zero; in fig. \[fig7\] the circulation around the left cylinder is equal to $1/2$ and around the right cylinder to $-1/2$; and in fig. \[fig8\] the circulations around both cylinders are equal to $1/2$.
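The truncated stream function itself is also straightforward to code. The following sketch is a simplified reimplementation (not the authors' code): it takes $T_j$ to be the inversion in $L_j$, uses the alternating-sign level sums together with the averaging step $\psi^*_N=((K-1)\psi_N+\psi_{N+1})/K$, and can be used to check that the cylinder boundaries are streamlines (i.e., that $\psi^*_N$ is nearly constant on each boundary) up to the truncation error:

```python
import numpy as np

def reflect(z, c, R):
    # symmetry point of z with respect to the circle |z - c| = R
    return c + R**2 / np.conj(z - c)

def levels_of(z0, circles, N):
    """Level-n symmetry points of z0, n = 1..N; each point carries the index
    of the circle it lies inside, so it is never reflected straight back."""
    out, cur = [], [(z0, None)]
    for _ in range(N):
        cur = [(reflect(z, c, R), j)
               for z, src in cur
               for j, (c, R) in enumerate(circles) if j != src]
        out.append(cur)
    return out

def psi_N(z, z0, G0, circles, N):
    val = -G0 / (2 * np.pi) * np.log(np.abs(z - z0))   # generating vortex
    for n, pts in enumerate(levels_of(z0, circles, N), start=1):
        for zeta, _ in pts:                 # level-n images, strength (-1)^n G0
            val -= (-1) ** n * G0 / (2 * np.pi) * np.log(np.abs(z - zeta))
    return val

def psi_star(z, z0, G0, circles, N):
    K = len(circles)
    return ((K - 1) * psi_N(z, z0, G0, circles, N)
            + psi_N(z, z0, G0, circles, N + 1)) / K
```

For the well-separated pair of cylinders used in table \[tab1\] ($c_1=0$, $R_1=1$, $c_2=3$, $R_2=0.5$, $z_0=2i$, $\Gamma_0=1$), the boundary values of $\psi^*_{10}$ are already constant to well below $10^{-6}$.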
![Vortex flow around three cylinders with (a) the circulations equal to $0$ on the boundaries of the cylinders, (b) the circulations equal to $-1/3$ on the boundaries of the cylinders.[]{data-label="fig9"}](3cylinf.eps "fig:"){width="45.00000%"} ![Vortex flow around three cylinders with (a) the circulations equal to $0$ on the boundaries of the cylinders, (b) the circulations equal to $-1/3$ on the boundaries of the cylinders.[]{data-label="fig9"}](3cylnoinf.eps "fig:"){width="45.00000%"} ![Vortex flow around three cylinders with (a) the circulations equal to $0$ on the boundaries of the cylinders and one vortex at the point $z_0=0$, (b) the circulations equal to $0$, $-1$ and $1$ on the boundaries of the cylinders and one vortex at the point $z_0=2i$.[]{data-label="fig10"}](3cylnonsym000.eps "fig:"){width="40.00000%"} ![Vortex flow around three cylinders with (a) the circulations equal to $0$ on the boundaries of the cylinders and one vortex at the point $z_0=0$, (b) the circulations equal to $0$, $-1$ and $1$ on the boundaries of the cylinders and one vortex at the point $z_0=2i$.[]{data-label="fig10"}](3cylnonsym0m11.eps "fig:"){width="40.00000%"} The instantaneous streamlines of the vortex flow around three cylinders are plotted in figs. \[fig9\], \[fig10\]. The single vortex is located at the point $z_0=0$ and has the circulation $\Gamma_0=1$. In fig. \[fig9\] the cylinders have the centers $c_j=e^{ij\pi/3}$, $j=0,1,2$, and the radii $R_j=0.5$. The circulations around all of the cylinders are equal to zero in fig. \[fig9\](a) and are equal to $-1/3$ in fig. \[fig9\](b). In fig. \[fig10\] the cylinders have the centers $c_1=1+i$, $c_2=-1+i$, $c_3=-0.5-i$ and the radii $R_1=0.5$, $R_2=0.75$, $R_3=0.5$. The circulations around all of the cylinders are equal to zero in fig. \[fig10\](a). In fig.
\[fig10\](b) the circulation is equal to $\gamma_1=0$ around the first cylinder, $\gamma_2=-1$ around the second cylinder, and $\gamma_3=1$ around the third cylinder. The vortex with a circulation $\Gamma_0=1$ is located at the point $z_0=0$ in fig. \[fig10\](a) and at the point $z_0=2i$ in fig. \[fig10\](b). ![Vortex flow around five cylinders with prescribed circulations.[]{data-label="fig11"}](5cyl.eps){width="75.00000%"} The instantaneous streamlines of a vortex flow around five cylinders are plotted in fig. \[fig11\]. The single vortex is located at the point $z_0=2i$ and has the circulation $\Gamma_0=1$. The cylinders have the centers $c_1=-4$, $c_2=-2$, $c_3=0$, $c_4=2$, $c_5=4$ and the radii $R_j=0.5$. The circulations are equal to $0$ on the first and the fifth cylinders, to $-1$ on the second and the fourth cylinders, and to $1$ on the third cylinder. Observe that in all of the considered examples it has been sufficient to use the symmetry points up to level five at most; thus, the method converges relatively fast. Increasing the number of points beyond this level did not result in a noticeable difference in the pictures of the streamlines of the flow. A comparison of the results for $N=5$ and $N=10$ is given in table \[tab2\]. The stream function is computed for the configuration shown in fig. \[fig11\]. Observe that taking the symmetry points up to the level $N=5$ already reproduces the first five digits after the decimal point.

  Point $z$   $N=5$                  $N=10$
  ----------- ---------------------- ----------------------
  $-2-2i$     $-1.039510891688030$   $-1.039511060181374$
  $4i$        $-1.127511567288519$   $-1.127516103881800$
  $4-2i$      $-1.193405902645471$   $-1.193403567442811$

  : Comparison of the values of the stream function for $N=5$ and $N=10$.
\[tab2\] Conclusions =========== This paper presents a new, simple method for studying vortex-generated fluid flows in domains of arbitrary connectivity based on the method of images. It should be observed that the study of fluid flows in multiply connected domains has received very limited coverage in the scientific literature. To the best of the authors’ knowledge, the only other available results are the series of works by Crowdy and Marshall [@CrowdyMarshall2005; @CrowdyMarshall2005b; @CrowdyMarshall2006]. The construction of the stream function presented in this paper is based on taking the limit of a certain functional sequence which converges to the sought-after stream function of the fluid flow. The convergence of this functional sequence and its speed are investigated, and the condition of convergence is given as a simple inequality involving the geometrical parameters of the flow domain. The limitations of the current study are similar to those in the works of Crowdy and Marshall. The convergence of the presented method is reliable and fast in the case of well-separated cylinders when the connectivity $K$ of the flow domain is not too large. In the cases of very high connectivity $K$ or closely spaced cylinders, it may be more efficient to compute the stream function of the flow using the numerical algorithm proposed by Trefethen [@Trefethen2005]. The results of the current paper can be applied to many practical problems which involve solving the Dirichlet problem for Laplace’s equation in multiply connected domains. Acknowledgement {#acknowledgement .unnumbered} =============== Anna Zemlyanova’s research is partially funded through a Simons Foundation Collaboration Grant. This support is gratefully acknowledged. The authors are grateful to Prof. Hrant Hakobyan for very useful discussions about the nature of the set of the symmetry points. [99]{} Milne-Thomson LM. 1968. *Theoretical hydrodynamics.* London: Macmillan.
Aref H, Newton PK, Stremler M, Tokieda T, Vainchtein DL. 2003. Vortex crystals. *Adv. Appl. Mech.* **39**, 1–79. Newton PK. 2002. *The $N$-vortex problem.* New York: Springer. Johnson ER, McDonald NR. 2004. The motion of a vortex near two cylinders. *Proc. R. Soc. Lond. A* **460**, 939–954. Johnson ER, McDonald NR. 2004. The motion of a vortex near a gap in a wall. *Phys. Fluids* **16**, 462–469. Johnson ER, McDonald NR. 2005. Vortices near barriers with multiple gaps. *J. Fluid Mech.* **531**, 335–358. Crowdy DG, Marshall JS. 2005. Analytical formulae for the Kirchhoff–Routh path function in multiply connected domains. *Proc. R. Soc. A* **461**, 2477–2501. Crowdy DG, Marshall JS. 2005. The motion of a point vortex around multiple circular islands. *Phys. Fluids* **17**, 056602. Crowdy DG, Marshall JS. 2006. The motion of a point vortex through gaps in walls. *J. Fluid Mech.* **551**, 31–48. Baker HF. 1995. *Abelian functions and the allied theory of theta functions.* Cambridge, UK: Cambridge University Press. Crowdy DG, Marshall JS. 2007. Computing the Schottky–Klein prime function on the Schottky double of planar domains. *Comput. Methods Funct. Theory* **7**, 293–308. Trefethen LN. 2005. Ten digit algorithms. Oxford University Computing Laboratory, report no. 05/13. Crowdy DG, Marshall JS. 2007. Green’s functions for Laplace’s equation in multiply connected domains. *IMA J. Appl. Math.* **72**, 278–301. Embree M, Trefethen LN. 1999. Green’s function for multiply connected domains via conformal mapping. *SIAM Rev.* **41**, 745–761. Mityushev VV, Rogosin SV. 2000. *Constructive methods for linear and nonlinear boundary value problems for analytic functions.* Boca Raton, FL: Chapman and Hall/CRC, Monographs and Surveys in Pure and Applied Mathematics. Lin CC. 1941. On the motion of vortices in two dimensions. I. Existence of the Kirchhoff–Routh function. *Proc. Natl Acad. Sci.* **27**, 570–575. Lin CC. 1941. On the motion of vortices in two dimensions. II.
Some further investigations on the Kirchhoff–Routh function. *Proc. Natl Acad. Sci.* **27**, 575–577. Nehari Z. 1952. *Conformal mapping.* New York: McGraw-Hill. Goluzin GM. 1969. *Geometric theory of functions of a complex variable.* Providence, RI: AMS Translations of Mathematical Monographs. DeLillo TK, Horn MA, Pfaltzgraff JA. 1999. Numerical conformal mapping of multiply-connected regions by Fornberg-like methods. *Numerische Math.* **83**, 205–230. Henrici P. 1986. *Applied and computational complex analysis.* New York: Wiley. Mumford D, Series C, Wright D. 2002. *Indra’s pearls: The vision of Felix Klein.* Cambridge, UK: Cambridge University Press.
--- abstract: | Calcium imaging has revolutionized systems neuroscience, providing the ability to image large neural populations with single-cell resolution. The resulting datasets are quite large (with scales of TB/hour in some cases), which has presented a barrier to routine open sharing of this data, slowing progress in reproducible research. State of the art methods for analyzing this data are based on non-negative matrix factorization (NMF); these approaches solve a non-convex optimization problem, and are highly effective when good initializations are available, but can break down e.g. in low-SNR settings where common initialization approaches fail. Here we introduce an improved approach to compressing and denoising functional imaging data. The method is based on a spatially-localized penalized matrix decomposition (PMD) of the data to separate (low-dimensional) signal from (temporally-uncorrelated) noise. This approach can be applied in parallel on local spatial patches and is therefore highly scalable, does not impose non-negativity constraints or require stringent identifiability assumptions (leading to significantly more robust results compared to NMF), and estimates all parameters directly from the data, so no hand-tuning is required. We have applied the method to a wide range of functional imaging data (including one-photon, two-photon, three-photon, widefield, somatic, axonal, dendritic, calcium, and voltage imaging datasets): in all cases, we observe $\sim$2-4x increases in SNR and compression rates of 20-300x with minimal visible loss of signal, with no adjustment of hyperparameters; this in turn facilitates the process of demixing the observed activity into contributions from individual neurons. We focus on two challenging applications: dendritic calcium imaging data and voltage imaging data in the context of optogenetic stimulation. 
In both cases, we show that our new approach leads to faster and much more robust extraction of activity from the video data. author: - | E. Kelly Buchanan[^1],$^,$  Ian Kinsella,$^,$  Ding Zhou,$^,$  Rong Zhu[^2],   Pengcheng Zhou,\ Felipe Gerhard[^3], John Ferrante,\ Ying Ma[^4], Sharon H. Kim, Mohammed A Shaik,\ Yajie Liang[^5], Rongwen Lu,\ Jacob Reimer[^6], Paul G Fahey, Taliah N Muhammad,\ Graham Dempsey, Elizabeth Hillman, Na Ji, Andreas S Tolias, Liam Paninski bibliography: - 'axon\_pipeline.bib' title: 'Penalized matrix decomposition for denoising, compression, and improved demixing of functional imaging data' --- Introduction {#introduction .unnumbered} ============ Functional imaging is a critical tool in neuroscience. For example, calcium imaging methods are used routinely in hundreds of labs, generating large-scale video datasets whose characteristics (cell shapes, signal-to-noise levels, background activity, signal timescales, etc.) can vary widely depending on the imaging modality and the details of the brain region and cell types being imaged. To handle this data, scientists must solve two basic tasks: we need to extract signals from the raw video data with minimal noise, and we need to store (and share) the data. A number of papers have focused on the first task [@mukamel2009automated; @maruyama2014detecting; @pnevmatikakis2016simultaneous; @pachitariu2016suite2p; @friedrich2017multi; @inan2017robust; @Reynolds2017; @petersen2017scalpel; @zhou2018efficient; @Mishne2018]; however, somewhat surprisingly, very little work has focused on the second task. For both of these tasks, it is critical to denoise and compress the data as much as possible. 
Boosting the signal-to-noise ratio (SNR) is obviously important for detecting weak signals, performing single-trial analyses (where noise cannot be averaged over multiple trials), and for real-time experiments (where we may need to make decisions based on limited data - i.e., averaging over time is not an option). The benefits of compression are perhaps less obvious but are just as numerous: compression would facilitate much more widespread, routine open data sharing, enhancing reproducible neuroscience research. Compression will also be critical for in vivo imaging experiments in untethered animals, where data needs to be transmitted wirelessly, making data bandwidth a critical constraint. Finally, many signal extraction methods based on matrix factorization can be sped up significantly if run on suitably compressed data. Previous methods for denoising and compressing functional data have several drawbacks. Generic video compression approaches do not take advantage of the special structure of functional imaging data and produce visible artifacts at high compression rates; more importantly, these approaches do not denoise the data, since they focus on compressing the full data, including noise, whereas our goal here is to discard the noise. Conversely, generic image denoising approaches do not offer any compression (and also fail to take advantage of strong structured correlations in the video data). Constrained nonnegative matrix factorization (CNMF) [@pnevmatikakis2016simultaneous] approaches provide state of the art denoising and demixing of calcium imaging data, but these methods can leave significant visible signal behind in the residual (discarding potentially valuable signal) and are highly dependent on the initialization of the matrix factorization; thus it would be dangerous to keep only the matrix factorization output and discard the raw data. 
Principal components analysis (PCA) is often employed as a compression and denoising method [@mukamel2009automated; @pachitariu2016suite2p], but PCA is based on a rather unstructured signal model and therefore provides a suboptimal encoder of functional data (we will discuss this point in further depth below). In addition, the computation time of PCA scales quadratically with the number of pixels (assuming a long video dataset) and therefore naive applications of PCA are rather slow [@friedrich2017multi]. Finally, importantly, it is difficult to automatically choose the number of principal components that should be retained in a given video (and the “correct” number of components can vary widely across different datasets). Here we introduce a new simple approach to denoising and compressing functional video data. We apply a variant of penalized matrix decomposition [@Witten2009pmd] that operates locally in space, and encourages smoothness in both the spatial and temporal dimensions. This method offers multiple advantages over previous approaches. It is based on a signal model that is well-matched to the structure of the data: cells are local in space, there aren’t too many of them compared to the number of pixels (leading to a low-rank signal model), and cellular activity is smoother than the dominant noise sources, which are spatially and temporally uncorrelated. The approach is scalable (scaling linearly in the number of frames and pixels), and has modest memory requirements (because all processing is only performed in local spatial patches). All parameters (including the local matrix rank and the degree of smoothness of the output) are chosen automatically. Empirically we find that the method is highly effective, leaving behind minimal visible structure in the residual, while achieving 20-300x compression rates and 2-4x improvements in SNR. 
We demonstrate the method’s effectiveness on a wide variety of functional imaging datasets (both calcium and voltage imaging; one-, two- and three-photon imaging; and data including somas and dendrites) and show that the method is also effective on wide-field imaging data, where single-cell resolution is not available. Finally, we develop a new constrained NMF approach based on the denoised and compressed representation of the data, and apply this new demixing method to two challenging applications: dendritic calcium imaging data and voltage imaging data in the context of optogenetic stimulation. In both cases, we show that our new approach leads to faster and much more robust extraction of activity from the video data. Methods {#methods .unnumbered} ======= We begin by defining notation. Our starting point is an imaging dataset that has been motion-corrected (i.e., we assume that there is no motion of visible cellular components from frame to frame of the movie) and then “unfolded" into a $d \times T$ matrix $\mathbf{Y}$, where $T$ is the number of frames in the movie and $d$ is the number of pixels per frame (or voxels per frame if we are performing imaging in three dimensions). Now the typical approach is to model the data $\mathbf{Y}$ as $\mathbf{Y} = \mathbf{AC} + \mathbf{B} + \mathbf{E}$, where the columns of $\mathbf{A} \in \mathbb{R}^{d \times K}$ model the locations of each source (with $K$ sources total), the rows of $\mathbf{C} \in \mathbb{R}^{K \times T}$ model the time-varying fluorescence of each source, $\mathbf{B} \in \mathbb{R}^{d \times T}$ is a “background" term to handle signals that can not easily be split into single-neuronal components, and $\mathbf{E} \in \mathbb{R}^{d \times T}$ denotes temporally and spatially uncorrelated noise. It is useful to break the processing pipeline into three sub-problems: 1. **Denoising**: separation of neural signal $\mathbf{Y}^{*} = \mathbf{A}\mathbf{C} + \mathbf{B}$ from noise $\mathbf{E}$; 2. 
**Compression** of signal $\mathbf{Y}^{*}$; 3. **Demixing**: factorization of $\mathbf{Y}^{*}$ into its constituent components $\mathbf{A},\mathbf{C}$, and $\mathbf{B}$. Most prior work has attempted to solve these sub-problems simultaneously, e.g., to recover $\mathbf{A}$ and $\mathbf{C}$ directly from the raw data $\mathbf{Y}$. As emphasized above, this direct approach involves a challenging non-convex optimization problem; the solution to this problem typically misses some structure in $\mathbf{Y}$, is highly sensitive to initialization and hyperparameter settings, and can be particularly unstable in low-SNR regimes. We have found empirically that a sequential approach is more robust and effective. First we compute the compressed and denoised estimate $\hat{\mathbf{Y}} = \mathbf{UV}$; here $\mathbf{U}$ and $\mathbf{V}$ are chosen so that $\hat{\mathbf{Y}}$ captures all of the signal in $\mathbf{Y}$ while retaining minimal noise (i.e., $\hat{\mathbf{Y}} \approx \mathbf{Y}^{*}$) and also $\mathbf{U}$ and $\mathbf{V}$ are highly-structured, compressible matrices, but we do not enforce any constraints between $(\mathbf{U}, \mathbf{V})$ and $(\mathbf{A}, \mathbf{C}, \mathbf{B})$. The computation of $\mathbf{U}$ and $\mathbf{V}$ essentially solves sub-problems 1 and 2 simultaneously. Second, we exploit $\mathbf{U}$, $\mathbf{V}$, and the resulting denoised $\hat{\mathbf{Y}}$ to facilitate the solution of problem 3. We discuss each of these steps in turn below. Denoising & Compression {#denoising-compression .unnumbered} ----------------------- To achieve good compression and denoising we need to take advantage of three key properties of functional imaging data: 1. Signal sources are (mostly) spatially local; 2. Signal is structured both temporally and spatially, whereas noise is temporally and spatially uncorrelated; 3. Signal is (mostly) low-rank. 
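As a toy, purely illustrative instance of this generative model, one can simulate a movie with a few spatially local, temporally smooth sources plus unstructured noise; the signal term $\mathbf{AC}$ is then exactly low-rank (the sizes and kernels below are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, K = 400, 1000, 5                # pixels, frames, sources (toy sizes)

A = np.zeros((d, K))                  # spatially local footprints
for k in range(K):
    A[60 * k: 60 * k + 40, k] = np.hanning(40)

C = np.maximum(rng.standard_normal((K, T)), 0.0)   # nonnegative activity
kern = np.ones(20) / 20.0                          # temporal smoothing kernel
C = np.array([np.convolve(row, kern, mode="same") for row in C])

E = 0.1 * rng.standard_normal((d, T))  # spatially/temporally uncorrelated noise
Y = A @ C + E                          # observed movie (background B omitted)
```

The signal component has rank exactly $K=5$, far below $\min(d, T)$, while the noise is full-rank and unstructured.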
Given these structural assumptions, it is natural to construct $\mathbf{U}$ and $\mathbf{V}$ via a local penalized matrix decomposition approach[^7]: we break the original data matrix $\mathbf{Y}$ into a collection of overlapping spatial patches, then decompose each of these matrix patches (in parallel) using a factorization method that enforces smoothness in the estimated spatial and temporal factors, then combine the resulting collection of spatial and temporal factors over all the patches into a final estimate of $\mathbf{U}$ and $\mathbf{V}$. (See [CaImAn](https://github.com/flatironinstitute/CaImAn) for a similar patch-wise approach to the demixing problem.) We have experimented with several approaches to penalized matrix decomposition (PMD), and found that an iterative rank-one deflation approach similar to the method described in [@Witten2009pmd] works well. We begin by standardizing the data within a patch: for each pixel, we subtract the mean and normalize by an estimate of the noise variance within each pixel; the noise variance is estimated using the frequency-domain method described in [@pnevmatikakis2016simultaneous], which exploits the fact that the signal and noise power are spectrally separated in movies with sufficiently high frame rates. After this normalization we can model the noise $\mathbf{E}$ as roughly spatially and temporally homogeneous. Denote this standardized data matrix within a patch as $\mathbf{Y_0}$, and Frobenius norm as $||.||_F$. 
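A minimal version of this standardization step is sketched below; the noise estimator simply averages the periodogram over the top half of the frequency range, a simplifying stand-in for the estimator of [@pnevmatikakis2016simultaneous] (the band choice and the function names are ours):

```python
import numpy as np

def noise_std(trace):
    """Estimate the noise sigma of one pixel from the high frequencies of its
    periodogram, where slow signal fluctuations contribute little power."""
    x = trace - np.mean(trace)
    p = np.abs(np.fft.rfft(x)) ** 2 / len(x)  # periodogram; E[p] = sigma^2 for white noise
    return np.sqrt(np.mean(p[len(p) // 2:]))  # average over the top half of frequencies

def standardize(Y):
    """Center each pixel and scale by its estimated noise level (Y: pixels x frames)."""
    sig = np.array([noise_std(row) for row in Y])
    return (Y - Y.mean(axis=1, keepdims=True)) / sig[:, None]
```

On a synthetic trace with a slow oscillation plus white noise, the estimator recovers the noise scale while ignoring the (spectrally concentrated) signal.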
Then at the $k^{th}$ iteration PMD extracts the best rank-one approximation $\mathbf{u}_k\mathbf{v}_k^T$ to the current residual $\mathbf{R}_k = \mathbf{Y_0} - \sum_{n=1}^{k-1} \mathbf{u}_n\mathbf{v}_n^T$, as determined by the objective $$(\mathbf{u}_k, \mathbf{v}_k) = \underset{\mathbf{u}, \mathbf{v}}{\arg\min} ~ || \mathbf{R}_k - \mathbf{u} \mathbf{v}^T ||_F \hspace{1em} \text{subject to} \hspace{1em} P_{spatial}(\mathbf{u}) \leq c_{1}^k,\ P_{temporal}(\mathbf{v}) \leq c_{2}^{k}, \label{eqn:CPMD}$$ followed by a temporal debiasing update $\mathbf{v}_k = \mathbf{R}_k^T\mathbf{u}_k$. The objective (\[eqn:CPMD\]) can be optimized via alternating minimization on $\mathbf{u_k}$ and $\mathbf{v_k}$. Note that if we drop the $P_{spatial}(\mathbf{u})$ and $P_{temporal}(\mathbf{v})$ constraints above then we can solve for $\mathbf{u}_k$ and $\mathbf{v}_k$ directly by computing the rank-1 singular value decomposition (SVD) of $\mathbf{R}_k$; in other words, by performing PCA within the patch. Since we have normalized the noise scale within each pixel, PCA should identify the signal subspace within the patch, given enough data (because the normalized projected data variance in any direction will be equal to one plus the signal variance in this direction; since PCA searches for signal directions that maximize variance, PCA will choose exactly the signal subspace in the limit of infinite data). Indeed, as discussed in the results section, simple patch-wise PCA (with an appropriate adaptive method for choosing the rank) often performs well, but incorporating spatial and temporal penalties in the optimization can push $\mathbf{u}_k$ and $\mathbf{v}_k$ closer to the signal subspace, resulting in improved compression and SNR. How should we define the penalties $P_{spatial}(\mathbf{u})$ and $P_{temporal}(\mathbf{v})$, along with the corresponding constraints $c_{1}^k$ and $c_{2}^{k}$?
The simplest option would be to use quadratic smoothing penalties; this would lead to a simple closed-form linear smoothing update for each $\mathbf{u_k}$ and $\mathbf{v_k}$. However, the signals of interest here have inhomogeneous smoothness levels — an apical dendrite might be spatially smooth in the apical direction but highly non-smooth in the orthogonal direction, and similarly a calcium signal might be very smooth except at the times at which a spike occurs. Therefore simple linear smoothing is typically highly suboptimal, often resulting in both undersmoothing and oversmoothing in different signal regions. We have found total variation (TV) [@Rudin:1992:NTV:142273.142312] and trend filtering (TF) [@Kim2009tf] penalties to be much more empirically effective. We let $$\begin{aligned} P_{temporal}(\mathbf{v}) = \| \mathbf{D}^{(2)} \mathbf{v}\|_1 = \sum_{t=2}^{T-1} |\mathbf{v}_{t-1} - 2 \mathbf{v}_{t} + \mathbf{v}_{t+1}| \end{aligned}$$ and $$\begin{aligned} P_{spatial}(\mathbf{u}) = \|\mathbf{\nabla}_{\mathcal{G}}\mathbf{u}\|_1 = \sum_{(i,j) \in \mathcal{E}} | \mathbf{u}_i - \mathbf{u}_j |.\end{aligned}$$ Here $\mathbf{D}^{(2)}$ denotes the one-dimensional discrete second order difference operator and $\mathbf{\nabla}_{\mathcal{G}}$ the incidence matrix of the nearest-neighbor pixel-adjacency graph (pixels $(i,j)$ are in the edge set $\mathcal{E}$ if the pixels are nearest neighbors). 
Similarly to [@pnevmatikakis2016simultaneous], we define the smoothing constraints $c_1^k$ and $c_2^k$ implicitly within the alternating updates by the simple reformulation $$\mathbf{u}_k = \underset{\mathbf{u}}{\arg\min} \| \mathbf{R}_k \mathbf{v}_k - \mathbf{u}\|_2^2\ s.t.\ \| \mathbf{\nabla}_{\mathcal{G}} \mathbf{u}\|_1 \leq c_1^k \iff \mathbf{u}_k = \underset{\mathbf{u}}{\arg\min} \| \mathbf{\nabla}_{\mathcal{G}} \mathbf{u}\|_1\ s.t.\ \| \mathbf{R}_k \mathbf{v}_k - \mathbf{u}\|_2^2 \leq \hat{\sigma}^2_{\tilde{\mathbf{u}}} d \label{eqn:spatial_update}$$ and $$\mathbf{v}_k = \underset{\mathbf{v}}{\arg \min} \| \mathbf{R}_k ^T \mathbf{u}_k - \mathbf{v}\|_2^2 \ s.t.\ \| \mathbf{D}^{(2)} \mathbf{v}\|_1 \leq c_2^k \iff \mathbf{v}_k = \underset{\mathbf{v}}{\arg\min} \| \mathbf{D}^{(2)} \mathbf{v}\|_1 \ s.t.\ \| \mathbf{R}_k ^T \mathbf{u}_k - \mathbf{v}\|_2^2 \leq \hat{\sigma}^2_{\tilde{\mathbf{v}}} T \label{eqn:temporal_update}$$ where $\hat{\sigma}^2_{\tilde{\mathbf{u}}}$ (resp. $\hat{\sigma}^2_{\tilde{\mathbf{v}}}$) estimates the noise level of the unregularized update $\tilde{\mathbf{u}}_k = \mathbf{R}_k \mathbf{v}_k$ (resp. $\tilde{\mathbf{v}}_k = \mathbf{R}_k^T \mathbf{u}_k$), and we are using the fact that if the residual $\mathbf{R}_k \mathbf{v}_k - \mathbf{u}$ contains just noise then its squared norm should be close to $\hat{\sigma}^2_{\tilde{\mathbf{u}}} d$, by the law of large numbers (and similarly for equation \[eqn:temporal\_update\]). See Algorithm \[alg:ROD\] for a summary. To solve the constrained problems on the right-hand side we use the line search approach described in [@Langer2017cps]. We solve the primal form of the TV optimization problem (\[eqn:spatial\_update\]) using the proxTV package [@barberoTV14], and of the TF optimization problem (\[eqn:temporal\_update\]) using the Primal-Dual Active Set method in [@Han2016pdas]. Both of these methods can exploit warm starts, leading to major speedups after a good initial estimate is found. 
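To make the structure of the alternating updates concrete, the following toy sketch performs one rank-one step in which the temporal factor is regularized by a quadratic (ridge) second-difference smoother whose strength is chosen by bisection so that the residual matches the noise level, in the spirit of the implicit constraints above. It is a simplified stand-in, not the TV/TF proximal machinery (proxTV, PDAS) used in the actual pipeline:

```python
import numpy as np

def smooth_to_noise_level(y, sigma, n_bisect=40, lam_hi=1e6):
    """Pick lam by bisection so that v = argmin ||y - v||^2 + lam * ||D2 v||^2
    satisfies ||y - v||^2 ~ sigma^2 * T (discrepancy principle)."""
    T = len(y)
    D = np.diff(np.eye(T), n=2, axis=0)   # second-difference operator D^(2)
    DtD = D.T @ D
    target, lo, hi = sigma ** 2 * T, 0.0, lam_hi
    for _ in range(n_bisect):
        lam = 0.5 * (lo + hi)
        v = np.linalg.solve(np.eye(T) + lam * DtD, y)
        if np.sum((y - v) ** 2) < target:
            lo = lam                      # undersmoothed: residual too small
        else:
            hi = lam
    return v

def rank_one_step(R, sigma=1.0, n_iter=5):
    """One penalized rank-1 update: alternate u <- R v / ||R v|| with
    v <- smooth(R^T u), starting from u proportional to 1 as in the text."""
    u = np.ones(R.shape[0]) / np.sqrt(R.shape[0])
    v = R.T @ u
    for _ in range(n_iter):
        u = R @ v
        u /= np.linalg.norm(u)
        v = smooth_to_noise_level(R.T @ u, sigma)
    return u, v
```

On synthetic rank-one data plus noise, the recovered spatial factor aligns with the ground truth and the smoothed temporal factor tracks the underlying signal.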
Empirically the TF optimization scales linearly with the movie length $T$; since the scale of the TV problem is bounded (because we work in local spatial patches) we have not explored the scaling of the TV problem in depth. Figure \[fig:TrendFiltering\] illustrates the effect of trend filtering on a couple $\mathbf{v}$ components. One important difference compared to previous denoising approaches [@haeffele2014structured; @pnevmatikakis2016simultaneous] is that the TF model is more flexible than the sparse autoregressive model that is typically used to denoise calcium imaging data: the TF model does not require the estimation of any sparsity penalties or autoregressive coefficients, and can handle a mixture of positive and negative fluctuations, while the sparse nonnegative autoregressive model can not (by construction). This is important in this context since each component in $\mathbf{V}$ can include multiple cellular components (potentially with different timescales), mixed with both negative and positive weights. ![Illustration of trend filtering. Each row shows a component $\mathbf{v}$ extracted from the voltage imaging dataset (see Results section for details). Red indicates simple projected signal $\tilde{\mathbf{v}} = \mathbf{R}^T \mathbf{u}$; blue indicates $\mathbf{v}$ after trend filtering. Errorbars on left indicate $2 \times$ estimated noise scale; right panels show zoomed region indicated by dashed lines in left panel.[]{data-label="fig:TrendFiltering"}](./methods/Effect_Of_TF.png){width="17cm"} To complete the description of the algorithm on a single patch we need an initialization and a stopping criterion to adaptively choose the rank of $\mathbf{U}$ and $\mathbf{V}$. For the latter, the basic idea is that we want to stop adding components $k$ as soon as the residual looks like uncorrelated noise. 
To make this precise, we define a pair of spatial and temporal “roughness" test statistics $$\begin{aligned} &T_{temporal}(\mathbf{v}) = \|\mathbf{D}^{(2)} \mathbf{v}\|_1 / \| \mathbf{v} \|_1 &T_{spatial}(\mathbf{u}) = \|\mathbf{\nabla}_{\mathcal{G}}\mathbf{u}\|_1 / \| \mathbf{u} \|_1\end{aligned}$$ and compute these statistics on each extracted $\mathbf{u}_k$ and $\mathbf{v}_k$. We accept or reject each component according to a one-sided hypothesis test under the null hypothesis that $\mathbf{R}_k$ consists of uncorrelated Gaussian noise of variance one. (We compute the critical region for this test numerically.) In the compression stage we aim to be rather conservative (we are willing to accept a bit of extra noise or a slightly higher-rank $\mathbf{U}$ and $\mathbf{V}$ in order to ensure that we are capturing the large majority of the signal), so we terminate the outer loop (i.e., stop adding more components $k$) after we reject a couple of components $k$ in a row. See Algorithm \[alg-full-PMD\] for a summary.

To initialize, we have found that setting $\mathbf{u}_0 \propto \mathbf{1}$ works well. To speed up early iterations, it is natural to iterate the projections while skipping the denoising steps; this corresponds to initializing with an approximate rank-1 SVD as computed by power iterations. Initializing in this manner can reduce the total number of iterations needed for $\mathbf{u}_k, \mathbf{v}_k$ to converge. Matrix-vector multiplications are a rate-limiting step here; thus, these initial iterations can be sped up using spatial and temporal decimation on $\mathbf{R}_k$. Empirically, decimation has the added benefit of boosting signal (by averaging out noise in neighboring timepoints and pixels) and can be useful for extracting weak components in low-SNR regimes; see [@friedrich2017multi] for a related discussion.

The method described so far handles a single spatial patch of data.
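Returning to the stopping rule above: the roughness statistic and its numerically calibrated critical region can be sketched as follows (temporal statistic only; the 5% test level and the Monte Carlo sample size are our choices for illustration).

```python
import numpy as np

def second_diff(T):
    """Second-difference operator D^(2), shape (T-2, T)."""
    D = np.zeros((T - 2, T))
    for i in range(T - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def temporal_roughness(v, D):
    """T_temporal(v) = ||D^(2) v||_1 / ||v||_1; smooth signals score low."""
    return np.sum(np.abs(D @ v)) / np.sum(np.abs(v))

# Numerically calibrate a one-sided critical value under the null
# hypothesis of uncorrelated standard Gaussian noise.
rng = np.random.default_rng(1)
T = 300
D = second_diff(T)
null_stats = [temporal_roughness(rng.standard_normal(T), D)
              for _ in range(2000)]
crit = np.quantile(null_stats, 0.05)  # accept a component if its stat < crit

smooth_component = np.sin(np.linspace(0, 2 * np.pi, T))
print(temporal_roughness(smooth_component, D), crit)
```

A genuinely smooth component scores far below the null critical value, while a pure-noise component does not; the spatial statistic works the same way with the graph gradient $\mathbf{\nabla}_{\mathcal{G}}$ in place of $\mathbf{D}^{(2)}$.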
We can process patches in parallel; a multi-core implementation of this method (assigning different patches to different cores) achieves nearly linear speedups. We have found that for some datasets edge artifacts can appear near patch boundaries if the patches do not overlap spatially. These boundary artifacts can be eliminated by performing a $4 \times$ over-complete block-wise decomposition of $\mathbf{Y}$ using half-offset grids for the partitions (so that each pixel $x$ lies within the interior of at least one patch). Then we combine the overlapping patches together via linear interpolation (see [@pnevmatikakis2017normcorre] for a similar approach): set $$\hat{\mathbf{Y}}(x,t) = \frac{\sum_{p} \mathbf{a}_p(x) \hat{\mathbf{Y}}_p(x,t)} {\sum_p \mathbf{a}_p(x)},$$ where $p$ indexes the patches (so $\hat{\mathbf{Y}}_p$ denotes the denoiser output in the $p$-th patch) and $0 \leq \mathbf{a}_p(x) \leq 1$ is a “pyramid" function composed of piecewise linear functions that start at $0$ at the patch boundaries and increase linearly to $1$ at the center of the patch. The above is equivalent to starting with a collection of overlapping sparse local factorizations $\mathbf{U}_p \mathbf{V}_p$, forming element-wise products between the individual spatial components $\mathbf{U}_{ip}$ and the pyramid functions $\mathbf{a}_p$, and then forming the union of the result to obtain a new factorization $\mathbf{UV}$. Typically this will result in some redundancy due to the overlapping spatial components; we remove this redundancy in a final backwards model selection step that tests whether each temporal component can be explained as a weighted sum of its neighbors. More precisely, we sort the components in ascending order according to the $L_2$ norms of $\mathbf{U}_{ip} \cdot a_p$. 
For each $i$ in this order we then regress $\mathbf{V}_i$ onto the collection of temporal components $\mathbf{V}_j$ whose corresponding spatial components $\mathbf{U}_j$ overlap with $\mathbf{U}_i$, i.e., approximate $ \hat{\mathbf{V}_i} = \sum_j \beta_j \mathbf{V}_j$. We then test the signal strength of the residual $\mathbf{V}_i - \hat{\mathbf{V}_i}$ (using the temporal test statistic defined previously); the component is rejected if the residual is indistinguishable from noise according to this test statistic. If component $i$ is rejected then we distribute its energy to the remaining spatial components according to the regression weights: $\mathbf{U}_{j} = \mathbf{U}_{j} + \beta_{j} \mathbf{U}_{i}$. We conclude with a few final implementation notes. First, the results do not depend strongly on the precise patch size, as long as the patch size is comparable to the spatial correlation scale of the data: if the patches are chosen to be much smaller than this then the $\mathbf{V}$ components in neighboring patches are highly correlated, leading to excessive redundancy and suboptimal compression. (Conversely, if the patch size is too big then the sparsity of $\mathbf{U}$ is reduced, and we lose the benefits of patch-wise processing.) Second, in some datasets (e.g., widefield imaging, or microendoscopic imaging data), large background signals are present across large portions of the field of view. These background signals can be highly correlated across multiple spatial patches, leading to a suboptimal compression of the data if we use the simple independent-patch approach detailed above. Thus in some cases it is preferable to run a couple iterations of PMD(TV, TF) on the full $\mathbf{Y}$ and then subtract the resulting components away before moving on to the independent block processing scheme. 
We have found that this effectively subtracts away dominant background signals; these can then be encoded as a small number of dense columns in the matrix $\mathbf{U}$, to be followed by a larger number of sparse columns (corresponding to the small patches), resulting in an overall improvement in the compression rate. See below for an example. The patch-wise PMD(TV,TF) approach results in an algorithm that scales linearly in three critical parameters: $T$ (due to the sparse nature of the second-difference operator in the TF step), $d$ (due to the patch-wise approach), and the rank of $\mathbf{U}$ and $\mathbf{V}$. We obtain further speedups by exploiting warm starts and parallel processing over patches. Additional speedups can be obtained for very long datasets by computing $\mathbf{U}$ on a subset of the data and then updating $\mathbf{V}$ on the remainder of the movie; the latter step does not require any PMD iterations (since the spatial signal subspace has already been identified) and is therefore very fast, just requiring a single temporal update call per element of $\mathbf{V}$.

Demixing {#demixing .unnumbered}
--------

The methods described above provide a compressed and denoised representation of the original data $\mathbf{Y}$: the output matrices $\mathbf{U}$ and $\mathbf{V}$ are low-rank compared to $\mathbf{Y}$, and $\mathbf{U}$ is additionally highly sparse (since $\mathbf{U}$ is formed by appending spatial components $\mathbf{u}$ from multiple local spatial patches, and each $\mathbf{u}_k$ is zero outside of its corresponding patch). How can we exploit this representation to improve the demixing step? It is useful to first take a step back to consider the strengths and weaknesses of current state-of-the-art demixing methods, most of which are based on NMF.
The NMF model is very natural in calcium imaging applications, since each neuron has a shape that is fixed over the timescale of a typical imaging experiment (and these shapes can be represented as non-negative images, i.e., an element of the $\mathbf{A}$ matrix), and a corresponding time-varying calcium concentration that can be represented as a non-negative vector (an element of $\mathbf{C}$): to form a movie we simply take a product of each of these terms and add them together with noise and background, i.e., form $\mathbf{Y}= \mathbf{AC} + \mathbf{B} + \mathbf{E}$. However, current NMF-based approaches leave room for improvement in several key directions. First, since NMF is a non-convex problem, good initializations are critical to obtain good results via the standard alternating optimization approaches (similar points are made in [@petersen2017scalpel]). Good initialization approaches have been developed for somatic or nuclear calcium imaging, where simple Gaussian shape models are useful crude approximations to the elements of $\mathbf{A}$ [@pnevmatikakis2016simultaneous], but these approaches do not apply to dendritic or axonal imaging. Second (related), it can be hard to separate weak components from noise using current NMF-based approaches. Finally, voltage imaging data does not neatly fit in the NMF framework, since voltage traces typically display both positive and negative fluctuations around the baseline resting potential. To improve the robustness of NMF approaches for demixing functional data, we make use of the growing literature on “guaranteed NMF” approaches — methods for computing a non-negative matrix factorization that are guaranteed to output the “correct” answer under suitable conditions and assumptions [@donoho2004does; @recht2012factoring; @arora2012computing; @li2016recovery]. In practice, these methods work well on clean data of sufficiently small dimensionality, but are not robust to noise and scale poorly to high-dimensional data. 
We can solve both of these issues by “superpixelizing" the denoised version of $\mathbf{Y}$; the resulting NMF initialization method improves significantly on state-of-the-art methods for processing dendritic and axonal data. We also take advantage of the sparse, low-rank structure of $\mathbf{U}$ and $\mathbf{V}$ to speed up the NMF iterations.

### Initialization via pure superpixels {#initialization-via-pure-superpixels .unnumbered}

The first step of the initialization procedure is to identify groups of highly correlated spatially connected pixels – “superpixels." The idea is that a pixel within a neuron should be highly correlated with its neighbors, while a pixel containing mostly noise should have a much lower neighbor correlation. These neighbor correlations, in turn, can be estimated much more accurately from the denoised compared to the raw data. The superpixelization procedure results in a set of non-overlapping groups of pixels which are likely to be contained in good neural components. Then we want to extract “pure” superpixels, i.e., the subset of superpixels dominated by signal from just one neural component. We will use the temporal signals extracted from these pure superpixels to seed $\mathbf{C}$ in the NMF decomposition. To identify superpixels, we begin with the denoised data $\hat{\mathbf{Y}} = \mathbf{UV}$. Since the compression process discussed in the previous section is rather conservative (aiming to preserve the full signal, at the expense of retaining a modest amount of noise), there is room to apply a more aggressive lossy denoiser in the initialization stage to further reduce any remaining noise in $\hat{\mathbf{Y}}$. We soft-threshold signals in each pixel that are not sufficiently large — less than the median plus $\delta \times$ the median absolute deviation (MAD) within each pixel, with $\delta \approx 1$ or $2$. (This thresholding serves to extract mostly spiking activity from functional imaging data.)
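This per-pixel thresholding step (as in Algorithm \[alg1\], $\tilde{\mathbf{Y}}\leftarrow \max(0, \mathbf{R} - \mu_{med} - \delta\cdot\sigma_{med})$) is simple to state in code; a small numpy sketch, where the movie dimensions and the synthetic transient are ours for illustration:

```python
import numpy as np

def soft_threshold_movie(Y, delta=2.0):
    """Keep only per-pixel activity exceeding median + delta * MAD,
    shrinking everything else to zero (Y has one row per pixel)."""
    med = np.median(Y, axis=1, keepdims=True)
    mad = np.median(np.abs(Y - med), axis=1, keepdims=True)
    return np.maximum(Y - (med + delta * mad), 0.0)

rng = np.random.default_rng(3)
d, T = 5, 400
Y = 0.1 * rng.standard_normal((d, T))   # toy noise-only movie...
Y[0, 100:105] += 3.0                    # ...plus one spike-like transient
Z = soft_threshold_movie(Y, delta=2.0)
print(Z[0, 100:105].min(), float(np.mean(Z > 0)))
```

With $\delta$ around 1 or 2 (as in the text), the transient survives while the vast majority of noise-only samples are zeroed out.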
We identify two neighboring pixels to be from the same superpixel if their resulting denoised, soft-thresholded temporal signals have a correlation larger than a threshold $\epsilon$, with $\epsilon \approx 0.9$. Superpixels that contain fewer than $\tau$ pixels are discarded to further reduce noise and the total number of superpixels. We then apply rank-1 NMF on the signals from each superpixel to extract their (thresholded) temporal activities. To extract pure superpixels, we apply the Successive Projection Algorithm (SPA) [@gillis2014fast] to the temporal activities of superpixels. This algorithm removes “mixed” superpixels whose temporal activity can be modeled as a nonnegative linear combination of activity in other superpixels (up to some R-squared level larger than $1-\kappa$, where we use $\kappa \approx 0.2$) and outputs the remaining “pure" superpixels. See Algorithm \[alg1\] for pseudocode. Note that running SPA on superpixels rather than raw pixels improves performance significantly here, since averaging signals within superpixels boosts SNR (making it easier to separate signal from noise and isolate pure from mixed pixels) and also greatly reduces the dimensionality of the non-negative regression problem SPA has to solve at each iteration. (To keep the problem size small we also run SPA just on small local spatial patches, as in the previous section.) Finally, while we have obtained good results with SPA, other approaches are available [@gillis2018fast] and could be worth further exploration in the future. See Figure \[fig:vi\_superpixels\] for a visual summary of the full procedure.

### Local NMF {#local-nmf .unnumbered}

Next we run NMF, using the temporal signals extracted from the “pure” superpixels to initialize $\mathbf{C}$. Given the initial $\mathbf{C}$, the typical next step is to regress onto the data to initialize $\mathbf{A}$.
(Note that pure superpixels typically capture just a subset of pixels within the corresponding neuron, so it is not efficient to initialize $\mathbf{A}$ with the pure superpixels.) However, given the large number of pixels in a typical functional imaging video, direct regression of $\mathbf{C}$ onto $\mathbf{Y}$ is slow and overfits, providing poor estimates of $\mathbf{A}$. This issue is well-understood [@pnevmatikakis2016simultaneous], and several potential solutions have been proposed. For somatic imaging it makes sense to restrict the supports of the components of $\mathbf{A}$ to remain close to their initial values (we could use a dilation of the superpixel support for this). But for data with large dendritic or axonal components this approach would cut off large fractions of these components. Sparse regression updates are an option here, but these do not enforce spatial structure in the resulting $\mathbf{A}$ directly; this often results in “speckle" noise in the estimated spatial components (cf. Figure \[realcompare\] below). We have found the following approach to be more effective. We initialize the support set $\Omega_k$ as the support of the $k$-th “pure” superpixel. Given $\mathbf{C}$, we compute the correlation image for each component $k$ as the correlation between the denoised data $\hat{\mathbf{Y}}$ and the $k$-th temporal component, $\mathbf{C}_k$. We truncate this correlation image below a certain threshold $\epsilon_1$ to zero, then update $\Omega_k$ as the connected component of the truncated correlation image which overlaps spatially with the previous $\Omega_k$.
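The support-update rule just described (threshold the correlation image, then keep the connected component overlapping the previous support) can be sketched in a few lines of numpy plus a breadth-first search; the toy movie and thresholds below are ours for illustration.

```python
import numpy as np
from collections import deque

def correlation_image(Yhat, c):
    """Pixelwise correlation between a (d1, d2, T) movie and trace c."""
    Yc = Yhat - Yhat.mean(axis=2, keepdims=True)
    cc = c - c.mean()
    num = (Yc * cc).sum(axis=2)
    den = np.sqrt((Yc ** 2).sum(axis=2) * (cc ** 2).sum()) + 1e-12
    return num / den

def update_support(corr_img, prev_support, eps1):
    """Connected component of {corr >= eps1} that overlaps prev_support
    (4-neighbor BFS from the overlapping pixels)."""
    mask = corr_img >= eps1
    seeds = list(zip(*np.nonzero(mask & prev_support)))
    seen = np.zeros_like(mask)
    q = deque(seeds)
    for s in seeds:
        seen[s] = True
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1] \
                    and mask[ni, nj] and not seen[ni, nj]:
                seen[ni, nj] = True
                q.append((ni, nj))
    return seen

# toy example: a 6x6 movie whose top-left 3x3 block follows trace c
rng = np.random.default_rng(4)
T = 300
c = np.abs(rng.standard_normal(T))
Yhat = 0.1 * rng.standard_normal((6, 6, T))
Yhat[:3, :3] += c                      # signal region
prev = np.zeros((6, 6), dtype=bool)
prev[0, 0] = True                      # superpixel seed
supp = update_support(correlation_image(Yhat, c), prev, eps1=0.5)
print(supp.astype(int))
```

Starting from a single seed pixel, the support grows to cover the full correlated region while excluding noise pixels, mirroring how dendritic components can extend well beyond their seeding superpixel.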
We use the modified fastHALS algorithm in [@friedrich2017multi] to update $\mathbf{A}$, $\mathbf{C}$, and $\mathbf{B}$ to locally optimize the objective $$\label{objnmf} \min_{\mathbf{A},\mathbf{C},{\textbf{\textit{b}}}} \|\hat{\mathbf{Y}}-\mathbf{AC} -\mathbf{B}\|_{F}^2,\ \mathrm{s.t.}\ \mathbf{A}_k^x = 0 ~ \forall x \not \in \Omega_k , \mathbf{A}\geqslant 0, \mathbf{C}\geqslant 0, \mathbf{B}={\textbf{\textit{b}}}\mathbf{1}^T, {\textbf{\textit{b}}}\geqslant 0.$$ Here we have modeled the background $\mathbf{B}$ as a simple temporally-constant vector; we discuss generalizations to time-varying backgrounds below. Also note that we are approximating $\hat{\mathbf{Y}}$ directly here, not the thresholded version we used to extract the superpixels above. Finally, we incorporate a merge step: we truncate the correlation image below a certain threshold $\epsilon_2$ to zero, and automatically merge neurons if their truncated correlation images are highly overlapped. The full algorithm is shown in Algorithm \[alg2\].

### Further implementation details {#further-implementation-details .unnumbered}

*Multi-pass strategy:* As in [@zhou2018efficient], we find it effective to take a couple passes over the data; particularly in datasets with high neuron density, the first NMF pass might miss some dim neurons. We decrease the MAD threshold $\delta$ and re-run Algorithm \[alg1\] on the residual to find additional components, and then run a final merge and NMF update to complete the pipeline.

*Improvements from denoising and compression:* Compressed data leads to faster NMF updates, since we can replace $\hat{\mathbf{Y}}$ with the factored form $\mathbf{UV}$; in fastHALS, we can regress each ${\textbf{\textit{a}}}_k$ on $\mathbf{U}$ or ${\textbf{\textit{c}}}_{k}$ on $\mathbf{V}$ first instead of directly onto $\mathbf{Y}$. Similarly, when calculating the correlation image, we can compute the correlation between the low-rank $\mathbf{V}$ and ${\textbf{\textit{c}}}_k$ first.
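The speedup from the factored representation is just matrix-multiplication associativity: quantities like $\hat{\mathbf{Y}}\mathbf{C}^T = \mathbf{U}(\mathbf{V}\mathbf{C}^T)$ never require materializing $\hat{\mathbf{Y}}$. A small numpy sketch (the dimensions are ours for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
d, T, r, K = 2000, 1000, 30, 25       # pixels, frames, PMD rank, neurons
U = rng.standard_normal((d, r))       # sparse in practice; dense toy here
V = rng.standard_normal((r, T))
C = rng.standard_normal((K, T))       # temporal components

# Naive: materialize Yhat = U V, then form Yhat C^T  -- O(dTr + dTK) flops
YC_naive = (U @ V) @ C.T
# Factored: associate the other way                  -- O(rTK + drK) flops
YC_fast = U @ (V @ C.T)

print(np.allclose(YC_naive, YC_fast))
```

The same trick gives the fast correlation-image computation: correlate $\mathbf{V}$ with ${\textbf{\textit{c}}}_k$ (an $r$-dimensional problem) before touching the much larger $\mathbf{U}$.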
As emphasized above, denoising also improves the estimation of the correlation images, which in turn improves the estimation of the support sets $\Omega_k$.

*Time-varying background:* It is straightforward to generalize the objective \[objnmf\] to include a time-varying background, using either a low-rank model (as in [@pnevmatikakis2016simultaneous]) or a ring-structured model (as in [@zhou2018efficient]). For the low-rank background model, we have found that performing an SVD on the data excluding the support of the superpixels provides an efficient initialization for the background temporal components.

*Incorporating temporal penalties*: Note that we are only imposing nonnegativity in $\mathbf{C}$ here; after denoising to obtain $\hat{\mathbf{Y}}$, we have found that this simple nonnegative constraint is sufficient for the datasets examined here. However, it is certainly possible to incorporate temporal penalties or constraints on $\mathbf{C}$ (e.g., a TF penalty or a non-negative auto-regressive penalty as in [@pnevmatikakis2016simultaneous]), either within each iteration or as a final denoising step.

*Post-processing*: We find that sorting the extracted components by their “brightness," computed as $\max {\textbf{\textit{a}}}_k\cdot\max{\textbf{\textit{c}}}_k$, serves to separate dim background components from bright single-neuronal components. We also found it useful to drop components whose temporal trace has skewness less than 0.5; traces with high skewness correspond to components with significant spiking activity, while low-skewness traces correspond to noise.

**Algorithm \[alg1\] input:** Motion corrected data $\mathbf{Y}\in \mathbb{R}^{d\times T}$, MAD threshold $\delta$, minimum size of superpixels $\tau$, correlation threshold for superpixels $\epsilon$, $R^2$ threshold in SPA $\kappa$.
**Algorithm \[alg1\] steps:**

- $\sigma({\textbf{\textit{x}}})\leftarrow$ estimated noise for each pixel ${\textbf{\textit{x}}}$ of $\mathbf{Y}$; $\mu({\textbf{\textit{x}}})\leftarrow$ mean for each pixel of $\mathbf{Y}$;
- $\mathbf{Y} \leftarrow \left(\mathbf{Y}-\mu({\textbf{\textit{x}}})\right) / \sigma({\textbf{\textit{x}}})$;
- $(\hat{\mathbf{Y}},\mathbf{U},\mathbf{V}) \leftarrow$ PMD($\mathbf{Y}$);
- $n \leftarrow 0$; $\mathbf{A} \leftarrow [\ ]$, $\mathbf{C}\leftarrow [\ ]$, ${\textbf{\textit{b}}}\leftarrow\mathrm{median}$ for each pixel of $\hat{\mathbf{Y}}$;
- $\mathbf{R} \leftarrow \hat{\mathbf{Y}} -\mathbf{AC} - {\textbf{\textit{b}}}$;
- $\sigma_{med}({\textbf{\textit{x}}})\leftarrow$ median absolute deviation for each pixel of $\mathbf{R}$; $\mu_{med}({\textbf{\textit{x}}})\leftarrow$ median for each pixel of $\mathbf{R}$;
- $\tilde{\mathbf{Y}}\leftarrow \max\left(0, \mathbf{R} - \mu_{med}({\textbf{\textit{x}}}) - \delta\cdot\sigma_{med}({\textbf{\textit{x}}})\right)$;
- $\mathrm{corr}({\textbf{\textit{x}}},{\textbf{\textit{x}}}^*)\leftarrow \mathrm{corr}\left(\tilde{\mathbf{Y}}({\textbf{\textit{x}}},t),\tilde{\mathbf{Y}}({\textbf{\textit{x}}}^*,t)\right)$ for all neighbouring pixel pairs $({\textbf{\textit{x}}},{\textbf{\textit{x}}}^*)$;
- Extract superpixels: connect ${\textbf{\textit{x}}}$ and ${\textbf{\textit{x}}}^*$ together if $\mathrm{corr}({\textbf{\textit{x}}},{\textbf{\textit{x}}}^*)\geqslant \epsilon$ to construct connected components and discard those smaller than $\tau$, forming superpixels $\Omega_k,k=1,\cdots,K$;
- $({\textbf{\textit{a}}}_{k}, {\textbf{\textit{c}}}_{k})\leftarrow \mathrm{rank\ 1\ NMF}$ of $\tilde{\mathbf{Y}}$ on support $\Omega_k , k= 1,\cdots, K$;
- $[i_1,i_2,\cdots,i_S]\leftarrow\mathrm{SPA}([{\textbf{\textit{c}}}_{1},{\textbf{\textit{c}}}_{2},\cdots,{\textbf{\textit{c}}}_{K}], \kappa)$; $i_1,i_2,\cdots,i_S$ are indices of pure superpixels;
- $\mathbf{A}_0\leftarrow[\mathbf{A}, {\textbf{\textit{a}}}_{i_1},{\textbf{\textit{a}}}_{i_2},\cdots,{\textbf{\textit{a}}}_{i_S}]$; $\mathbf{C}_0\leftarrow[\mathbf{C}^T, {\textbf{\textit{c}}}_{i_1},{\textbf{\textit{c}}}_{i_2},\cdots,{\textbf{\textit{c}}}_{i_S}]^T$; ${\textbf{\textit{b}}}_0\leftarrow {\textbf{\textit{b}}}$;
- $(\mathbf{A}, \mathbf{C}, {\textbf{\textit{b}}})\leftarrow\mathrm{LocalNMF}(\mathbf{U}, \mathbf{V}, \mathbf{A}_0, \mathbf{C}_0, {\textbf{\textit{b}}}_0)$;
- $\delta \leftarrow \delta-1$; $n \leftarrow n+1$;
- $\eta(k)\leftarrow$ estimated noise for ${\textbf{\textit{c}}}_k$ using average of high frequency domain of PSD;
- (Optional) Denoise temporal components, e.g. by $\ell_1$ trend filter: ${\textbf{\textit{c}}}_k\leftarrow \min\limits_{\tilde{{\textbf{\textit{c}}}}_k} \|\mathbf{D}^{(2)}\tilde{{\textbf{\textit{c}}}}_k\|_1,\ \mathrm{s.t.}\ \|\tilde{{\textbf{\textit{c}}}}_k-{\textbf{\textit{c}}}_k\|_{F}\leqslant \eta(k)\sqrt{T}, k=1,\cdots,K$;

**Algorithm \[alg1\] output:** $\mathbf{A},\mathbf{C},{\textbf{\textit{b}}}$.

**Algorithm \[alg2\] (LocalNMF) input:** Compressed factors $\mathbf{U} \in \mathbb{R}^{d\times r}, \mathbf{V} \in \mathbb{R}^{T\times r}$ ($r = rank (\hat{\mathbf{Y}})$); initial constant background ${\textbf{\textit{b}}}_0$, spatial components $\mathbf{A}_0=[{\textbf{\textit{a}}}_{1,0},\cdots,{\textbf{\textit{a}}}_{K,0}]\in\mathbb{R}^{d\times K}$, and temporal components $\mathbf{C}_0=[{\textbf{\textit{c}}}_{1,0},\cdots,{\textbf{\textit{c}}}_{K,0}]^T \in\mathbb{R}^{K\times T}$; truncation threshold when updating support $\epsilon_1$, truncation threshold when merging $\epsilon_2$, overlap threshold when merging $\epsilon_3$.
**Algorithm \[alg2\] steps:**

- $\Omega_k \leftarrow \mathrm{supp}({\textbf{\textit{a}}}_{k,0})$ is the spatial support for the $k$-th component, $k=1,\cdots,K$;
- $\hat{\mathbf{A}} \leftarrow \mathbf{A}_0, \hat{\mathbf{C}}\leftarrow \mathbf{C}_0, \hat{{\textbf{\textit{b}}}}\leftarrow{\textbf{\textit{b}}}_0$;
- $\nu({\textbf{\textit{x}}})\leftarrow$ standard deviation for each pixel of $\hat{\mathbf{Y}} = \mathbf{UV}$; $\bar{\mathbf{V}}\leftarrow$ mean for each column of $\mathbf{V}$;
- $\mathbf{P} \leftarrow \left[\mathbf{U},-{\textbf{\textit{b}}}\right]\left( \begin{bmatrix} \mathbf{V}\\ \mathbf{1}^T\\ \end{bmatrix}\hat{\mathbf{C}}^T\right)$; $\mathbf{Q} \leftarrow \hat{\mathbf{C}}\hat{\mathbf{C}}^{T}$;
- Update spatial: $\hat{{\textbf{\textit{a}}}}_{k}(\Omega_k) \leftarrow \max\left(0, \hat{{\textbf{\textit{a}}}}_{k}(\Omega_k) + \frac{\mathbf{P}(\Omega_k,k)-\hat{\mathbf{A}}(\Omega_k)\mathbf{Q}(:,k)}{\mathbf{Q}(k,k)}\right)$;
- Update constant background: $\hat{{\textbf{\textit{b}}}} \leftarrow \max\left(0, \frac{1}{T}(\mathbf{UV}-\hat{\mathbf{A}}\hat{\mathbf{C}})\mathbf{1}\right)$;
- $\mathbf{P} \leftarrow \left[\mathbf{V}^T,\mathbf{1}\right]\left(\left[\mathbf{U},-{\textbf{\textit{b}}}\right]^T\hat{\mathbf{A}}\right)$; $\mathbf{Q} \leftarrow \hat{\mathbf{A}}^{T}\hat{\mathbf{A}}$;
- Update temporal: $\hat{{\textbf{\textit{c}}}}_{k} \leftarrow \max\left(0, \hat{{\textbf{\textit{c}}}}_{k} + \frac{\mathbf{P}(:,k)-\hat{\mathbf{C}}\mathbf{Q}(:,k)}{\mathbf{Q}(k,k)}\right)$;
- $\mathrm{corr}(k,{\textbf{\textit{x}}})\leftarrow \frac{1}{T\cdot\nu({\textbf{\textit{x}}})\cdot\mathrm{sd}({\textbf{\textit{c}}}_k)}\mathbf{U}({\textbf{\textit{x}}},:)\left((\mathbf{V} - \bar{\mathbf{V}})({\textbf{\textit{c}}}_k - \bar{{\textbf{\textit{c}}}}_k)\right)$;
- Update spatial support: $\Omega_k \leftarrow$ biggest connected component in $\{{\textbf{\textit{x}}}|\mathrm{corr}(k,{\textbf{\textit{x}}})\geqslant\epsilon_1\}$ that spatially overlaps with $\{{\textbf{\textit{a}}}_k>0\}$; $\hat{{\textbf{\textit{a}}}}_k(\Omega_k^{c}) \leftarrow 0$;
- $\rho(k,{\textbf{\textit{x}}})\leftarrow\left(\mathrm{corr}(k,{\textbf{\textit{x}}})\geqslant\epsilon_2\right)$;
- Merge overlapping components $k_1,k_2$ if $\sum_{{\textbf{\textit{x}}}} \left(\rho(k_1,{\textbf{\textit{x}}}) * \rho(k_2,{\textbf{\textit{x}}})\right) / \sum_{{\textbf{\textit{x}}}}\rho(k_i,{\textbf{\textit{x}}}) \geqslant \epsilon_3$;
- $(\tilde{{\textbf{\textit{a}}}},\tilde{{\textbf{\textit{c}}}}) \leftarrow$ rank-1 NMF on $[\hat{{\textbf{\textit{a}}}}_{k_1},\cdots,\hat{{\textbf{\textit{a}}}}_{k_r}][\hat{{\textbf{\textit{c}}}}_{k_1},\cdots,\hat{{\textbf{\textit{c}}}}_{k_r}]$ for merged components $k_1,\cdots,k_r$;
- $\hat{\mathbf{A}}\leftarrow \left[\hat{\mathbf{A}}\backslash \{{\textbf{\textit{a}}}_{k_1},\cdots,{\textbf{\textit{a}}}_{k_r}\},\tilde{{\textbf{\textit{a}}}}\right], \hat{\mathbf{C}}\leftarrow \left[\hat{\mathbf{C}}^T\backslash \{{\textbf{\textit{c}}}_{k_1},\cdots,{\textbf{\textit{c}}}_{k_r}\},\tilde{{\textbf{\textit{c}}}}\right]^T;$ update number of components $K$;

**Algorithm \[alg2\] output:** $\hat{\mathbf{A}},\hat{\mathbf{C}},\hat{{\textbf{\textit{b}}}}$.

Results {#results .unnumbered}
=======

Denoising {#denoising .unnumbered}
---------

| **Dataset** | **Frames** | **FOV** | **Patch** | **Method** | **Compression ratio** | **Total runtime (s)** | **SNR metric** |
|--------------|-------:|---------|-------|------------------|------:|------:|------:|
| Endoscopic | 6000 | 256x256 | 16x16 | Patch-wise PMD | 23 | 220.4 | 2.3 |
| | | | 16x16 | Patch-wise PCA\* | X | X | X |
| | | | NA | Standard PCA | 2 | 595.5 | 1.3 |
| Dendritic | 1000 | 192x192 | 16x16 | Patch-wise PMD | 52 | 3.2 | 3.7 |
| | | | 16x16 | Patch-wise PCA | 32 | 1.2 | 2.5 |
| | | | NA | Standard PCA | 2 | 18.3 | 1.1 |
| Three-photon | 3650 | 160x240 | 20x20 | Patch-wise PMD | 94 | 12.4 | 1.8 |
| | | | 20x20 | Patch-wise PCA | 44 | 3.5 | 1.4 |
| | | | NA | Standard PCA | 2 | 187.2 | 1.0 |
| Widefield | 1872 | 512x512 | 32x32 | Patch-wise PMD | 298 | 12.5 | 3.5 |
| | | | 32x32 | Patch-wise PCA | 265 | 10.1 | 3.4 |
| | | | NA | Standard PCA | 10 | 80.1 | 1.6 |
| Voltage | 6834 | 80x800 | 40x40 | Patch-wise PMD | 180 | 30.5 | 2.8 |
| | | | 40x40 | Patch-wise PCA | 213 | 8.7 | 2.7 |
| | | | NA | Standard PCA | 8 | 185.1 | 1.0 |

: Summary of performance for PCA vs. PMD(TV,TF). SNR metric: average ratio of denoised vs raw SNR, with average restricted to top 10% of pixels with highest raw SNR (to avoid division by small numbers when calculating SNR ratios); an SNR metric of 1 indicates no improvement compared to raw data. Compression ratio defined in the main text. \* denotes that the patch-wise PCA method left a significant amount of visible signal in the residual for this dataset, and therefore we did not pursue further comparisons of timing or the other statistics shown here. To obtain optimistic results for the standard PCA baseline, runtimes are reported for a truncated SVD with prior knowledge of the number of components to select for each dataset (i.e., runtimes did not include any model selection steps for standard PCA). Results for patch-wise methods are reported for a single (non-overlapping) tiling of the FOV; note that total runtimes are reported (not runtimes per patch). All experiments were run using an Intel Core i7-6850K 6-core processor.[]{data-label="tab:pro_pro"}

![Illustration of the compression approach applied to microendoscopic imaging data. Top: individual frame extracted from the raw movie $\mathbf{Y}$ (left), denoised movie $\hat{\mathbf{Y}}$ (middle), and residual $\mathbf{Y} - \hat{\mathbf{Y}}$ (right). Bottom: example single-pixel traces from the movie (locations of pixels are circled in the top plots; first trace indicated by the black circle and second trace indicated by the gray circle). Note that the denoiser increases SNR significantly, and minimal signal is left behind in the residual. These results are best viewed in video form; see [microendoscopic imaging video](\VideoEndoscopeURL) for details.[]{data-label="fig:denoised_endoscope_1"}](./pmd_results/Endoscope_PMD.pdf){width="18cm" height="14cm"}

![Further analysis of microendoscopic imaging data.
Top: per-pixel SNR estimated from the raw movie $\mathbf{Y}$ (left), denoised movie $\hat{\mathbf{Y}}$ (middle), and residual $\mathbf{Y} - \hat{\mathbf{Y}}$ (right). Red box indicates zoomed-in region shown in the previous figure. Bottom left panel: ratio of denoised vs. raw SNR; compression boosts SNR by roughly a factor of two here. Bottom middle and right: “correlation images" quantifying the average correlation of the temporal signals in each pixel vs. those in the nearest neighbor pixels [@Smith_2010], computed on raw and residual data, indicating that minimal signal is left behind in the residual. All results here and in the previous figure are based on background-subtracted data, for better visibility. []{data-label="fig:denoised_endoscope_2"}](./pmd_results/Endoscope_PMD.pdf){width="18cm" height="14cm"} ![Example frames and traces from Bessel dendritic imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_1\]. See [Bessel dendritic imaging demixing video](\VideoDemixDendriticURL) for details.[]{data-label="fig:denoised_dendritic_1"}](./pmd_results/Dendritic_PMD.pdf){width="18cm" height="14cm"} ![Summary quantification for denoising of Bessel dendritic imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_2\]. []{data-label="fig:denoised_dendritic_2"}](./pmd_results/Dendritic_PMD.pdf){width="18cm" height="14cm"} ![Example frames and traces from three-photon imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_1\]. See [three-photon imaging video](\VideoThreePURL) for details. []{data-label="fig:denoised_3p_1"}](./pmd_results/3P_PMD.pdf){width="18cm" height="14cm"} ![Summary quantification for denoising of three-photon imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_2\]. []{data-label="fig:denoised_3p_2"}](./pmd_results/3P_PMD.pdf){width="18cm" height="14cm"} ![Example frames and traces from widefield imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_1\]. 
See [widefield imaging video](\VideoWidefieldURL) for details. []{data-label="fig:denoised_widefield_1"}](./pmd_results/Widefield_PMD.pdf){width="18cm" height="14cm"} ![Summary quantification for denoising of widefield imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_2\].[]{data-label="fig:denoised_widefield_2"}](./pmd_results/Widefield_PMD.pdf){width="18cm" height="14cm"} ![Example frames and traces from voltage imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_1\]. See [voltage imaging demixing video](\VideoDemixVoltageURL) for details.[]{data-label="fig:denoised_voltage_1"}](./pmd_results/QState_PMD.pdf){width="18cm" height="14cm"} ![Summary quantification for denoising of voltage imaging data. Conventions as in Figure \[fig:denoised\_endoscope\_2\]. []{data-label="fig:denoised_voltage_2"}](./pmd_results/QState_PMD.pdf){width="18cm" height="14cm"}

We have applied the denoising and compression approach described above to a wide variety of functional imaging datasets (see Appendix for full details):

- **Endoscopic**: one-photon microendoscopic calcium imaging in dorsal striatum of behaving mouse
- **Dendritic**: two-photon Bessel-beam calcium imaging of dendrites in somatosensory cortex of mouse in vivo
- **Three-photon**: three-photon calcium imaging of visual cortex of mouse in vivo
- **Widefield**: one-photon widefield whole-cortex calcium imaging in behaving mouse
- **Voltage**: one-photon in vitro voltage imaging under optogenetic stimulation.

The proposed methods perform well in all cases with no parameter tuning.
We obtain compression ratios (defined as $nnz(\mathbf{Y}) / [nnz(\mathbf{U})+nnz(\mathbf{V})]$, where $nnz(\mathbf{A})$ counts the number of nonzero elements of the matrix $\mathbf{A}$) of 20x-200x, and SNR improvements typically in the range of about 2x but ranging up to 10x, depending on the dataset and the region of interest (we find that SNR improvements are often largest in regions of strongest activity, so SNR improvements vary significantly from pixel to pixel). See Table \[tab:pro\_pro\] and Figures \[fig:denoised\_endoscope\_1\]-\[fig:denoised\_voltage\_2\] for details. In terms of runtime, we observed the expected scaling: the proposed method scales linearly in $T$, $d$, and the number of extracted components. In turn, the number of estimated components scales roughly proportionally to the number of neurons visible in each movie (in the datasets with single-cell resolution). Total runtimes ranged from a few seconds to a few minutes (for the “Endoscope" dataset, which had the largest number of extracted components); these runtimes are fast enough for the proposed method to be useful as a pre-processing step to be run prior to demixing. We also performed comparisons against two simpler baselines: standard PCA run on the full dataset, and “patch-wise PCA" run on the same patches as used by PMD. For patch-wise PCA, we used the same stopping rule for choosing the rank of $\hat{\mathbf{Y}}$ as described above for PMD, but did not apply the TV or TF penalty. We find that using the same rank selection criterion for PCA applied to the full dataset performs relatively poorly: in each of the five datasets examined, this approach left significant visible signal behind in the residual. Thus, to make the comparisons as favorable as possible for standard PCA, we chose the rank manually, to retain as much visible signal as possible while keeping the rank as low as possible. 
Nonetheless, we found that the PMD approach outperformed standard PCA significantly on all three metrics examined here (compression ratio, SNR improvement, and runtime), largely because PCA on the full image outputs dense $\mathbf{U}$ matrices (leading to slower computation and worse noise suppression) whereas the $\mathbf{U}$ matrices output by the patch-wise approaches are highly sparse. The patch-wise PCA approach has much stronger performance than standard PCA applied to the full data. In four out of five datasets (the “Endoscope" dataset was the exception), patch-wise PCA captured all the visible signal in the dataset and did not leave any visible signal behind in the residual. In these four datasets PMD performed comparably to or significantly better than patch-wise PCA in terms of SNR improvement and compression score, but patch-wise PCA was faster. Thus there may be some room to combine these two approaches, e.g., to use PCA as a fast initial method and then PMD to provide further denoising and compression. We leave this direction for future work. Demixing {#demixing-1 .unnumbered} -------- ### Voltage imaging data {#voltage-imaging-data .unnumbered} Next we turn to the problem of demixing. We begin with an analysis of a challenging voltage imaging dataset. Voltage imaging (VI) data presents a few important challenges compared to calcium imaging (CI) data: currently-available VI data typically has much lower SNR and displays much stronger bleaching effects than CI data. The dataset we focus on here has another challenging feature: the preparation was driven with time-varying full-field optogenetic stimulation, resulting in highly correlated subthreshold activity in the visible cells, which are highly overlapping spatially.
In preliminary analyses of this data we applied variants of CNMF-E [@zhou2018efficient] but did not obtain good results (data not shown), due to the strong bleaching and optogenetic stimulation-induced correlations present in this data. Thus we pre-processed this data by applying a spline-based detrending to each pixel (see Appendix for full details). This served to attenuate the highly-correlated bleaching signals and subthreshold fluctuations in the raw data, leaving behind spiking signals (which were not perfectly correlated at the millisecond resolution of the video data here) along with uncorrelated noise as the dominant visible signals in the data. Figure \[fig:vi\_superpixels\] shows that the denoiser (followed by soft-thresholding) significantly improves the separability of neural signals from noise in this data: the superpixels obtained after denoising and soft-thresholding provide excellent seeds for the constrained NMF analysis. Figures \[fig:vi\_demixing\] (and the corresponding video) and \[fig:vi\_components\] demonstrate that the full demixing pipeline achieves good performance, extracting components with high spatial and temporal SNR and leaving relatively little residual signal behind despite the limited SNR and the multiple overlapping signals visible in the original (detrended) data. Note that in the final step we project the estimated spatial components back onto the original data, recovering the (highly correlated) temporal components, including strong bleaching components (panel D of Figure \[fig:vi\_components\]). Finally, we achieved a speedup in the NMF iterations here that was roughly proportional to the ratio of the rank of $\mathbf{Y}$ to the rank of $\mathbf{U}$.
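The per-pixel detrending step described above can be sketched as follows. This is a simplified stand-in, not the paper's exact procedure: we use scipy's generic smoothing spline in place of the robust, knot-constrained B-spline regression described in the Appendix, and the synthetic trace parameters are invented for illustration:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def detrend_pixel(trace, s):
    """Fit a smoothing spline to a single pixel's time course and subtract it,
    attenuating slow bleaching/subthreshold trends while leaving fast
    (spike-like) transients plus noise behind."""
    t = np.arange(trace.size, dtype=float)
    trend_fit = UnivariateSpline(t, trace, s=s)(t)  # s = residual budget
    return trace - trend_fit

# Synthetic single-pixel trace: slow photobleaching decay plus sparse spikes.
rng = np.random.default_rng(1)
n = 1000
t = np.arange(n, dtype=float)
slow = 100.0 * np.exp(-t / 500.0)       # bleaching-like trend
spikes = np.zeros(n)
spikes[[200, 450, 700]] = 10.0          # fast spike-like events
trace = slow + spikes + 0.5 * rng.standard_normal(n)

detrended = detrend_pixel(trace, s=2000.0)  # budget well above the noise power
```

With the residual budget `s` set well above the total noise power, the spline tracks the slow decay but cannot chase single-frame events, so the spike-like transients survive detrending while the bleaching trend is removed.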
### Bessel dendritic imaging data {#bessel-dendritic-imaging-data .unnumbered} (Figure column headings: Proposed pipeline | NMF on $\hat{\mathbf{Y}}$ | NMF on $\mathbf{Y}$; Ground truth | Proposed pipeline | NMF on $\hat{\mathbf{Y}}$ | NMF on $\mathbf{Y}$; Spatial components | Spatial components support | Temporal components.) The VI dataset analyzed in the preceding subsection contained a number of large visible axonal and dendritic components, but also displayed strong somatic components. For our next example we focus on a CI dataset dominated by dendritic components, where the simple Gaussian spatial filter approach introduced in [@pnevmatikakis2016simultaneous] for initializing somatic components is ineffective. (Indeed, in dendritic or axonal imaging datasets, a search for “hotspots" in the images is biased towards pixels summing activity from multiple neurons — and these “non-pure" pixels are exactly those we wish to avoid in the demixing initialization strategy proposed here.) Figure \[realcompare\] illustrates several of the spatial components extracted by our pipeline (again, see the corresponding video for a more detailed illustration of the demixing performance); these components visually appear to be dendritic segments and match well with the signals visible in the data movie. Notably, no parameter tuning was necessary to obtain good demixing performance on both the VI and CI datasets, despite the many differences between these data types. Additionally, as a baseline comparison we applied a simple sparse NMF approach with random initialization (similar to the method described in [@pnevmatikakis2016simultaneous]) to both the denoised and raw data ($\hat{\mathbf{Y}}$ and $\mathbf{Y}$, respectively).
As shown in the right columns of Figure \[realcompare\], this baseline approach extracted components that were much more mixed and noisy than the components extracted by our proposed demixing pipeline; we also found that the baseline approach was more prone to missing weaker, dimmer components than was the proposed pipeline (data not shown). The above analyses depended on qualitative visual examination of the obtained components and demixing video. We also generated simulated data with characteristics closely matched to the raw data, in order to test the demixing performance more quantitatively against a known (albeit simulated) ground truth. To generate simulated data $\mathbf{Y}$, we used the $\mathbf{A}$ and $\mathbf{C}$ estimated from the raw data, and further estimated the conditional distribution of the residual as a function of the denoised data $\mathbf{A} \mathbf{C}$ in the corresponding pixel $x$ and time bin $t$; then we added independent noise samples from this signal-dependent conditional distribution (but with the noise scale multiplied 2x, to make the simulation more challenging) to $\mathbf{AC}$. See the [simulated Bessel dendritic imaging video](\VideoSimulateDendriticURL) for a comparison of real and simulated data. We ran the three demixing pipelines on this simulated data. Typical results of these simulations are shown in Figure \[simcompare\]: again we see that the proposed pipeline captures the ground truth components much more accurately than do the baseline methods, similar to the results shown in Figure \[realcompare\]. Quantitatively, components extracted by the proposed pipeline have higher correlation with the ground truth components than do those extracted by the sparse NMF approaches, as shown in Figure \[corr\_sim\_comp\]. Discussion {#discussion .unnumbered} ========== We have presented new scalable approaches for compressing, denoising, and demixing functional imaging data.
The compression and denoising methods presented are generally applicable and can serve as a useful generic step in any functional video processing pipeline, following motion correction and artifact removal. The new demixing methods proposed here are particularly useful for data with many dendritic and axonal processes, where methods based on simple sparse NMF are less effective. Related work {#related-work .unnumbered} ------------ Other work [@haeffele2014structured; @pnevmatikakis2016simultaneous; @de2018structured] has explored penalized matrix decomposition incorporating sparsity or total variation penalties in related contexts. An important strength of our proposed approach is the focus on highly scalable patch-wise computations (similar to [CaImAn](https://github.com/flatironinstitute/CaImAn)); this leads to fast computations and avoids overfitting by (implicitly) imposing strong sparsity constraints on the spatial matrix $\mathbf{U}$. We also employ a constrained optimization approach using the trend-filtering (TF) penalty, which is more flexible than, e.g., the sparse convolutional temporal penalty used in [@haeffele2014structured], since the constrained TF approach does not require fitting a specific convolutional model or estimating Lagrange multipliers for the sparsity penalty. There are also some interesting connections between the demixing approach proposed in [@petersen2017scalpel] and our approach to initializing NMF, which is based on the successive projection algorithm (SPA). [@fu2015self; @gillis2018fast] discuss the relationships between SPA and group-sparse dictionary selection methods related to the approach used in [@petersen2017scalpel]; thus the methods we use to compute “pure" superpixels and the methods used in [@petersen2017scalpel] to select neural dictionary elements are closely related.
However, our denoise-then-superpixelize approach to seeding the dictionary of neural temporal components is in a sense converse to the clustering approach developed in [@petersen2017scalpel] for seeding the dictionary of neural spatial components. There may be room to fruitfully combine these two approaches in the future. Future work {#future-work .unnumbered} ----------- Real-time online updates for $\mathbf{U}$ and $\mathbf{V}$ should be possible, which would enable the incorporation of the compression and denoising approach into the online pipeline of [@giovannucci2017onacid] for improved online demixing of neural activity. We are also continuing to explore alternative methods for spatial and temporal denoising of $\mathbf{u}_k$ and $\mathbf{v}_k$, e.g., artificial neural network denoisers. In the near future we plan to incorporate our code into the [CaImAn](https://github.com/flatironinstitute/CaImAn) and [CNMF-E](https://github.com/zhoupc/CNMF_E) packages for calcium imaging analysis. We hope that the proposed compression methods will help facilitate more widespread and routine public sharing of these valuable datasets and lead to more open and reproducible neuroscience. Code {#code .unnumbered} ---- Open source code is available at [https://github.com/paninski-lab/funimag](https://github.com/paninski-lab/funimag). Video captions {#video-captions .unnumbered} -------------- 1. \ (left) Raw movie $\mathbf{Y}$; (middle) background $\mathbf{Y}_{BG}$ estimated via rank-5 PMD; (right) estimated foreground $\mathbf{Y} - \mathbf{Y}_{BG}$. Ticks along the horizontal and vertical axis (in this video and in the videos below) indicate patch borders; note that no edge artifacts are visible at these borders. 2. \ (left) Foreground; (middle) denoised foreground $\hat{\mathbf{Y}}$; (right) residual $\mathbf{Y} - \hat{\mathbf{Y}}$. 3. \ (left) Raw movie $\mathbf{Y}$; (middle) denoised movie $\hat{\mathbf{Y}}$; (right) residual $\mathbf{Y} - \hat{\mathbf{Y}}$. 4. \ Same format as previous video. 5.
\ Panels from top to bottom: (1) detrended movie $\mathbf{Y}$; (2) denoised movie $\hat{\mathbf{Y}}$; (3) MAD soft-thresholded movie; (4) rank-1 NMF approximation within superpixels; (5) superpixels; (6) pure superpixels. 6. \ Panels from top to bottom: (1) detrended movie $\mathbf{Y}$; (2) denoised movie $\hat{\mathbf{Y}}$; (3) estimated signal $\mathbf{AC}$; (4) background $\mathbf{B}$; (5) residual $\hat{\mathbf{Y}} - \mathbf{AC} - \mathbf{B}$; (6) estimated noise $\mathbf{Y} - \hat{\mathbf{Y}}$. 7. \ Top: (left) motion corrected movie $\mathbf{Y}$; (middle) denoised movie $\hat{\mathbf{Y}}$; (right) estimated signal $\mathbf{AC}$; Bottom: (left) background $\mathbf{B}$, (middle) residual $\hat{\mathbf{Y}} - \mathbf{AC} - \mathbf{B}$, and (right) estimated noise $\mathbf{Y} - \hat{\mathbf{Y}}$. 8. \ Top: (left) Motion corrected real movie; (right) simulated movie. Bottom: (left) estimated noise from real movie; (right) simulated noise. Acknowledgments {#acknowledgments .unnumbered} --------------- We thank Shay Neufeld and Bernardo Sabatini for generously sharing their micro-endoscopic data with us, and Andrea Giovannucci, Eftychios Pnevmatikakis, Ziqiang Wei, Darcy Peterka, Jack Bowler, and Uygar Sumbul for helpful conversations. We also thank our colleagues in the International Brain Laboratory for motivating our efforts towards compressing functional imaging data. This work was funded by Army Research Office W911NF-12-1-0594 (MURI; EH and LP), the Simons Foundation Collaboration on the Global Brain (LP), National Institutes of Health R01EB22913 (LP), R21EY027592 (LP), 1U01NS103489-01 (NJ and LP), R01NS063226 (EH), R01NS076628 (EH), RF1MH114276 (EH), and U19NS104649-01 (EH and LP); in addition, this work was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/ Interior Business Center (DoI/IBC) contract number D16PC00003 (LP).
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Author contributions {#author-contributions .unnumbered} -------------------- EKB and LP conceived the project. EKB led development of the local PCA compression and denoising approach, including the 4x overcomplete approach for avoiding block artifacts. IK led development of the PMD(TF,TV) approach. DZ led development of the superpixelization and local NMF demixing approach. RZ developed a preliminary version of the PMD approach. PZ contributed to the development of the demixing approach. FG, JF, and GD contributed the voltage imaging dataset. JR, PF, TM, and AT contributed the three-photon imaging dataset. YL, RL, and NJ contributed the Bessel dendritic dataset. YM, SK, MS, and EH contributed the widefield dataset. EKB, IK, DZ, and LP wrote the paper, with input from PZ. LP supervised the project. Appendix: dataset details {#appendix-dataset-details .unnumbered} ========================= Microendoscopic imaging data {#microendoscopic-imaging-data .unnumbered} ---------------------------- This dataset was analyzed previously in [@zhou2018efficient]; see the “Dorsal Striatum Data" subsection of the Methods section of that paper for full experimental details. Briefly, a 1 mm gradient index of refraction (GRIN) lens was implanted into dorsal striatum of a mouse expressing AAV1-Syn-GCaMP6f; imaging was performed using a miniature one-photon microscope with an integrated 475 nm LED (Inscopix) while the mouse was freely moving in an open-field arena. Images were acquired at 30 Hz and then downsampled to 10 Hz. Bessel dendritic imaging data {#bessel-dendritic-imaging-data-1 .unnumbered} ----------------------------- All surgical procedures were in accordance with protocols approved by the Howard Hughes Medical Institute Janelia Research Campus Institutional Animal Care and Use Committee.
C57BL/6J mice over 8 weeks old at the time of surgery were anesthetized with isoflurane (1–2%). A craniotomy over nearly the entire left dorsal cortex (from Bregma +3 mm to Bregma -4.0 mm) was performed with the dura left intact, following the procedure described in detail previously in [@sofroniew2016large]. AAV2/9-synapsin-flex-GCaMP6s (2.5$\times 10^{13}$ GC/ml) was mixed with AAV2/1-synapsin-Cre (1.5$\times 10^{13}$ GC/ml, 1000$\times$ dilution with PBS) at 1:1 to make the working viral solution for intracerebral injections. 30 nl of viral solution was slowly injected into exposed cortex at 0.5 mm below the dura. Injection sites were evenly spaced (at 0.7-0.9 mm separation) along two lines at 2.3 mm and 3.3 mm parallel to the midline. A custom-made glass coverslip (450 $\mu$m thick) was embedded in the craniotomy and sealed in place with dental acrylic. A titanium head bar was attached to the skull surrounding the coverslip. After recovery from surgery, the mice were habituated to head fixation. Four weeks after surgery, the head-fixed mouse was placed on a floating ball in the dark. Spontaneous neural activity, as indicated by GCaMP6s fluorescence, was recorded in the somatosensory cortex. Volumetric imaging of dendrites was achieved by scanning an axially extended Bessel focus, as described in [@lu201850] and [@lu2017video]. An axicon-based Bessel beam module was incorporated into a two-photon random access mesoscope (2p-RAM) in [@lu201850]. Details of the 2p-RAM have been described previously in [@sofroniew2016large]. Briefly, the system was equipped with a 12 kHz resonant scanner (24 kHz line rate) and a remote focusing unit that enabled fast axial movements of the focal plane. The system has an excitation numerical aperture (NA) of 0.6 and a collection NA of 1.0. The measured lateral full width at half maximum (FWHM) of the Gaussian focus at the center of the field of view was approximately 0.65 $\mu$m.
The lateral and axial FWHMs of the Bessel focus were 0.60 $\mu$m and 71 $\mu$m, respectively. Scanning the Bessel focus in two dimensions therefore probed brain volumes within an approximately 100 $\mu$m axial range. The volumetric dendritic data presented in this paper were obtained by placing the center of the Bessel focus at 62 $\mu$m below the dura to probe structures from 12 $\mu$m to 112 $\mu$m below the dura (Figure \[gaussian\_bessel\]). Dendrites within this volume were imaged at an effective volume rate of 3.7 Hz, with each image having 1924$\times$2104 pixels at 0.33 $\mu$m/pixel in the x-y plane. The wavelength of the excitation light was 970 nm and the post-objective excitation power was 120 mW. Images were spatially decimated and cropped for the analyses shown here. ![In vivo volumetric imaging of dendrites in the mouse brain. (a) Maximum intensity projection of a 3D volume (635 $\mu$m x 694 $\mu$m x 100 $\mu$m) of dendrites. The sampling size was 0.33 $\mu$m/pixel. Post-objective power: 24 mW. (b) Image of the same volume collected by scanning a Bessel focus with 0.60 $\mu$m lateral FWHM and 71 $\mu$m axial FWHM. The effective volume rate was 3.7 Hz. Post-objective power: 120 mW. Excitation wavelength: 970 nm. Scale bar: 100 $\mu$m. []{data-label="gaussian_bessel"}](./plots/experiment/gaussian_bessel.png){width="100.00000%"} Three-photon imaging data {#three-photon-imaging-data .unnumbered} ------------------------- All procedures were carried out in accordance with the ethical guidelines of the National Institutes of Health and were approved by the Institutional Animal Care and Use Committee (IACUC) of Baylor College of Medicine. Cranial window surgeries over visual cortex were performed as described previously [@reimer2014pupil]. Briefly, a 4 mm cranial window was opened under isoflurane anesthesia and sealed with a 4 mm glass coverslip and surgical glue. The dura was removed before applying the coverslip to increase optical access to the cortex.
Imaging was performed in a triple-transgenic mouse (Slc17a7-Cre x Dlx5-CreER x Ai148) expressing GCaMP6f pan-neuronally throughout cortex. Three-photon imaging data were collected as described previously [@ouzounov2017vivo]. Three-photon excitation of GCaMP6 was at 1320 nm, which also enabled visualization of unlabeled vasculature and white matter via third harmonic generation (THG). Power was calibrated prior to each day of scanning and carefully maintained below 1.5 nJ at the focal plane. For this study, scans were collected at 680 microns and 710 microns below the cortical surface with a 540 x 540 micron field of view at 0.59 pixels/micron spatial resolution and a frame rate of 5 Hz. Imaging was performed at the border of V1 and LM during presentation of oriented noise stimuli. Widefield imaging data {#widefield-imaging-data .unnumbered} ---------------------- See [@ma2016resting; @ma2016wide] for full details. Voltage imaging data {#voltage-imaging-data-1 .unnumbered} -------------------- Q-State’s proprietary Optopatch all-optical electrophysiology platform was used to obtain fluorescence recordings from induced pluripotent stem (iPS) cell-derived NGN2 excitatory neurons from a cohort of human subjects [@werley2017all]. Stimulation of action potentials was achieved with a blue-light-activated channelrhodopsin (CheRiff). Fluorescent readout of voltage was enabled by an Archaerhodopsin variant (QuasAr). NGN2 neurons were produced at Q-State using a transcriptional programming approach. Recordings were performed with an ultra-widefield instrument at a resolution of 800x80 pixels (corresponding field of view of 2 mm$^2$) and a frame rate of 987 Hz. The obtained data displayed breaks during stimulus resets and photobleaching.
To remove these effects from the raw data, we removed frames during stimulus resets, extracted slow trends with a robust B-spline regression (with knots chosen to allow for non-differentiability at stimulus change-points and discontinuity at stimulus resets), and then fit a quadratic regression against frames with no stimuli to capture and remove photobleaching effects. [^1]: Equal contribution, arranged alphabetically; ekb2154, iak2119, dz2336@columbia.edu [^2]: Departments of Statistics and Neuroscience, Grossman Center for the Statistics of Mind, Center for Theoretical Neuroscience, and Zuckerman Mind Brain Behavior Institute, Columbia University [^3]: Q-State Biosciences, Inc., Cambridge, MA [^4]: Department of Biomedical Engineering and Zuckerman Mind Brain Behavior Institute, Columbia University [^5]: Departments of Physics and Molecular and Cell Biology, UC Berkeley [^6]: Department of Neuroscience and Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine [^7]: One important note: many matrix factorizations are possible here to obtain a compressed representation $(\mathbf{U},\mathbf{V})$. This non-uniqueness does not pose an issue for either compression or denoising. This makes these problems inherently easier than the demixing problem, where the identifiability of $\mathbf{A}$, $\mathbf{C}$, and $\mathbf{B}$ (perhaps up to permutations of the rows and columns of $\mathbf{A}$ and $\mathbf{C}$) is critical.
[**Instability of the Randall-Sundrum Model and Exact Bulk Solutions**]{} [**Hongya Liu and Guowen Peng**]{} [**Abstract**]{} The five-dimensional geodesic equation is used to study the gravitational force acting on a test particle in the bulk of the Randall-Sundrum two-brane model. This force can be interpreted as the gravitational attraction from matter on the two branes and may cause the model to be unstable. By analogy with star models in astrophysics, a fluid RS model is proposed in which the bulk is filled with a fluid whose anisotropic pressure balances the gravity from the two branes. A class of exact bulk solutions is thus obtained, showing that any 4D Einstein solution with a perfect-fluid source can be embedded in $y=$ constant hypersurfaces in the bulk to form an equilibrium state of the brane model. By requiring a 4D effective curvature to have a minimum, the compactification size of the extra dimension is discussed. Keywords: higher dimensions, brane models. INTRODUCTION ============ There is a strong interest in the possibility that our universe is a 3-brane embedded in a higher-dimensional space. It has been proposed that the large hierarchy between the weak scale and the fundamental scale of gravity can be eliminated if the extra compact dimensions are large[@ADD]. An alternative solution to the hierarchy problem, proposed by Randall and Sundrum (RS), assumes that our universe is a negative-tension brane separated from a positive-tension brane by a five-dimensional anti-de Sitter ($AdS_5$) bulk space[@RS1]. This does not require a large extra dimension: the hierarchy problem is solved by the special properties of the AdS space. A scenario similar to the RS one is that of Horava and Witten[@HW], which arises within the context of M-theory.
The RS two-brane solution satisfies the 5D Einstein equations $$R_{AB}-\frac 12g_{AB}R=-\frac 1{4M^3}\left\{ \Lambda g_{AB}+\left[ \lambda _{vis}\delta (y-y_c)+\lambda _{hid}\delta (y)\right] g_{\mu \nu }\delta _A^\mu \delta _B^\nu \right\} \; \label{RSeq}$$ with the non-factorizable 5D metric $$ds^2=W^2(y)\widetilde{\eta }_{\alpha \beta }dx^\alpha dx^\beta +dy^2\;. \label{RSmetr}$$ Here and in the following we use signature ($-++++$), upper-case Latin letters to denote 5D indices ($0,1,2,3,5$), and lower-case Greek letters to denote 4D indices ($0,1,2,3$). In (\[RSeq\]) and (\[RSmetr\]), the “warp” factor $W(y)$ is $$W(y)=e^{-k\left| y\right| }\;, \label{W(y)}$$ and $\lambda _{vis}$, $\lambda _{hid}$ and $\Lambda $ are $$\lambda _{hid}=-\lambda _{vis}=24M^3k,\;\;\Lambda =-24M^3k^2\;. \label{lambda}$$ In this solution, the fifth dimension has the $Z_2$ reflection symmetry $(x,y)\rightarrow (x,-y)$ with $-y_c\leq y\leq y_c$. The hidden brane and the visible brane are located at $y=0$ and $y=y_c$, respectively. The instability of the RS model has been studied extensively \[4, 5\]. In this paper, we wish to approach this subject from a different perspective. The paper is arranged as follows. In section 2 we use the 5D geodesic equations to study the instability of the model. In section 3 we introduce a 5D anisotropic fluid in the bulk and derive a hydrostatic equilibrium equation of the bulk fluid along the $y$-direction. In section 4 we look for exact solutions of the 5D Einstein equations. In section 5 we discuss the embedding of several well-known 4D exact solutions. In section 6 we study the compactification size of the fifth dimension. GRAVITATIONAL FORCE IN THE BULK =============================== In this section we study the gravitational interaction between matter on the two branes of the RS model.
It is known from the brane-world scenario that Standard Model (SM) particles are confined to branes while gravitons can propagate freely in the bulk. Now let us consider a test particle in the bulk. It is reasonable to expect that the motion of a bulk test particle, which is acted on by the gravitational force only, is described by the 5D geodesic equation[@Youm]: $$\frac{d^2x^A}{d\tau ^2}+\Gamma _{BC}^A\frac{dx^B}{d\tau }\frac{dx^C}{d\tau }=0\;, \label{geoEq}$$ where $\Gamma _{BC}^A$ is the Christoffel symbol for the 5D metric $g_{AB}$ and $d\tau ^2=-ds^2=-g_{AB}dx^Adx^B$. It is known that the 5D geodesic equations (\[geoEq\]) may yield extra 4D forces[@Youm][@MWL]. In this paper, we are not going to study this kind of extra force; we only wish to study the particle’s motion along the fifth direction. From (\[geoEq\]), the 5D gravitational force can be defined as $$F^A=-\Gamma _{BC}^A\frac{dx^B}{d\tau }\frac{dx^C}{d\tau }\;. \label{FA}$$ Using (\[RSmetr\]) and (\[W(y)\]) we find that the fifth component of $F^A$ is $$\frac{d^2y}{d\tau ^2}=F^5=-\Gamma _{BC}^5\frac{dx^B}{d\tau }\frac{dx^C}{d\tau }=\varepsilon k\left[ 1+\left( \frac{dy}{d\tau }\right) ^2\right] \;, \label{A=5a}$$ where $$\varepsilon =\left\{ \begin{array}{l} 1\quad \;\,for\quad y>0 \\ -1\quad for\quad y<0 \end{array} \right. \;. \label{A=5b}$$ So we find $$\begin{aligned} F^5 &>&0\quad for\quad y>0\;, \nonumber \\ F^5 &<&0\quad for\quad y<0\;. \label{A=5c}\end{aligned}$$ This result shows that the force $F^5$ acting on a bulk test particle points from the hidden brane at $y=0$ toward the visible brane on both the $y>0$ and $y<0$ sides. So a bulk test particle will eventually move to the visible brane at $y=y_c$. This may cause the RS two-brane model to be unstable. Firstly, we note that $y=0$ is an unstable equilibrium position while $y=y_c$ is a stable one.
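The sign of the force in Eq. (\[A=5a\]) is easy to check symbolically on the $y>0$ branch, where $W=e^{-ky}$. The following sketch (our own independent verification using sympy, not part of the original derivation) recomputes $F^5=-\Gamma _{BC}^5\,u^Bu^C$ for the metric (\[RSmetr\]) and confirms that $F^5=-kW^2\widetilde{\eta }_{\mu \nu }u^\mu u^\nu $, which the normalization $g_{AB}u^Au^B=-1$ converts into $F^5=k[1+(dy/d\tau )^2]>0$:

```python
import sympy as sp

# Coordinates (t, x1, x2, x3, y), warp factor, and 5-velocity components
t, x1, x2, x3, y = sp.symbols('t x1 x2 x3 y')
k = sp.symbols('k', positive=True)
u0, u1, u2, u3, ydot = sp.symbols('u0 u1 u2 u3 ydot')

coords = (t, x1, x2, x3, y)
W = sp.exp(-k * y)                       # warp factor on the y > 0 branch
g = sp.diag(-W**2, W**2, W**2, W**2, 1)  # signature (-++++): W^2 eta + dy^2
ginv = g.inv()

def Gamma(A, B, C):
    # Christoffel symbols of the second kind for the metric g
    return sp.Rational(1, 2) * sum(
        ginv[A, D] * (sp.diff(g[D, C], coords[B])
                      + sp.diff(g[B, D], coords[C])
                      - sp.diff(g[B, C], coords[D]))
        for D in range(5))

U = [u0, u1, u2, u3, ydot]               # 5-velocity, with u^5 = dy/dtau
F5 = -sum(Gamma(4, B, C) * U[B] * U[C] for B in range(5) for C in range(5))

# F^5 = -k W^2 eta_munu u^mu u^nu; the normalization g_AB u^A u^B = -1 gives
# W^2 eta_munu u^mu u^nu = -(1 + ydot^2), hence F^5 = k (1 + ydot^2) > 0.
eta_uu = -u0**2 + u1**2 + u2**2 + u3**2
assert sp.simplify(F5 + k * W**2 * eta_uu) == 0
```

By the $Z_2$ symmetry $y\rightarrow -y$, the same computation on the $y<0$ branch flips the sign, reproducing $\varepsilon $ in Eq. (\[A=5b\]).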
Secondly, it has been argued that in sufficiently hard collisions SM particles can acquire momentum in the extra dimensions and escape from the branes[@ADD]. As soon as an SM particle is kicked off the hidden brane at $y=0$ into the bulk, it will be pulled by $F^5$ down to the visible brane at $y=y_c$. In this way, the distribution of matter on the two branes cannot remain balanced. So we say that the hidden brane is unstable. In the single-brane RS model[@RS2], the $y=y_c$ brane approaches the AdS horizon. We find that the above discussion and conclusion remain valid there as well. It has been noted[@CHR] that if the Minkowski metric $\widetilde{\eta }_{\alpha \beta }$ in the RS solution (\[RSmetr\]) is replaced by [*any*]{} Ricci-flat metric $\widetilde{g}_{\alpha \beta }$ then the Einstein equations (\[RSeq\]) are still satisfied[@BP]. This enables one to study [*any*]{} 4D vacuum Einstein solution, such as the Schwarzschild one, in the RS scenario. A very interesting work of this kind is discussed in Ref.[@CHR], where the 5D Schwarzschild solution is called the brane-world black hole, black string, or black cigar. Here we find that even in the Ricci-flat case the three equations (\[A=5a\])-(\[A=5c\]) still hold. Therefore the conclusion is the same: the $y=0$ brane is also unstable for Ricci-flat metrics $\widetilde{g}_{\alpha \beta }$. EQUILIBRIUM EQUATION OF THE BULK FLUID ====================================== To resolve the instability problem, we follow others[@BDL][@Kanti4] and introduce a 5D fluid in the bulk to balance the attraction between the two branes and to form a hydrodynamical model. This generalizes the Ricci-flat metric $\widetilde{g}_{\alpha \beta }$ once more, to non-Ricci-flat 4D metrics, for which we let $$ds^{2}=W^{2}(y)\widetilde{g}_{\alpha \beta }(x^{\mu })dx^{\alpha }dx^{\beta }+dy^{2}\;, \label{metr}$$ where $\widetilde{g}_{\alpha \beta }$ is the induced 4D metric.
For the bulk matter, we use an anisotropic 5D fluid model and require that the bulk fluid does not flow along the $y$-direction[@Kanti4], i.e., $u^{5}\equiv dy/d\tau =0$. That is, we let $$\begin{aligned} T^{AB} &=&\left( \begin{array}{ll} T^{\alpha \beta } & 0 \\ 0 & P \end{array} \right) \;, \nonumber \\ T^{\alpha \beta } &=&(\rho +p)u^{\alpha }u^{\beta }+pg^{\alpha \beta }\;, \label{T^AB}\end{aligned}$$ where $T^{\alpha \beta }$ is of the 4D perfect-fluid form with $u^{\alpha }\equiv dx^{\alpha }/d\tau $. Then by using the fifth component of the conservation equations $T^{AB}{}_{;B}=0$ (which follow from the 5D Bianchi identities) we obtain the condition $$P^{\prime }=\frac{W^{\prime }}{W}(3p-\rho -4P)\;, \label{P'}$$ where we have used the relation $g_{\alpha \beta }T^{\alpha \beta }=3p-\rho $ (since $u^{5}=0$) and a prime stands for the partial derivative with respect to $y$. This condition (\[P'\]) is a constraint upon the bulk fluid similar to that of star models in astrophysics. Accordingly, we call (\[P'\]) the hydrostatic equilibrium equation of the bulk fluid along the $y$-direction. EXACT 5D BULK SOLUTIONS ======================= Now the 5D Einstein equations read $$R_{AB}-\frac 12g_{AB}R=\frac 1{4M^3}\left\{ T_{AB}-\Lambda g_{AB}-\left[ \lambda _{vis}\delta (y-y_c)+\lambda _{hid}\delta (y)\right] g_{\mu \nu }\delta _A^\mu \delta _B^\nu \right\} , \label{5Deq}$$ where $T_{AB}$ takes the form (\[T\^AB\]). To solve equations (\[5Deq\]), we first use the metric (\[metr\]) to reduce $R_{AB}-\frac 12g_{AB}R$ to $$\begin{aligned} R_{\alpha \beta }-\frac 12g_{\alpha \beta }R &=&\widetilde{R}_{\alpha \beta }-\frac 12\widetilde{g}_{\alpha \beta }\widetilde{R}+3\left( WW^{\prime \prime }+W^{\prime 2}\right) \widetilde{g}_{\alpha \beta }\;, \nonumber \\ R_{\alpha 5} &=&0\;, \nonumber \\ R_{55}-\frac 12g_{55}R &=&-\frac 12W^{-2}\widetilde{R}+6W^{-2}W^{\prime 2}\,\;, \label{G^AB}\end{aligned}$$ where $\widetilde{R}_{\alpha \beta }$ and $\widetilde{R}$ are made from $\widetilde{g}_{\alpha \beta }$.
Then, by substituting (\[G\^AB\]) into (\[5Deq\]), we obtain $$\widetilde{R}_{\alpha \beta }-\frac 12\widetilde{g}_{\alpha \beta }\widetilde{R}+3\left( WW^{\prime \prime }+W^{\prime 2}\right) \widetilde{g}_{\alpha \beta }=\frac 1{4M^3}\left\{ T_{\alpha \beta }-\left[ \Lambda +\lambda _{vis}\delta (y-y_c)+\lambda _{hid}\delta (y)\right] g_{\alpha \beta }\right\} ,\ \label{G^AB1}$$ $$-\frac 12\widetilde{R}+6W^{\prime 2}=\frac 1{4M^3}(P-\Lambda )W^2\;. \label{G^AB2}$$ Now we wish to know what kind of exact solutions of Eqs. (\[G\^AB1\]) and (\[G\^AB2\]) could fit into the RS two-brane model without changing the boundary conditions (\[W(y)\]) and (\[lambda\]). So we suppose that the “warp” factor $W(y)$ and the cosmological constants $\Lambda $, $\lambda _{vis}$, and $\lambda _{hid}$ take the same forms as they do in the RS solution (\[W(y)\]) and (\[lambda\]). Then Eqs. (\[G\^AB1\]) and (\[G\^AB2\]) reduce to $$\widetilde{R}_{\alpha \beta }-\frac 12\widetilde{g}_{\alpha \beta }\widetilde{R}=\frac 1{4M^3}\left[ (\rho +p)u_\alpha u_\beta +pg_{\alpha \beta }\right] \;, \label{4eq1}$$ $$\widetilde{R}=-\frac 1{2M^3}PW^2\;, \label{4eq2}$$ where we have used (\[T\^AB\]). We see that the left-hand sides of these two equations are functions of the 4D coordinates $x^\mu $ only. So we wish to arrange for the right-hand sides of the two equations to depend on $x^\mu $ only as well. To do this, let us define the induced 4D velocity by $\widetilde{u}^\alpha \equiv dx^\alpha /d\widetilde{\tau }$, where $d\widetilde{\tau }^2=-\widetilde{g}_{\alpha \beta }dx^\alpha dx^\beta $. So $u^\alpha \equiv dx^\alpha /d\tau =(d\widetilde{\tau }/d\tau )\widetilde{u}^\alpha $. Since $u^5=0$ we have $$u^\alpha =W^{-1}\widetilde{u}^\alpha \,,\qquad u_\alpha =W\widetilde{u}_\alpha \;.
\label{4vel}$$ Substituting this into (\[4eq1\]) gives $$\widetilde{R}_{\alpha \beta }-\frac 12\widetilde{g}_{\alpha \beta }% \widetilde{R}=\frac 1{4M^3}W^2\left[ (\rho +p)\widetilde{u}_\alpha \widetilde{u}_\beta +p\widetilde{g}_{\alpha \beta }\right] \;.$$ Note that $\widetilde{u}_\alpha $ and $\widetilde{g}_{\alpha \beta }$ depend on $x^\mu $ only. So from this equation and equation (\[4eq2\]) we obtain $$\rho =bW^{-2}\widetilde{\rho }\,,\quad \;p=bW^{-2}\widetilde{p}\,,\quad P=bW^{-2}\widetilde{P}\;, \label{pP1}$$ where $b$ is a constant, $\rho $, $p$ and $P$ satisfy the following condition $$2P=3p-\rho \;, \label{pP2}$$ and $\widetilde{\rho }$, $\widetilde{p}$ and $\widetilde{P}$ are functions of $x^\mu $ only. Thus, with $$b=32M^3\pi G_4\quad , \label{b}$$ we have successfully brought equations (\[4eq1\]) to the form of the standard 4D Einstein equations with a perfect fluid source: $$\begin{aligned} \widetilde{R}_{\alpha \beta }-\frac 12\widetilde{g}_{\alpha \beta }% \widetilde{R} &=&8\pi G_4\widetilde{T}_{\alpha \beta }\;, \nonumber \\ \widetilde{T}_{\alpha \beta } &=&(\widetilde{\rho }+\widetilde{p})\widetilde{% u}_\alpha \widetilde{u}_\beta +\widetilde{p}\widetilde{g}_{\alpha \beta }\;. \label{4eq1'}\end{aligned}$$ Note that equation (\[pP2\]) plays the same role as the hydrostatic equilibrium equation (\[P’\]), which is then satisfied automatically. We also note that some results, such as relations (\[pP1\]) and (\[pP2\]), recover those of previous works[@Kanti4] and are compatible with the global constraints known as the brane-world sum rules[@GKL]. Here, we call $\widetilde{T}_{\alpha \beta }$ in (\[4eq1’\]) the effective 4D energy-momentum tensor. Many discussions concerning this kind of effective or induced energy-momentum tensor can be found in the induced matter theory[@Wesson], in which the 4D matter could be a consequence of the dependence of the 5D metric on the extra dimension.
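That (\[P’\]) holds automatically for the scaling solution (\[pP1\]) together with (\[pP2\]) can be verified directly. The following sketch (arbitrary parameter values, units with $b=1$) evaluates the residual of the hydrostatic equation on a grid in $y$:

```python
import math

# W(y) = exp(-k y) between the branes; the tilde quantities are constants
# here (illustrative values only).
k, b = 0.8, 1.0
rho_t, p_t = 1.0, 0.3
P_t = 0.5 * (3 * p_t - rho_t)               # condition 2P = 3p - rho

W = lambda y: math.exp(-k * y)
Wp = lambda y: -k * math.exp(-k * y)        # dW/dy

rho = lambda y: b * rho_t / W(y) ** 2
p = lambda y: b * p_t / W(y) ** 2
P = lambda y: b * P_t / W(y) ** 2
Pp = lambda y: 2 * k * b * P_t / W(y) ** 2  # dP/dy

# Residual of the hydrostatic equation P' = (W'/W)(3p - rho - 4P):
residual = max(abs(Pp(y) - (Wp(y) / W(y)) * (3 * p(y) - rho(y) - 4 * P(y)))
               for y in (0.1 * i for i in range(11)))
```

The residual vanishes to machine precision, as the algebra requires.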
This is also true in brane models: if the 5D metric is independent of the extra dimension, then the brane is void of matter. Detailed discussions of the relationship between the induced-matter and the brane-world theories can be found in Ref.[@Leon]. We also wish to emphasize that our derivation of the solutions (\[pP1\]) and (\[pP2\]) is very general. The only restriction is that the 5D bulk energy-momentum tensor $T^{AB}$ should take the fluid form (\[T\^AB\]). Since (\[4eq1’\]) are just the 4D Einstein equations, we can conclude that [*any*]{} known 4D exact solution which has a perfect fluid as source can be embedded in 4D hypersurfaces of the bulk to generate a 5D exact solution of the 5D equations (\[5Deq\]), with the 5D metric as in (\[metr\]), the “warp” factor $W(y)$ and the cosmological constants $\Lambda $, $\lambda _{vis}$, $\lambda _{hid}$ as in (\[W(y)\]) and (\[lambda\]), and $% \widetilde{\rho }$, $\widetilde{p}$, $\widetilde{P}$ satisfying relations (\[pP1\]) and (\[pP2\]). EMBEDDING OF 4D EINSTEIN SOLUTIONS ================================== It is well known that most 4D exact solutions of general relativity have a perfect fluid as source, such as the standard FRW cosmological solutions and the various exterior and interior solutions for rotating and non-rotating neutral stars. Using the relations obtained in section 4, all these solutions can easily be embedded in the RS model to form 5D exact solutions without changing the RS boundaries.
For example, the 5D FRW cosmological solutions are $$ds^2=e^{-2k\left| y\right| }\left[ -dt^2+a^2(t)\left( \frac{dr^2}{% 1-k^{\prime }r^2}+r^2d\Omega ^2\right) \right] +dy^2\;, \label{CosS1}$$ where $k^{\prime }$ is the 3D curvature index ($k^{\prime }=\pm 1,0$), $% d\Omega ^2\equiv d\theta ^2+\sin ^2\theta d\varphi ^2$, and $$\begin{aligned} \left( \frac{da}{dt}\right) ^2+k^{\prime } &=&\frac{8\pi G_4}3\widetilde{% \rho }a^2\quad , \nonumber \\ a^3\frac{d\widetilde{p}}{dt} &=&\frac d{dt}\left[ \left( \widetilde{\rho }+% \widetilde{p}\right) a^3\right] \quad , \nonumber \\ \rho &=&\left( G_4/G_5\right) e^{2k\left| y\right| }\widetilde{\rho }% (t)\;,\qquad \nonumber \\ p &=&\left( G_4/G_5\right) e^{2k\left| y\right| }\widetilde{p}(t)\;, \nonumber \\ 2P &=&\left( G_4/G_5\right) e^{2k\left| y\right| }\left[ 3\widetilde{p}(t)-% \widetilde{\rho }(t)\right] \;, \label{CosS2}\end{aligned}$$ where $8\pi G_5=(4M^3)^{-1}$. From these expressions we see that $\rho $, $p$ and $P$ increase exponentially as $y$ goes from the hidden brane at $y=0$ to the visible brane at $y=y_c$. As a second example, we write down the Schwarzschild-AdS$_5$ solution: $$ds^2=e^{-2k\left| y\right| }\left[ -U(r)dt^2+U(r)^{-1}dr^2+r^2d\Omega ^2% \right] +dy^2\;, \label{S-AdS1}$$ where $$U(r)=1-\frac{2G_4M}r+\frac 13\widetilde{\lambda }r^2\;, \label{S-AdS2}$$ and $$\begin{aligned} -\widetilde{\rho } &=&\widetilde{p}=\frac 1{8\pi G_4}\widetilde{\lambda }\;, \nonumber \\ -\rho &=&p=\frac 1{8\pi G_5}e^{2k\left| y\right| }\widetilde{\lambda }\;, \nonumber \\ P &=&\frac 1{4\pi G_5}e^{2k\left| y\right| }\widetilde{\lambda }\;. \label{S-AdS3}\end{aligned}$$ So in the vicinity of the visible brane, the magnitudes of the 5D densities $% \rho $, $p$ and $P$ are much larger than those in the vicinity of the hidden brane. By using known 4D exact solutions, more 5D exact solutions can be obtained easily in this way.
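A quick numerical check (illustrative units, not from the paper) confirms that the Schwarzschild-AdS$_5$ densities (\[S-AdS3\]) obey the condition $2P=3p-\rho$ at every $y$ and grow by the factor $e^{2ky_c}$ between the branes:

```python
import math

# Illustrative values; units are chosen so that 8*pi*G5 = 1.
G5 = 1.0 / (8.0 * math.pi)
lam_t, k, y_c = 0.1, 1.0, 2.0

def densities(y):
    """Bulk densities of the embedded Schwarzschild-AdS_5 solution (S-AdS3)."""
    warp = math.exp(2.0 * k * abs(y))
    rho = -lam_t * warp / (8.0 * math.pi * G5)
    p = lam_t * warp / (8.0 * math.pi * G5)
    P = lam_t * warp / (4.0 * math.pi * G5)
    return rho, p, P

rho_h, p_h, P_h = densities(0.0)    # hidden brane
rho_v, p_v, P_v = densities(y_c)    # visible brane

# The condition 2P = 3p - rho holds at every y ...
ok = abs(2.0 * P_h - (3.0 * p_h - rho_h)) < 1e-12
# ... and the magnitudes grow by exp(2 k y_c) toward the visible brane.
growth = P_v / P_h
```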
We can show that the hydrostatic equilibrium equation (\[P’\]) is satisfied by all these solutions. So all these solutions are equilibrium states of the RS model. COMPACTIFICATION SIZE OF THE FIFTH DIMENSION ============================================ By introducing a scalar field in the bulk, Goldberger and Wise[@GW] proposed a dynamical mechanism to stabilize the size of the extra dimension. The mechanism was to integrate the scalar field action over the fifth dimension to yield an effective 4D potential. This potential was found to have a minimum which yields a compactification radius without fine tuning of parameters. In our case, there is no scalar field available in the bulk. If one still wishes to stabilize the extra dimension, one may need to look for another quantity to minimize. For simplicity, let us consider the 5D Schwarzschild-AdS$_5$ solution (\[S-AdS1\])-(\[S-AdS3\]). For this solution the 5D scalar curvature $R$ can be calculated by using (\[5Deq\]), (\[T\^AB\]), (\[S-AdS3\]) and the relation $8\pi G_5=(4M^3)^{-1}$ as $$R=-\frac 1{6M^3}\left\{ 24M^3\widetilde{\lambda }e^{2k\left| y\right| }-5\Lambda -4\left[ \lambda _{vis}\delta (y-y_c)+\lambda _{hid}\delta (y)% \right] \right\} . \label{R}$$ Now we consider the geometrical part of the 5D action $$S_{geo}=\int d^4x\int_{-y_c}^{y_c}2M^3\sqrt{-g}Rdy\quad . \label{Sgeo}$$ Substituting (\[R\]) into this equation and integrating over the fifth dimension, we obtain $$S_{geo}=-\frac 13\int \sqrt{-\widetilde{g}}d^4x\left[ \frac{24}kM^3% \widetilde{\lambda }\left( 1-e^{-2ky_c}\right) -\frac 5{2k}\Lambda \left( 1-e^{-4ky_c}\right) -4\lambda _{vis}e^{-4ky_c}-4\lambda _{hid}\right] . \label{Sgeo2}$$ Denoting the expression inside the square brackets of this equation as $K$, and using (\[lambda\]) to eliminate $\Lambda $, $\lambda _{vis}$, and $\lambda _{hid}$ in $K$, we find $$K\equiv -6M^3\int_{-y_c}^{y_c}W^4Rdy=\frac{24}kM^3\widetilde{\lambda }\left( 1-e^{-2ky_c}\right) -36M^3k\left( 1-e^{-4ky_c}\right) \;.
\label{K}$$ This $K$ can be interpreted as an effective 4D curvature. Interestingly, we find that this $K$ has a minimum at $$e^{-2ky_c}=\frac{\widetilde{\lambda }}{3k^2}\quad , \label{Yc}$$ at which $$\frac{\partial ^2K}{\partial y_c^2}=\frac{32}kM^3\widetilde{\lambda }^2>0\;. \label{K''b}$$ Therefore we see that the relation (\[Yc\]) may provide us with a possible compactification size $y_c$ for the fifth dimension. Note that the relation (\[Yc\]) requires $\widetilde{\lambda }$ to be positive. If $\widetilde{\lambda }$ is negative, the 5D Schwarzschild-AdS$_5$ solution (\[S-AdS1\])-(\[S-AdS3\]) becomes a 5D Schwarzschild-dS$_5$ solution, whose effective 4D curvature $K$ does not have a minimum. From (\[S-AdS2\]) it is reasonable to expect $\widetilde{\lambda }% y_{c}^{2}\ll 1$. For instance, if $\widetilde{\lambda }y_{c}^{2}=10^{-31}$, then $ky_{c}\simeq 40$. We find that this value of $y_{c}$ meets the requirement from the hierarchy problem[@RS1]. CONCLUSION ========== In this paper we have studied the gravitational force field in the bulk of the RS two-brane model by using the 5D geodesic equations. This force may cause the hidden brane to be unstable. To balance this force we have introduced a 5D fluid in the bulk with its 4D part being a perfect fluid. A hydrostatic equilibrium equation for the bulk fluid is thus derived. Meanwhile, a class of exact bulk solutions is obtained. On 4D hypersurfaces these solutions satisfy exactly the 4D Einstein equations with a perfect fluid source. Therefore, one can obtain exact 5D bulk solutions by simply embedding a suitable 4D solution in the bulk. We have also discussed the stabilization of the size of the extra dimension. Further investigation is needed. [**ACKNOWLEDGMENTS**]{} This work was supported by the National Natural Science Foundation of China under grant 19975007. [99]{} Arkani-Hamed, N., Dimopoulos, S., and Dvali, G. (1998). Phys.
Lett. B [**429**]{}, 263, hep-ph/9803315; Arkani-Hamed, N., Dimopoulos, S., and Dvali, G. (1999). Phys. Rev. D [**59**]{}, 086004, hep-ph/9807344; Antoniadis, I., Arkani-Hamed, N., Dimopoulos, S., and Dvali, G. (1998). Phys. Lett. B [**436**]{}, 257, hep-ph/9804398. Randall, L., and Sundrum, R. (1999). Phys. Rev. Lett. [**83**]{}, 3370, hep-ph/9905221. Horava, P., and Witten, E. (1996). Nucl. Phys. B [**460**]{}, 506, hep-th/9510209; Horava, P., and Witten, E. (1996). Nucl. Phys. B [**475**]{}, 94, hep-th/9603142; Witten, E. (1996). Nucl. Phys. B [**471**]{}, 135, hep-th/9602070. Goldberger, W. D., and Wise, M. B. (1999). Phys. Rev. D [**60**]{}, 107505, hep-ph/9907218; Goldberger, W. D., and Wise, M. B. (1999). Phys. Rev. Lett. [**83**]{}, 4922, hep-ph/9907447; Goldberger, W. D., and Wise, M. B. (2000). Phys. Lett. B [**475**]{}, 275, hep-ph/9911457. Csaki, C., Graesser, M., Randall, L., and Terning, J. (2000). Phys. Rev. D [**62**]{}, 045015, hep-ph/9911406; DeWolfe, O., Freedman, D. Z., Gubser, S. S., and Karch, A. (2000). Phys. Rev. D [**62**]{}, 046008, hep-th/9909134; Goldberger, W., and Rothstein, I. (2000). Phys. Lett. B [**491**]{}, 339, hep-th/0007065; Luty, M. A., and Sundrum, R. (2000). Phys. Rev. D [**62**]{}, 035008, hep-th/9910202. Youm, D. (2000). Phys. Rev. D [**62**]{}, 084002, hep-th/0004144. Mashhoon, B., Wesson, P., and Liu, H. (1998). Gen. Rel. Grav. [**30**]{}, 555; Liu, H., and Mashhoon, B. (2000). Phys. Lett. A [**272**]{}, 26, gr-qc/0005079; Ponce de Leon, J. (2001). Phys. Lett. B [**523**]{}, 311, gr-qc/0110063. Randall, L., and Sundrum, R. (1999). Phys. Rev. Lett. [**83**]{}, 4690, hep-th/9906064. Chamblin, A., Hawking, S. W., and Reall, H. S. (2000). Phys. Rev. D [**61**]{}, 065007, hep-th/9909205. Brecher, D., and Perry, M. J. (2000). Nucl. Phys. B [**566**]{}, 151, hep-th/9908018. Binetruy, P., Deffayet, C., and Langlois, D. (2000). Nucl. Phys. B [**565**]{}, 269, hep-th/9905012. Kanti, P., Kogan, I., Olive, K.
A., and Pospelov, M. (1999). Phys. Lett. B [**468**]{}, 31, hep-ph/9909481; Kanti, P., Kogan, I. I., Olive, K. A., and Pospelov, M. (2000). Phys. Rev. D [**61**]{}, 106004, hep-ph/9912266; Kanti, P., Olive, K. A., and Pospelov, M. (2000). Phys. Lett. B [**481**]{}, 386, hep-ph/0002229; Kanti, P., Olive, K. A., and Pospelov, M. (2000). Phys. Rev. D [**62**]{}, 126004, hep-ph/0005146; Kennedy, C., and Prodanov, E. M. (2000). Phys. Lett. B [**488**]{}, 11, hep-th/0003299; Kennedy, C., and Prodanov, E. M. (2000). Phys. Lett. B [**498**]{}, 272, hep-th/0010202; Enqvist, K., Keski-Vakkuri, E., and Rasanen, S. hep-th/0007254. Gibbons, G., Kallosh, R., and Linde, A. (2001). JHEP [**0101**]{}, 022, hep-th/0011225. Wesson, P. S., Space-Time-Matter (World Scientific, Singapore, 1999); Overduin, J. M., and Wesson, P. S. (1997). Phys. Rep. [**283**]{}, 303, gr-qc/9805018; Wesson, P. S., and Ponce de Leon, J. (1992). J. Math. Phys. [**33**]{}, 3883. Ponce de Leon, J. (2001). Mod. Phys. Lett. A [**16**]{}, 2291, gr-qc/0111011.
--- abstract: | Users buy compatible IoT devices from different brands expecting their cooperation to be smooth; but while the devices may superficially appear to function in a friendly manner, cohabitation can subversively cause early battery depletion in competitor devices. The Wi-Fi Direct standard was introduced with the intention of simplifying peer-to-peer connections in home applications while helping devices to save power through centralization of effort into a single group owner device negotiated on start-up. Attacks on the group formation stage can be based on manipulating a victim device into frequently being assigned the group owner function, thereby depleting its batteries at a faster rate than its peer devices. This manipulation is made easy by the group formation mechanism adopted by the standard. We show that group formation procedures could be better secured with features ensuring fairness, by relying on commitments and by learning from the behavior observed for peer devices in the past. Simulations are used to quantify the resistance achieved against several attack strategies. author: - | Marius C. Silaghi, Arianit Maraj$^*$, Timothy Atkinson$^*$\ Florida Institute of Technology\ msilaghi@fit.edu,amaraj@my.fit.edu,atkinsot1999@my.fit.edu bibliography: - 'wifidirect.bib' title: | The Device War\ The War Between IoT Brands In A Household --- Introduction ============ Background ========== Related Work ============ Learning Peer Behavior ====================== Exploiting Commitments ====================== Experiments =========== Conclusions ===========
--- abstract: 'We have carried out a three-site photometric campaign for the $\beta$ Cephei star $\theta$ Ophiuchi from April to August 2003. 245 hours of differential photoelectric $uvy$ photometry were obtained during 77 clear nights. The frequency analysis of our measurements resulted in the detection of seven pulsation modes within a narrow frequency interval between 7.116 and 7.973 d$^{-1}$. No combination or harmonic frequencies were found. We performed a mode identification of the individual pulsations from our colour photometry that shows the presence of one radial mode, one rotationally split $\ell=1$ triplet and possibly three components of a rotationally split $\ell=2$ quintuplet. We discuss the implications of our findings and point out the similarity of the pulsation spectrum of $\theta$ Ophiuchi to that of another $\beta$ Cephei star, V836 Cen.' author: - 'G. Handler$^{1}$, R. R. Shobbrook$^{2}$, T. Mokgwetsi$^{3}$' - | \ $^1$ Institut für Astronomie, Universität Wien, Türkenschanzstrasse 17, A-1180 Wien, Austria\ $^{2}$ Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT, Australia\ $^{3}$ Theoretical Astrophysics Programme, University of the North-West, Private Bag X2046, Mmabatho 2735, South Africa date: 'Accepted 2004 July 17. Received 2004 August 13; in original form 2004 September 10' title: 'An asteroseismic study of the $\beta$ Cephei star $\theta$ Ophiuchi: photometric results' --- stars: variables: other – stars: early-type – stars: oscillations – stars: individual: $\theta$ Oph – techniques: photometric Introduction ============ Two recent groundbreaking studies have opened up the class of the $\beta$ Cephei pulsators for asteroseismic investigations. For the $\beta$ Cephei star V836 Cen, Aerts et al. (2003, 2004a) acquired and analysed 21 years of time-resolved Geneva photometry. They identified the six detected pulsation modes with their pulsational quantum numbers (the radial fundamental mode, an $\ell=1$ triplet and two components of a rotationally split $\ell=2$ mode).
Consequent seismic modelling (Dupret et al. 2004) allowed the derivation of constraints on the star’s position in the HR diagram and its convective core size, plus demonstrated that its interior rotation is not rigid. A second $\beta$ Cephei star, $\nu$ Eri, was studied with large photometric and spectroscopic multisite campaigns (Handler et al. 2004, Aerts et al. 2004b), yielding a total of almost 1200 hours of measurement. The nine modes detected for this star were identified with the radial fundamental mode, two $\ell=1$ triplets, one $\ell=1$ singlet and one $\ell=2$ mode (De Ridder et al. 2004). Seismic modelling (Pamyatnykh, Handler & Dziembowski 2004, Ausseloos et al. 2004) demonstrated that the pulsation spectrum of $\nu$ Eri cannot be reproduced with standard models, that some convective core overshooting may be required and that, again, non-rigid interior rotation must be present (with the edge of the convective core rotating about 3 times faster than the outer layers, consistent with the findings for V836 Cen). The seismic results indicate that it is possible that the interior chemical composition of the star is not homogeneous. After some 15 years of frustration, asteroseismology of opacity-driven main sequence pulsators has thus finally become reality. The reasons why these stars were the first such objects to be studied may be summarised as follows: their pulsational mode spectra are sufficiently simple that few possibilities for erroneous or ambiguous mode identifications occur, yet the observed spectra are fairly complete; radial modes have been detected for the two abovementioned stars (substantially reducing the number of possible seismic models); finally, the applied mode identification methods do work (e.g. see Handler et al. 2003). The general astrophysical implications of seismic studies of $\beta$ Cephei stars are also highly interesting.
Since these objects are main sequence stars between 9 and 17 $M_{\sun}$ (Stankov & Handler 2005), they are progenitors of Type II supernovae, which in turn are largely responsible for the enrichment of the interstellar medium and thus for the chemical evolution of galaxies. Consequently, if we can trace the evolution of stars by sounding their interiors in different evolutionary states, we are not only able to calibrate stellar structure and evolution calculations, but could also put constraints on the modelling of extragalactic stellar systems. Therefore it is highly desirable to determine the interior structure of several $\beta$ Cephei stars. One of the objects that seems well suited for an asteroseismic study is $\theta$ Ophiuchi. The variability of this bright ($V=3.27$ mag) object has been known for a long time (Henroteau 1922), and the corresponding period determinations in the literature are partly controversial. Several authors (van Hoof & Blaauw 1958, van Hoof 1962, Briers 1971, Heynderickx 1992) noted variable shapes of their radial velocity and light curves, indicating multiperiodicity. However, no consensus on the values of possible secondary and tertiary periods was reached. Together with the presence of archival high-resolution spectroscopy (to be analysed in a companion paper by Briquet et al. 2005), the findings mentioned above made $\theta$ Oph an attractive target for a multisite study. Consequently, we have carried out a photometric campaign on this star in mid-2003. Observations and reductions =========================== We acquired single-channel differential photoelectric photometry through the Strömgren $uvy$ filters with three telescopes on three continents during the months of April to August 2003. The measurements of $\theta$ Oph were obtained with respect to two comparison stars, 44 Oph (HD 157792, A3m, $V=4.17$) and 51 Oph (HD 158643, A0V, $V=4.81$). Owing to the brightness of all three objects, some neutral density filters were applied to avoid damage to the photomultipliers.
A short summary of the observations is given in Table 1. The total time base of our measurements is 124 days.

  ----------------------------------------------- ------------ ----------- ------------ ------------- -------- -------- --------
  Observatory                                     Longitude    Latitude    Telescope    Observer(s)   Nights   h        points
  South African Astronomical Observatory (SAAO)   +20° 49′     −32° 22′    0.5m         TM            9        34.08    135
  Fairborn Observatory                            −110° 42′    +31° 23′    0.75m APT    $--$          44       106.68   662
  Siding Spring Observatory (SSO)                 +149° 04′    −31° 16′    0.6m         RRS           24       104.35   506
  Total                                                                                               77       245.11   1303
  ----------------------------------------------- ------------ ----------- ------------ ------------- -------- -------- --------

Data reduction was started by correcting for coincidence losses, sky background and extinction. Nightly extinction coefficients were determined with the Bouguer method or with the differential technique from the comparison stars (neither showed any variability during our measurements). As the comparison stars are considerably cooler than the variable, second-order colour extinction coefficients were also determined. We found colour extinction corrections to be necessary for the $u$ data from SSO and the APT and applied them correspondingly. We then determined the mean $u, v, y$ zeropoints between the comparison star magnitudes and used them to combine the measurements of 44 and 51 Oph into a curve that was assumed to reflect the effects of transparency and detector sensitivity changes only. Consequently, these combined time series were binned into intervals that would allow good compensation for the above-mentioned nonintrinsic variations in the target star time series and were subtracted from the measurements of $\theta$ Oph. The binning minimises the introduction of noise in the differential light curve of the target. The timings for this differential light curve were heliocentrically corrected as the next step.
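The comparison-star correction just described can be sketched in a few lines. The data below are synthetic and the bin width is illustrative; this is not the actual reduction code:

```python
import numpy as np

def differential_lightcurve(t, m_var, m_c1, m_c2, bin_width=0.02):
    """Align the two comparison stars to a common zeropoint, bin their
    combined curve (which tracks transparency/sensitivity drifts only)
    and subtract it from the target star.  Times are in days."""
    comp = np.concatenate([m_c1 - m_c1.mean(), m_c2 - m_c2.mean()])
    t_comp = np.concatenate([t, t])
    edges = np.arange(t.min(), t.max() + bin_width, bin_width)
    bins_c = np.digitize(t_comp, edges)
    drift = np.array([comp[bins_c == b].mean()
                      for b in np.digitize(t, edges)])
    return m_var - drift

# One synthetic night: a 7.116 d^-1 pulsation plus a slow shared drift.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 0.3, 300))
drift_true = 0.02 * (t - t.mean())                   # mag, e.g. extinction
m_c1 = 4.17 + drift_true + rng.normal(0.0, 0.004, t.size)
m_c2 = 4.81 + drift_true + rng.normal(0.0, 0.004, t.size)
signal = 0.0094 * np.sin(2.0 * np.pi * 7.116 * t)
m_var = 3.27 + signal + drift_true + rng.normal(0.0, 0.004, t.size)

dm = differential_lightcurve(t, m_var, m_c1, m_c2)
```

Because the comparison stars share the nonintrinsic drift but not the pulsation, subtracting their binned ensemble curve removes the drift while leaving the target signal intact.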
Finally, the photometric zeropoints of the different sites, which may not be quite the same because of slightly different wavelength responses of the individual instrumental systems combined with the different colours of the variable and the comparison stars, were set to zero. The resulting final combined time series was subjected to frequency analysis; we show some of our light curves of $\theta$ Oph in Fig.1. We note that the amplitude of the light variation is modulated, but its shape is not; it is always sinusoidal. The accuracy of the differential light curves of the comparison stars was 4.8 mmag in the $u$ filter, 4.3 mmag in $v$ and 3.9 mmag in $y$ per single data point. These rather high values are mostly caused by the high air mass of $\theta$ Oph and unstable weather conditions during the measurements at Fairborn Observatory. ![Some light curves of $\theta$ Oph. Plus signs are data in the Strömgren $u$ filter, filled circles are our $v$ measurements and open circles represent Strömgren $y$ data. The full line is a fit composed of all the periodicities detected in the light curves (Table 2). The upper two panels are measurements from SSO, the middle two are from the APT and the lower two from SAAO. The amount of data shown here is about one third of the total.](thefig1.ps){width="88mm"} Frequency analysis ================== Our frequency analyses were performed with the program [PERIOD 98]{} (Sperl 1998). This package applies single-frequency power spectrum analysis and simultaneous multi-frequency sine-wave fitting, and also includes advanced options. We started by computing the Fourier spectral window of the final light curves in each of the filters. It was calculated as the Fourier transform of a single noise-free sinusoid with a frequency of 7.116 d$^{-1}$ (the strongest pulsational signal of $\theta$ Oph) and an amplitude of 10 mmag, sampled in the same way as our measurements. The upper panel of Fig.2 contains the result for the $y$ data.
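A spectral window of this kind takes only a few lines to compute; the sampling below is simulated (30 nights of $\sim$0.3 d each) and merely illustrative:

```python
import numpy as np

def amplitude_spectrum(t, x, freqs):
    """Discrete Fourier amplitude spectrum for arbitrarily sampled data:
    A(f) = (2/N) |sum_j x_j exp(2 pi i f t_j)|."""
    phase = 2.0 * np.pi * np.outer(freqs, t)
    return 2.0 * np.abs(np.exp(1j * phase) @ x) / t.size

# Spectral window: a noise-free 10 mmag sinusoid at 7.116 d^-1, sampled
# at the observing times.
rng = np.random.default_rng(0)
t = np.concatenate([night + rng.uniform(0.0, 0.3, 15)
                    for night in range(30)])
x = 10.0 * np.sin(2.0 * np.pi * 7.116 * t)
freqs = np.arange(5.0, 10.0, 0.001)
window = amplitude_spectrum(t, x, freqs)
peak_f = freqs[np.argmax(window)]       # recovers the input frequency
```

For single-site sampling such as this, the window shows strong 1 d$^{-1}$ alias sidelobes around the input peak; multisite coverage suppresses them.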
Any alias structures that would potentially mislead us into incorrect frequency determinations are reasonably low in amplitude due to our multisite coverage. ![Amplitude spectra of $\theta$ Oph. The uppermost panel shows the spectral window of the data, followed by the periodogram of the data. Successive prewhitening steps are shown in the following panels; note their different ordinate scales. The second lowest panel contains a significance curve; any peak to be regarded as real by us must exceed it. The lowest panel shows the residual amplitude spectrum in a wider frequency range, containing no evidence for further periodicities in our data.](thefig2.ps){width="88mm"} We proceeded by computing the amplitude spectra of the data themselves (second panel of Fig.2). The signal designated $f_1$ dominates. We prewhitened it by subtracting a synthetic sinusoidal light curve with a frequency, amplitude and phase that yielded the smallest residual variance, and computed the amplitude spectrum of the residual light curve (third panel of Fig.2). This resulted in the detection of a second signal ($f_2$). We then prewhitened a two-frequency fit from the data using the same optimisation method as before, and continued this procedure (further panels of Fig.2) until no significant peaks were left in the residual amplitude spectrum. We consider an independent peak statistically significant if it exceeds an amplitude signal-to-noise ratio of 4 in the periodogram (see Breger et al. 1993). The noise level was calculated as the average amplitude in a 5 d$^{-1}$ interval centred on the frequency of interest; the final detection limit corresponding to $S/N=4$ is shown as a significance curve in the second lowest panel of Fig.2. We repeated the prewhitening procedure with the $u$ and $v$ data independently and obtained the same frequencies within the observational errors. We then determined final values for the detected frequencies by averaging the values from the individual filters.
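The prewhitening loop just described can be sketched as follows. The data are synthetic and two-periodic; [PERIOD 98]{} itself differs in detail (it optimises all frequencies simultaneously):

```python
import numpy as np

def amp_spec(t, x, freqs):
    """Amplitude spectrum A(f) = (2/N)|sum x exp(2 pi i f t)|."""
    phase = 2.0 * np.pi * np.outer(freqs, t)
    return 2.0 * np.abs(np.exp(1j * phase) @ x) / t.size

def fit_sine(t, x, f):
    """Least-squares amplitude, phase and zeropoint of a sinusoid at fixed f."""
    A = np.column_stack([np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return A @ coef, np.hypot(coef[0], coef[1])

def prewhiten(t, x, freqs, n_max):
    """Find the highest peak, subtract the best-fitting sinusoid, repeat."""
    found = []
    for _ in range(n_max):
        f = freqs[np.argmax(amp_spec(t, x - x.mean(), freqs))]
        fine = np.linspace(f - 0.002, f + 0.002, 161)   # refine the peak
        f = fine[np.argmax(amp_spec(t, x - x.mean(), fine))]
        model, amp = fit_sine(t, x, f)
        x = x - model
        found.append((f, amp))
    return found, x

# Two-frequency synthetic data set (amplitudes in mmag), 40 nights.
rng = np.random.default_rng(2)
t = np.concatenate([n + rng.uniform(0.0, 0.3, 12) for n in range(40)])
x = (9.4 * np.sin(2 * np.pi * 7.116 * t + 0.3)
     + 2.4 * np.sin(2 * np.pi * 7.3697 * t + 1.1)
     + rng.normal(0.0, 1.0, t.size))
modes, resid = prewhiten(t, x, np.arange(6.0, 9.0, 0.0005), n_max=2)
```

The loop recovers the modes in order of decreasing amplitude; in practice the iteration stops once the strongest remaining peak falls below the $S/N=4$ significance limit.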
The pulsational amplitudes were then recomputed with those frequencies. We regard this solution as representing our data set best; the result is listed in Table 2.

  ------- ------------------------ ----------- ----------- ----------- -------
  ID      Freq. (d$^{-1}$)         $u$ Ampl.   $v$ Ampl.   $y$ Ampl.   $S/N$
                                   (mmag)      (mmag)      (mmag)
  $f_1$   7.11600 $\pm$ 0.00008    12.7        9.2         9.4         41.4
  $f_5$   7.2881 $\pm$ 0.0005      2.1         1.5         1.4         6.4
  $f_2$   7.3697 $\pm$ 0.0003      3.6         2.9         2.4         10.8
  $f_3$   7.4677 $\pm$ 0.0003      4.7         2.4         2.3         10.2
  $f_4$   7.7659 $\pm$ 0.0003      3.4         2.3         2.1         9.7
  $f_6$   7.8742 $\pm$ 0.0005      2.3         1.8         1.3         5.8
  $f_7$   7.9734 $\pm$ 0.0005      2.4         1.6         1.2         5.6
  ------- ------------------------ ----------- ----------- ----------- -------

  : Multifrequency solution for our time-resolved photometry of $\theta$ Oph. The signals are ordered according to their frequencies, but labelled in the order of detection. Formal error estimates (following Montgomery & O’Donoghue 1999) are listed for the individual frequencies; formal errors on the amplitudes are $\pm$ 0.20 mmag in $u$, $\pm$ 0.18 mmag in $v$ and $\pm$ 0.16 mmag in $y$. The S/N ratio quoted is for the $y$ filter data.

$\theta$ Oph has also been observed by the HIPPARCOS satellite (ESA 1997). We reanalysed the corresponding photometry of the star and find a main frequency of 7.11605 $\pm$ 0.00002 d$^{-1}$ in these measurements, consistent with the result from our data within the errors. An analysis of our measurements combined with those by HIPPARCOS allows us to refine the value of the dominant frequency to 7.116015 $\pm$ 0.000002 d$^{-1}$; aliases are outside the quoted errors for each individual determination. No amplitude variations of the strongest mode seem to have occurred between the HIPPARCOS measurements and ours; the other signals are not detected in the space-based photometry. The residuals from the multifrequency solution in Table 2 were searched for additional candidate signals that may be intrinsic.
We have first investigated the residuals in the individual filters, then analysed the averaged residuals in the three filters (whereby the $u$ data were divided by 1.5 to scale them to amplitudes and rms scatter similar to those in the other two filters), and found no evidence for additional significant periodicities in any case. The residuals between light curve and fit are 5.2, 4.6 and 4.1 mmag per single $u, v, y$ point, respectively, and are thus somewhat higher than the accuracy of the differential comparison star data, suggesting that additional, presently undetected, frequencies could be present. We can now confront the results of our frequency analysis with those in the literature. The frequency of the dominant signal is consistent with all the earlier studies except Henroteau (1922), taking into account some slight (evolutionary?) frequency variability with respect to Briers’ (1971) study. Concerning the remaining frequencies, we note that the resonance period of 0.137255 d found by van Hoof (1962) is consistent with our signal $f_5$. On the other hand, none of the secondary or harmonic frequencies claimed by Heynderickx (1992) can be reconciled with our data. We suspect this is due to the small amount of data available to this author. Finally, we note that the $y$ amplitude of our analysis is consistent with that in Heynderickx’ (1992) Walraven $V$ data, i.e. no amplitude variations seem to have occurred between the years 1987 and 2003. Mode identification =================== Our three-colour photometry gives us the possibility of deriving the spherical degree $\ell$ of the individual pulsation modes from an analysis of the colour amplitudes. This involves a comparison of the observed amplitudes with those predicted by models and first requires knowledge of the star’s position in the HR diagram. However, $\theta$ Oph is not a single star. Besides its low-mass spectroscopic companion discovered by Briquet et al. (2005), it is also a Speckle binary (McAlister et al.
1993). Shatsky & Tokovinin (2002) determined a $K$ magnitude difference of 1.09 mag between the two components and argued that the companion to the $\beta$ Cephei star (hereinafter called $\theta$ Oph A) is physical. From the standard relations by Koornneef (1983) we can infer that the Speckle companion (hereinafter called $\theta$ Oph B) is 1.33 mag fainter in $V$ and must thus be a B5 main sequence star. To determine the effective temperature and luminosity of $\theta$ Oph A, we must take the contribution of $\theta$ Oph B to the total light into account. We use the standard Strömgren photometry by Crawford, Barnes & Golson (1970), adopt the mean $V$ magnitude from the Lausanne Photometric data base ([http://obswww.unige.ch/gcpd/gcpd.html]{}, $V=3.266$) for the system, and then reproduce it and the Strömgren $c_1$ index, which is a measure of the stars’ effective temperatures, with the help of the standard relations by Crawford (1978), and after dereddening. The results are shown in Table 3.

                     $V$     $b-y$    $m_1$   $c_1$   $\beta$
  ---------------- ------- -------- ------- ------- ---------
  Observed           3.266   -0.092   0.089   0.104   2.617
  Dereddened         3.223   -0.102   0.092   0.102   2.617
  $\theta$ Oph A     3.546   -0.109   0.08    0.07    2.640
  $\theta$ Oph B     4.876   -0.089   0.095   0.25    2.684
  Combined           3.266   -0.104   0.083   0.105   2.650

  : Johnson $V$ magnitude and Strömgren colour indices of the $\theta$ Oph system.

The observations are reasonably well matched, with the exception of the luminosity-sensitive $\beta$ parameter. However, this is not a severe problem as we will derive the star’s luminosity from its parallax. We also note that $\beta$ measurements by other authors are closer to our calculated results than the results from Crawford et al. (1970). The calibration by Napiwotzki, Schönberner & Wenske (1993) applied to the Strömgren indices listed in Table 3 then results in $T_{\rm eff} = 22900 \pm 900$ K and $M_v = -2.5 \pm 0.5$ for $\theta$ Oph A, and in $T_{\rm eff} = 18400 \pm 700$ K and $M_v = -1.3 \pm 0.5$ for $\theta$ Oph B.
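As a quick consistency check (not part of the original analysis), the combined magnitude of the two components and the flux fraction of the secondary follow directly from adding fluxes:

```python
import math

def combine_mags(*mags):
    """Total magnitude of unresolved components (their fluxes add)."""
    return -2.5 * math.log10(sum(10.0 ** (-0.4 * m) for m in mags))

V_A, V_B = 3.546, 4.876            # dereddened V of the two components
V_total = combine_mags(V_A, V_B)   # reproduces V = 3.266 for the system

# Flux fraction contributed by the fainter star (Delta V = 1.33 mag):
frac_B = 1.0 / (1.0 + 10.0 ** (0.4 * (V_B - V_A)))
```

With the component magnitudes of Table 3 this reproduces the system value $V=3.266$ and a secondary flux contribution of about 23 per cent, the value quoted below for Strömgren $y$.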
The analysis of IUE spectra by Niemczura & Daszy[ń]{}ska-Daszkiewicz (2004) yielded $T_{\rm eff} = 22200 \pm 850$ K (consistent with the combined contribution of both system components), $\log g = 3.77$ and $[M/H]=-0.15\pm0.12$. Finally, the HIPPARCOS parallax of the system (5.79 $\pm$ 0.69 mas) results in $M_v = -2.6 \pm 0.3$ for $\theta$ Oph A and $M_v = -1.3 \pm 0.3$ for $\theta$ Oph B, respectively, consistent with the result from Strömgren photometry. The tables by Flower (1996) then yield bolometric corrections of $-2.2$ and $-1.7$ mag, respectively, and thus $M_{\rm bol} =-4.8 \pm 0.5$ for $\theta$ Oph A as well as $M_{\rm bol} =-3.0 \pm 0.4$ for $\theta$ Oph B. We show the positions of the $\theta$ Oph components in the HR diagram derived in this way in Fig.3. It becomes clear that the observed pulsations must originate from $\theta$ Oph A only. ![The position of $\theta$ Oph A and $\theta$ Oph B in the theoretical HR diagram. Some stellar evolutionary tracks labelled with their masses (full lines) and the theoretical borders of the $\beta$ Cephei star instability strip (Pamyatnykh 1999, dashed lines) are included for comparison. All the theoretical results are for a metal abundance of $Z=0.015$. $\theta$ Oph A is located within the instability strip, whereas $\theta$ Oph B is not.](thefig3.ps){width="99mm"} To derive mode identifications for the pulsations of $\theta$ Oph A, we have computed theoretical colour amplitudes for modes of $0 \leq \ell \leq 4$ for models with masses between 8.5 and 10 $M_{\sun}$ (in steps of 0.5 $M_{\sun}$), effective temperatures in the range of $4.34 \leq \log T_{\rm eff} \leq 4.38$ and $Z=0.015$. We first computed stellar evolutionary models by means of the Warsaw-New Jersey evolution and pulsation code (described, for instance, by Pamyatnykh et al. 1998). Then we derived the pulsational amplitudes of such models in the parameter space constrained above, following Balona & Evers (1999). A range of theoretical frequencies of 6.5 $\leq f \leq$ 8.5 d$^{-1}$ was examined to allow for some nonradial mode splitting.
Phase shifts between the light curves in the individual filters were not considered, as no such shifts were found to be significant even at the 2$\sigma$ level. We show a comparison of the observed and theoretical amplitude ratios of three modes in Fig.4. ![Observed and theoretical uvy amplitude ratios (lines) for three modes of  and $0\leq\ell\leq4$. Amplitudes are normalised to unity at u. The filled circles with error bars are the observed amplitude ratios. The full lines are theoretical predictions for radial modes, the dashed lines for dipole modes, the dashed-dotted lines for quadrupole modes, the dotted lines for modes of $\ell=3$ and the dashed-dot-dot-dotted lines are for $\ell=4$. The small error bars denote the uncertainties in the theoretical amplitude ratios. The upper panel is for mode $f_1$, the middle one for $f_3$, and the lower one for $f_4$.](thefig4.ps){width="85mm"} We note that we took the contribution of  to the total light of the system into account when determining the observed amplitude ratios; we found that  contributes some 23% to the total flux in Strömgren $y$, 22% in $v$ and 19% in $u$. The reliability of the mode identifications in Fig. 4 is not easy to judge. Whereas the modes $f_3$ and $f_4$ can be identified with $\ell=0$ and $\ell=1$, respectively, the situation is less clear for mode $f_1$ (upper panel of Fig.4), where the $u/v$ amplitude ratio points towards an $\ell=1$ mode, but the $u/y$ amplitude ratio suggests $\ell=2$. Similar problems have been found for other modes that are not shown in this figure. We believe that the reason for these problems is a combination of several factors, for instance possible systematic errors in the determination of some of the pulsational amplitudes (which would be particularly severe in $u$), the uncertainties of the star’s position in the HR diagram, its poorly constrained surface metallicity (see Dupret et al.
2004 for a discussion of the latter), and the influence of the light of . We have therefore chosen an alternative approach that appears more objective. We calculated the ratio of the individual $u, v, y$ amplitudes with respect to their mean, in the hope of largely compensating for systematic errors in the amplitude determinations. Then we compared these ratios to the theoretical ones, treated in the same way, by means of a $\chi^2$ analysis, similar to Balona & Evers (1999) and Daszy[ń]{}ska-Daszkiewicz, Dziembowski & Pamyatnykh (2003), but disregarding the pulsational phases since they do, in our case, contain no information on the type of the modes, as argued before. The behaviour of $\chi^2$ depending on $\ell$ computed this way is shown in Fig.5 for all modes. ![Mode typing for  by means of the $\chi^2$ method.](thefig5.ps){width="85mm"} Because of the systematic errors that may affect our mode identification, we believe that we cannot interpret the results in Fig.5 in a strict statistical sense (i.e. by comparing the observational $\chi^2$ values to the critical values of a $\chi^2(3)$ distribution and then assigning confidence levels to the derived mode identifications), but that we can use them to [*eliminate*]{} some $\ell$ values in the identification process. The $\ell$ assignments we cannot rule out this way are listed in Table 4.

  ------- ---------- -----------
  ID      Freq. ()   $\ell$
  $f_1$   7.11600    2 or 1
  $f_5$   7.2881     2, 1 or 3
  $f_2$   7.3697     3, 2 or 1
  $f_3$   7.4677     0
  $f_4$   7.7659     1
  $f_6$   7.8742     3 or 1
  $f_7$   7.9734     0, 1 or 3
  ------- ---------- -----------

  : Possible $\ell$ identifications of the individual modes of  from our $\chi^2$ analysis.

Our mode identifications are not very satisfactory at this point, but fortunately we can use other clues to constrain them further.
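The mean-normalised amplitude comparison behind Fig.5 can be sketched schematically (plain Python; the observed and theoretical amplitude triples below are invented placeholders, not values from this analysis):

```python
import numpy as np

def chi2_mode_typing(obs_amp, obs_err, theo_amp):
    """Chi-squared between observed and theoretical u, v, y amplitudes,
    after normalising each amplitude triple by its own mean."""
    obs = np.asarray(obs_amp, dtype=float)
    err = np.asarray(obs_err, dtype=float)
    theo = np.asarray(theo_amp, dtype=float)
    obs_n = obs / obs.mean()
    err_n = err / obs.mean()      # errors scale with the same mean
    theo_n = theo / theo.mean()
    return float(np.sum(((obs_n - theo_n) / err_n) ** 2))

# invented placeholder numbers: one observed triple, two candidate models
obs, err = [14.2, 9.8, 9.1], [0.5, 0.4, 0.4]
chi2_ell0 = chi2_mode_typing(obs, err, [13.9, 9.9, 9.3])
chi2_ell2 = chi2_mode_typing(obs, err, [11.0, 10.5, 9.7])
print(chi2_ell0 < chi2_ell2)  # True: the first model fits these amplitudes better
```

The normalisation by the mean is what removes an overall (filter-independent) scale error from each triple before the comparison.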
Firstly, since $f_3$ is clearly a radial mode, we can rule out that $f_7$ is also radial because the frequency ratio of these two modes ($f_3/f_7=0.9366$) is considerably larger than any period ratio of low-order radial modes in a star can be. Secondly, Heynderickx, Waelkens & Smeyers (1994) have identified $f_1$ as an $\ell=2$ mode from their Walraven photometry. We have checked this identification with our method (again taking the contribution of  to the total light into account) and also find $\ell=2$ to be clearly the best match between observed and theoretical colour amplitude ratios. Thirdly, Daszy[ń]{}ska-Daszkiewicz et al. (2002) demonstrated that modes of odd $\ell$, starting with $\ell=3$, suffer heavy geometric cancellation in photometric observations of stars using filters. For instance, an $\ell=3$ mode of the same intrinsic amplitude as an $\ell=2$ mode will have only $\sim 1/10$ of its photometric amplitude in the $u, v, y$ filters. We therefore disregard all the possible $\ell=3$ identifications in Table 4 as well. Thus we have arrived at unique $\ell$ identifications for five of the seven modes we detected: $f_1$ is $\ell=2$, $f_3$ is radial, and $f_4, f_6, f_7$ are all $\ell=1$. $f_2$ and $f_5$ can be either $\ell=1$ or 2. As the last step, we examine the frequency spectrum of  (schematically plotted in Fig.6) for the presence of structures that may be useful for further constraining the mode identifications. ![image](thefig6.ps){width="184mm"} Indeed, some interesting features can be discerned. The $\ell=1$ modes $f_4, f_6$ and $f_7$ form a frequency triplet that is almost equally spaced. However, there is a slight asymmetry, and it is exactly in the sense expected for nonradial $m$-mode splitting due to the second-order effects of rotation. We therefore believe that $f_4, f_6$ and $f_7$ are indeed a rotationally split triplet of $\ell=1$ modes. The remaining four modes are also grouped together. 
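Two of the numerical statements above are easy to verify directly from the frequencies in Table 4 (a trivial sketch; frequencies are in the same, unstated, unit as the table, and the quoted radial period ratio is the usual low-order value of roughly 0.75–0.80):

```python
# frequencies from Table 4
f1, f2, f3 = 7.11600, 7.3697, 7.4677
f4, f5, f6, f7 = 7.7659, 7.2881, 7.8742, 7.9734

# f3 and f7 cannot both be radial: their implied period ratio is far
# above any low-order radial period ratio (~0.75-0.80 for the lowest overtones)
print(round(f3 / f7, 4))  # 0.9366

# the ell=1 modes f4, f6, f7 are almost, but not exactly, equally spaced
print(round(f6 - f4, 4), round(f7 - f6, 4))  # 0.1083 0.0992
```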
Intriguingly, the spacing between $f_2$ and $f_5$ is approximately half the frequency difference of $f_5$ and the $\ell=2$ mode $f_1$. Again, the asymmetry of this hypothesised $f_1, f_5, f_2$ multiplet is consistent with the second-order effects of rotation. $f_3$ does not fit this pattern, but this is not surprising as we have already identified it as a radial mode. The first-order rotational splitting obtained from the $f_1, f_5, f_2$ multiplet is very similar to the splitting of the $f_4, f_6$ and $f_7$ triplet. Thus we suspect that $f_1, f_5$ and $f_2$ are part of a rotationally split $\ell=2$ quintuplet with two components yet undetected. Assuming that the mean splitting of the $\ell=1$ triplet is a good approximation of the surface rotation frequency of  (i.e. neglecting effects of the Coriolis force and possible differential internal rotation), we derive a rotation period of 9.6 days. The absolute magnitude and effective temperature of the star, as determined at the beginning of this section, result in a radius of $5.2\pm1.3$ R$_{\odot}$, and hence in a surface rotation velocity of $27\pm7$ km/s. As the measured projected rotational velocity of  is about 30 km/s (e.g. Abt, Levato & Grosso 2002), there is a chance that we see the star close to equator-on.

An eclipsing binary?
====================

If we indeed see  equator-on, it may be suspected that the spectroscopic companion discovered by Briquet et al. (2005) causes eclipses. The binary orbit derived by these authors leads to an ephemeris for the times of primary and secondary minimum, respectively. M. Briquet (private communication) predicts $$t_I = HJD~ 2451811.002 + i \times 56.712$$ $$t_{II} = HJD~ 2451834.599 + i \times 56.712$$ where $i$ is the number of orbital revolutions since epoch zero.
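The orbital geometry behind an eclipse search follows from Kepler's third law; a rough sketch assuming a circular, edge-on orbit, a total system mass of about 10 M$_{\odot}$, the 56.712 d period, and the 5.2 R$_{\odot}$ primary radius, and neglecting the secondary's size (all rounding is ours):

```python
import math

P_days, M_total_msun, R1_rsun = 56.712, 10.0, 5.2

# Kepler's third law in solar units: a[AU]^3 = M[Msun] * P[yr]^2
P_yr = P_days / 365.25
a_au = (M_total_msun * P_yr ** 2) ** (1.0 / 3.0)
a_rsun = a_au * 215.03              # 1 AU in solar radii

# crude central-eclipse duration for a circular, edge-on orbit,
# ignoring the secondary's radius: t ~ (P / pi) * (R1 / a)
t_hours = (P_days / math.pi) * (R1_rsun / a_rsun) * 24.0
print(round(a_rsun, 1), round(t_hours, 1))  # roughly 134 solar radii, ~17 h
```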
Given the total mass of the spectroscopic binary system ($\sim 10$M$_{\odot}$), the orbital period and the radius of the primary determined above, we can also estimate the maximum duration of a possible eclipse, amounting to $17\pm4$ h. Although we found no obvious evidence for eclipses in our photometric measurements, we folded our data according to this ephemeris and searched them again for possible eclipses. As it turns out, we have no measurements whatsoever during or even near the predicted times of primary minimum, which is not surprising given that our orbital coverage is only $\sim$ 17 per cent. We do have data around the expected times of secondary eclipse, but none is found, which is also not a surprise since the secondary of the spectroscopic binary would be much less luminous than  if we see the orbital plane (close to) edge-on, and consequently the depth of a secondary eclipse would be too small to be detected. We again conclude that there are no eclipses in our photometry of the  system.

Discussion
==========

Our photometric multisite campaign on the star  resulted in the detection of seven independent pulsation modes. Our colour photometry, which was intended for mode typing, resulted in only two firm identifications, but we believe that this is mostly due to the small photometric amplitudes of all but one of the pulsation modes. However, even the strongest mode could not be unambiguously identified from our data; we had to invoke literature results. Mode identification from colour photometry of stars primarily rests on the amplitudes determined in the blue and ultraviolet ($\lambda < 4200$ Å). We have used a subset of the Strömgren filter system for our measurements as a compromise between wide availability and mode identification potential, with the drawback that the identifications critically depend on the results in the $u$ filter, which may be hard to verify.
Consequently, any systematic error in the $u$ measurements can heavily compromise the mode identifications. The Walraven or the Geneva photometric systems would provide a solution to this dilemma, but they are unsuitable for multisite work since few observatories are equipped for their use. However, the pulsation spectrum of  is reasonably simple and the range of excited frequencies is less than 1 , so that an extensive single-site study of this star in one of these photometric systems should be sufficient to check the mode identifications we finally arrived at by adding further constraints to the colour amplitude analysis. We believe that our suggestion that $f_1, f_5$ and $f_2$ are part of a rotationally split $\ell=2$ quintuplet can also be checked by theoretical model calculations. Since the asymmetry of the hypothesised multiplet due to the second-order effects of rotation has been measured to good relative accuracy, and since the rotation rate of the star is constrained by the $f_4, f_6, f_7$ triplet splitting, pulsational models should be able to reproduce the observed asymmetries if our identification of $f_1, f_5$ and $f_2$ is correct. To perform a detailed asteroseismic study, one more ambiguity must then still be overcome: if $f_1, f_5$ and $f_2$ are $\ell=2$ quintuplet members, their $m$ values must be determined. From photometry alone, we cannot distinguish whether they correspond to $m = (-2, 0, 1)$ or $m = (-1, 1, 2)$. However, the analysis of archival spectroscopy by Briquet et al. (2005) solves this ambiguity. It is interesting to note that we found evidence that  is seen close to equator-on (a result corroborated by Briquet et al. 2005). In such a configuration, the $m=0$ component of $\ell=1$ modes as well as the $|m|=1$ components of $\ell=2$ modes should suffer heavy geometric cancellation. However, such modes are apparently observed. In any case, a seismic investigation of  is possible.
We note that we would not expect its outcome to be as fruitful as that for $\nu$ Eri (Pamyatnykh et al. 2004, Ausseloos et al. 2004) since fewer radial overtones of modes are excited, but the general applicability of earlier results could be tested. In addition, the detection of a possible eclipse of the primary would help to constrain the system parameters even more tightly, which can in turn assist the seismic modelling. Finally, we would like to point out that the frequency structure of  is remarkably similar to that of V836 Cen (Aerts et al. 2004a): a radial mode close to an incomplete $\ell=2$ multiplet of somewhat lower frequency and a complete $\ell=1$ triplet of higher frequency, and all modes are contained in a very narrow frequency interval (which is of extremely similar size in the co-rotating frame). The only differences are the somewhat higher pulsation frequencies of  and its faster rotation. We thus speculate that once the pulsation spectra of more stars become known in detail, important clues on mode excitation can be gathered.

ACKNOWLEDGMENTS {#acknowledgments .unnumbered}
===============

This work has been supported by the Austrian Fonds zur Förderung der wissenschaftlichen Forschung under grant R12-N02. GH thanks Maryline Briquet for sharing her results prior to publication, Conny Aerts for sending a copy of R. Briers’ dissertation, Jagoda Daszy[ń]{}ska-Daszkiewicz for supplying some unpublished information and for comments on a draft version of this paper, Alosha Pamyatnykh for supplying theoretical instability strip borders for $Z=0.015$, as well as Luis Balona and Wojtek Dziembowski and his group for permission to use their computer codes. [99]{} Abt H. A., Levato H., Grosso M., 2002, ApJ 573, 359 Aerts C., Thoul A., Daszy[ń]{}ska J., Scuflaire R., Waelkens C., Dupret M.
A., Niemczura E., Noels A., 2003, Sci 300, 1926 Aerts C., et al., 2004a, A&A 415, 241 Aerts C., et al., 2004b, MNRAS 347, 463 Ausseloos M., Scuflaire R., Thoul A., Aerts C., 2004, MNRAS 355, 352 Balona L. A., Evers E. A., 1999, MNRAS 302, 349 Breger M., et al., 1993, A&A 271, 482 Briers R. C., 1971, PhD thesis, Katholieke Universiteit Leuven Briquet M., Lefever K., Uytterhoeven K., Aerts C., 2005, MNRAS, in press Crawford D. L., 1978, AJ 83, 48 Crawford D. L., Barnes J. V., Golson J. C., 1970, AJ 75, 624 Daszy[ń]{}ska-Daszkiewicz J., Dziembowski W. A., Pamyatnykh A. A., Goupil M.-J., 2002, A&A 392, 151 Daszy[ń]{}ska-Daszkiewicz J., Dziembowski W. A., Pamyatnykh A. A., 2003, A&A 407, 999 De Ridder J., et al., 2004, MNRAS 351, 324 Dupret M.-A., Thoul A., Scuflaire R., Daszy[ń]{}ska-Daszkiewicz J., Aerts C., Bourge P.-O., Waelkens C., Noels A., 2004, A&A 415, 251 ESA, 1997, The [*Hipparcos*]{} and Tycho catalogues, ESA SP-1200 Flower P. J., 1996, ApJ 469, 355 Handler G., et al., 2004, MNRAS 347, 454 Handler G., Shobbrook R. R., Vuthela F. F., Balona L. A., Rodler F., Tshenye T., 2003, MNRAS 341, 1005 Henroteau F., 1922, Pub. Dom. Obs. Ottawa 8, 1 Heynderickx D., 1992, A&AS 96, 207 Heynderickx D., Waelkens C., Smeyers P., 1994, A&AS 105, 447 Künzli M., North P., Kurucz R. L., Nicolet B., 1997, A&AS 122, 51 McAlister H., Mason B. D., Hartkopf W. I., Shara M. M., 1993, AJ 106, 1639 Montgomery M. H., O’Donoghue D., 1999, Delta Scuti Star Newsletter 13, 28 (University of Vienna) Napiwotzki R., Schönberner D., Wenske V., 1993, A&A 268, 653 Niemczura E., Daszy[ń]{}ska-Daszkiewicz J., 2005, A&A 433, 659 Pamyatnykh A. A., 1999, Acta Astr. 49, 119 Pamyatnykh A. A., Dziembowski W. A., Handler G., Pikall H., 1998, A&A 333, 141 Pamyatnykh A. A., Handler G., Dziembowski W. A., 2004, MNRAS 350, 1022 Shatsky N., Tokovinin A., 2002, A&A 382, 92 Sperl M., 1998, Master’s Thesis, University of Vienna Stankov A., Handler G., 2005, ApJS 158, 193 van Hoof A., 1962, Z. Astrophys. 
54, 255 van Hoof A., Blaauw A., 1958, ApJ 128, 273
---
abstract: 'The idea that quantum gravity manifestations would be associated with a violation of Lorentz invariance is very strongly bounded and faces serious theoretical challenges. Other related ideas seem to be drowning in interpretational quagmires. This leads us to consider alternative lines of thought for such a phenomenological search. We discuss the underlying viewpoints and briefly mention their possible connections with other current theoretical ideas.'
author:
- Daniel Sudarsky
title: Perspectives on Quantum Gravity Phenomenology
---

Introduction
============

The search for a reconciliation of the view of space-time contemplated within the context of general relativity with the principles of quantum theory has for most of its history been besieged by the seemingly inescapable conclusion that no information about the subject could, in practice, be expected to emerge from the empirical realm. Nevertheless, over the last few years we have witnessed a flare of interest in precisely this possibility. The change in outlook is due to the realization that in some simple scenarios, some hypothetical manifestations of the effects presumably associated with Quantum Gravity (Q.G.) could become observable. In those schemes the Q.G. effects would be associated with a distortion of the microscopic symmetry structure of space-time. These schemes can be divided into two subsets: in the first, one assumes that the Lorentz symmetry is in fact broken and that Q.G. endows space-time with a preferential reference frame, a kind of resurrected “Ether", while in the second class one introduces a modified Lorentz and/or Poincaré structure without invoking a preferential rest frame. Unfortunately these schemes suffer from some serious problems, which could in principle return us to the starting place, with its bleak outlook on possible phenomenological guidance in the quest for a quantum theory of gravitation. However, now that the “taboo" about Q.G.
Phenomenology has been broken, it seems appropriate to explore the notion in a larger context. In this spirit it should be mentioned that the study of some possibilities that involve manifestations of Q.G. in the extended Poincaré algebra, in conjunction with the Heisenberg algebra, is already the object of intensive research [@Mendez][@Chrysomalis]. In this paper we explore some options along two different lines of thought: the first one, motivated to some degree by the ideas that were put forward in the context of the schemes mentioned above that considered modifications of the fundamental symmetries of space-time, and that will be referred to as the “space-time micro-structure signature of Q.G."; and the second one inspired by the ideas of R. Penrose regarding changes in standard quantum theory presumably tied to gravitation, together with an application to cosmology which, we claim, is already evidencing the need for some new physics. This article is organized as follows: In section 2 we discuss some of the bounds that have been placed on the models for Lorentz invariance violation associated with a preferential frame (Lorentz Invariance Violation, or L.I.V. for short), followed by what we regard as a devastating argument against this possibility. In section 3 we briefly discuss a series of problematic aspects of the ideas that have been considered in the context of modified fundamental symmetry structures of space-time without preferential frames. In section 4 we describe what seems to be the natural descendent of the L.I.V. schemes, and in section 5 we give a short overview of R. Penrose’s proposals for a Quantum Gravity induced “collapse of the wave function" and follow it up with a recent analysis indicating that something of that sort is needed if one wants to justify the arguments (and their predictive success) leading from inflation to the birth of the cosmic structures.
Re-birth and Re-death of Ether
================================

One immediate result that emerges once one starts thinking about putting together general relativity and quantum mechanics is that there is a natural scale with units of length that presumably signals the onset of the new physics: the Planck length $l_{Pl}$. The existence of such a fundamental length scale, when taken together with the well known special relativistic contraction of lengths, has motivated some researchers to consider the possibility that at the fundamental level the space-time structure, which in the quantum context is most naturally thought to be granular, determines by itself a local preferential frame, where the granularity has indeed the isotropic scale $l_{Pl}$ (in accordance with special relativity, in other frames the scale would be direction dependent). Such a situation, so the argument goes, would become manifest, most conspicuously, through a modification in the dispersion relations for free particles, changing them into something like [@Eather]: $$E^2=(\vec P)^2 + m^2 + \xi E^3/M_{Pl}. \label{DispRel}$$ In such an expression the preferential frame appears indirectly: the equation, being clearly non Lorentz invariant, would be valid in, at most, one reference frame: the preferential rest frame. If we denote by $W^{\mu}$ the four-velocity of the preferential frame, this expression can be written in a Lorentz covariant language: $$P^{\mu} P_{\mu} = m^2 + \xi\, (W^{\mu} P_{\mu})^3/M_{Pl}. \label{CoDispRel}$$ As we will see, the existence of this four-vector $W^{\mu}$, quite often hidden from the discussions, will have dramatic consequences. In fact any modification of the dispersion relations is conceivable only in association with the assumption of the existence of new fundamental objects such as $W^\mu$.
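To get a feeling for the size of the extra term, the fractional modification $\xi E/M_{Pl}$ and the corresponding naive time-of-flight delay over cosmological distances can be evaluated in a few lines (a back-of-the-envelope sketch, not a fit to any data; the 10 GeV photon energy and 1 Gpc distance are round assumptions of ours):

```python
M_PL_EV = 1.22e28        # Planck energy in eV
GPC_IN_S = 1.03e17       # light travel time over 1 Gpc, in seconds

def fractional_shift(E_ev, xi=1.0):
    """Size of the xi E^3 / M_Pl term relative to the leading E^2 term."""
    return xi * E_ev / M_PL_EV

def naive_delay_s(E_ev, distance_gpc, xi=1.0):
    """Order-of-magnitude photon time-of-flight delay if the group
    velocity is shifted by ~ xi E / M_Pl."""
    return fractional_shift(E_ev, xi) * distance_gpc * GPC_IN_S

# a 10 GeV gamma-ray photon from a source at 1 Gpc:
print(f"{fractional_shift(1e10):.1e}")       # ~ 8e-19
print(f"{naive_delay_s(1e10, 1.0):.2f} s")   # ~ 0.08 s
```

The tiny fractional shift, yet macroscopic accumulated delay, is why astrophysical propagation over cosmological baselines is the standard probe of such terms.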
It is worthwhile mentioning that, indeed, in the two most popular approaches to Quantum Gravity, String Theory (see [@Strings]) and Loop Quantum Gravity (see [@Loops]), it has been argued that there is room for precisely the types of effects just discussed. Most of the work along these lines has centered on direct searches for evidence of such modifications in the dispersion relations of photons, electrons and other elementary particles, and interesting bounds have been obtained on the corresponding parameters $\xi$ (which in principle are taken as different for the various types of particles). For a comprehensive discussion of these results see the review article [@Mattingly]. The central conclusion is that the bounds on the value of the parameters $\xi$ extracted directly from these analyses – which were in principle expected to be of order one – are in the range $10^{-4}- 10^{-9}$ for the most common particle species: photons, electrons, neutrons and protons (i.e. quarks). The point we will focus on, and which represents the most devastating argument against this approach, can be traced to the following observation: In the preceding discussion the underlying point of view has been that only very high energy particles are useful probes of these ideas, as seems to be indicated by the large suppression, of order $E/M_{Pl}$, of the last term in equations \[DispRel\] and \[CoDispRel\] above, relative to the dominant terms. However we must recall that Quantum Field Theory teaches us that particles of all energies contribute to the virtual processes that underlie all real particle processes, and thus all processes (even low energy ones) are influenced by the high energy modifications. In fact we should keep in mind that the dispersion relations correspond to the location of the poles in the propagators of the quantum field corresponding to the particle type in question[^1].
These issues were first considered in [@Mayers], where the modifications were assumed to be associated with dimension 5 operators, naturally suppressed by the Planck mass scale. In that work the authors found that generically quantum loops lead to unsuppressed Lorentz violating corrections in the propagators, which would imply violations of Lorentz invariance of such magnitude as to be in blatant contradiction with observations. The dangerous expressions originated from quadratically divergent integrals – and to a lesser extent also the linearly divergent ones – that would appear in connection with the otherwise $M_{Pl}$ suppressed effective Lagrangian operators that the loop expansion generates. An important detail that serves to illustrate a rather general point is the following: The L.I.V. operators in the original Lagrangian all contained the triple product $W^{\mu} W^{\nu} W^{\rho}$. The dangerous integrals are of the form $\int (k_{\mu}k_{\nu} /k^2)d^4k \approx \eta_{\mu \nu}$. Within the effective field theory approach the dangerous integrals would need to be considered as having a cut-off at the large mass scale $\Lambda$ and thus would be proportional to $\Lambda^2$. When taken together with the structure of the original L.I.V. operators, one would end up with an effective term of order $\Lambda^2 / M_{Pl}$ and the structure product $W^{\mu} W^{\nu} W^{\rho}\eta_{\mu \nu} =- W^{\rho}$, which would thus clearly lead to a situation with an exceedingly large L.I.V.. The authors of this work then attempt to evade the disastrous conclusions by considering an “ad hoc" proposal to get rid of these dangerous terms: to replace in the structure of the dimension 5 operators every occurrence of the triple product of $W$’s by the tensor $C^{\mu\nu \rho} =W^{\mu} W^{\nu} W^{\rho} - (1/6)[\eta^{\mu \nu}W^{\rho} + \eta^{\mu \rho}W^{\nu} + \eta^{ \nu \rho}W^{\mu}]$, which has the property that it vanishes upon contraction of any two indices with the metric tensor $\eta$.
The odd thing about this is the following: The dangerous integrals are normally argued to give a result proportional to $\eta$ relying on Lorentz symmetry arguments. Thus one’s position has effectively become the reliance on a symmetry – that is assumed to be broken – to ensure that the loop corrections do not generate the operators that would break it too badly. This sounds very dangerous. Indeed it has been shown in [@AlexI] that, upon the consideration of higher order diagrams, the scheme proposed in [@Mayers] fails, and one ends up with the large L.I.V. one was trying to avoid. In fact such problems can be seen to be rather generic, in the following sense: Let us take seriously the motivational arguments mentioned above, which lead to the suspicion that Quantum Gravity might be associated with a breakdown of Lorentz invariance, and let us consider that at the fundamental scale space-time has a discrete structure characterized by the Planck length, and take this to indicate an underlying granular structure in space-time, with such a characteristic length scale, as seen from its proper reference frame[^2], which we will call the [*fundamental frame*]{}. In that case, the consistent treatment of the theory should include a provision indicating that there is a bound, with a fixed and specific value, on the physical wavelength of excitations as seen in the fundamental frame. Therefore, the quantum theory should not contain the corresponding excitations, either as real or as virtual particles. Similar considerations have in fact been made in proposing a saturation of the de Broglie wavelength at the Planck length as the momentum goes to infinity [@saturation]. Thus [*every theory*]{} which we now consider a candidate for a fundamental theory should instead be regarded as merely an effective theory, and it should include a momentum cutoff eliminating the unphysical excitations.
That is, we must for instance take the standard model of particle physics and impose on it a cut-off on the particles’ 3-momentum as seen in the fundamental frame[^3]. This feature would in principle have to be combined with other features of the effective theory, such as the change in the propagators of particles which would correspond to the modified dispersion relations. However we will see that, as shown first in [@Collins], the effect of the frame dependent cut-off by itself is disastrous. In order to do this we consider the full propagator of a scalar particle in Yukawa theory. We focus on this theory, in order to illustrate the main point, because of its simplicity and because it is part of the standard model of particle physics, so that we can rely on the wealth of knowledge about its phenomenology to confront the consequences of the aforementioned ideas. The theory is defined by the Lagrangian density: $$\begin{aligned} \label{eq:L.Yukawa} {\cal L} &=& {1 \over 2} (\partial\phi)^2 - \frac{m_0^2}{2} \phi^2 + \bar\psi (i\gamma^\mu\partial_\mu - M_0) \psi + g_0 \phi\bar\psi\psi.\end{aligned}$$ We next introduce the cutoff on spatial momenta in the fundamental frame. Of course this is not a realistic model of Planck-scale granularity. However it does represent a field theory in which the basic Lagrangian gives Lorentz-invariant dispersion relations for low-energy classical modes, and in which there is a Planck-scale cutoff that is bound to a particular frame. Therefore the fermion bare propagators are modified according to $$\frac{i}{\gamma^\mu p_\mu - M_0 + i\epsilon} \;\longrightarrow\; \frac{i\, f(|\vec p|/\Lambda)}{\gamma^\mu p_\mu - M_0 - \Delta(p) + i\epsilon}, \label{propFermions}$$ and similarly the scalar bare propagators are modified according to $$\frac{i}{p^2 - m_0^2 + i\epsilon} \;\longrightarrow\; \frac{i\, \tilde f(|\vec p|/\Lambda)}{p^2 - m_0^2 - \tilde\Delta(p) + i\epsilon}. \label{propScalar}$$ The requirement on the functions $f(x)$ and $\tilde f(x)$ which specify the cut-off is that they go to $1$ as $x\to 0$, to reproduce Lorentz-invariant low-energy behavior, and that they go to zero as $x\to\infty$.
The functions $\Delta$ and $\tilde\Delta$ would be specified by concrete proposals for the modified dispersion relations. We concentrate on examining the effect of the cutoff by itself, and thus the changes in the bare dispersion relations will be ignored. Let us consider the full propagator of the scalar field, and more specifically its self-energy $\Pi(p)$[^4]. The parameter $\Lambda$ is of order the Planck scale. Our choice that the cutoff function depends only on the size of the 3-momentum is for simplicity of calculation, and in fact simple changes in this choice do not change the main result [@Alexis]. The one-loop approximation to the self-energy $\Pi(p)$ is given by the standard “sausage" Feynman diagram. We wish to investigate its properties when the momentum $p^\mu$ and the mass $m$ are much less than the cutoff $\Lambda$. Thus we make the customary Taylor expansion of $\Pi$ about $p=0$ and obtain $$\Pi(p) = A + p^2 B + \tilde\xi\, p^{\mu}p^{\nu}W_{\mu}W_{\nu} + \Pi^{\rm (LI)}(p^2) + {\cal O}(p^4/\Lambda^2).$$ Here $W_{\mu}$ is the 4-velocity of the preferential frame, whose appearance can be traced to Eqs. \[propFermions\] and \[propScalar\], where $|\vec p| =\sqrt{(\eta_{\mu\nu} +W_\mu W_\nu)p^\mu p^\nu}$; $p^2=p^\mu p^\nu\eta_{\mu\nu}$, with $\eta_{\mu\nu}$ being the space-time metric. The coefficients $A$ and $B$ correspond to the usual Lorentz-invariant mass and wave function renormalization. The fourth term $\Pi^{\rm (LI)}(p^2)$ is Lorentz-invariant. The third term, however, is clearly Lorentz violating. The coefficient $\tilde \xi$ is independent of $\Lambda$, and in fact explicit calculations give: $$\label{vani} \tilde \xi = \frac{g^2} {6\pi^2} \left[ 1 + 2 \int \limits_0^{\infty}{dx}\, x f'(x)^2 \right] .$$ Although this term depends on the details of the function $f$ which models the microscopic quantum gravity effects, it is positive definite. Quantitatively, the corresponding Lorentz violation is of order the square of the coupling, rather than being power-suppressed.
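Eq. \[vani\] can be evaluated for a concrete cutoff profile; with $f(x)=e^{-x}$ (our illustrative choice, not the paper's) the integral is $\int_0^\infty x\,e^{-2x}dx = 1/4$, so $\tilde\xi = (3/2)\,g^2/(6\pi^2) \approx 0.025\,g^2$, i.e. of order $10^{-2}$ for order-one couplings. A quick numerical check:

```python
import math

def xi_tilde(g, f_prime, x_max=50.0, n=200_000):
    """xi_tilde = g^2/(6 pi^2) * (1 + 2 * int_0^inf x f'(x)^2 dx),
    with the integral done by a simple midpoint rule."""
    dx = x_max / n
    integral = sum((i + 0.5) * dx * f_prime((i + 0.5) * dx) ** 2
                   for i in range(n)) * dx
    return g ** 2 / (6 * math.pi ** 2) * (1 + 2 * integral)

# illustrative smooth cutoff f(x) = exp(-x), so f'(x) = -exp(-x)
val = xi_tilde(1.0, lambda x: -math.exp(-x))
print(round(val, 4))   # analytic value: 1.5 / (6 pi^2) ~ 0.0253
```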
One might want to treat this term as a renormalization of the space-time metric tensor; however, there are many fields in the standard model that differ in the sizes of their couplings. Hence one way to describe the effect would be to say that each of these fields sees a different metric tensor and thus has a different limiting velocity. On the other hand, the limits on the differences in limiting velocities for different particle species are – even using only analyses that predate the latest round of studies – quite stringent [@Weinberg], at the level of $10^{-20}$, while the expected value of such differences is at least about $10^{-2}$, given the values of the standard model coupling constants. Thus, in the absence of a mechanism that would prevent these large Lorentz violations while preserving some small ones, the ideas underlying the proposals discussed at the start of this section would seem to be rather untenable. Recently Pospelov [@Pospelov] has argued that supersymmetry might provide such a mechanism. However he notes that the supersymmetry algebra contains the Lorentz algebra, and thus it would seem problematic to argue that the latter is broken but the former is at work. He notes nevertheless that the protection from large Lorentz violations would work even if only the translation subgroup were unbroken. Here, we point out that space-time granularity would break precisely that subgroup. We should emphasize that while the idea that the Planck length might be some sort of “minimum measurable length" seems to put into question the range of validity of Lorentz invariance, as shown in [@carlo], the existence of a minimum measurable length does not of itself imply that local Lorentz invariance is violated, any more than the discreteness of the *eigenvalues* of the angular momentum operators implies violation of rotational invariance in ordinary quantum mechanics.
On a similar note, the work in [@Rafael] illustrates the point that a discrete structure of space-time does not by itself imply the existence of a preferential rest frame or the violation of Lorentz invariance. We conclude that at present the theoretical ideas that claim to connect a space-time granularity of quantum-gravitational origin with a violation of Lorentz invariance seem to be in serious trouble, to say the least.

If it ain't broken, why not try bending it?
===========================================

This title has perhaps an exceedingly negative connotation, and fairness requires that the reader be warned that, at this point, it cannot be argued that these approaches are unviable. However, given the tremendous difficulties that such schemes seem to face, in particular as regards the physical interpretation that one is to give to the mathematical structures, which will be briefly discussed below, I can only give my own personal pessimistic outlook for this line of thought. Nevertheless, I shall point to a particular variant that seems to me more promising, not only because it is simpler, but because it is grounded in a rigorous mathematical foundation and based on a method that has been successful in what can be considered similar instances. This second point of view towards Quantum Gravity Phenomenology is based on the idea that Lorentz invariance might not be broken, and that there would therefore be no preferential frame at all, but that instead the local geometry of space-time would exhibit departures from that described by special relativity. The options that have been considered can be classified as tied to the notions: 1) that the Lorentz algebra might be replaced by some sort of nonlinear mathematical structure, 2) that the Lorentz algebra, in fact the full Poincaré algebra, might be unified with the Heisenberg algebra (i.e.
including space-time coordinates) and then modified, and 3) that the space-time structure [*in itself*]{} might become non-commutative. To go into the detailed way in which problems appear in each one of these alternatives would be far beyond the scope of this paper. However, the main problems will be mentioned for the benefit of the reader. Consider first the scenarios where the Lorentz algebra is supposed to appear in a nonlinear form, i.e. where the commutators are no longer linear functions of the generators. It has been shown that in the cases that have been studied [@redefinitions] one can perform a nonlinear redefinition of the generators in which the algebra again takes a linear Lie-algebra form (if one adds the central generator and then proceeds as in [@NonLinear-linear]). One then takes the view that those are not the variables that one measures. The issue then becomes: what are the variables that we measure? This takes us to the assumptions underlying the way we build our detectors and other devices that measure energy, momentum, etc. Here we note that the conservation of energy-momentum is one of the basic principles on which we base our measurements of these quantities and our design of the apparatuses that measure them, and that this is a particularly important aspect of such measurements for high-energy particles. Thus, the issue becomes: what are the quantities that are conserved? This takes us to the question of how one obtains the total energy and total momentum of composite objects, which is essentially linked to the selection of the co-product for the algebra generators. If one wants something different from the standard situation, one needs a nontrivial (non-primitive) co-product [@coproduct]. The cases that have been analyzed are based on the selection of asymmetric (or non-commutative) co-products, where, say, the total energy of a pair of particles depends on the way we order them, calling one first and the other second.
Here one faces a very serious problem, because there does not seem to exist a canonical recipe for deciding, in each specific situation, which order to take: given two particles in a collider, which one is called first and which second affects their total energy and momentum, a clearly disastrous situation, as one would not know how to proceed. Thus one would lack an interpretation scheme for the formalism, a fact that makes it impossible to use, at least for phenomenology. In fact, even if one chooses a symmetric but nontrivial co-product, one faces the so-called “spectator problem", where a system of two particles would transform differently when considered as a subsystem of, say, a three-particle system than when considered by itself. In that case one would not know in principle how to proceed, as even a particle in a remote region of the universe, which happened to be in the regime where the nonlinearity becomes important, would affect the physics of, say, a scattering process at Fermilab. It thus seems that any recourse to a nontrivial co-product puts us in an essentially untenable situation. Let us note that some of the problems that these proposals face have been pointed out before [@problems]. The second option would start from the requirement that one maintain a Lie-algebra structure and a trivial co-product, and modify only the Lie-algebra structure constants. Within this set of ideas, the most promising ones seem to be those that arise from considerations of algebraic stability applied to the Poincaré-Heisenberg algebra [@Mendez; @Chrysomalis]. It is noteworthy that such considerations would take one directly from the Galilean algebra to the Lorentz algebra, and from the commutative algebra of functions over phase space to the Heisenberg algebra [@Flato].
Unfortunately, and as clearly noted by the authors of [@Chrysomalis], this approach also suffers from interpretational difficulties, connected to the fact that for composite systems the position operators cannot reasonably be expected to be additive. In other words, in such schemes one is asked to consider nonstandard commutators involving the 4-position and 4-momentum operators and a new central operator, while the Lorentz sector remains untouched. In devising an experiment we need an unambiguous identification of these objects with the quantities one measures. It seems rather clear that the objects we would call the position operators in these schemes cannot be identified with the actual positions of objects obtained during measurements[^5], and then one is at a loss as to what could constitute an actual test of the scheme. The momentum operators do not seem, at first sight, to suffer from the same problems that afflict the position operators, but the issue of what objects they can in fact be associated with remains confusing, as the momentum operators are intimately connected to the position operators: both trace their origin to a conjugate pair of variables. Thus it seems that one could not associate such momentum operators with objects to which one cannot associate the corresponding position operators. These points are not made to suggest that the scheme is unviable, as I do not think it is, but rather to stress that the hope for its applicability lies in a profound interpretational analysis that would clarify the status of the “4-position observables" and connect them to objects found in the real world. The third set of ideas, the so-called non-commutative geometry program, usually starts with the postulate that the Minkowski coordinates do not commute, and that the commutators are functions of these coordinates themselves and not of any other generators.
Thus one assumes that they satisfy a fundamental commutation relation such as $$[\hat x^\mu, \hat x^\nu] = i\, \theta^{\mu \nu} ,$$ where $\theta^{\mu \nu}$ is a fundamental antisymmetric c-number tensor. Here we note that, if taken at face value and used without further modifications[^6], the problem is that there is no Lorentz-invariant antisymmetric tensor of rank 2. This point should not be confused with the covariance of antisymmetric tensors such as the electromagnetic field strength $F^{\mu \nu}$. The issue is, of course, that once we write a specific numerical proposal for the matrix $\theta^{\mu \nu}$, such a specific value can be associated at most with one specific reference frame, and then the issue is: which one? In other words, while in the case of $F^{\mu \nu}$ its specific values in a given frame and a given physical situation are determined by the field equations of motion and the boundary conditions (the latter of course take different specific values in the various frames), in the case of a fixed fundamental object associated with the space-time structure, such as $\theta^{\mu \nu}$, those elements are not available. Therefore any specific recipe for $\theta^{\mu \nu}$ could only be given in association with one specific frame, and furthermore it would imply the existence of a preferential frame in which the corresponding matrix of specific values has a particularly simple form. For instance, if at all possible, it will be only in one particular frame that we could say that the $\theta^{0 i}$ components are all zero. Thus this scheme incorporates the selection of preferential frames, and is thus, as pointed out in [@Collins], susceptible to the same problems we encountered in section II. Of course one should stress that it is conceivable that a scheme could be constructed that overcomes these difficulties, and in this regard it is worthwhile to mention the proposals in [@Paolo].
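The frame dependence of a fixed $\theta^{\mu\nu}$ can be made concrete with a small numerical check: start in a frame where all $\theta^{0i}$ vanish and apply a boost; the transformed matrix acquires nonzero $\theta'^{0i}$ components. A sketch (the numerical values are illustrative):

```python
import numpy as np

def boost_x(beta):
    """Lorentz boost along x (components Lambda^mu_nu, c = 1)."""
    g = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * beta
    return L

# A purely "magnetic-like" theta: only theta^{12} = -theta^{21} nonzero.
theta = np.zeros((4, 4))
theta[1, 2], theta[2, 1] = 1.0, -1.0

L = boost_x(0.6)
theta_p = L @ theta @ L.T  # theta'^{mu nu} = Lambda^mu_a Lambda^nu_b theta^{ab}
# theta'^{02} = -gamma*beta = -0.75: the "electric-like" components no longer
# vanish, so the frame with theta^{0i} = 0 is physically singled out.
```

Antisymmetry is preserved under the boost, but the simple form of the matrix is not, which is the point made in the text.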
Other proposals one finds in the literature start by writing $$[\hat x^0, \hat x^i] = i \lambda\, \hat x^i , \qquad {\rm or} \qquad [\hat x^\mu, \hat x^\nu] = f^{\mu\nu}{}_{\rho}\, \hat x^\rho ,$$ with $\lambda$ a constant with dimensions of length and $f^{\mu\nu}{}_{\rho}$ fixed structure constants. These schemes, in my view, suffer from another serious problem. Let's focus on the first proposal: if we pretend to view this within the interpretational framework of quantum mechanics, the corresponding uncertainty relations indicate that one could not find a simultaneous eigenvector of $x^0$ and $x^i$, except when the corresponding eigenvalue of $x^i$ happens to be zero. Thus one cannot localize an event both in space and in time (taking, as we said, the interpretational framework directly from QM) unless that event is at the origin of the coordinates. The issue is then: where is the origin of coordinates? It would be clearly unphysical to state that certain points in space become physically differentiated just because we chose them as the origin of coordinates. It is clear that the second scheme suffers from similar problems: if the objects $x^\mu$ have any relation whatsoever with the coordinates we measure, the non-commutativity becomes larger for larger values of the coordinates (the same can be said for eigenvalues, expectation values of the corresponding operators, and in fact for any adjudication of real values to these objects). In other words, the effect decreases as we approach the origin of the coordinates. But where in the universe is this point or region? The only option seems to be to change the interpretational scheme, but then we find ourselves again in a conundrum similar to that of the first direction we explored in this section. It is of course possible that these problems might be overcome, but at this point it is fair to say that the situation is far from clear. For further reading on quantum field theory in non-commutative space-times see [@QFTNCST].

What might be There?
=====================

In view of the hardships one encounters in trying to reconcile the naive idea of a granular structure of space-time – which would naturally be associated with a preferential reference frame, the one where the granularity takes, say, the most symmetric form – with the tight phenomenological bounds that have been obtained, and with the clear expectation from quantum field theory that the effects of such a granular structure would be only lightly suppressed, one is led to consider more subtle possibilities. In this section we discuss an alternative way in which a granular structure of space-time might appear, one that would be immune to the previous considerations while still, in principle, susceptible to phenomenological study. The idea will be discussed in a rather heuristic way, and it is fair to say that there is at this point no concrete realization of the proposal. However, one can as usual employ symmetry principles to restrict the possible phenomenological manifestations. These ideas were first studied in [@NewQGP]. We have at this point no really good geometrical picture of how a granularity might be associated with space-time while strictly preserving the Lorentz and Poincaré symmetries. One can point, however, to the Poset program [@Rafael] as one that seems to embody such a scheme, though we will not at this point commit to any specific proposal. Instead we seek guidance in analogies with some simple ideas from solid state physics. Thus, we consider the case of a crystal, and note that when a large crystal has the [*same*]{} symmetry (say cubic) as its fundamental cell, one would expect no deviations from full cubic symmetry as a result of the discrete nature of the fundamental building blocks. In fact, one would not expect, in such a situation, that the discrete structure of the crystal could be revealed at the macroscopic level by any deviation from precise cubic symmetry.
The discrete structure might be studied, of course, but NOT by looking at deviations from such symmetry. However, if one considers a macroscopic crystal whose global form, say hexagonal, is not compatible with the structure of the fundamental cell, the surface will necessarily include some roughness, and thus a manifestation of the granular structure would occur through a breakdown of the exact hexagonal symmetry. Our ideas will be guided by the simple picture above, transported [*mutatis mutandis*]{} from the crystal and cubic symmetry to space-time and Lorentz symmetry. Thus, we start by assuming that the underlying symmetry of the fundamental structure of space-time is itself the Lorentz symmetry, which naturally leads us to expect no violation of the symmetry at the macroscopic level when the space-time is macroscopically Lorentz invariant. The large-scale Lorentz symmetry is thus protected by the symmetry of the fundamental granular structure. In a region of space-time normally considered as well approximated by the Minkowski metric, the granular structure of the quantum space-time would therefore not become manifest through a breakdown of its symmetry. However, following our solid-state analogy, we are led to consider the situation in which the macroscopic space-time is not fully compatible with the symmetry of its basic constituents. The main point is, then, that in the event of a failure of the space-time to be exactly Minkowski in an open domain, the underlying granular structure of quantum-gravity origin could become manifest, affecting the propagation of the various matter fields. Such a situation should thus involve the Riemann tensor, which is known to describe precisely the failure of a space-time to be Minkowski over an open region. Thus the non-vanishing of the Riemann tensor would correspond to the macroscopic description of the situation in which the microscopic structure of space-time might become manifest.
Moreover, we can expect, due to the implicit correspondence of the macroscopic description with the more fundamental one, that the Riemann tensor would also indicate the space-time directions with which the sought effects would be associated. This selection of special space-time directions embodies a certain analogy, within the current approach, to the global selection of a preferential reference frame that was implicit in the schemes towards Quantum Gravity Phenomenology described in section II. With these ideas in mind we turn now to proposing the corresponding phenomenology. That implies considering an effective description of the way the Riemannian curvature could affect, in a nontrivial manner, the propagation of matter fields. Thus we need to consider the Lagrangian terms representing such couplings. Before we do so, we recall that the Ricci tensor represents that part of the Riemann tensor which, at least on shell, is locally determined by the energy-momentum of matter at the events of interest. Thus the coupling of matter to the Ricci part of the Riemann tensor would, at the phenomenological level, reflect a sort of pointwise self-interaction of matter that would amount to a locally defined renormalization of the usual phenomenological terms, such as the mass or kinetic terms in the Lagrangian. However, we are interested in the underlying structure of space-time rather than the self-interaction of matter. Thus we need to ignore the aspects that encode the latter, which in our case corresponds to all Lagrangian terms containing the Ricci tensor coupled to matter fields. The remainder of the Riemann tensor, i.e. the Weyl tensor, can thus be thought to reflect the aspects of the local structure of space-time associated solely with the gravitational degrees of freedom. Therefore we are led to consider the coupling of the Weyl tensor to the matter fields.
We note that in the absence of gravitational waves the Weyl tensor is also connected with the nearby “matter sources", but such a connection involves the propagation of their influence through the space-time, and thus the structure of the latter plays a central role in the way the influences become manifest. In this sense the Weyl tensor reflects the “non-local effects" of the matter, in contrast with the Ricci tensor or curvature scalar, which are determined by the latter in a completely local way. We further assume observer covariance and the absence of globally defined non-dynamical tensor fields. We are interested in the minimally suppressed terms, those suppressed by only the first power of $M_{Planck}$, which would naturally correspond to dimension-5 operators, when considering the coupling of the fundamental fields of the standard model, bosons and fermions, to the Weyl tensor. Using the fact that the Weyl tensor, like the Riemann tensor, has mass dimension 2, while the fermions have mass dimension 3/2 and the bosons have mass dimension 1, one can show that there are no non-vanishing dimension-5 operators coupling the Weyl tensor to the fields of the standard model [@NewQGP]. Thus one can either take this as an indication that the effects one is looking for are more strongly suppressed, or search for somewhat more indirect approaches. In [@NewQGP] we take the latter approach and consider the following scheme. One considers the Weyl tensor, viewed as a tensor of type $(2,2)$, as a mapping from the space $\cal S$ of antisymmetric tensors of type $(0,2)$ into itself. As is well known, the space-time metric endows the six-dimensional vector space $\cal S$ with a pseudo-Riemannian metric of signature $(+++---)$. The Weyl tensor is then a symmetric operator on this space ${\cal S}$, which can therefore be diagonalized, and thus has a complete set of eigenvectors (which are, however, not necessarily orthogonal).
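The diagonalizability statement can be illustrated numerically. A minimal sketch, with a random matrix standing in for the Weyl operator (it is not the Weyl tensor of any particular space-time, and the pseudo-metric subtlety that eigenvalues may come in complex pairs is glossed over, as in the text): an operator $W$ on $\cal S$ is symmetric with respect to the bivector metric $G$ of signature $(+++---)$ exactly when the matrix $GW$ is symmetric, and generically it has a complete, though not $G$-orthogonal, eigenbasis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Metric on the 6-dimensional bivector space S, signature (+++---).
G = np.diag([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

# A G-symmetric operator: G(Wu, v) = G(u, Wv)  <=>  G @ W is symmetric.
S = rng.standard_normal((6, 6))
S = 0.5 * (S + S.T)   # symmetric seed, stand-in for the Weyl map
W = G @ S             # G @ W = S is symmetric (G^2 = identity)

# Generically six distinct eigenvalues and a complete eigenbasis,
# though the eigenvectors need not be orthogonal in the G metric.
vals, vecs = np.linalg.eig(W)
```

Each column of `vecs` satisfies the eigenvalue equation, and the six eigenvectors span $\cal S$, mirroring the decomposition used for the $\Xi^{(i)}$ below.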
We will assume for simplicity that all eigenvalues are different, and consider only the eigenvectors $\Xi^{(i)}$ corresponding to non-vanishing eigenvalues $\lambda^{(i)}$, fixing the normalization of these eigenvectors to be $\pm 1$ (the null eigenvectors are dropped). Next we use the antisymmetric tensors $\Xi^{(i)}_{\mu \nu}$ and their associated eigenvalues $\lambda^{(i)}$ to construct the types of Lagrangian terms we are interested in. Finally we look for terms linear in these objects, and recalling that the eigenvalues $\lambda^{(i)}$ have the dimension of the Riemann tensor, we have the least possible suppressions in each sector as follows. In the scalar sector there is in fact no candidate of dimension 5 or 6 for such a term. In the vector boson sector, taking into account the requirements of gauge invariance, we are led to a dimension-6 term $$ {\cal L}_{\rm m} = \frac{\xi}{M_{\rm Pl}^{2}} \sum_{i} \lambda^{(i)}\, \Xi^{(i)}_{\mu \nu}\, {\rm Tr}\left( F^{\mu}{}_{\rho}\, F^{\rho \nu} \right) .$$ It is worthwhile pointing out that in a purely $U(1)$ sector one can write a dimension-4 term, $$ {\cal L}_{\rm m} = \xi' \sum_{i} \lambda^{(i)}\, \Xi^{(i)}_{\mu \nu}\, F^{\mu \nu} .$$ This is an unsuppressed term, which is rather surprising; however, the fact that such a term cannot be written for non-abelian gauge fields, together with the fact that in the standard model the $U(1)$ sector mixes with the $SU(2)$ sector, suggests that such terms should be absent. We have no tighter argument regarding this possibility at this point, but in view of the last observation we will not consider it any further. Finally, in the fermion sector we have a term $$ {\cal L}_{\rm f} = \frac{\xi}{M_{\rm Pl}} \sum_{i} \lambda^{(i)}\, \Xi^{(i)}_{\mu \nu}\, \bar\Psi \gamma^{\mu} \gamma^{\nu} \Psi . \label{neta}$$ Thus the fermions seem to provide the most promising probes, which seems a fortunate situation in this scheme. One could also consider coupling a scalar made out of the standard model fields directly to an appropriate power of a scalar constructed out of the Weyl tensor, such as $(W_{\mu\nu\rho\sigma}W^{\mu\nu\rho\sigma})^{1/2}$.
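The dimension counting behind the three terms above can be tallied explicitly (natural units; a trivial bookkeeping sketch, with the suppression read off as the operator dimension minus 4):

```python
# Mass dimensions of the ingredients: lambda^(i) carries the dimension of
# the Riemann tensor, the normalized Xi^(i) are dimensionless.
d_lambda, d_Xi = 2, 0
d_F = 2                  # field strength F_{mu nu}
d_fermion_bilinear = 3   # bar-Psi ... Psi, each fermion of dimension 3/2

gauge_term = d_lambda + d_Xi + 2 * d_F               # Tr(F F) term
u1_term = d_lambda + d_Xi + d_F                      # U(1) linear term
fermion_term = d_lambda + d_Xi + d_fermion_bilinear  # fermion bilinear term

# Suppressions: gauge -> 1/M_Pl^2, U(1) -> unsuppressed, fermion -> 1/M_Pl,
# so the fermion term is the minimally suppressed viable operator.
```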
This proposal departs slightly from the spirit of the suggestion that the space-time structure would naturally and locally select preferential space-time directions. The lack of this feature would tend to make the effects, in principle, much harder to detect experimentally. On the other hand, this line opens the way to consider effects that would not be suppressed by $M_{Planck}$ at all, such as $$ {\cal L}_{\rm f} = \tilde\xi \left(W_{\mu\nu\rho\sigma}W^{\mu\nu\rho\sigma}\right)^{1/4} \bar\Psi \Psi . \label{neta2}$$ Terms of this type, in which the space-time structure appears only as a scalar coupled to the matter fields, would correspond to a space-time dependence of masses or coupling constants, controlled by the local curvature. As we mentioned, the fact that they exhibit no particular signature would tend to make the related effects very difficult to probe. Regarding phenomenology one should thus clearly concentrate on the fermion sector as leading to the most promisingly observable effects. Before continuing, we write the corresponding Lagrangian term again, taking now into account a possible flavor dependence, which could be thought to arise from the detailed way the different fields interact with the virtual excitations that intimately probe the underlying space-time structure. Thus we consider: $$ {\cal L}^{(2)}_{\rm f} = \frac{1}{M_{\rm Pl}} \sum_{a} \sum_{i} \xi_{a}\, \lambda^{(i)}\, \Xi^{(i)}_{\mu \nu}\, \bar\Psi_a \gamma^{\mu} \gamma^{\nu} \Psi_a , \label{10}$$ where $a$ denotes flavor. Next we note that we have in principle the same types of effects that have been considered in the Standard Model Extension (SME) [@SME], but only with terms of the form $-1/2\, H_{\mu \nu}\, \bar\Psi \sigma ^{\mu \nu} \Psi$. Moreover, here the tensor $H_{\mu \nu}$ must be identified with $ -\frac{2\xi }{M_{\rm Pl}} \sum_{i} \lambda^{(i)}\, \Xi^{(i)}_{\mu \nu}$, and thus has a predetermined space-time dependence dictated by the surrounding gravitational environment.
Therefore, special care has to be taken when comparing different experiments at different sites[^7], by taking into account the differences in the surrounding environment, which lead to different values of the relevant curvature-related tensors. Finally, we comment briefly on the related phenomenology. The relevant experiments must involve both relatively large gravitational tidal effects (indicating large curvature) in the local environment and probes involving polarized matter, as the explicit appearance of the Dirac matrix $[\gamma^\mu, \gamma^{\nu}]$ indicates. Both conditions seem from the onset difficult to achieve and to control. Polarized matter is usually highly magnetic, and thus electromagnetic disturbances would need to be controlled to a very high degree, as they would tend to obscure any possible effects. Gravitational field gradients are usually exceedingly small on Earth, and even in the solar system. Thus, neutrinos crossing regions of large curvature seem like very good candidates to be studied in this context. We note in particular that a term of the sort we are considering could lead to neutrino oscillations even if neutrinos are massless, in close analogy with the ideas exposed in [@neutrinos]. Next we note that the terms in question do not violate CPT, so that that particular phenomenological avenue is closed. On the other hand, other discrete symmetries, particularly CP, could, depending on the environment and the state of motion of the probes, be open channels for investigation. In this light it would be very interesting to consider the neutral-kaon system where, as in the fifth-force scenario, one would look for an energy dependence of the system’s parameters [@MyThesis]. On the other hand, as we mentioned before, one expects that the useful probes would involve polarized matter, which would seem to rule out the usefulness of neutral kaons.
However, one should consider other particles, such as neutrinos, that might combine some sort of flavor oscillation with a nontrivial polarization structure. All these ideas are of course in need of a much more detailed study. We end this section by pointing out a quite different proposal regarding possible manifestations of Quantum Gravity: the possibility that an underlying discrete structure of time would lead to a fundamental decoherence in quantum mechanics, considered in [@GIDecoh]. This idea is quite intriguing and is in fact closer in spirit to the ideas and proposals that we address in the next section.

What seems to be There
======================

This might sound like a strange title, as it indicates that there is in fact some sort of evidence for a manifestation of Quantum Gravity. We will argue that indeed there is something out there that requires new physics for its understanding. It is of course not at all clear that the problem we will discuss should be related to Quantum Gravity, but since that is the only sphere of fundamental physics for which we have so far failed to find a satisfactory conceptual understanding[^8], we find it quite natural to associate the two. In fact, the ideas of Penrose regarding the fundamental changes that, he argues [@Penrose], are needed in Quantum Mechanics, and their connection to Quantum Gravity, are an inspirational precedent for the analysis first reported in [@InflationUS]. There are, for instance, lingering interpretational problems in quantum mechanics, in particular in connection with the measurement problem. As is often emphasized by R. Penrose, we have in the Copenhagen interpretation, and in fact in any practical application of the theory, two quite different evolution processes: the U process, or unitary evolution, applied when systems are not subjected to a measurement, and the R process, or state-reduction process, which makes its appearance whenever a measurement is invoked.
The point is that without recourse to the R process the theory can make no predictions; but when exactly we should, in principle, call upon the R process is a question that is not addressed within the theory. Other interpretations have similar problems: for instance, in the many-worlds interpretation one has the universe splitting with every measurement, yet the issue of how, in principle, one determines what constitutes a measurement is not resolved. These issues have prompted R. Penrose to propose that quantum gravity might play a role in triggering a real dynamical collapse of the wave function of systems [@Penrose]. His proposals would have a system collapsing whenever the gravitational interaction energy between two alternative realizations that appear superposed in the wave function of the system reaches a certain threshold identified with $M_{Planck}$. The ideas can in principle lead to observable effects, and in fact experiments to test them are currently being contemplated [@ExpPenrose] (although it seems that the available technology cannot yet be pushed to the level where actual tests might be expected to become a reality soon). We have considered in [@InflationUS] a situation for which there exists already a wealth of empirical information, and which, we have argued, cannot be fully understood without invoking some New Physics whose required features would seem to be quite close to Penrose’s proposals: the quantum origin of the seeds of cosmic structure. In fact, one of the major claimed successes of inflationary cosmology is its reported ability to predict the correct spectrum for the primordial density fluctuations that seed the growth of structure in our Universe.
However, when one thinks about it, one immediately notes that there is something truly remarkable here: out of an initial situation which is taken to be perfectly isotropic and homogeneous, and based on a dynamics that preserves those symmetries, one ends with an inhomogeneous and anisotropic situation. Most of our colleagues who have been working in this field for a long time would reassure us that there is no problem at all, invoking a variety of arguments. It is noteworthy that these arguments tend to differ from one inflationary cosmologist to another [@Cosmologists]. Other cosmologists do acknowledge that there seems to be something unclear at this point [@Cosmologists2]. In a recent paper [@InflationUS] a critical analysis of such proposals has been carried out, indicating that all the existing justifications fail to be fully satisfactory. In particular, the cosmological situation can be seen to be quite different from any other situation usually treated using quantum mechanics once one notes that, in analyzing ordinary situations, quantum mechanics offers us at least one self-consistent assignment, at all times, of a state of the Hilbert space to our physical system (we are of course thinking of the Schroedinger picture). It is well known that in certain instances there might be several mutually incompatible assignments of that sort, as for instance when contemplating the two descriptions offered by two different inertial observers who consider a specific EPR experiment. However, as we said, in all known cases one has at least one description available. The reader might want to attempt to conceive of such an assignment – of a state at each time – when presented with any of the proposed justifications offered to deal with the issue of the transition from a symmetric universe to a non-symmetric one.
The reader will find that in each instance he/she will be asked to accept one of the following: i) our universe was not really in that symmetric state (corresponding to the vacuum of the quantum field), ii) our universe is still described by a symmetric state, iii) at least at some points in the past the description of the state of our universe could not be done within quantum mechanics, iv) quantum mechanics does not correspond to the full description of a system at all times, or v) our own observations of the universe mark the transition from a symmetric to an asymmetric state. It should be clear that none of these represents a satisfactory alternative, in particular if we want to claim that we understand the evolution of our universe and its structure – including ourselves – as the result of fluctuations of quantum origin in the very early stages of our cosmology. Needless to say, none of these options will be explicitly called upon in the arguments one is presented with; however, one or more will be hidden, perhaps in a subtle way, underneath some aspect of the explanation. For a more thorough discussion we refer the reader to [@InflationUS]. The interesting part of this situation is that one is forced to call upon some novel physical process to fill in the missing or unacceptable part of the justification of the steps that take us from that early and symmetric state to the asymmetric state of our universe today, or to the state of the universe we photograph when we look at the surface of last scattering in the pictures of the CMB. In [@InflationUS] we have considered, in this cosmological context, a proposal calling for a self-induced collapse of the wave function along the general lines conceived by Penrose, and have shown that the requirement that one obtain results compatible with current observations is already sufficient to restrict, in important ways, some specific aspects of this novel physics.
Thus, if we consider that the origin of such new physics can be traced to some aspect of quantum gravity, we are already in a position to set phenomenological constraints, at least on this aspect of the quantum theory of gravitation. In the following we give a short description of this analysis for the benefit of the reader. The starting point is, as usual, the action of a scalar field coupled to gravity, $$S=\int d^4x \sqrt{-g}\left[\frac{1}{16\pi G} R[g] - \frac{1}{2}\nabla_a\phi\nabla_b\phi\, g^{ab} - V(\phi)\right], \label{eq_action}$$ where $\phi$ stands for the inflaton, the scalar field responsible for inflation, and $V$ for the inflaton’s potential. One then splits both the metric and the scalar field into a spatially homogeneous (‘background’) part and an inhomogeneous part (‘fluctuation’), i.e. $g=g_0+\delta g$, $\phi=\phi_0+\delta\phi$. The unperturbed solution corresponds to standard inflationary cosmology which, written in conformal time, has the scale factor $$a(\eta)=-\frac{1}{H_{\rm I} \eta}, \label{expansion}$$ with the scalar field $\phi_0$ in the slow-roll regime. The perturbed metric can be written $$ds^2=a(\eta)^2\left[-(1+ 2 \Psi) d\eta^2 + (1- 2 \Psi)\delta_{ij} dx^idx^j\right],$$ where $\Psi$, the relevant perturbation, is called the Newtonian potential. The perturbation of the scalar field leads to a perturbation of the energy-momentum tensor, and thus Einstein’s equations at lowest order give $$\nabla^2 \Psi = 4\pi G \dot \phi_0 \delta\dot\phi . \label{main2}$$ Next, one constructs the quantum theory of the field $\delta\phi$. It is convenient to consider instead the field $y=a \delta \phi$. We consider the field in a box of side $L$, and decompose the real field $y$ into plane waves $$y(\eta,\vec{x})=\frac{1}{L^{3}} \Sigma_{ \vec k} \left({\hat{a}}_k y_k(\eta) e^{i \vec{k}\cdot\vec{x}}+{\hat{a}^{\dagger}}_{k} \bar y_k(\eta) e^{-i\vec{k}\cdot\vec{x}}\right),$$ where the sum is over the wave vectors $\vec k$ satisfying $k_i L= 2\pi n_i$ for $i=1,2,3$ with $n_i$ integers.
It is convenient to rewrite the field and momentum operators as $${\hat{y}}(\eta,\vec{x})= \frac{1}{L^{3}}\sum_{\vec k}\ e^{i\vec{k}\cdot\vec{x}} \hat y_k (\eta), \qquad {{\hat{\pi}^{(y)}}}(\eta,\vec{x}) = \frac{1}{L^{3}}\sum_{\vec k}\ e^{i\vec{k}\cdot\vec{x}} \hat \pi_k (\eta),$$ where $\hat y_k (\eta) \equiv y_k(\eta) {\hat{a}}_k +\bar y_k(\eta) {\hat{a}^{\dagger}}_{-k}$ and $\hat \pi_k (\eta) \equiv g_k(\eta) {\hat{a}}_k + \bar g_{k}(\eta) {\hat{a}^{\dagger}}_{-k}$ with $$y^{(\pm)}_k(\eta)=\frac{1}{\sqrt{2k}}\left(1\pm\frac{i}{\eta k}\right)\exp(\pm i k\eta),$$ and $$g^{(\pm)}_k(\eta)=\pm i\sqrt{\frac{k}{2}}\exp(\pm i k\eta) . \label{Sol-g}$$ As we will be interested in a kind of self-induced collapse which operates in close analogy with a “measurement", we proceed to work with Hermitian operators, which in ordinary quantum mechanics are the ones susceptible to direct measurement. Thus we decompose both $\hat y_k (\eta)$ and $\hat \pi_k (\eta)$ into their real and imaginary parts $\hat y_k (\eta)=\hat y_k{}^R (\eta) +i \hat y_k{}^I (\eta)$ and $\hat \pi_k (\eta) =\hat \pi_k{}^R (\eta) +i \hat \pi_k{}^I (\eta)$, where $$\hat{y_k}{}^{R,I} (\eta) = \frac{1}{\sqrt{2}}\left( y_k(\eta) {\hat{a}}_k{}^{R,I} +\bar y_k(\eta) {\hat{a}^{\dagger}}{}^{R,I}_k\right) ,\qquad \hat \pi_k{}^{R,I} (\eta) =\frac{1}{\sqrt{2}}\left( g_k(\eta) {\hat{a}}_k{}^{R,I} + \bar g_{k}(\eta) {\hat{a}^{\dagger}}{}^{R,I}_{k} \right).$$ The operators $\hat y_k^{R, I} (\eta)$ and $\hat \pi_k^{R, I} (\eta)$ are therefore Hermitian. Note that the operators corresponding to $k$ and $-k$ are identical in the real case (and identical up to a sign in the imaginary case). Next we specify our model of collapse, and follow the field evolution through collapse to the end of inflation.
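As a quick consistency check on the mode functions above (a numerical sketch of ours, not part of the original derivation), one can verify that $y_k \bar g_k - \bar y_k g_k = -i$ for every $k$ and every conformal time $\eta$, i.e. that the Wronskian is constant, which is what encodes the canonical normalization of the modes:

```python
import numpy as np

def y_mode(k, eta):
    # y_k^(+)(eta) = (1/sqrt(2k)) (1 + i/(eta k)) exp(i k eta)
    return (1.0 + 1j / (eta * k)) * np.exp(1j * k * eta) / np.sqrt(2.0 * k)

def g_mode(k, eta):
    # g_k^(+)(eta) = i sqrt(k/2) exp(i k eta)
    return 1j * np.sqrt(k / 2.0) * np.exp(1j * k * eta)

def wronskian(k, eta):
    # y_k conj(g_k) - conj(y_k) g_k should be the constant -i
    y, g = y_mode(k, eta), g_mode(k, eta)
    return y * np.conj(g) - np.conj(y) * g

# independent of both k and eta during the inflationary regime (eta < 0)
for k in (0.5, 1.0, 7.3):
    for eta in (-10.0, -1.0, -0.01):
        assert np.allclose(wronskian(k, eta), -1j)
```

The analogous check with the $(-)$ branch gives $+i$; either sign convention is consistent as long as it is used throughout.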
We will assume that the collapse is somehow analogous to an imprecise measurement of the operators $\hat y_k^{R, I} (\eta)$ and $\hat \pi_k^{R, I} (\eta)$ which, as we pointed out, are Hermitian operators and thus reasonable observables. These field operators contain complete information about the field (we ignore here for simplicity the relations between the modes $k$ and $-k$). Let $|\Xi\rangle$ be any state in the Fock space of $\hat{y}$, and let us introduce the quantity $ d_k^{R,I} = \langle {\hat{a}}_k^{R,I} \rangle_\Xi. $ Thus the expectation values of the modes are expressible as $$\langle {{\hat{y}}_k{}^{R,I}} \rangle_\Xi = \sqrt{2} \Re (y_k d_k^{R,I}), \qquad \langle {{{\hat{\pi}^{(y)}}}_k{}^{R,I}} \rangle_\Xi = \sqrt{2} \Re (g_k d_k^{R,I}).$$ For the vacuum state $|0\rangle$ we have of course $ \langle{{\hat{y}}_k{}^{R,I}}\rangle_0 = 0, \langle{{\hat{\pi}^{(y)}}}_k{}^{R,I}\rangle_0 =0, $ while the corresponding uncertainties are $$\label{momentito} {(\Delta {\hat{y}}_k {}^{R,I})^2}_0 =(1/2) |{y_k}|^2(\hbar L^3), \qquad {(\Delta {\hat{\pi}}_k {}^{R,I})^2}_0 =(1/2)|{g_k}|^2(\hbar L^3).$$ [**The collapse**]{} Now we specify the rules according to which collapse happens. Again, at this point our criteria will be simplicity and naturalness; other possibilities do exist, and may lead to different predictions. What we have to describe is the state $|\Theta\rangle$ after the collapse, i.e. we need to specify $d^{R,I}_{k} = \langle\Theta|{\hat{a}}_k^{R,I}|\Theta\rangle$. In the vacuum state, ${\hat{y}}_k$ and ${{\hat{\pi}^{(y)}}}_k$ are individually distributed according to Gaussian distributions centered at 0 with spreads ${(\Delta {\hat{y}}_k)^2}_0$ and ${(\Delta {{\hat{\pi}^{(y)}}}_k)^2}_0$ respectively. However, since they are mutually non-commuting, their distributions are certainly not independent.
In our collapse model, we do not want to distinguish one over the other, so we will ignore the non-commutativity and make the following assumption about the (distribution of) state(s) $|\Theta\rangle$ after collapse: $$\begin{aligned} \langle {{\hat{y}}_k^{R,I}(\eta^c_k)} \rangle_\Theta&=&x^{R,I}_{k,1} \sqrt{{(\Delta {\hat{y}}^{R,I}_k)^2}_0}=x^{R,I}_{k,1}|y_k(\eta^c_k)|\sqrt{\hbar L^3/2},\\ \langle {{{\hat{\pi}^{(y)}}}_k{}^{R,I}(\eta^c_k)}\rangle_\Theta&=&x^{R,I}_{k,2}\sqrt{{(\Delta {\hat{\pi}^{(y)R,I}}_k)^2} _0}=x^{R,I}_{k,2}|g_k(\eta^c_k)|\sqrt{\hbar L^3/2},\end{aligned}$$ where $x_{k,1},x_{k,2}$ are distributed according to a Gaussian distribution centered at zero with spread one. From these equations we solve for $d^{R,I}_k$. Here we must recognize that our universe corresponds to a single realization of the random variables, so each of the quantities $ x^{R,I}{}_{k,1,2}$ has a single specific value. Later we will see how to make relatively specific predictions despite these features. Next we focus on the expectation value of the quantum operator which appears in our basic formula $$\nabla^2 \Psi = s \Gamma, \label{main3}$$ where we have introduced the abbreviation $s=4\pi G \dot \phi_0$, and where the quantity $\Gamma$ is the aspect of the field that acts as a source of the Newtonian potential. In the slow-roll approximation we have $\Gamma=\delta\dot\phi= a^{-1} \pi^{y}$. We propose that, upon quantization, the above equation turns into $$\nabla^2 \Psi = s \langle\hat\Gamma\rangle. \label{main4}$$ Before the collapse occurs, the expectation value on the right-hand side is zero. Let us now determine what happens after the collapse: To this end, take the Fourier transform of (\[main4\]) and rewrite it as $$\Psi_k(\eta)=\frac{s}{k^2}\langle\hat\Gamma_k\rangle_\Theta.
\label{Psi}$$ Let us now work in the slow-roll approximation and compute the right-hand side. Noting that $\delta\dot\phi=a^{-1}{{\hat{\pi}^{(y)}}}$, we find $$\langle\hat\Gamma_k\rangle_\Theta=\sqrt{\hbar L^3 k}\frac{1}{2a}F(k), \label{F}$$ where $$F(k) = (1/2) [A_k (x^{R}_{k,1} +ix^{I}_{k,1}) + B_k (x^{R}_{k,2} +ix^{I}_{k,2})],$$ with $$A_k = \frac {\sqrt{ 1+z_k^2}} {z_k} \sin(\Delta_k) ; \qquad B_k =\cos (\Delta_k) + (1/z_k) \sin(\Delta_k)$$ and where $\Delta_k= k \eta -z_k$ with $ z_k =\eta_k^c k$. Next we turn to the observational results. We will, for the most part, disregard the changes to the dynamics that occur after re-heating, due to the transition to standard (radiation-dominated) evolution. The quantity that is measured is ${\Delta T \over T} (\theta,\varphi)$, a function of the coordinates on the celestial two-sphere, which is expressed as $\sum_{lm} \alpha_{lm} Y_{l,m}(\theta,\varphi)$. The angular variations of the temperature are then identified with the corresponding variations in the “Newtonian potential" $ \Psi$, with the understanding that they are the result of the gravitational red-shift of the CMB photon frequency $\nu$, so ${{\delta T}\over T}={{\delta \nu}\over {\nu}} = {{\delta ( \sqrt{g_{00}})}\over {\sqrt{g_{00}}}} \approx\delta \Psi$. The quantity that is presented as the result of observations is $OB_l=l(l+1)C_l$, where $C_l = (2l+1)^{-1}\sum_m |\alpha^{obs}_{lm}|^2 $. The observations indicate that (ignoring the acoustic oscillations, which are in any case not considered in this work) the quantity $OB_l$ is essentially independent of $l$, and this is interpreted as a reflection of the “scale invariance" of the primordial spectrum of fluctuations.
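To make the statistical content of $F(k)$ concrete, here is a small Monte Carlo sketch of ours (an illustration, not taken from the paper): averaging $2|F(k)|^2$ over the unit Gaussians $x^{R,I}_{k,1,2}$ reproduces $A_k^2+B_k^2$, which coincides with the function $C(k)$ that enters the predicted spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)

def A_B(k, eta, z):
    # Delta_k = k*eta - z_k with z_k = eta_k^c * k, and A_k, B_k as in the text
    delta = k * eta - z
    A = np.sqrt(1.0 + z**2) / z * np.sin(delta)
    B = np.cos(delta) + np.sin(delta) / z
    return A, B

def C_exact(k, eta, z):
    delta = k * eta - z
    return 1.0 + (2.0 / z**2) * np.sin(delta)**2 + (1.0 / z) * np.sin(2.0 * delta)

def C_monte_carlo(k, eta, z, n=400_000):
    # F(k) = (1/2)[A_k (x1^R + i x1^I) + B_k (x2^R + i x2^I)], x ~ N(0,1);
    # the ensemble mean of 2|F(k)|^2 is A_k^2 + B_k^2 = C(k)
    A, B = A_B(k, eta, z)
    x = rng.standard_normal((4, n))
    F = 0.5 * (A * (x[0] + 1j * x[1]) + B * (x[2] + 1j * x[3]))
    return 2.0 * np.mean(np.abs(F)**2)

k, eta, z = 3.0, -0.2, -5.0
assert abs(C_monte_carlo(k, eta, z) - C_exact(k, eta, z)) < 0.02
```

A single universe corresponds to one draw of the four Gaussians; the agreement above refers to the imaginary ensemble average discussed in the text.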
Then, as we noted, the measured quantity is the “Newtonian potential" on the surface of last scattering, $ \Psi(\eta_D,\vec{x}_D)$, from which one extracts $$\alpha_{lm}=\int \Psi(\eta_D,\vec{x}_D) Y_{lm}^* d^2\Omega.$$ To evaluate the expected value of the quantity of interest we use (\[Psi\]) and (\[F\]) to write $$\Psi(\eta,\vec{x})=\sum_{\vec k}\frac{s U(k)} {k^2}\sqrt{\frac{\hbar k}{L^3}}\frac{1}{2a} F(\vec{k})e^{i\vec{k}\cdot\vec{x}}, \label{Psi2}$$ where we have added the factor $U(k)$ to represent the aspects of the evolution of the quantity of interest associated with the physics of the period from re-heating to decoupling, which includes, among others, the acoustic oscillations of the plasma. After some algebra we obtain $$\begin{aligned} \alpha_{lm}&=&s\sqrt{\frac{\hbar}{L^3}}\frac{1}{2a} \sum_{\vec k}\frac{U(k)\sqrt{k}}{k^2} F(\vec k) 4 \pi i^l j_l(k R_D) Y_{lm}(\hat k),\label{alm1}\end{aligned}$$ where $\hat k$ indicates the direction of the vector $\vec k$. It is in this expression that the justification for the use of statistics becomes clear. The quantity we are in fact considering is the result of the combined contributions of an ensemble of harmonic oscillators, each one contributing a complex number to the sum, leading to what is in effect a two-dimensional random walk whose total displacement corresponds to the observational quantity. To proceed further we must evaluate the most likely value of such total displacement. This we do with the help of the imaginary ensemble of universes, identifying the most likely value with the ensemble mean value. Now we compute the expected magnitude of this quantity. After taking the continuum limit we find $$|\alpha_{lm}|^2_{M. L.} =\frac{s^2 \hbar}{2 \pi a^2} \int \frac {U(k)^2 C(k)}{k^4} j^2_l(k R_D) k^3dk, \label{alm4}$$ where $$C(k)=1+ (2/ z_k^2) \sin^2 (\Delta_k) + (1/z_k)\sin (2\Delta_k).
\label{ExpCk}$$ The last expression can be made more useful by changing the variable of integration to $x =kR_D$, leading to $$|\alpha_{lm}|^2_{M. L.}=\frac{s^2 \hbar}{2 \pi a^2} \int \frac{U(x/R_D)^2 C(x/R_D)}{x^4} j^2_l(x) x^3 dx, \label{alm5}$$ which in the exponential expansion regime where $\mu$ vanishes, in the limit $z_k\to -\infty$ where $C=1$, and taking for simplicity $U (k) =U_0$ to be independent of $k$ (neglecting, for instance, the physics that gives rise to the acoustic peaks), yields $$|\alpha_{lm}|^2_{M. L.}=\frac{s^2 U_0^2 \hbar} {2 a^2} \frac{1}{l(l+1)} .$$ Since this does not depend on $m$, it is clear that the expectation of $C_l = (2l+1)^{-1}\sum_m |\alpha_{lm}|^2 $ is just $|\alpha_{lm}|^2$, and thus the observational quantity $OB_l=l(l+1)C_l =\frac{s^2 U_0^2 \hbar}{2 a^2} $ is independent of $l$, in agreement with the scale-invariant spectrum obtained in ordinary treatments and in the observational studies. Thus, the predicted value of $OB_l$ is [@InflationUS] $$OB_l= (\pi/6) G\hbar \frac{(V')^2}{V} U_0^2 = (\pi/3)\epsilon (V/M_{Pl}^4) U_0^2,$$ where we have used the standard definition of the slow-roll parameter $\epsilon= (1/2) M_{Pl}^2 (V'/V)^2$, which is normally expected to be rather small. We note that if one could prevent $U$ from becoming too large during re-heating, the quantity of interest would be proportional to $\epsilon$, a possibility not uncovered in the standard treatments; one could thus get rid of the “fine tuning problem" for the inflationary potential, i.e. even if $ V\sim M_{Pl}^4$, the temperature fluctuations in the CMB would be expected to be small. Now let us focus on the effect of the finite values of the times of collapse $\eta^c_k$; that is, we consider the general functional form of $C(k)$.
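Before turning to $C(k)$, note that the $l$-independence of $OB_l$ obtained above rests on the spherical Bessel identity $\int_0^\infty j_l^2(x)\, dx/x = 1/[2l(l+1)]$. As a sketch of ours (with an arbitrary truncation of the integration range), this identity can be checked numerically:

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.integrate import trapezoid

def bessel_integral(l):
    # integral of j_l(x)^2 / x over (0, infinity) on a dense truncated grid;
    # the envelope of the integrand falls off like 1/x^3, so x_max = 400 is ample
    x = np.linspace(1e-6, 400.0, 400_001)
    return trapezoid(spherical_jn(l, x)**2 / x, x)

# The closed form 1/(2 l (l+1)) is what makes l(l+1)C_l flat (scale invariance).
for l in (2, 5, 10):
    assert abs(bessel_integral(l) - 1.0 / (2 * l * (l + 1))) < 1e-4
```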
The first thing we note is that in order to get a reasonable spectrum there seems to be only one simple option: that $z_k $ be essentially independent of $k$; that is, the time of collapse of the different modes should depend on the mode’s frequency according to $\eta_k^c=z/k$. This is a remarkable conclusion, which would provide relevant information about whatever the mechanism of collapse is. Let us turn next to one simple proposal about the collapse mechanism which, following Penrose’s ideas, is assumed to be tied to quantum gravity, and examine it with the above results in mind. A version of ‘Penrose’s mechanism’ for collapse in the cosmological setting {#sec_penrose} ---------------------------------------------------------------------------- Penrose has long advocated that the collapse of quantum mechanical wave functions might be a dynamical process independent of observation, and that the underlying mechanism might be related to gravitational interaction. More precisely, according to this suggestion, collapse into one of two quantum mechanical alternatives would take place when the gravitational interaction energy between the alternatives exceeds a certain threshold. In fact, much of the initial motivation for the present work came from Penrose’s ideas and his questions regarding the quantum history of the universe. A very naive realization of Penrose’s ideas in the present setting could be obtained as follows: each mode would collapse by the action of the gravitational interaction between its own possible realizations. In our case one could estimate the interaction energy $E_I(k,\eta)$ by considering two representatives of the possible collapsed states on opposite sides of the Gaussian associated with the vacuum. Let us interpret $\Psi$ literally as the Newtonian potential and, consequently, the right-hand side of equation (\[main2\]) as the associated matter density $\rho$. Therefore, $\rho =\dot\phi_0 \Gamma $, with $\Gamma =\pi^y/a$.
Then we would have $$E_I(\eta)=\int \Psi^{(1)}(x,\eta)\, \rho^{(2)}(x,\eta)\, dV = a^3 \int \Psi^{(1)}(x,\eta)\, \rho^{(2)}(x,\eta)\, d^3x,$$ which when applied to a single mode becomes $$E(k,\eta)= (a^3/L^6) \int \Psi^{(1)}_{k}(\eta)\, \bar\rho^{(2)}_{k}(\eta)\, d^3x = (a^3/L^3)\, \Psi^{(1)}_{k}(\eta)\, \bar\rho^{(2)}_{k}(\eta),$$ where $(1),(2)$ refer to the two different realizations chosen. Recalling that $\Psi_{ k} = ( s/k^2) \Gamma_k$, with $s= 4\pi G\dot\phi_0$, and using equation (\[momentito\]), we get $|\langle\Gamma_k \rangle |^2 = \hbar k L^3 (1/2a)^2$. Then $$E_I(k,\eta) = (\pi/4) (a/k)\, \hbar G (\dot\phi_0)^2.$$ In accordance with Penrose’s ideas, the collapse would take place when this energy reaches the ‘one-graviton’ level, namely when $E_I(k,\eta)=M_p$, where $M_p$ is the Planck mass; thus one gets $ z_k=\frac{\pi \hbar G \dot \phi_0^2}{H_I M_p}$. So $z_k$ is independent of $k$, which leads to a roughly scale-invariant spectrum of fluctuations, in accordance with observations. Thus a naive realization of Penrose’s ideas seems to be a good candidate to supply the element that we argued is missing in the standard accounts of the emergence of the seeds of cosmic structure from quantum fluctuations during the inflationary regime in the early universe. Conclusions =========== The dramatic change in outlook that has taken place in the last few years regarding the possibility – despite early pessimistic assessments – that some aspects of quantum gravity might after all be experimentally accessible is a very healthy development for the quantum gravity community. Bringing back to the realm of empirical falsifiability a discipline that seemed to wander ever deeper into the abyss of unchecked lucubrations cannot but reassure us that it still lies within the boundaries of scientific research.
The early proposals that there might be a breakdown of Lorentz invariance, associated with the discrete structure that quantum gravity is supposed to endow space-time with, led in fact to a vigorous program which, at this time – due not only to the direct bounds obtained but, more importantly, to the severe restrictions that QFT puts on these ideas – has to be regarded with a strong dose of skepticism, as the lesson from that stage seems to point in the direction of requiring quantum gravity to be free of such effects [^9]. The lasting legacy of this episode, on the other hand, is, I believe, the lesson that we should not give up so easily in our quest for phenomenological manifestations of quantum gravity. In this regard the successors of the program can be divided into three groups: first, those ideas that suffer from a lack of clear interpretational status, including some for which the existence of a sensible interpretational scheme is highly dubious, which are briefly discussed in section 2; second, ideas that seem to be well defined, have a rather clear interpretational status, and could in principle be subjected to experimental investigation, such as the specific search for a gravitationally induced collapse of the wave function proposed by Penrose [@Penrose], the proposals of Pullin and Gambini about a gravitationally induced fundamental decoherence [@GIDecoh], and the ideas about possible non-standard manifestations of curvature in extended quantum systems, first proposed in [@NewQGP] and reviewed in section 3.
Finally, as was described in section 4 and first reported in [@InflationUS], there is the recognition that there are very intriguing aspects of our understanding of the origin of the seeds of cosmic structure, which seem to “account" for the observations, in the sense that the predictions and observations are in agreement, but which on the other hand suffer from unjustified identifications and problematic interpretations, and do not pass a careful and profound examination. In other words, the recognition that something else seems to be needed for the whole picture to work could be pointing us towards an actual manifestation of quantum gravity. We have shown not only that the issues are susceptible to scientific investigation based on observations, but that a simple account of what is needed seems to be provided by the extrapolation of Penrose’s ideas to the cosmological setting. We end by stressing that it might well be that we are at the dawn of a new era regarding quantum gravity; but we would do well to keep an open mind, as it is quite likely that such a new era, like any region truly virgin to exploration, will look rather different from what was expected on arrival. Acknowledgments {#acknowledgments .unnumbered} =============== It is a pleasure to acknowledge very helpful conversations with Chryssomalis Chryssomalakos. This work was supported in part by DGAPA-UNAM IN108103 and CONACyT 43914-F grants. [99]{} R. Vilela Mendes, [*J. Phys.* ]{} [**A 27**]{}, 8091, (1994). C. Chryssomalakos and E. Okon, [*Int. J. Mod. Phys.*]{} [**D 13**]{}, 2003, (2004), \[arXiv: hep-th/0410212\]; C. Chryssomalakos and E. Okon, [*Int. J. Mod. Phys.* ]{}[**D 13**]{}, 1817, (2004), \[arXiv: hep-th/0407080\]. G. Amelino-Camelia, J. Ellis, N. E. Mavromatos, D. V. Nanopoulos and S. Sarkar, [*Nature*]{} (London) [**393**]{}, 763, (1998); D. V. Ahluwalia, [*Nature* ]{} [**398**]{}, 199, (1999); G. Amelino-Camelia, [*Lect. Notes Phys.*]{} [**541**]{}, 1, (2000). V. A. Kosteleck[ý]{} and S.
Samuel, [*Phys. Rev.* ]{} [**D 39**]{}, 683, (1989); V. A. Kosteleck[ý]{} and S. Samuel, [*Phys. Rev.* ]{}[**D 40**]{}, 1886, (1989); J. R. Ellis, N. E. Mavromatos and D. V. Nanopoulos, [*Gen. Rel. Grav.*]{} [**32**]{}, 127, (2000) \[arXiv:gr-qc/9904068\]; J. R. Ellis, N. E. Mavromatos and D. V. Nanopoulos, [*Phys. Rev.* ]{} [**D 61**]{}, 027503, (2000) \[arXiv:gr-qc/9906029\]; J. R. Ellis, K. Farakos, N. E. Mavromatos, V. A. Mitsou and D. V. Nanopoulos, [*Astrophys. J.*]{} [**535**]{}, 139, (2000) \[arXiv:astro-ph/9907340\]; J. R. Ellis, N. E. Mavromatos, D. V. Nanopoulos and G. Volkov, [*Gen. Rel. Grav.*]{} [**32**]{}, 1777, (2000) \[arXiv:gr-qc/9911055\]. R. Gambini and J. Pullin, [*Phys. Rev.*]{} [**D 59**]{}, 124021, (1999); J. Alfaro, H. A. Morales-Técotl and L. F. Urrutia, [*Phys. Rev.*]{} [**D 66**]{}, 124006, (2002); J. Alfaro, H. Morales-Técotl and L. Urrutia, [*Phys. Rev.*]{} [**D 65**]{}, 103509, (2002); J. Alfaro, H. Morales-Técotl and L. Urrutia, [*Phys. Rev. Lett.*]{} [**84**]{}, 2318, (2000); H. Sahlmann and T. Thiemann, \[arXiv:gr-qc/0207031\]; M. Bojowald, H. A. Morales-Técotl and H. Sahlmann, \[arXiv:gr-qc/0411101\]. R. J. Gleiser and C. N. Kozameh, [*Phys. Rev.*]{} [**D 64**]{}, 083007, (2001); D. Sudarsky, L. Urrutia and H. Vucetich, [*Phys. Rev. Lett.*]{} [**89**]{}, 231301, (2002); D. Sudarsky, L. Urrutia and H. Vucetich, [*Phys. Rev.*]{} [**D 68**]{}, 024010, (2003); T. Jacobson, S. Liberati and D. Mattingly, [*Nature*]{} [**424**]{}, 1019, (2003); D. Mattingly, \[arXiv:gr-qc/0502097\]. R. C. Myers and M. Pospelov, [*Phys. Rev. Lett.*]{} [**90**]{}, 211601, (2003) \[arXiv:hep-ph/0301124\]. A. Perez and D. Sudarsky, [*Phys. Rev. Lett.*]{} [**91**]{}, 179101-1, (2003). A. Kempf, G. Mangano and R. B. Mann, [*Phys. Rev.*]{} [**D 52**]{}, 1108, (1995); D. V. Ahluwalia, [*Phys. Lett.*]{} [**A 275**]{}, 31, (2000). J. Collins, A. Perez, D. Sudarsky, L. Urrutia, and H. Vucetich, [*Phys. Rev. Lett.*]{} [**93**]{}, 191301, (2004). D. Sudarsky and J. A.
Caicedo, submitted to the Proceedings of the VI Mexican School of Gravitation and Mathematical Physics, Playa del Carmen, México, November 2004 (Eds. J. Cervantes, M. Alcubierre, and M. Montesinos). S. R. Coleman and S. L. Glashow, [*Phys. Rev.*]{} [**D 59**]{}, 116008, (1999) \[arXiv:hep-ph/9812418\]. S. G. Nibbelink and M. Pospelov, [*Phys. Rev. Lett.*]{} [**94**]{}, 081601, (2005), \[arXiv: hep-ph/0404271\]. C. Rovelli and S. Speziale, [*Phys. Rev.*]{} [**D 67**]{}, 064019, (2003) \[arXiv:gr-qc/0205108\]. F. Dowker and R. Sorkin, \[arXiv:gr-qc/0311055\]. D. V. Ahluwalia, \[arXiv: gr-qc/0212128\]; D. Grumiller, W. Kummer and V. Vassilevich, [*Ukr. J. Phys.*]{} [**48**]{}, 329, (2003), \[arXiv: hep-th/0301061\]. G. Amelino-Camelia, [*Int. J. Mod. Phys.*]{} [**D 11**]{}, 35, (2002); J. Kowalski-Glikman and S. Nowak, [*Phys. Lett.*]{} [**B 539**]{}, 126, (2002). J. Lukierski and A. Nowicki, [*Int. J. Mod. Phys.* ]{} [**A 18**]{}, 7, (2003). R. Schützhold and W. G. Unruh, [*JETP Lett.* ]{} [**78**]{}, 431, (2003); [*Pisma Zh. Eksp. Teor. Fiz.*]{} [**78**]{}, 899, (2003) \[arXiv: gr-qc/0308049\]; J. Rembieliński and K. A. Smoliński, [*Bull. Soc. Sci. Lett. Lodz*]{} [**53**]{}, 57-63, (2003), \[arXiv: hep-th/0207031\]. F. Bayen, M. Flato, C. Fronsdal, A. Lichnerowicz and D. Sternheimer, [*Lett. Math. Phys.*]{} [**1**]{}, 521, (1977). M. Chaichian, P. P. Kulish, K. Nishijima, and A. Tureanu, [*Phys. Lett.*]{} [**B 604**]{}, 98, (2004), \[arXiv: hep-th/0408069\]. P. Aschieri, C. Blohmann, M. Dimitrijević, F. Mayer, P. Schupp and J. Wess, [*Class. Quant. Grav.*]{} [**22**]{}, 3511, (2005), \[arXiv: hep-th/0504183\]. M. R. Douglas and N. A. Nekrasov, [*Rev. Mod. Phys.*]{} [**73**]{}, 977-1029, (2001), \[arXiv: hep-th/0106048\]; R. Szabo, [*Phys. Rept.* ]{}[**378**]{}, 207-299, (2003), \[arXiv: hep-th/0109162\]. A. Corichi and D. Sudarsky, \[arXiv: gr-qc/0503078\]. D. Colladay and V. A. Kostelecky, [*Phys. Rev.*]{} [**D 58**]{}, 116002, (1998). E. Fischbach, D. Sudarsky, A. Szafer, C.
Talmadge, and S. H. Aronson, [*Annals of Physics*]{} [**182**]{}, 1-89, (1988). M. Gasperini, [*Phys. Rev.*]{} [**D 38**]{}, 2635, (1988); B. Mukhopadhyay, [*Mod. Phys. Lett.*]{} [**A 20**]{}, 2145, (2000). D. Sudarsky, E. Fischbach, C. Talmadge, S. Aronson and H. Y. Cheng, [*Annals of Physics*]{} [**207**]{}, 103-139, (1991). R. Gambini, R. A. Porto and J. Pullin, [*Phys. Rev. Lett.*]{} [**93**]{}, 240401, (2004), \[arXiv: hep-th/0406260\]; R. Gambini, R. A. Porto and J. Pullin, [*Phys. Rev.*]{} [**D 70**]{}, 124001, (2004), \[arXiv: gr-qc/0408050\]. R. Penrose, [*The Emperor’s New Mind*]{}, (Oxford University Press 1989); R. Penrose, On Gravity’s Role in Quantum State Reduction, in [*Physics meets philosophy at the Planck scale*]{}, C. Callender (ed.), (2001). A. Perez, H. Sahlmann, and D. Sudarsky, \[arXiv: gr-qc/0508100\]. R. Penrose, Gravitational Collapse of the Wave Function: An Experimentally Testable Proposal, in [*Proceedings of the Ninth Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Gravitation and Relativistic Field Theories (MG 9)*]{}, V. G. Gurzadyan, R. T. Jantzen, R. Ruffini (eds.), (World Scientific 2002). D. Polarski and A. A. Starobinsky, \[arXiv:gr-qc/9504030\] (1996); W. H. Zurek, Environment Induced Superselection In Cosmology, in [*Moscow 1990, Proceedings, Quantum gravity*]{} (QC178:S4:1990), p. 456-472 (see High Energy Physics Index 30 (1992) No. 624); R. Laflamme and A. Matacz, \[arXiv:gr-qc \]; M. Castagnino and O. Lombardi, [*Int. J. Theor. Phys.*]{} [**42**]{}, 1281, (2003) \[arXiv:quant-ph/0211163\]. T. Padmanabhan, [*Cosmology and Astrophysics Through Problems*]{}, (Cambridge University Press 1996). D. J. Bird [*et al.*]{}, [*Astrophys. J.*]{} [**441**]{}, 144, (1995); J. W. Elbert and P. Sommers, [*Astrophys. J.*]{} [**441**]{}, 151, (1995). M. Takeda [*et al.*]{}, [*Phys. Rev. Lett.*]{} [**81**]{}, 1163, (1998); T. Abu-Zayyad [*et al.*]{}, \[arXiv: astro-ph/0208301\]; [*Mod. Phys.
Lett.*]{}, [**A 18**]{}, 1235, (2003); J. Bahcall and E. Waxman, [*Phys. Lett.*]{} [**B 556**]{}, 1, (2003). [^1]: A note for the young reader: One should keep in mind that in working to combine special relativity and quantum mechanics one is taken quite generically into the realm of quantum field theory, and that particles cease to be fundamental entities and are viewed instead as certain types of excited states of the quantum fields. [^2]: Such a proper reference frame could be thought of as that in which the granular structure is maximally isotropic. [^3]: One could of course argue that the standard model is in reality also an effective theory, and that for some unspecified reason such arguments should not apply to it directly but to some more fundamental and yet unknown theory; but then one would have to give up the argument that motivates the search for these quantum gravity effects using probes and interactions that are described in terms of this theory. [^4]: In perturbation theory, $\Pi(p)$ is the sum over one-particle-irreducible two point graphs for the scalar field. [^5]: The issue is the following: The position operator is not expected to be additive (to find the position of a hydrogen atom one does not add the position of the proton with that of the electron), while the symmetry of the construction would require similar coproducts for position and momentum operators (the momentum of composite systems being, as usual, additive). On the other hand, if the coproduct is nontrivial, the issue would be how to deal with the position operators for composite objects. [^6]: It is possible, of course, to bring in a more elaborate structure to remove the problems of a naive interpretation.
In fact, in the more methodical formulations many other objects become non-commutative, including for instance entries of the matrices representing the generators of the Lorentz transformations, resulting in new notions of invariance and new types of invariant tensors [@NonComm-LT]. [^7]: This is reminiscent of the situation encountered in the studies of the “Fifth Force" proposals [@Ephraim]. [^8]: There are of course many open issues in the fundamental understanding of physics that are not in principle connected with the issue of quantum gravity; however, it is only in this latter field that the problems seem to be connected with deep conceptual issues, and where one can envision the possibility that their resolution might require a fundamental change of paradigm, as would be the case if we find we must modify the laws of quantum mechanics. [^9]: There seems to be at this time a single situation where there are some indications that a breakdown of Lorentz invariance could be at play: the absence of a GZK cut-off in the cosmic ray spectrum [@GZK]. The evidence is still rather controversial [@GZKN] and, on the other hand, it is not clear whether some simpler explanations, perhaps including new physics but unconnected with the issues at hand, exist.
--- abstract: 'In this paper we establish important relations between Hamiltonian dynamics and Riemannian structures on phase spaces for unitarily evolving finite level quantum systems in mixed states. We show that the energy dispersion (i.e. $1/\hbar$ times the path integral of the energy uncertainty) of a unitary evolution is bounded from below by the length of the evolution curve. Also, we show that for each curve of mixed states there is a Hamiltonian for which the curve is a solution to the corresponding von Neumann equation, and the energy dispersion equals the curve’s length. This allows us to express the distance between two mixed states in terms of a measurable quantity, and derive a time-energy uncertainty relation for mixed states. In a final section we compare our results with an energy dispersion estimate by Uhlmann.' author: - Ole Andersson - Hoshang Heydari title: 'Geometry of quantum dynamics and a time-energy uncertainty relation for mixed states' --- Introduction ============ Ever since the advent of general relativity, scientists have been looking for geometrical principles underlying physical laws. Nowadays it is well known that geometry affects the physics on all length scales, and physical theory building consists to a large extent of geometrical considerations. This paper concerns geometric quantum mechanics, a branch of quantum physics that has received much attention lately (which is largely due to the crucial role geometry plays in quantum information and quantum computing [@Pachos_etal1990; @*Zanardi_etal1999; @*Ekert_etal2000; @*Zanardi_etal2007; @*Rezakhani_etal2010; @Jones_etal2000; @*Falci_etal2000; @*Farhi_etal2001; @*Duan_etal2001; @*Recati_etal2002]). Here we equip the phase spaces for unitarily evolving finite level quantum systems with natural Riemannian structures, and establish remarkable but fundamental relations between these and Hamiltonian dynamics. 
A quantum system prepared in a pure state is usually modeled on a projective Hilbert space, and if the system is closed its state will evolve unitarily in this space. Aharonov and Anandan [@Anandan_etal1990] showed that for unitary evolutions there is a geometric quantity which, like Berry’s celebrated phase [@Berry1984; @Simon1983], is independent of the particular Hamiltonian used to transport a pure state along a given route. More precisely, they showed that the energy dispersion (i.e. $1/\hbar$ times the path integral of the energy uncertainty) of an evolving state equals the Fubini-Study length of the curve traced out by the state. Using this, Aharonov and Anandan gave a new geometric interpretation of the time-energy uncertainty relation. The state of an experimentally prepared quantum system generally exhibits classical uncertainty, and is most appropriately described as a probabilistic mixture of pure states. It is common to represent mixed states by density operators, and many metrics on spaces of density operators have been developed to capture various physical, mathematical, or information theoretical aspects of quantum mechanics [@Bengtsson_etal2008; @Nielsen_etal2010]. In this paper we utilize a construction by Montgomery [@Montgomery1991] to provide the spaces of isospectral density operators with Riemannian metrics, and we show that these metrics admit a generalization of the energy dispersion result of Aharonov and Anandan to evolutions of finite dimensional quantum systems in mixed states. Indeed, we show that the energy dispersion of an evolving mixed state is bounded from below by the length of the curve traced out by the density operator of the state, and we show that every curve of isospectral density operators is generated by a Hamiltonian for which the energy dispersion equals the curve’s length. 
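The Aharonov-Anandan equality can be illustrated numerically for a two-level system (a sketch of ours with an arbitrary Hamiltonian and $\hbar=1$, not an example from the paper): the accumulated Fubini-Study length, computed from overlaps of neighbouring states, matches the energy dispersion $(1/\hbar)\int \Delta E\, dt$.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, -0.5]])      # an arbitrary qubit Hamiltonian
psi = np.array([0.6, 0.8], dtype=complex)    # normalized initial pure state

T, steps = 2.0, 2_000
dt = T / steps
U = expm(-1j * H * dt / hbar)                # one time step of the evolution

fs_length = 0.0    # Fubini-Study length of the curve t -> |psi(t)>
dispersion = 0.0   # (1/hbar) * path integral of the energy uncertainty
for _ in range(steps):
    dE = np.sqrt(psi.conj() @ H @ H @ psi - (psi.conj() @ H @ psi)**2).real
    dispersion += dE * dt / hbar
    psi_next = U @ psi
    # FS distance between neighbouring pure states: arccos |<psi(t)|psi(t+dt)>|
    fs_length += np.arccos(min(1.0, abs(psi.conj() @ psi_next)))
    psi = psi_next

assert abs(fs_length - dispersion) < 1e-3
```

Here both moments of $H$ are conserved, so $\Delta E$ is constant along the curve; the agreement survives for time-dependent Hamiltonians as well, with the obvious step-by-step update of $U$.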
The latter result allows us to express the distance between two mixed states in terms of a measurable quantity, and we use it to derive a time-energy uncertainty principle for mixed states. Uhlmann [@Uhlmann1986; @Uhlmann1991] was among the first to develop a mathematical framework similar to the one presented here. In [@Uhlmann1992Energy] he used it to derive an estimate for the energy dispersion of an evolving mixed state. We compare our energy dispersion estimate with that of Uhlmann in this paper’s final section. Geometry of orbits of isospectral density operators =================================================== In this paper we will only be interested in finite dimensional quantum systems that evolve unitarily. They will be modeled on a Hilbert space ${\mathcal{H}}$ of unspecified dimension $n$, and their states will be represented by density operators. Recall that a density operator is a Hermitian, nonnegative operator with unit trace. We write ${\mathcal{D}}({\mathcal{H}})$ for the space of density operators on ${\mathcal{H}}$. Riemannian structure on orbits of density operators --------------------------------------------------- A density operator whose evolution is governed by a von Neumann equation remains in a single orbit of the left conjugation action of the unitary group of ${\mathcal{H}}$ on ${\mathcal{D}}({\mathcal{H}})$. The orbits of this action are in one-to-one correspondence with the possible spectra for density operators on ${\mathcal{H}}$, where by the *spectrum* of a density operator of rank $k$ we mean the decreasing sequence $$\sigma=(p_1,p_2,\dots,p_k) \label{spectrum}$$ of its, not necessarily distinct, positive eigenvalues. Throughout this paper we fix $\sigma$, and write ${\mathcal{D}}(\sigma)$ for the corresponding orbit. 
To furnish ${\mathcal{D}}(\sigma)$ with a geometry, let ${\mathcal{L}}({\mathbb C}^k,{\mathcal{H}})$ be the space of linear maps from ${\mathbb C}^k$ to ${\mathcal{H}}$ equipped with the Hilbert-Schmidt Hermitian inner product, and $P(\sigma)$ be the diagonal $k\times k$ matrix that has $\sigma$ as its diagonal. Inspired by Montgomery [@Montgomery1991], we set $${\mathcal{S}}(\sigma)=\{\Psi\in{\mathcal{L}}({\mathbb C}^k,{\mathcal{H}}):\Psi^\dagger \Psi=P(\sigma)\},$$ and define $$\pi:{\mathcal{S}}(\sigma)\to{\mathcal{D}}(\sigma),\quad \Psi\mapsto\Psi\Psi^\dagger. \label{bundle}$$ Then $\pi$ is a principal fiber bundle with right acting gauge group $${\mathcal{U}}(\sigma) =\{U\in{\mathcal{U}}(k):UP(\sigma)=P(\sigma)U\},$$ whose Lie algebra is $${\mathfrak{u}}(\sigma) =\{\xi\in{\mathfrak{u}}(k):\xi P(\sigma)=P(\sigma)\xi\}.$$ The real part of the Hilbert-Schmidt product restricts to a gauge invariant Riemannian metric $G$ on ${\mathcal{S}}(\sigma)$, $$G(X,Y)={\frac 12}{\operatorname{Tr}}(X^\dagger Y+Y^\dagger X),$$ and we equip ${\mathcal{D}}(\sigma)$ with the unique metric $g$ that makes $\pi$ a Riemannian submersion. Mechanical connection --------------------- The *vertical* and *horizontal bundles* over ${\mathcal{S}}(\sigma)$ are the subbundles ${\operatorname{V}\!}{\mathcal{S}}(\sigma)={\operatorname{Ker}}\pi_*$ and ${\operatorname{H}\!}{\mathcal{S}}(\sigma)={\operatorname{V}\!}{\mathcal{S}}(\sigma)^\bot$ of the tangent bundle of ${\mathcal{S}}(\sigma)$. Here $\pi_*$ is the differential of $\pi$ and $^\bot$ denotes orthogonal complement with respect to $G$. Vectors in ${\operatorname{V}\!}{\mathcal{S}}(\sigma)$ and ${\operatorname{H}\!}{\mathcal{S}}(\sigma)$ are called vertical and horizontal, respectively, and a curve in ${\mathcal{S}}(\sigma)$ is called horizontal if its velocity vectors are horizontal. 
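A quick numerical sanity check of this construction: any $\Psi=V\sqrt{P(\sigma)}$ with $V^\dagger V$ the identity on ${\mathbb C}^k$ lies in ${\mathcal{S}}(\sigma)$, and $\pi(\Psi)=\Psi\Psi^\dagger$ is a density operator with spectrum $\sigma$. A minimal NumPy sketch (the dimensions and the spectrum are chosen arbitrarily, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 3                        # dim H = 4, rank-3 states
sigma = np.array([0.5, 0.3, 0.2])  # a fixed spectrum, summing to 1
P = np.diag(sigma)

# A point of S(sigma): Psi = V sqrt(P(sigma)) with V an isometry C^k -> H
A = rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k))
V, _ = np.linalg.qr(A)             # reduced QR: V† V = I_k
Psi = V @ np.diag(np.sqrt(sigma))

assert np.allclose(Psi.conj().T @ Psi, P)            # Psi lies in S(sigma)
rho = Psi @ Psi.conj().T                             # pi(Psi)
evals = np.sort(np.linalg.eigvalsh(rho))[::-1]
assert np.allclose(evals[:k], sigma)                 # spectrum of rho is sigma
assert np.isclose(np.trace(rho).real, 1.0)           # unit trace
```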
Recall that for every curve $\rho$ in ${\mathcal{D}}(\sigma)$ and every $\Psi_0$ in the fiber over $\rho(0)$ there is a unique horizontal lift of $\rho$ to ${\mathcal{S}}(\sigma)$ that starts at $\Psi_0$. This lift and $\rho$ have the same length, since $\pi$ is a Riemannian submersion. The infinitesimal generators of the gauge group action yield canonical isomorphisms between ${\mathfrak{u}}(\sigma)$ and the fibers in ${\operatorname{V}\!}{\mathcal{S}}(\sigma)$: $$\label{eq:inf gen} {\mathfrak{u}}(\sigma)\ni\xi\mapsto \Psi\xi\in{\operatorname{V}\!}_\Psi{\mathcal{S}}(\sigma).$$ Furthermore, ${\operatorname{H}\!}{\mathcal{S}}(\sigma)$ is the kernel bundle of the gauge invariant *mechanical connection form* ${\mathcal{A}}_{\Psi}={\mathbbm{I}}_{\Psi}^{-1}J_{\Psi}$, where ${\mathbbm{I}}_{\Psi}:{\mathfrak{u}}(\sigma)\to {\mathfrak{u}}(\sigma)^*$ and $J_{\Psi}:{\operatorname{T}\!}_{\Psi}{{\mathcal{S}}(\sigma)}\to {\mathfrak{u}}(\sigma)^*$ are the *moment of inertia* and *moment map*, respectively, $${\mathbbm{I}}_{\Psi}\xi\cdot \eta=G(\Psi\xi,\Psi\eta),\quad J_{\Psi}(X)\cdot\xi=G(X,\Psi\xi).$$ The moment of inertia is of *constant bi-invariant type* since it is an adjoint-invariant form on ${\mathfrak{u}}(\sigma)$ which is independent of $\Psi$ in ${\mathcal{S}}(\sigma)$. To be exact, $$\label{eq:beta} {\mathbbm{I}}_{\Psi}\xi\cdot \eta={\frac 12}{\operatorname{Tr}}\left(\left(\xi^\dagger \eta+\eta^\dagger \xi\right)P(\sigma)\right).$$ Using \eqref{eq:beta} we can derive an explicit formula for the connection form.
Indeed, if $m_1, m_2, \dots , m_l$ are the multiplicities of the different eigenvalues in $\sigma$, with $m_1$ being the multiplicity of the greatest eigenvalue, $m_2$ the multiplicity of the second greatest eigenvalue, etc., and if for $j=1,2,\dots,l$, $$E_j={\operatorname{diag}}({{\mathbf 0}}_{m_1},\dots,{{\mathbf 0}}_{m_{j-1}},{{\mathbf 1}}_{m_j},{{\mathbf 0}}_{m_{j+1}},\dots,{{\mathbf 0}}_{m_l}),$$ then $$\begin{split} {\mathbbm{I}}_\Psi\Big(&\sum_jE_j\Psi^\dagger XE_jP(\sigma)^{-1}\Big)\cdot\xi=\\ &={\frac 12}{\operatorname{Tr}}\Big(\sum_jE_jX^\dagger\Psi E_j\xi-\xi E_j\Psi^\dagger XE_j\Big)\\ &={\frac 12}{\operatorname{Tr}}\big(X^\dagger\Psi\xi-\xi\Psi^\dagger X\big)\\ &=J_\Psi(X)\cdot\xi \end{split}$$ for every $X$ in ${\operatorname{T}\!}_\Psi{\mathcal{S}}(\sigma)$ and every $\xi$ in ${\mathfrak{u}}(\sigma)$. Hence $$\label{eq:explicit} {\mathcal{A}}_\Psi(X)=\sum_jE_j\Psi^\dagger XE_jP(\sigma)^{-1}.$$ Observe that the orthogonal projection of ${\operatorname{T}\!}_\Psi{\mathcal{S}}(\sigma)$ onto ${\operatorname{V}\!}_\Psi{\mathcal{S}}(\sigma)$ is given by the connection form followed by the infinitesimal generator \eqref{eq:inf gen}. Therefore, the *vertical* and *horizontal projections* of $X$ in ${\operatorname{T}\!}_\Psi{\mathcal{S}}(\sigma)$ are $X^\bot=\Psi{\mathcal{A}}_\Psi(X)$ and $X^{||}=X-\Psi{\mathcal{A}}_\Psi(X)$, respectively.
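One basic property of this explicit formula is that ${\mathcal{A}}_\Psi$ reproduces the generators of the gauge action: ${\mathcal{A}}_\Psi(\Psi\xi)=\xi$ for every $\xi\in{\mathfrak{u}}(\sigma)$. The following NumPy sketch verifies this numerically (the spectrum, chosen here with one repeated eigenvalue, and the dimensions are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = np.array([0.4, 0.4, 0.2])   # eigenvalue 0.4 has multiplicity 2
P = np.diag(sigma)
n, k = 5, 3

# Multiplicity-block projections: E_1 = diag(1,1,0), E_2 = diag(0,0,1)
E = [np.diag([1.0, 1.0, 0.0]), np.diag([0.0, 0.0, 1.0])]

A = rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k))
V, _ = np.linalg.qr(A)
Psi = V @ np.sqrt(P)                # a point of S(sigma)

def conn(Psi, X):
    """Mechanical connection form A_Psi(X) = sum_j E_j Psi† X E_j P(sigma)^{-1}."""
    return sum(Ej @ Psi.conj().T @ X @ Ej for Ej in E) @ np.linalg.inv(P)

# An element xi of u(sigma): anti-Hermitian and block diagonal w.r.t. the
# multiplicity blocks, hence commuting with P(sigma)
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
xi = np.zeros((k, k), dtype=complex)
xi[:2, :2] = B - B.conj().T
xi[2, 2] = 1j * rng.normal()
assert np.allclose(xi @ P, P @ xi)

# The connection reproduces the infinitesimal generator: A_Psi(Psi xi) = xi
assert np.allclose(conn(Psi, Psi @ xi), xi)
```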
Geometrical uncertainty estimates ================================= If $\hat A$ is an observable on ${\mathcal{H}}$, the gauge invariant vector field $X_{\hat A}$ on ${\mathcal{S}}(\sigma)$ is defined by $$X_{\hat A}(\Psi)={\frac{\operatorname{d}}{\operatorname{d}\!{\varepsilon}}}\left[\exp\left(\frac{{\varepsilon}}{i\hbar}\hat A\right)\Psi\right]_{{\varepsilon}=0}.$$ Let $X_A$ be the projection of $X_{\hat A}$ onto ${\mathcal{D}}(\sigma)$, and define the *uncertainty of $\hat A$* to be the scalar field $\Delta A$ on ${\mathcal{D}}(\sigma)$ given by $$\Delta A(\rho)=\sqrt{{\operatorname{Tr}}(\hat A^2\rho)-{\operatorname{Tr}}(\hat A\rho)^2}.$$ We will show that $$\begin{aligned} &\Delta A(\rho)\geq \hbar \sqrt{g(X_A(\rho),X_A(\rho))},\label{main ett}\\ &\Delta A(\rho)=\hbar\sqrt{g(X_A(\rho),X_A(\rho))}\text{ if } X_{\hat{A}}(\Psi)\in{\operatorname{H}\!}_\Psi{\mathcal{S}}(\sigma),\label{main tva}\end{aligned}$$ where $\Psi$ is any element in the fiber over $\rho$. Assertion \eqref{main tva} follows immediately from the observations $$\begin{aligned} &{\operatorname{Tr}}(\hat A^2\rho)=\hbar^2 G(X_{\hat A}(\Psi),X_{\hat A}(\Psi)),\label{alfa}\\ &{\operatorname{Tr}}(\hat A\rho)=i\hbar{\operatorname{Tr}}({\mathcal{A}}(X_{\hat A}(\Psi))P(\sigma))\label{beta}.\end{aligned}$$ For if $X_{\hat{A}}(\Psi)$ is horizontal, then the right hand side of \eqref{alfa} equals $\hbar^2 g(X_{A}(\rho),X_{A}(\rho))$, and the right hand side of \eqref{beta} vanishes. If, on the other hand, $X_{\hat A}(\Psi)$ is not horizontal, we must estimate the difference between $G(X^{\bot}_{\hat A}(\Psi),X^{\bot}_{\hat A}(\Psi))$ and ${\operatorname{Tr}}(\hat{A}\rho)^2$.
The identity $$G(X^{\bot}_{\hat A}(\Psi),X^{\bot}_{\hat A}(\Psi)) =-{\operatorname{Tr}}({\mathcal{A}}(X_{\hat A}(\Psi))^2P(\sigma)),\label{gamma}$$ together with \eqref{alfa} and \eqref{beta}, yields $$\begin{split} \Delta A(\rho)^2 =\hbar^{2}&g(X_A(\rho),X_A(\rho))\\ &+\hbar^{2}{\operatorname{Tr}}({\mathcal{A}}(X_{\hat A}(\Psi))P(\sigma))^2\\ &-\hbar^{2}{\operatorname{Tr}}({\mathcal{A}}(X_{\hat A}(\Psi))^2P(\sigma)). \end{split} \label{eq:eq}$$ Now \eqref{main ett} follows from the fact that the difference between the last two terms in \eqref{eq:eq} is nonnegative. To see this let $U$ in ${\mathcal{U}}(\sigma)$ be such that $iU{\mathcal{A}}_\Psi(X_{\hat A}(\Psi))U^\dagger$ is a diagonal matrix, say $$iU{\mathcal{A}}_\Psi(X_{\hat A}(\Psi))U^\dagger={\operatorname{diag}}(\lambda_1,\lambda_2,\dots,\lambda_k).$$ Then $$\begin{split} {\operatorname{Tr}}({\mathcal{A}}_\Psi(X_{\hat A}(\Psi))P(\sigma))^2 &= -\Big(\sum_jp_j \lambda_j\Big)^2\\ &\geq -\sum_jp_j\lambda_j^2\\ &={\operatorname{Tr}}({\mathcal{A}}_\Psi(X_{\hat A}(\Psi))^2P(\sigma)), \end{split} \label{eq:eqn}$$ since $U$ commutes with $P(\sigma)$ and $x\mapsto x^2$ is convex. Distance and energy dispersion ------------------------------ The distance between two density operators with common spectrum $\sigma$ is defined to be the infimum of the lengths of all curves in ${\mathcal{D}}(\sigma)$ that connect them. We will show that for any two density operators $\rho_0$ and $\rho_1$ in ${\mathcal{D}}(\sigma)$, $${\operatorname{dist}(\rho_0,\rho_1)}=\frac{1}{\hbar}\inf_{\hat H}\int_{t_0}^{t_1}\!\Delta H(\rho){\operatorname{d}\!t},\label{avstand}$$ where the infimum is taken over all Hamiltonians $\hat H$ for which the *boundary value von Neumann equation* $$\dot\rho=X_H(\rho),\qquad \rho(t_0)=\rho_0, \qquad\rho(t_1)=\rho_1,\label{von Neumann}$$ is solvable.
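The uncertainty estimate $\Delta A(\rho)\geq\hbar\sqrt{g(X_A(\rho),X_A(\rho))}$ proved above admits a direct numerical check. In the sketch below (NumPy; a random observable, $\hbar=1$, and a spectrum with distinct eigenvalues, so that the connection form reduces to the diagonal part of $\Psi^\dagger X$ times $P(\sigma)^{-1}$ — all values are illustration choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, hbar = 4, 3, 1.0
sigma = np.array([0.6, 0.3, 0.1])          # distinct eigenvalues: E_j rank one
P = np.diag(sigma)

M = rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k))
V, _ = np.linalg.qr(M)
Psi = V @ np.sqrt(P)                       # a point of S(sigma)

H_ = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A_op = H_ + H_.conj().T                    # a random observable

X = A_op @ Psi / (1j * hbar)               # X_Â(Psi)
# For distinct eigenvalues the connection keeps only the diagonal of Psi† X
conn = np.diag(np.diag(Psi.conj().T @ X)) @ np.linalg.inv(P)
X_vert = Psi @ conn                        # vertical projection
X_hor = X - X_vert                         # horizontal projection

rho = Psi @ Psi.conj().T
varA = np.trace(A_op @ A_op @ rho).real - np.trace(A_op @ rho).real ** 2
g_norm = np.trace(X_hor.conj().T @ X_hor).real    # g(X_A, X_A) at rho

assert varA >= hbar**2 * g_norm - 1e-10    # the uncertainty inequality holds
```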
The length of a curve $\rho$ in ${\mathcal{D}}(\sigma)$, with domain $t_0\leq t\leq t_1$, is $${\operatorname{Length}[\rho]}= \int_{t_0}^{t_1}\!\sqrt{g(\dot\rho,\dot\rho)}{\operatorname{d}\!t}.\label{length}$$ If $ \dot\rho=X_H(\rho)$, for some Hamiltonian $\hat H$, then, by \eqref{main ett}, the length of $\rho$ is a lower bound for the *energy dispersion*: $${\operatorname{Length}[\rho]}\leq\frac{1}{\hbar}\int_{t_0}^{t_1}\!\Delta H(\rho){\operatorname{d}\!t}. \label{enekvation}$$ There is a Hamiltonian $\hat H$ that generates a horizontal lift of $\rho$, because the unitary group of ${\mathcal{H}}$ acts transitively on ${\mathcal{L}}({\mathbb C}^k,{\mathcal{H}})$. For such a Hamiltonian we have equality in \eqref{enekvation}. Moreover, we can take $\rho$ to be *length minimizing*, in the sense that ${\operatorname{Length}[\rho]}={\operatorname{dist}(\rho_0,\rho_1)}$, because ${\mathcal{D}}(\sigma)$ is compact and hence $g$ is complete. Then, $${\operatorname{dist}(\rho_0,\rho_1)}=\frac{1}{\hbar}\int_{t_0}^{t_1}\!\Delta H(\rho){\operatorname{d}\!t}, \label{finalen}$$ by the equality in \eqref{enekvation}. Assertion \eqref{avstand} follows from \eqref{enekvation} and \eqref{finalen}. Time-energy uncertainty relation -------------------------------- Consider a quantum system with Hamiltonian $\hat{H}$, and suppose $\rho$ is a solution to \eqref{von Neumann}. The *time-average of $\Delta H$* is $$\langle \Delta H\rangle=\frac{1}{\Delta t}\int_{t_0}^{t_1}\Delta H(\rho){\operatorname{d}\!t},\quad \Delta t=t_1-t_0.$$ We will show that if $\rho_0$ and $\rho_1$ are distinguishable [@Englert1996; @Markham_etal2008], then $$\langle \Delta H\rangle \Delta t\geq \frac{\pi\hbar}{2}. \label{energytime}$$ For density operators representing pure states, this reduces to the time-energy uncertainty relation in [@Anandan_etal1990]. Let $\Psi_0$ in $\pi^{-1}(\rho_0)$ and $\Psi_1$ in $\pi^{-1}(\rho_1)$ be such that ${\operatorname{dist}(\rho_0,\rho_1)}={\operatorname{dist}(\Psi_0,\Psi_1)}$.
The operators $\rho_0$ and $\rho_1$ have orthogonal supports, being distinguishable, and the same is true for $\Psi_0$ and $\Psi_1$, since the support of $\Psi_0$ equals the support of $\rho_0$, and likewise for $\Psi_1$ and $\rho_1$. A compact way to express this is $$\Psi_0^\dagger\Psi_1=0,\quad \Psi_1^\dagger\Psi_0=0. \label{vinkelrata}$$ If we consider $\Psi_0$ and $\Psi_1$ as elements of ${\mathcal{S}}({\mathbb C}^k,{\mathcal{H}})$, the unit sphere in ${\mathcal{L}}({\mathbb C}^k,{\mathcal{H}})$, they are a distance of $\pi/2$ apart. In fact, $\Psi(t)=\cos(t)\Psi_0+\sin(t)\Psi_1$, with domain $0\leq t\leq \pi/2$, is a length minimizing curve from $\Psi_0$ to $\Psi_1$ in ${\mathcal{S}}({\mathbb C}^k,{\mathcal{H}})$. Consequently, $${\operatorname{dist}(\rho_0,\rho_1)}\geq \pi/2.\label{storre}$$ The uncertainty relation \eqref{energytime} follows from \eqref{enekvation} and \eqref{storre}. Also note that the estimate \eqref{storre} cannot be improved. Direct computations using \eqref{vinkelrata} yield $\Psi(t)^\dagger\Psi(t)=P(\sigma)$ and $\Psi(t)^\dagger\dot\Psi(t)=0$. Therefore, $\Psi(t)$ is a horizontal curve in ${\mathcal{S}}(\sigma)$, and hence \eqref{storre} is, in fact, an equality. Uhlmann’s bundle and the Bures distance {#Uhlmann and Bures} ======================================= Uhlmann [@Uhlmann1992Energy] proved that for unitarily evolving quantum systems represented by invertible density operators, the energy dispersion is bounded from below by the Bures distance between the initial and final states. This result, together with \eqref{avstand}, shows that on orbits of invertible density operators, the distance function associated with $g$ is bounded from below by the Bures distance. Here we present an independent argument for this fact, and we give an example of two isospectral density operators between which the two metrics measure different distances.
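The geodesic used in this argument is easy to verify numerically: whenever $\Psi_0$ and $\Psi_1$ have orthogonal supports, the curve $\Psi(t)=\cos(t)\Psi_0+\sin(t)\Psi_1$ stays in ${\mathcal{S}}(\sigma)$ and is horizontal. A NumPy sketch (the dimensions and the spectrum are arbitrary illustration values; we need $\dim{\mathcal{H}}\geq 2k$ for orthogonal supports to exist):

```python
import numpy as np

rng = np.random.default_rng(4)
k, n = 2, 5                                  # n >= 2k allows orthogonal supports
sigma = np.array([0.7, 0.3])
sqrtP = np.diag(np.sqrt(sigma))

M = rng.normal(size=(n, 2 * k)) + 1j * rng.normal(size=(n, 2 * k))
V, _ = np.linalg.qr(M)                       # 2k orthonormal columns
Psi0, Psi1 = V[:, :k] @ sqrtP, V[:, k:] @ sqrtP
assert np.allclose(Psi0.conj().T @ Psi1, 0)  # orthogonal supports

for t in np.linspace(0.0, np.pi / 2, 7):
    Psi = np.cos(t) * Psi0 + np.sin(t) * Psi1
    dPsi = -np.sin(t) * Psi0 + np.cos(t) * Psi1
    assert np.allclose(Psi.conj().T @ Psi, np.diag(sigma))  # stays in S(sigma)
    assert np.allclose(Psi.conj().T @ dPsi, 0)              # horizontal curve
```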
Let ${\mathcal{S}_\text{inv}}({\mathbb C}^n,{\mathcal{H}})$ be the space of all invertible maps in ${\mathcal{L}}({\mathbb C}^n,{\mathcal{H}})$ with unit norm, and ${\mathcal{D}_\text{inv}}({\mathcal{H}})$ be the space of all invertible density operators acting on ${\mathcal{H}}$. Then $\Pi:{\mathcal{S}_\text{inv}}({\mathbb C}^n,{\mathcal{H}})\to{\mathcal{D}_\text{inv}}({\mathcal{H}})$ defined by $\Pi(\Psi)=\Psi\Psi^\dagger$ is a ${\mathcal{U}}(n)$-bundle, which we call *Uhlmann’s bundle* since it first appeared in [@Uhlmann1991]. The geometry of Uhlmann’s bundle has been thoroughly investigated, and it is an important tool in quantum information theory, mainly due to its close relationship with the Bures metric [@Bures1969; @Uhlmann1992]. Uhlmann’s bundle is equipped with the mechanical connection, which means that the horizontal bundle is the orthogonal complement of the vertical bundle with respect to the Hilbert-Schmidt metric. Moreover, the metric on ${\mathcal{D}_\text{inv}}({\mathcal{H}})$ obtained by declaring $\Pi$ to be a Riemannian submersion is the Bures metric [@Uhlmann1992]. We denote the associated distance function by ${\operatorname{dist_{B}}}$. Suppose $k=n$ in \eqref{spectrum}. Then ${\mathcal{S}}(\sigma)$ is a submanifold of ${\mathcal{S}_\text{inv}}({\mathbb C}^n,{\mathcal{H}})$. Moreover, the vertical bundle of ${\mathcal{S}}(\sigma)$ is a subbundle of the restriction of the vertical bundle of ${\mathcal{S}_\text{inv}}({\mathbb C}^n,{\mathcal{H}})$ to ${\mathcal{S}}(\sigma)$. However, no nonzero horizontal vector in Uhlmann’s bundle is tangential to ${\mathcal{S}}(\sigma)$. To see this, let $\Psi$ be any element in ${\mathcal{S}}(\sigma)$. Then $X$ in ${\operatorname{T}\!}_\Psi{\mathcal{S}_\text{inv}}({\mathbb C}^n,{\mathcal{H}})$ is horizontal, i.e.
is annihilated by the mechanical connection of the Uhlmann bundle, if and only if [@Uhlmann1991] $$\label{Uhlmann parallel} \Psi^\dagger X-X^\dagger\Psi=0.$$ On the other hand, every $X$ in ${\operatorname{T}\!}_\Psi{\mathcal{S}}(\sigma)$ satisfies $$\label{relation} \Psi^\dagger X+X^\dagger\Psi=0$$ since $\Psi^\dagger\Psi=P(\sigma)$. Clearly, only the zero vector satisfies both \eqref{Uhlmann parallel} and \eqref{relation}. The distance between $\rho_0$ and $\rho_1$ in ${\mathcal{D}}(\sigma)$ is never smaller than the Bures distance between them. Indeed, every curve between $\pi^{-1}(\rho_0)$ and $\pi^{-1}(\rho_1)$ in ${\mathcal{S}}(\sigma)$ is a curve between $\Pi^{-1}(\rho_0)$ and $\Pi^{-1}(\rho_1)$ in ${\mathcal{S}_\text{inv}}({\mathbb C}^n,{\mathcal{H}})$, and since the metrics on the total spaces of the two bundles are induced from a common ambient metric we can conclude that $$\label{bures inequality} {\operatorname{dist}(\rho_0,\rho_1)}\geq{\operatorname{dist_{B}}}(\rho_0,\rho_1).$$ Uhlmann [@Uhlmann1992] and Dittmann [@Dittmann1993; @Dittmann1999] have derived explicit formulas for the Bures metric for density operators on finite dimensional Hilbert spaces. For density operators on ${\mathbb C}^2$ the formula reads $$\label{Dittmann} {\operatorname{dist_{B}}}(\rho,\rho+\delta\rho)^2=\frac 14{\operatorname{Tr}}\big(\delta\rho\delta\rho+\frac {1}{\det\rho}(\delta\rho-\rho\delta\rho)^2\big).$$ We use \eqref{Dittmann} to show that there are density operators $\rho_0$ and $\rho_1$ acting on ${\mathbb C}^2$ for which the inequality in \eqref{bures inequality} is strict. Suppose $\sigma=(p_1,p_2)$, and let ${\varepsilon}>0$. For ${\varepsilon}$ small enough, $$\rho(t)=\begin{bmatrix} p_2\sin^2({\varepsilon}t)+p_1\cos^2({\varepsilon}t) & (p_2-p_1)\sin({\varepsilon}t)\cos({\varepsilon}t)\\ (p_2-p_1)\sin({\varepsilon}t)\cos ({\varepsilon}t) & p_1\sin^2({\varepsilon}t)+p_2\cos^2({\varepsilon}t) \end{bmatrix}$$ is a length minimizing curve in ${\mathcal{D}}(\sigma)$ between $\rho_0=\rho(0)$ and $\rho_1=\rho(1)$.
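The strictness of the inequality can also be seen numerically, by checking that $\rho(t)$ stays on the orbit ${\mathcal{D}}(\sigma)$ and integrating Dittmann's line element along it. A NumPy sketch with arbitrarily chosen $p_1$, $p_2$, and ${\varepsilon}$ (the Bures length computed below approximates $(p_1-p_2){\varepsilon}$, well below the $g$-length ${\varepsilon}$):

```python
import numpy as np

p1, p2, eps = 0.7, 0.3, 0.05

def rho(t):
    c, s = np.cos(eps * t), np.sin(eps * t)
    return np.array([[p2 * s**2 + p1 * c**2, (p2 - p1) * s * c],
                     [(p2 - p1) * s * c, p1 * s**2 + p2 * c**2]])

# rho(t) stays on the orbit D(sigma): its spectrum is {p1, p2} for every t
for t in np.linspace(0.0, 1.0, 5):
    assert np.allclose(np.sort(np.linalg.eigvalsh(rho(t))), [p2, p1])

def bures_step(r, dr):
    """Dittmann's 2x2 line element: ds^2 = (1/4)Tr(dr dr + (dr - r dr)^2/det r)."""
    m = dr - r @ dr
    return np.sqrt(0.25 * np.trace(dr @ dr + m @ m / np.linalg.det(r)).real)

# Riemann-sum approximation of the Bures length of rho(t), 0 <= t <= 1
ts = np.linspace(0.0, 1.0, 2001)
bures_len = sum(bures_step(rho(a), rho(b) - rho(a)) for a, b in zip(ts, ts[1:]))

dist_g = eps                       # the g-length of rho, computed in the text
assert bures_len < dist_g          # the Bures distance is strictly smaller
```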
Thus $${\operatorname{dist}(\rho_0,\rho_1)}={\operatorname{Length}[\rho]}={\varepsilon}.$$ However, \eqref{Dittmann} yields $${\operatorname{dist_{B}}}(\rho_0,\rho_1)=\frac {p_1-p_2}{\sqrt{2}}|\sin{\varepsilon}|\sqrt{2+\frac{(p_1-p_2)^2}{2p_1p_2}\sin^2{\varepsilon}}.$$ Conclusion ========== In this paper we have utilized a construction due to Montgomery to equip the spaces of isospectral density operators acting on a finite dimensional Hilbert space with Riemannian metrics, and we have established important relations between these and Hamiltonian quantum dynamics. Indeed, we have proved that the energy dispersion of a unitarily evolving density operator is bounded from below by the length of the curve traced out by the operator, and that every curve of isospectral density operators can be generated by a Hamiltonian such that the energy dispersion equals the curve’s length. These facts allowed us to express the distance between two density operators in terms of a measurable physical quantity, and the paper culminated in a time-energy uncertainty estimate for mixed states. In a final section we compared our energy dispersion results with an energy dispersion estimate by Uhlmann. We believe that our results have very interesting applications in optimal quantum control. Such aspects of the theory developed here will be investigated by the authors in a forthcoming paper. There we will focus on the geometry of, and dynamics in, orbits of invertible density operators, and we will classify the Hamiltonians that drive states along evolution curves with minimal energy dispersion.
[25]{}ifxundefined \[1\][ ifx[\#1]{} ]{}ifnum \[1\][ \#1firstoftwo secondoftwo ]{}ifx \[1\][ \#1firstoftwo secondoftwo ]{}““\#1””@noop \[0\][secondoftwo]{}sanitize@url \[0\][‘\ 12‘\$12 ‘&12‘\#12‘12‘\_12‘%12]{}@startlink\[1\]@endlink\[0\]@bib@innerbibempty [****,  ()](\doibase 10.1103/PhysRevA.61.010305) [****,  ()](\doibase 10.1016/S0375-9601(99)00803-8) @noop [****,  ()]{} [****,  ()](\doibase 10.1103/PhysRevLett.99.100603) [****,  ()](\doibase 10.1103/PhysRevA.82.012321) @noop [****,  ()]{} @noop [****,  ()]{} @noop [****,  ()]{} @noop [****,  ()]{} [****,  ()](\doibase 10.1103/PhysRevA.66.032309) @noop [****,  ()]{} @noop [****,  ()]{} @noop [****,  ()]{} @noop [**]{} (, ) @noop [**]{} (, , ) in @noop [**]{}, , Vol.  (, , ) pp.  @noop [****,  ()]{} @noop [****,  ()]{} @noop [****,  ()]{} [****,  ()](\doibase 10.1103/PhysRevLett.77.2154) [****,  ()](\doibase 10.1103/PhysRevA.77.042111) @noop [****,  ()]{} in @noop [**]{}, , Vol.  (, , ) pp. @noop [****,  ()]{} @noop [****,  ()]{}
--- --- **On a Hypergraph Approach to Multistage Group Testing Problems [^1]** <span style="font-variant:small-caps;">A. G. D’yachkov</span> `agd-msu@yandex.ru`\ [Lomonosov Moscow State University, Moscow, Russia]{}\ <span style="font-variant:small-caps;">I.V. Vorobyev</span> `vorobyev.i.v@yandex.ru`\ [Lomonosov Moscow State University, Moscow, Russia]{}\ <span style="font-variant:small-caps;">N.A. Polyanskii</span> `nikitapolyansky@gmail.com`\ [Lomonosov Moscow State University, Moscow, Russia]{}\ <span style="font-variant:small-caps;">V.Yu. Shchukin</span> `vpike@mail.ru`\ [Lomonosov Moscow State University, Moscow, Russia]{}\ Introduction ============ Group testing is a very natural combinatorial problem that consists in detecting up to $s$ defective elements of the set of objects $[t]=\{1,\ldots,t\}$ by carrying out tests on properly chosen subsets (pools) of $[t]$. The test outcome is positive if the tested pool contains one or more defective elements; otherwise, it is negative. There are two general types of algorithms. In *adaptive* group testing, at each step the algorithm decides which group to test by observing the responses of the previous tests. In a *non-adaptive* algorithm, all tests are carried out in parallel. There is a compromise between these two types, called a *multistage* algorithm: all tests are divided into $p$ sequential stages, the tests inside the same stage are performed simultaneously, and the tests of the next stages may depend on the responses of the previous ones. In this context, a non-adaptive group testing algorithm is referred to as a one-stage algorithm. Previous results ---------------- We refer the reader to the monograph [@DH] for a survey on group testing and its applications.
Although the problem of estimating the minimum *average* number of tests (when the set of defectives is chosen randomly) has been investigated in many papers (for instance, see [@ds13; @mt11]), in this paper we concentrate only on the minimal number of tests in the *worst case*. In 1982 [@dr82], D’yachkov and Rykov proved that at least $$\frac{s^2}{2\log_2(e(s+1)/2)}\log_2 t(1+o(1))$$ tests are needed for a non-adaptive group testing algorithm. If the number of stages is $2$, then $O(s \log_2 t)$ tests are already sufficient; this was shown by studying random coding bounds for disjunctive list-decoding codes [@r90; @d03] and selectors [@BGV]. The recent work [@DVPS] has significantly improved the constant factor in the main term of the number of tests for two-stage group testing procedures. In particular, if $s\to\infty$, then $$\frac{se}{\log_2e}\log_2t (1+o(1))$$ tests are enough for two-stage group testing. As for adaptive strategies, there exist ones that attain the information-theoretic lower bound $s \log_2t (1+o(1))$. However, for $s>1$ the number of stages in the well-known optimal strategies is a function of $t$, and grows to infinity as $t\to\infty$. Summary of the results ---------------------- In this article we present explicit algorithms in which the number of stages is restricted: it is a function of $s$ only. We briefly give the necessary notation in section \[Pre\]. Then, in section \[Hyp\], we present the general idea of searching for defects using a hypergraph approach. In section \[Search2\], we describe a $4$-stage group testing strategy which detects $2$ defects and uses the asymptotically optimal number of tests $2\log_2t(1+o(1))$. As far as we know, the best previous result for this problem was obtained by Damaschke et al. [@DSW]: they provide an exact two-stage group testing strategy that uses $2.5\log_2t$ tests. For other constructions for the case of $2$ defects, we refer to [@mr98; @DL].
Preliminaries {#Pre} ============= Throughout the paper we use $t$, $s$, $p$ for the number of elements, defectives, and stages, respectively. Let ${\triangleq}$ denote equality by definition, and let $|A|$ denote the cardinality of a set $A$. The binary entropy function $h(x)$ is defined as usual: $$h(x)=-x\log_2(x)-(1-x)\log_2(1-x).$$ A binary $(N \times t)$-matrix with $N$ rows ${{\textbf{\textit{x}}}}_1, \dots, {{\textbf{\textit{x}}}}_N$ and $t$ columns ${{\textbf{\textit{x}}}}(1), \dots, {{\textbf{\textit{x}}}}(t)$ (codewords) $$X = \| x_i(j) \|, \quad x_i(j) = 0, 1, \quad i \in [N],\,j \in [t]$$ is called a [*binary code of length $N$ and size $t$*]{}. The number of $1$’s in the codeword ${{\textbf{\textit{x}}}}(j)$, i.e., $|{{\textbf{\textit{x}}}}(j)| {\triangleq}\sum\limits_{i = 1}^N \, x_i(j)= wN$, is called the [*weight*]{} of ${{\textbf{\textit{x}}}}(j)$, $j \in [t]$, and the parameter $w$, $0<w<1$, is the *relative weight*. One can see that the binary code $X$ can be associated with $N$ tests. A column ${{\textbf{\textit{x}}}}(j)$ corresponds to the $j$-th sample; a row ${{\textbf{\textit{x}}}}_i$ corresponds to the $i$-th test. Let ${{\bf u}}\bigvee {{\bf v}}$ denote the disjunctive sum of binary columns ${{\bf u}}, {{\bf v}}\in \{0, 1\}^N$. For any subset ${{\mathcal{S}}}\subset[t]$ define the binary vector $$r(X,{{\mathcal{S}}}) = \bigvee\limits_{j\in{{\mathcal{S}}}}{{\textbf{\textit{x}}}}(j),$$ which will later be called the *outcome vector*. By ${{\mathcal{S}}}_{un}$, $|{{\mathcal{S}}}_{un}|{\leqslant}s$, denote the unknown set of defects. Suppose there is a $p$-stage group testing strategy $\mathfrak{S}$ which finds up to $s$ defects. This means that for any ${{\mathcal{S}}}_{un}\subset[t]$, $|{{\mathcal{S}}}_{un}|{\leqslant}s$, according to $\mathfrak{S}$: 1. we are given a code $X_1$ assigned for the first stage of group testing; 2.
we can design a code $X_{i+1}$ for the $(i+1)$-th stage of group testing, based on the outcome vectors of the previous stages $r(X_1,{{\mathcal{S}}}_{un})$, $r(X_2,{{\mathcal{S}}}_{un})$, …, $r(X_i,{{\mathcal{S}}}_{un})$; 3. we can identify all defects ${{\mathcal{S}}}_{un}$ using $r(X_1,{{\mathcal{S}}}_{un})$, $r(X_2,{{\mathcal{S}}}_{un})$, …, $r(X_p,{{\mathcal{S}}}_{un})$. Let $N_i$ be the number of tests used on the $i$-th stage and $$N_T(\mathfrak{S})=\sum_{i=1}^p N_i$$ be the maximal total number of tests used for the strategy $\mathfrak{S}$. We define $N_p(t, s)$ to be the minimal worst-case total number of tests needed for group testing for $t$ elements, up to $s$ defectives, and at most $p$ stages. Hypergraph approach to searching defects {#Hyp} ======================================== Let us introduce a hypergraph approach to searching for defects. Suppose a set of vertices $V$ is associated with the set of samples $[t]$, i.e. $V = \{1,2,\ldots, t\}$. **First stage:** Let $X_1$ be the code corresponding to the first stage of group testing. For the outcome vector $r=r(X_1,{{\mathcal{S}}}_{un})$ let $E(r,s)$ be the set of subsets ${{\mathcal{S}}}\subset V$ of size at most $s$ such that $r(X_1,{{\mathcal{S}}})=r(X_1,{{\mathcal{S}}}_{un})$. So, the pair $(V,E(r,s))$ forms the hypergraph $H=H(X_1)$. We call two vertices *adjacent* if they are included in some common hyperedge of $H$. Suppose there exists a *good* vertex coloring of $H$ with $k$ colours, i.e., an assignment of colours to the vertices of $H$ such that no two adjacent vertices share the same colour. By $V_i\subset V$, $1{\leqslant}i{\leqslant}k$, denote the set of vertices of the $i$-th colour. One can see that all these sets are pairwise disjoint. **Second stage:** Now we can perform $k$ tests to check which of the monochromatic sets $V_i$ contain a defect.
Here we find the cardinality of the set ${{\mathcal{S}}}_{un}$ and $|{{\mathcal{S}}}_{un}|$ sets $\{V_{i_1},\ldots,V_{i_{|{{\mathcal{S}}}_{un}|}} \}$, each of which contains exactly one defective element. **Third stage:** Carrying out ${\left\lceil}\log_2|V_{i_1}|{\right\rceil}$ tests, we can find a vertex $v$, corresponding to a defect, in the suspicious set $V_{i_1}$. Observe that by performing $\sum\limits_{j=1}^{|{{\mathcal{S}}}_{un}|}{\left\lceil}\log_2|V_{i_j}|{\right\rceil}$ tests we could actually identify all defects ${{\mathcal{S}}}_{un}$ at this stage. **Fourth stage:** Consider all hyperedges $e\in E(r,s)$ such that $e$ contains the found vertex $v$ and consists of vertices of $\{v\}\cup V_{i_2}\cup \ldots\cup V_{i_{|{{\mathcal{S}}}_{un}|}}$. At this stage we know that the unknown set of defects coincides with one of these hyperedges. To check whether the hyperedge $e$ is the set of defects we need to test the set $[t]\backslash e$. Hence, the number of tests at the fourth stage equals the degree of the vertex $v$. Optimal searching of 2 defects {#Search2} ============================== Now we consider a specific construction of $4$-stage group testing. Then we upper bound the number of tests $N_i$ at each stage. **First stage:** Let $C=\{0,1,\dots, q-1\}^{\hat{N}}$ be the $q$-ary code consisting of all $q$-ary words of length $\hat{N}$, having size $t=q^{\hat{N}}$. Let $D$ be the set of all binary words of length $N'$ whose weight equals $ wN'$, $0<w<1$; we require the size of $D$ to be at least $q$, i.e., $q {\leqslant}{N' \choose {wN'}}$. On the first stage we use the concatenated binary code $X_1$ of length $N_1=\hat{N}\cdot N'$ and size $t=q^{\hat{N}}$, where the inner code is $D$ and the outer code is $C$. We say that $X_1$ consists of $\hat{N}$ layers. Observe that we can split up the outcome vector $r(X_1,{{\mathcal{S}}}_{un})$ into $\hat{N}$ subvectors of length $N'$.
So let $r_j(X_1,{{\mathcal{S}}}_{un})$ denote $r(X_1,{{\mathcal{S}}}_{un})$ restricted to the $j$-th layer. Let $w_j$, $j\in[\hat{N}]$, be the relative weight of $r_j(X_1,{{\mathcal{S}}}_{un})$, i.e., $|r_j(X_1,{{\mathcal{S}}}_{un})| = w_jN'$ is the weight of the $j$-th subvector of $r(X_1,{{\mathcal{S}}}_{un})$. If $w_j=w$ for all $j\in[\hat{N}]$, then ${{\mathcal{S}}}_{un}$ consists of one element, and we can easily find it. If there are at least two defects, then suppose for simplicity that ${{\mathcal{S}}}_{un}=\{1,2\}$, and let $c_1$ and $c_2$ be the two corresponding codewords of $C$. There exists a coordinate $i$, $1{\leqslant}i{\leqslant}\hat{N}$, in which they differ, i.e., $c_1(i)\neq c_2(i)$. Notice that the relative weight $w_i$ is then bigger than $w$. For any $i\in[\hat{N}]$ such that $w_i>w$, we can colour all vertices of $V$ with $q$ colours, where the colour of the $j$-th vertex is determined by the corresponding $q$-ary symbol $c_j(i)$ of the code $C$. One can check that such a coloring is a good vertex coloring. **Second stage:** We perform $q$ tests to find which coloured groups contain a defect. **Third stage:** Let us upper bound the size $\hat{t}$ of one such suspicious group: $$\hat{t}{\leqslant}{w_1N' \choose wN'}\cdot\ldots \cdot {w_{\hat{N}}N' \choose wN'}.$$ In order to find one defect in the group we may perform ${\left\lceil}\log_2 \hat{t}{\right\rceil}$ tests. **Fourth stage:** At the final step, we have to bound the degree of the found vertex $\it{v}\in V$ in the graph. The degree $\deg(\it{v})$ is bounded as $$\deg(\it{v}){\leqslant}{wN' \choose (2w-w_1)N'}\cdot\ldots \cdot {wN' \choose (2w-w_{\hat{N}})N'}.$$ We know that the second defect corresponds to one of the vertices adjacent to $v$. Therefore, to identify it we have to make ${\left\lceil}\log_2\deg(\it{v}){\right\rceil}$ tests. The optimal choice of the parameter $w$ gives a procedure with a total number of tests equal to $2\log_2t(1+o(1))$.
[99]{} *Du D.Z.*, *Hwang F.K.*, Combinatorial Group Testing and Its Applications, 2nd ed., *Series on Applied Mathematics*, vol. 12, 2000. *Damaschke P.*, *Sheikh Muhammad A.*, *Triesch E.*, Two new perspectives on multi-stage group testing, *Algorithmica*, vol. 67, no. 3, pp. 324-354, 2013. *Mézard M.*, *Toninelli C.*, Group testing with random pools: Optimal two-stage algorithms, *IEEE Transactions on Information Theory*, vol. 57, no. 3, pp. 1736-1745, 2011. *D’yachkov A.G.*, *Rykov V.V.*, Bounds on the Length of Disjunctive Codes, *Problems of Information Transmission*, vol. 18, no. 3, pp. 166-171, 1982. *D’yachkov A.G.*, *Vorobyev I.V.*, *Polyanskii N.A.*, *Shchukin V.Yu.*, Bounds on the Rate of Disjunctive Codes, *Problems of Information Transmission*, vol. 50, no. 1, pp. 27-56, 2014. *Rashad A.M.*, Random Coding Bounds on the Rate for List-Decoding Superimposed Codes, *Problems of Control and Inform. Theory*, vol. 19, no. 2, pp. 141-149, 1990. *D’yachkov A.G.*, Lectures on Designing Screening Experiments, *Lecture Note Series 10*, Combinatorial and Computational Mathematics Center, Pohang University of Science and Technology (POSTECH), Korea Republic, Feb. 2003 (survey, 112 pages). *De Bonis A.*, *Gasieniec L.*, *Vaccaro U.*, Optimal two-stage algorithms for group testing problems, *SIAM J. Comp.*, vol. 34, no. 5, pp. 1253-1270, 2005. *Damaschke P.*, *Sheikh Muhammad A.*, *Wiener G.*, Strict group testing and the set basis problem, *Journal of Combinatorial Theory, Series A*, vol. 126, pp. 70-91, August 2014. *Macula A.J.*, *Reuter G.R.*, Simplified searching for two defects, *Journal of Statistical Planning and Inference*, vol. 66, no. 1, pp. 77-82, 1998. *Deppe C.*, *Lebedev V.S.*, Group testing problem with two defects, *Problems of Information Transmission*, vol. 49, no. 4, pp. 375-381, 2013. [^1]: The research is supported in part by the Russian Foundation for Basic Research under Grant No. 16-01-00440.
--- abstract: 'Transversality of stable and unstable manifolds of hyperbolic periodic trajectories is proved for monotone cyclic systems with negative feedback. Such systems in general are not in the category of monotone dynamical systems in the sense of Hirsch. Our main tool utilized in the proofs is the so-called cone of high rank. We further show that stable and unstable manifolds between a hyperbolic equilibrium and a hyperbolic periodic trajectory, or between two hyperbolic equilibria whose unstable manifolds have different dimensions, also intersect transversely.' author: - | \ Yi Wang[^1] $\,$ and Dun Zhou\ School of Mathematical Science\ University of Science and Technology of China\ Hefei, Anhui, 230026, P. R. China\ title: Transversality for Cyclic Negative Feedback Systems --- Introduction ============ Oscillations frequently occur and play a fundamental role in biological systems and networks. It has been widely observed that many biological oscillators have a cyclic structure consisting of negative feedback loops. Such a cyclic pattern of interactions appears in neural systems, cellular control systems, and descriptions of cascades of enzymatic reactions coupled with gene transcription (see e.g., [@Tsai; @Fer; @Hasting1977; @EL]). Typical examples of cyclic negative feedback models include the Goodwin oscillator, a well-studied model relevant to circadian oscillations ([@Goodw]); the Repressilator, a transcriptional negative feedback loop constructed in Escherichia coli ([@EL; @MuHo]); the Metabolator, a synthetic metabolic oscillator ([@Fung]); and the Frzilator, a model of the control of gliding motions in myxobacteria ([@Igo]). Consequently, negative feedbacks embedded in a cyclic architecture are believed to be the underlying principle enabling a system to admit oscillations in a fluctuating environment. For such classes of models, many results can be found in the literature (see e.g., [@Hasting1977; @EL; @FrT; @Ton]).
In particular, all the oscillator models previously introduced can be written in the abstract form $$\label{feedback-equation} \begin{split} \dot{x}_1 &=f_1(x_1,x_n),\\ \dot{x}_i &=f_i(x_i,x_{i-1}),\quad 2\leq i\leq n-1,\\ \dot{x}_n &=f_n(x_{n},x_{n-1}),\\ \end{split}$$ where the nonlinearity $f=(f_1,f_2,\cdots,f_n)$, together with its partial derivatives with respect to $x_j$, is continuous in $\mathbb{R}^n$, and there exists $\delta_i\in \{-1,1\}$, $1\leq i\leq n$, such that $$\label{feedback-condition} \delta_i\frac{\partial f_i(x_i,x_{i-1})}{\partial x_{i-1}}>0,\quad \text{for all}\ (x_i,x_{i-1})\in \mathbb{R}^2.$$ A remarkable result has been accomplished by Mallet-Paret and Smith [@Mallet-Paret1990]: they have shown that the omega-limit set of any bounded orbit of system \eqref{feedback-equation}--\eqref{feedback-condition} can be embedded in $\mathbb{R}^2$, and hence, the Poincaré-Bendixson property severely constrains the possible dynamics of the system. Such insight confirms that a cyclic structure consisting of negative feedback loops is responsible for the emergence of oscillations in biological systems. Following [@Mallet-Paret1990], we call system \eqref{feedback-equation}--\eqref{feedback-condition} a monotone cyclic feedback system (MCFS). Let $\Delta=\delta_1 \delta_2\cdots \delta_n$; then there are two types of MCFS depending on the sign of $\Delta$. If $\Delta=1$ (resp. $\Delta=-1$), then system \eqref{feedback-equation}--\eqref{feedback-condition} is called an MCFS with positive (resp. negative) feedback. An MCFS with positive feedback ($\Delta=1$) is in particular a monotone dynamical system in the sense of Hirsch [@HiSm; @Smi95] with respect to a certain usual convex cone, and many classical results for monotone dynamical systems contained in [@HiSm; @Smi95] apply to \eqref{feedback-equation}--\eqref{feedback-condition}. However, if $\Delta=-1$, such a system is not monotone in the usual sense of Hirsch [@HiSm; @Smi95]. In the theory of dynamical systems, transversality of stable and unstable manifolds of critical elements plays a central role in connection with structural stability (see e.g., [@Palis]).
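To make the feedback structure concrete, the following minimal numerical sketch integrates a hypothetical three-dimensional cyclic system (chosen for illustration only, not one of the named models) with $\partial f_1/\partial x_3<0$ and $\partial f_2/\partial x_1,\ \partial f_3/\partial x_2>0$, so that $\Delta=-1$; since $|\tanh|\leq 1$, every orbit eventually enters the box $|x_i|\leq 3$, consistent with the bounded oscillatory behavior described above:

```python
import math

def mcfs_rhs(x):
    # Hypothetical 3-dimensional cyclic system (illustration only):
    # df1/dx3 < 0, df2/dx1 > 0, df3/dx2 > 0, hence Delta = -1.
    x1, x2, x3 = x
    return (-x1 - 3.0 * math.tanh(x3),
            -x2 + 3.0 * math.tanh(x1),
            -x3 + 3.0 * math.tanh(x2))

def rk4_step(f, x, h):
    # One classical Runge-Kutta step for x' = f(x).
    k1 = f(x)
    k2 = f(tuple(a + 0.5 * h * b for a, b in zip(x, k1)))
    k3 = f(tuple(a + 0.5 * h * b for a, b in zip(x, k2)))
    k4 = f(tuple(a + h * b for a, b in zip(x, k3)))
    return tuple(a + h * (p + 2 * q + 2 * r + s) / 6
                 for a, p, q, r, s in zip(x, k1, k2, k3, k4))

x = (3.0, -1.0, 0.5)
for _ in range(5000):              # integrate up to t = 50 with h = 0.01
    x = rk4_step(mcfs_rhs, x, 0.01)
# Since |tanh| <= 1, each coordinate satisfies |x_i| <= 3 eventually; the
# orbit stays bounded while the (unstable) origin prevents it from dying out.
```

The gain 3 here makes the origin linearly unstable, so bounded orbits keep oscillating, in line with the Poincaré-Bendixson-type constraint.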
Despite this fact, there are not many results in the literature verifying that transversality holds for a given dynamical system. Fusco and Oliva [@Fusco1987; @fusco1990] have presented two classes of finite-dimensional cooperative ODE systems which possess such transversality. For scalar parabolic equations, Henry [@Hen] and Angenent [@Ang] have proved transversality of the invariant manifolds of stationary solutions (see also Chen et al. [@Chen] for time-periodic cases) with separated boundary conditions. For periodic boundary conditions, Czaja and Rocha [@CRo] have recently shown that the stable and unstable manifolds of two hyperbolic periodic orbits always intersect transversally. Other automatic transversality results have been obtained in [@JR1; @JR2]. Going back to the MCFS \eqref{feedback-equation}--\eqref{feedback-condition}: when the feedback is positive (i.e., $\Delta=1$), the main results in Fusco and Oliva [@fusco1990] may imply that any connecting orbit between two hyperbolic periodic orbits, or between a hyperbolic periodic orbit and a hyperbolic equilibrium, is automatically transversal. However, it is worth pointing out that all the aforementioned systems, in both finite-dimensional and infinite-dimensional settings, fall into the category of monotone dynamical systems in the sense of Hirsch [@HiSm; @Smi95]. To the best of our knowledge, there are very few nontrivial explicit examples outside the category of monotone dynamical systems where invariant manifolds of critical elements (particularly, of periodic orbits) are known to intersect transversely. In this paper, we focus on system \eqref{feedback-equation}--\eqref{feedback-condition} with negative feedback ($\Delta=-1$). Our main purpose is to show that this system admits transversality of stable and unstable manifolds of critical elements. As we mentioned before, such a system is not monotone in the usual sense of Hirsch.
We thus present here a class of explicit systems, not in the category of Hirsch [@HiSm; @Smi95] but including many cyclic negative feedback biological models, for which the “transversality" property holds. Our approach is motivated by the recent work of Sanchez [@San1; @San2] on a newly-extended notion of monotone flows with respect to certain so-called cones of rank $k$. These cones were already considered by Fusco and Oliva [@fusco1991] (see also Krasnoselskij et al. [@Kra] for infinite-dimensional settings). Such cones consist of straight lines through the origin; they contain a $k$-dimensional linear subspace and no higher-dimensional subspace. A usual convex cone $K$ (in the sense of Hirsch [@HiSm]) gives rise to the generalized cone $K\cup (-K)$, which is of rank $1$. For system \eqref{feedback-equation}--\eqref{feedback-condition} with negative feedback, Mallet-Paret and Smith [@Mallet-Paret1990] introduced an integer-valued Lyapunov functional $N$. This function $N$ is not defined everywhere but only on an open and dense subset of $\mathbb{R}^n$, on which it is also continuous. It is locally constant near points where it is defined and strictly decreasing as $t$ increases through points where it is not defined. The existence of $N$ enables us to present two modified functions of $N$ (see Lemma \[zero-property2\]), to construct a family of nested cones, say $K_1\subset K_2\subset\cdots \subset K_j$, of even rank (except that the largest cone $K_j$ is of odd rank when $n$ is an odd number), and to obtain monotonicity of the system with respect to these high-rank cones (see Proposition \[map-int\]). In particular, if system \eqref{feedback-equation}--\eqref{feedback-condition} is linear, then by virtue of the generalized Perron-Frobenius Theorem with respect to high-rank cones ([@fusco1991 Theorem 1], see also [@Kra] for the infinite-dimensional setting), we are able to decouple $\mathbb{R}^n$ into $2$-dimensional invariant subspaces $W_1,W_2,\cdots,W_j$ (when $n$ is odd, the last space $W_j$ is just $1$-dimensional) of the corresponding solution operator.
Moreover, the growth rates of the solution operator on different invariant subspaces are strictly separated (see Proposition \[rootspace-arra\]). As a consequence, we here generalize the Floquet theory established in Mallet-Paret and Smith [@Mallet-Paret1990] for time-periodic cases to general time-dependent cases by appealing to a different approach. Based on the theory obtained above and motivated by [@Fusco1987; @fusco1990], we are able to investigate transversality of stable and unstable manifolds of critical elements of the system. More precisely, we will show that for any two hyperbolic periodic orbits $\Gamma^{-}$ and $\Gamma^{+}$, the unstable manifold $W^u(\Gamma^{-})$ of $\Gamma^{-}$ and the stable manifold $W^s(\Gamma^{+})$ of $\Gamma^{+}$ always intersect transversely (see Theorem \[transversality\]). Moreover, such “automatic" transversality still holds if one of the two periodic orbits ($\Gamma^{+}$ or $\Gamma^{-}$) is replaced by a hyperbolic equilibrium. When considering transversality between two hyperbolic equilibria, we show that if the dimensions of their unstable manifolds are different, then their corresponding stable and unstable manifolds also intersect transversely. This paper is organized as follows. In section 2, we first collect some properties of the integer-valued Lyapunov function $N$ introduced in [@Mallet-Paret1990]; we then present two modified functions of $N$, by which one can define the nested cones of high rank so that the flow generated by \eqref{feedback-equation}--\eqref{feedback-condition} with negative feedback is monotone with respect to these high-rank cones. Moreover, if system \eqref{feedback-equation}--\eqref{feedback-condition} is linear, we generalize the Floquet theory in [@Mallet-Paret1990] for time-periodic cases to general time-dependent cases by the generalized Perron-Frobenius Theorem for high-rank cones. In section 3, we prove transversality of the stable and unstable manifolds of critical elements for system \eqref{feedback-equation}--\eqref{feedback-condition}.
Cones of High Rank in Linear Systems =================================== In this section, we will introduce and investigate cones of high rank for the linear negative feedback system $$\label{linearity-com-system} \begin{split} \dot{x}_1 &=a_{11}(t)x_1+a_{1n}(t)x_n,\\ \dot{x}_i &=a_{i,i-1}(t)x_{i-1}+a_{ii}(t)x_{i},\quad 2\leq i\leq n-1,\\ \dot{x}_n &=a_{n,n-1}(t)x_{n-1}+a_{nn}(t)x_n,\\ \end{split}$$ with all the coefficient functions being continuous on $\mathbb{R}$ and satisfying the following condition: $$\label{positive-a-a} a_{1,n}(t)<0\, \textnormal{ and }\,a_{i,i-1}(t)>0,\quad i=2,\cdots,n,$$ for all $t\in \mathbb{R}$. Combining with the generalized Perron-Frobenius Theorem developed in [@fusco1991], we will eventually split $\mathbb{R}^n$ into invariant subspaces of the solution operator of system \eqref{linearity-com-system}, each of dimension at most $2$. Hereafter, we always write the coefficient matrix as $A(t)=(a_{ij}(t))_{n\times n}$. We now introduce an integer-valued Lyapunov function $N$ associated with \eqref{linearity-com-system}. From [@Mallet-Paret1990], if we denote the set $\Lambda=\{x|x\in \mathbb{R}^n \ \text{and}\ x_i\neq 0,i=1,2,\cdots,n\}$, then one can define a continuous map $N$ on $\Lambda$, taking values in $\{0,1,2,\cdots,n\}$, by $$N(x)=\text{card} \{i|\delta_i x_i x_{i-1}<0\},$$ where here $\delta_1=-1$ and $\delta_i=1,\ 2\leq i\leq n$. Henceforth, we let $\tilde{n}=n$ when $n$ is odd and $\tilde{n}=n-1$ when $n$ is even. Moreover, it follows from [@Mallet-Paret1990] that $$N(x)\in\{1,3,\cdots,\tilde{n}\}$$ for any $x\in \Lambda$. Clearly, $\Lambda$ is open and dense in $\mathbb{R}^n$. Motivated by [@Fusco1987; @fusco1990], we now define two functions $$N_{m},N_M:\mathbb{R}^n\rightarrow \{1,3,\cdots,\tilde{n}\}$$ by letting $N_{m}(x)$, $N_M(x)$ be the minimum and maximum values of $N(x')$ for $x'\in \mathcal{U}\cap \Lambda$, where $\mathcal{U}$ is a sufficiently small neighborhood of $x\in \mathbb{R}^n$.
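In concrete terms, $N$ counts cyclic sign changes weighted by the $\delta_i$; a direct transcription (0-indexed, with the cyclic convention $x_0=x_n$) reads:

```python
def lyapunov_N(x):
    """Integer-valued Lyapunov function of Mallet-Paret and Smith:
    N(x) = card{ i : delta_i * x_i * x_{i-1} < 0 }, taken cyclically
    with x_0 = x_n, where delta_1 = -1 and delta_i = 1 otherwise.
    (0-indexed below: x[-1] plays the role of x_n.)"""
    n = len(x)
    assert all(xi != 0 for xi in x), "N is defined on Lambda"
    delta = [-1] + [1] * (n - 1)
    return sum(1 for i in range(n) if delta[i] * x[i] * x[i - 1] < 0)
```

Because the number of sign changes around a cycle is always even, flipping the single comparison weighted by $\delta_1=-1$ makes the count odd, which is why $N(x)\in\{1,3,\cdots,\tilde{n}\}$ on $\Lambda$.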
These two functions then allow us to extend (continuously) the domain $\Lambda$ of $N$ to $$\mathcal{N}=\{x\in \mathbb{R}^n|N_m(x)=N_M(x)\}.$$ Note that $\mathcal{N}$ is also open and dense in $\mathbb{R}^n$, and $\mathcal{N}$ is the maximal domain on which $N$ is continuous. \[zero-property\] Let $x(t)$ be a nontrivial solution of \eqref{linearity-com-system}. Then: - $x(t)\in \mathcal{N}$ except at isolated values of $t$, and $N(x(t))$ is nonincreasing as $t$ increases with $x(t)\in \mathcal{N}$; - If $x(t_0)\notin \mathcal{N}$, then for $\varepsilon>0$ small, one has $N(x(t_0+\varepsilon))<N(x(t_0-\varepsilon))$; - There exists a $t_0>0$ such that $x(t)\in \mathcal{N}$ and $N(x(t))$ is constant for $t\in [t_0,+\infty)$ and for $t\in (-\infty,-t_0],$ respectively. See [@Mallet-Paret1990 Proposition 1.1] for (i) and (ii). It follows from (i) and (ii) that $N(x(t))$ can drop to a lower value only finitely many times, which implies (iii). Moreover, we have the following additional property of the relation between $N$ and $N_m$ (resp. $N_M$). \[zero-property2\] Let $x(t)$ be a nontrivial solution of \eqref{linearity-com-system}. If $x(t_0)\notin\mathcal{N}$, then for $\varepsilon>0$ small enough, one has $$\label{min-max} N(x(t_0+\varepsilon))=N_m(x(t_0))\,\, \textnormal{ and }\,\,N(x(t_0-\varepsilon))=N_M(x(t_0)).$$ Before proving this lemma, we need the following technical lemma. \[perturb\] Let $y(t)$ be the solution of $$\label{perturb-equation} \dot{y}=B(t)y+g(t),\quad y(0)=0,$$ where $B(t)$ is a continuous $n\times n$ matrix function and $g(t)$ is a continuous $n$-vector valued function satisfying $g(t)=g_mt^m+o(t^m)$ as $t\to 0$. Here $g_m\in \mathbb{R}^n$ and $m$ is a nonnegative integer. Then one has $$y(t)=\frac{g_m}{m+1}t^{m+1}+o(t^{m+1}),\,\, \textnormal{ as } t\to 0.$$ This lemma follows directly from L'Hôpital's rule (see also [@Mallet-Paret1990 p.374]). Without loss of generality we assume that $t_0=0$.
We first consider the case of a solution $x(t)$ with initial value $x(0)=(x_1,x_2,\cdots,x_n)$, where $x_1\neq 0$ and $x_i=0$ for $i=2,\cdots,n$. For each $i=2,\cdots,n$, the equation $$\dot{x}_i(t)=a_{i,i-1}(t)x_{i-1}(t)+a_{i,i}(t)x_i(t)$$ satisfies the assumptions in Lemma \[perturb\]. Therefore, $$x_2(t)=a_{2,1}(0)x_1(0)t+o(t),\,\,\textnormal{ as } t\to 0.$$ Iterating through the corresponding equations, we obtain that $$x_i(t)=\frac{(\prod_{j=2}^{i}a_{j,j-1}(0))\cdot x_1(0)\cdot t^{i-1}}{(i-1)!}+o(t^{i-1}),\,\,\textnormal{ as } t\to 0,$$ for each $2\leq i\leq n$. Since $a_{i,i-1}(0)>0$ for $2\leq i\leq n$, it is clear that $x_i(t)$ has the same sign as $x_1(0)$, and hence, $x(t)\in \Lambda$ for all $t>0$ small enough. This implies that $N(x(t))=1$ for $t>0$ small enough. Since $N(x)\geq 1$ for any $x\in\Lambda$, we have $N(x(t))=N_m(x(0))$ for $t>0$ small. On the other hand, for $t<0$ with $|t|$ small enough, the sign of $x_i(t)$ alternates with the index $i$. As a consequence, $N(x(t))=N_M(x(0))=\tilde{n}$ for $t<0$ with $|t|$ small. So, we have proved this lemma for the special case of $x(0)=(x_1,0,\cdots,0)$. By repeating the argument above, one can obtain this lemma for the case of $x(0)=(0,\cdots,x_j,\cdots,0)$ with $x_j\neq 0$. We now consider the general case. Given any index $j\ge 1$ with $x_j(0)=0$ and any index $i$ with $x_i(0)\ne 0$, one can follow the same argument as in the paragraphs above to obtain that, for $|t|>0$ small, the sign of $x_j(t)$ is determined by the sign of the sum $$\label{sum-x-j-ex} \left[\sum_{\substack{1\le i<j \\x_i(0)\ne 0}}(\prod_{k=i}^{j-1}a_{k+1,k}(0))\cdot x_i(0)\cdot t^{j-i}\right]+c_j\cdot\left[\sum_{\substack{j< i\le n \\x_i(0)\ne 0}}(\prod_{k=i}^{n-1}a_{k+1,k}(0))\cdot x_i(0)\cdot t^{n+j-i}\right],$$ where $c_j=a_{1,n}(0)\cdot\prod_{l=1}^{j-1}a_{l+1,l}(0)$.
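The iterated expansion $x_i(t)\approx(\prod_{j=2}^{i}a_{j,j-1}(0))\,x_1(0)\,t^{i-1}/(i-1)!$ can be checked numerically; the sketch below uses a constant-coefficient instance of \eqref{linearity-com-system} with $n=4$, $a_{i,i-1}=1$, $a_{ii}=0$, $a_{1n}=-1$ (these particular values are my choice for illustration), integrated by RK4 from $x(0)=(1,0,0,0)$:

```python
from math import factorial

def rhs(x):
    # constant-coefficient cyclic linear system, n = 4:
    # a_{i,i-1} = 1, a_{ii} = 0, a_{1n} = -1
    return (-x[3], x[0], x[1], x[2])

def rk4(f, x, h, steps):
    # classical Runge-Kutta integration of x' = f(x)
    for _ in range(steps):
        k1 = f(x)
        k2 = f(tuple(a + 0.5 * h * b for a, b in zip(x, k1)))
        k3 = f(tuple(a + 0.5 * h * b for a, b in zip(x, k2)))
        k4 = f(tuple(a + h * b for a, b in zip(x, k3)))
        x = tuple(a + h * (p + 2 * q + 2 * r + s) / 6
                  for a, p, q, r, s in zip(x, k1, k2, k3, k4))
    return x

t = 0.01
x = rk4(rhs, (1.0, 0.0, 0.0, 0.0), 0.001, 10)
# leading-order prediction x_i(t) ~ t**(i-1)/(i-1)!  (all coefficient
# products equal 1 here), so every component is positive for small t > 0
pred = [t ** i / factorial(i) for i in range(4)]
```

All components come out positive, matching the conclusion $N(x(t))=1$ for small $t>0$.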
Here, we set $\prod_{k=n}^{n-1}a_{k+1,k}(0)=1.$ Based on \eqref{sum-x-j-ex}, one can define an index set $J:=\{j:x_j(0)=0\}$. Note that $x(0)\ne 0$ and $x(0)\notin \mathcal{N}$. Then $J$ is a nonempty proper subset of $\{1,\cdots,n\}$. Now we partition $J$ into a finite union of pairwise disjoint integer segments $J_1,\cdots, J_m$ (mod $n$, e.g., $J_1=\{n-1,n,1,2\}$). For each $J_s$, one may write $J_s=\{j_s,j_s+1,\cdots, j_s+n_s\}$ (indices mod $n$), and then $j_s-1,j_s+n_s+1\notin J$. We first consider the case (i): $1\notin J$. In this case, for any $J_s$ and any $j\in J_s$, it follows from \eqref{sum-x-j-ex} that the sign ${\rm sgn}[x_j(t)]$ of $x_j(t)$ (choosing $|t|>0$ smaller, if necessary) is determined by $(\prod_{k=j_s-1}^{j-1}a_{k+1,k}(0))\cdot x_{j_s-1}(0)\cdot t^{j-j_s+1}$. Note that $1\notin J_s$. Then $\prod_{k=j_s-1}^{j-1}a_{k+1,k}(0)>0$, which implies that ${\rm sgn}[x_j(t)]$ is determined by $x_{j_s-1}(0)\cdot t^{j-j_s+1}$. As a consequence, if $t>0$ is sufficiently small, then ${\rm sgn}[x_j(t)]={\rm sgn}[x_{j_s-1}(0)]$ for any $j\in J_s$. This entails that, for $t>0$ small enough, $J_s$ contributes no increase to $N$ in the neighborhood of $x(0)$. By the arbitrariness of $J_s$, it then follows that $N(x(t))=N_m(x(0))$ for $t>0$ sufficiently small. On the other hand, if $t<0$ is sufficiently small, then ${\rm sgn}[x_j(t)]=(-1)^{j-j_s+1}\cdot{\rm sgn}[x_{j_s-1}(0)]$ for any $j\in J_s$. Therefore, for $t<0$ small enough, $J_s$ contributes the largest increase to $N$ in the neighborhood of $x(0)$. So $N(x(t))=N_M(x(0))$ for $t<0$ sufficiently small. Thus, \eqref{min-max} has been verified in this case. Now consider the case (ii): $1\in J$. In this case, there is a unique $s_*$ such that $1\in J_{s_*}$.
For any integer segment $J_s$ of $J$ satisfying $1\notin J_s$, one can repeat the same argument as in the previous paragraph and obtain that, for $t>0$ small enough, $J_s$ contributes no increase to $N$ in the neighborhood of $x(0)$; and for $t<0$ small enough, $J_s$ contributes the largest increase to $N$ in the neighborhood of $x(0)$. So, it suffices to consider $J_{s_*}$. We write $$J_{s_*}=\{j_{s_*},j_{s_*}+1,j_{s_*}+2,\cdots,1,\cdots, j_{s_*}+n_{s_*}\} \,\,\textnormal{(indices mod} \,n\textnormal{)}.$$ Following such notation, we define a subset $R\subset J_{s_*}$ as $R=\{j\in J_{s_*}|j=1,\text{ or }j\text{ is on ``the right side" of }1\}$. If $j\in R$ then $j<j_{s_*}-1\le n$. Together with $j_{s_*}-1\notin J$, it then follows from \eqref{sum-x-j-ex} that ${\rm sgn}[x_j(t)]$ is determined by the sign of $c_j\cdot(\prod_{k=j_{s_*}-1}^{n-1}a_{k+1,k}(0))\cdot x_{j_{s_*}-1}(0)\cdot t^{n+j-(j_{s_*}-1)}$ whenever $|t|$ is sufficiently small. This implies that, for any $|t|$ sufficiently small, $$\begin{aligned} \label{111} {\rm sgn}[x_j(t)]=\left\{ \begin{split} -{\rm sgn}[x_{j_{s_*}-1}(0)],\,\text{ if }j\in R\text{ and }t>0;\\ (-1)^{n+j-j_{s_*}}{\rm sgn}[x_{j_{s_*}-1}(0)],\,\text{ if }j\in R\text{ and }t<0. \end{split}\right.\end{aligned}$$ If $j\in J_{s_*}\setminus R$, then $1<j_{s_*}-1<j$. Again by \eqref{sum-x-j-ex}, we obtain that ${\rm sgn}[x_j(t)]$ is determined by the sign of $(\prod_{k=j_{s_*}-1}^{j-1}a_{k+1,k}(0))\cdot x_{j_{s_*}-1}(0)\cdot t^{j-j_{s_*}+1}$ whenever $|t|$ is small. Thus, $$\begin{aligned} \label{112} {\rm sgn}[x_j(t)]=\left\{ \begin{split} {\rm sgn}[x_{j_{s_*}-1}(0)],\,\text{ if }j\in J_{s_*}\setminus R\text{ and }t>0;\\ (-1)^{j-j_{s_*}+1}{\rm sgn}[x_{j_{s_*}-1}(0)],\,\text{ if }j\in J_{s_*}\setminus R\text{ and }t<0; \end{split}\right.\end{aligned}$$ for any $|t|$ sufficiently small.
Therefore, if $t>0$ is sufficiently small, then ${\rm sgn}[x_j(t)]={\rm sgn}[x_{j_{s_*}-1}(0)]$ for $j\in J_{s_*}\setminus R$, and ${\rm sgn}[x_j(t)]=-{\rm sgn}[x_{j_{s_*}-1}(0)]$ for $j\in R$. Noticing that $\delta_1=-1$ and $\delta_i=1 (2\leq i\leq n)$ in the definition of $N$, one obtains that, for $t>0$ sufficiently small, $J_{s_*}$ contributes no increase to $N$ in the neighborhood of $x(0)$. Similarly, by virtue of the expressions of ${\rm sgn}[x_j(t)]$ in \eqref{111}--\eqref{112}, $J_{s_*}$ contributes the largest increase to $N$ in the neighborhood of $x(0)$ for $t<0$ sufficiently small. As a consequence, for case (ii), we have also obtained that $N(x(t))=N_m(x(0))$ for $t>0$ sufficiently small and $N(x(t))=N_M(x(0))$ for $t<0$ sufficiently small. Thus, we have completed the proof. Motivated by [@Fusco1987], for any given integer $0\leq h\leq \frac{\tilde{n}+1}{2}$, let $K_h$ and $K^h$ be the sets $$\begin{aligned} \begin{split} K_h=\{0\}\cup\{x\in \mathbb{R}^n: N_M(x)\leq 2h-1\},\\ K^h=\{0\}\cup\{x\in \mathbb{R}^n:N_m(x)> 2h-1\}. \end{split}\end{aligned}$$ In particular, we set $K_0=\{0\}$ and $K^{0}=\mathbb{R}^n$. It is not difficult to see that $K_h\setminus \{0\}$ and $K^h\setminus \{0\}$ are open sets, $K_h \cap K^h=\{0\}$ and the closure $\overline{K_h\cup K^h}=\mathbb{R}^{n}$. Hereafter, we denote by $\bar{K}_h$ (resp. $\bar{K}^h$) the closure of $K_h$ (resp. $K^h$), and by ${\rm Int}\bar{K}_h$ the interior of $\bar{K}_h$. Since $0\in \overline{(K_h\setminus \{0\})}$, we have $\bar{K}_h=\overline{(K_h\setminus \{0\})}$. Recalling that $K_h\setminus \{0\}$ is an open set, we have ${\rm Int}\bar{K}_h=K_h\setminus \{0\}$. \[map-int\] Let $\Phi(t)$ be a fundamental matrix of \eqref{linearity-com-system} with $\Phi(0)=I$. Then for any $t>0$, one has $$\Phi(t)(\bar{K}_h\setminus\{0\})\subset {\rm Int}\bar{K}_h.$$ Suppose that there exist $x_0\in \bar{K}_h\setminus\{0\}$ and $t_0>0$ such that $\Phi(t_0)x_0\notin K_h\setminus \{0\}$. Then $N_M(\Phi(t_0)x_0)>2h-1$.
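Since $N$ depends only on the signs of the coordinates, $N_m$ and $N_M$ admit an exact finite computation: every sign assignment of the vanishing coordinates is realized by a small perturbation, while nonzero coordinates keep their signs. A minimal sketch of this observation (my own reformulation, not a procedure from the paper) checks the worked special case $x(0)=(x_1,0,\cdots,0)$, for which the proof above gives $N_m=1$ and $N_M=\tilde{n}$:

```python
from itertools import product

def cyclic_sign_count(s):
    # N evaluated on a sign vector: delta_1 = -1, delta_i = 1, x_0 = x_n
    n = len(s)
    delta = [-1] + [1] * (n - 1)
    return sum(1 for i in range(n) if delta[i] * s[i] * s[i - 1] < 0)

def Nm_NM(x):
    """Exact N_m(x), N_M(x): minimize/maximize N over all sign
    assignments of the zero coordinates of x (small perturbations
    realize every such assignment and nothing else)."""
    zeros = [i for i, xi in enumerate(x) if xi == 0]
    values = []
    for choice in product([-1, 1], repeat=len(zeros)):
        s = [(1 if xi > 0 else -1) if xi != 0 else 0 for xi in x]
        for idx, c in zip(zeros, choice):
            s[idx] = c
        values.append(cyclic_sign_count(s))
    return min(values), max(values)
```

In particular, $x\in\mathcal{N}$ exactly when the two returned values coincide.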
Since $\mathcal{N}$ is open and dense, one can find a sequence $x_n\in \mathcal{N} \cap (K_h\setminus \{0\})$ (which entails that $N(x_n)\leq 2h-1$) such that $x_n\rightarrow x_0$ as $n\rightarrow \infty$. On the other hand, by Lemma \[zero-property2\], one can choose $\epsilon_0>0$ small enough such that $t_0-\epsilon_0>0$ and $N(\Phi(t_0-\epsilon_0)x_0)=N_M(\Phi(t_0)x_0)>2h-1$. Since $\mathcal{N}$ is an open set and $N(\cdot)$ is continuous on $\mathcal{N}$, one has $\Phi(t_0-\epsilon_0)x_n\in \mathcal{N}$, and hence, $N(\Phi(t_0-\epsilon_0)x_n)=N(\Phi(t_0-\epsilon_0)x_0)>2h-1$ for $n$ sufficiently large, which contradicts the fact that $N(\Phi(t_0-\epsilon_0)x_n)\leq N(x_n)\leq 2h-1$. We have completed the proof. Based on Proposition \[map-int\], we give the following corollary, which is useful in the forthcoming section. \[solution-space\] Let $A(t)$ be the coefficient matrix of \eqref{linearity-com-system}. Then: - If $\Sigma_{0}\subset K_h$ is a linear subspace and $\Sigma_{t}$ is the image of $\Sigma_{0}$ at time $t$ under \eqref{linearity-com-system}, then ${\rm dim}\Sigma_t={\rm dim}\Sigma_0$ and $\Sigma_t\subset K_h$ for all $t\geq 0$. - If $\Sigma^{0}\subset K^h$ is a linear subspace and $\Sigma^{t}$ is the image of $\Sigma^{0}$ under \eqref{linearity-com-system}, then ${\rm dim}\Sigma^t={\rm dim}\Sigma^0$ and $\Sigma^t\subset K^h$ for all $t\leq 0$. We only prove (i); the other case is similar. It is easy to see that ${\rm dim}\Sigma_t=\text{dim } \Sigma_0$ for all $t\in \mathbb{R}$, by the standard solution theory of homogeneous linear differential equations. For any nonzero vector $x_0\in \Sigma_0$, by Proposition \[map-int\], $\Phi(t)x_0\in{\rm Int}\bar{K}_h=K_h\setminus\{0\}$ for all $t>0$, where $\Phi(t)$ is the solution operator of \eqref{linearity-com-system}. So $\Sigma_t\subset K_h$ for all $t\geq 0$. We now introduce the concept of a cone of rank $k$ (see [@Kra; @fusco1991; @San1]): [Let $k\ge 1$ be an integer.
A closed subset $K\subset \mathbb{R}^n$ is called [*a cone of rank $k$*]{} if for any $x\in K$ and $\lambda \in \mathbb{R}$, one has $\lambda x\in K$, and moreover $\max\{\mathrm{dim} W|W\text{ is a subspace of }\mathbb{R}^n\text{ and } W\subset K \}=k$.]{} [It is easy to see that a usual convex cone $C$ (in the sense of Hirsch [@HiSm]) defines the cone $K=C\cup (-C)$, which is of rank $1$.]{} \[dim-cone\] For each $h=1,\cdots,\frac{\tilde{n}-1}{2},$ $\bar{K}_h$ is a cone of rank $2h$. More precisely, let $V$ be a subspace of $\mathbb{R}^n$. Then $$d_h=\mathrm{max} \{\mathrm{dim} V|V\subset\bar{K}_h\}=2h.$$ Before proving this proposition, we need a technical lemma as follows. \[space-split\] Let $A$ be an $n\times n$ matrix of the following form $$A= \left( \begin{array}{cccc} 0 & & & -1 \\ 1 & 0 & & \\ & \ddots & \ddots & \\ & & 1 & 0 \\ \end{array} \right).$$ Then one has: - If $n$ is even, there exist $\frac{\tilde{n}+1}{2}$ invariant subspaces $E_k$ of $A$, with $\mathrm{dim}E_k=2$ for $k=1,\cdots,\frac{\tilde{n}+1}{2}$. Moreover, for any nonzero vector $\xi\in E_k$, one has $\xi\in \mathcal{N}$ and $N(\xi)=2k-1$. - If $n$ is odd, there exist $\frac{\tilde{n}+1}{2}$ invariant subspaces $E_k$ of $A$, with $\mathrm{dim}E_k=2$ for $k=1,\cdots,\frac{\tilde{n}-1}{2}$, and $\mathrm{dim}E_{\frac{\tilde{n}+1}{2}}=1$. Moreover, for any nonzero vector $\xi\in E_k$, one has $\xi\in \mathcal{N}$ and $N(\xi)=2k-1$. - Let $W_{i,j}=E_i\oplus \cdots\oplus E_j$, for $1\leq i\leq j\leq \frac{\tilde{n}+1}{2}$. Then for any nonzero vector $\xi\in W_{i,j}$, one has $$2i-1\leq N_m(\xi)\leq N_M(\xi)\leq 2j-1.$$ We only prove (i), because the proof of (ii) is similar. Since the characteristic polynomial of this matrix $A$ is $\lambda^n+1$, the eigenvalues are $\lambda_k=\cos\frac{(2k-1)\pi}{n}+i\sin\frac{(2k-1)\pi}{n}$, $k=1,\cdots,n$, and the corresponding eigenvectors are $\eta_k=(\lambda^{n-1}_k,\lambda^{n-2}_k,\cdots,1)^T$, $k=1,\cdots,n$.
Because $n$ is even, the eigenvalues form conjugate complex pairs. Let $E_k={\rm span}\{{\rm Re}\eta_k,{\rm Im}\eta_k\}$ for $k=1,2,\cdots,\frac{\tilde{n}+1}{2}$; then these spaces are invariant under $A$. Moreover, dim$E_k=2$ for $k=1,2,\cdots,\frac{\tilde{n}+1}{2}$. Clearly, ${\rm Re}\eta_k$ and ${\rm Im}\eta_k$ belong to $\mathcal{N}$ and $N({\rm Re}\eta_k)=N({\rm Im}\eta_k)=2k-1$ for $k=1,2,\cdots,\frac{\tilde{n}+1}{2}$. Given any $\xi\in E_k\setminus\{0\}$, the solution $x(t)$ of $\dot{x}=Ax$ with initial value $x(0)=\xi$ can be expressed as $$\label{repre-solution} x(t)=(c_kq_k(t)+\tilde{c}_k\tilde{q}_k(t))e^{\mu_k t},$$ where $\mu_k={\rm Re} \lambda_k$, and $q_k(t)$ and $\tilde{q}_k(t)$ are periodic functions with $q_k(0)={\rm Re}\eta_k$ and $\tilde{q}_k(0)={\rm Im}\eta_k$. By Lemma \[zero-property\](iii), there exist $T_0>0$ and $l,s\in \mathbb{N}$ such that $N(x(t))=l$ for $t>T_0$ and $N(x(t))=s$ for $t<-T_0$. Since $c_kq_k(t)+\tilde{c}_k\tilde{q}_k(t)$ is also a periodic function, we have $s=l$. Consequently, Lemma \[zero-property\](ii) implies that $x(t)\in\mathcal{N}$ and $N(x(t))=l$ for all $t\in \mathbb{R}$. In particular, $\xi\in \mathcal{N}$. By the arbitrariness of $\xi$, we have $E_k\setminus\{0\}\subset \mathcal{N}$. Recall that $N({\rm Re}\eta_k)=N({\rm Im}\eta_k)=2k-1$. Combined with the connectedness of $E_k\setminus\{0\}$, the continuity of $N$ on $\mathcal{N}$ then implies that $N(\xi)=2k-1$ for all $\xi\in E_k\setminus\{0\}$. For (iii), we again consider the case where $n$ is even; the other case is similar. Choose a nonzero vector $\xi\in W_{i,j}$; then $\xi={\Sigma}_{k=i}^j(c_k{\rm Re}\eta_k+\tilde{c}_k{\rm Im}\eta_k)$. Without loss of generality, we assume that $c_k\neq 0$ and $\tilde{c}_k\neq 0$ for $k=i,\cdots,j$.
As in (i), the solution $x(t)$ of $\dot{x}=Ax$ with initial value $x(0)=\xi$ can be represented in the following form $$x(t)={\Sigma}_{k=i}^j(c_kq_k(t)+\tilde{c}_k\tilde{q}_k(t))e^{\mu_k t},$$ where $\mu_k={\rm Re}\lambda_k$, and $q_k(t)$ and $\tilde{q}_k(t)$ are periodic functions with $q_k(0)={\rm Re}\eta_k$ and $\tilde{q}_k(0)={\rm Im}\eta_k$, for $k=i,\cdots,j$. Moreover, we note that $\mu_i>\cdots>\mu_j$. From Lemma \[zero-property\](iii), there exist $T_0>0$ and $l,h\in \mathbb{N}$ such that $N(x(t))=l$ for all $t\geq T_0$ and $N(x(t))=h$ for all $t\leq-T_0$. Since $q_k(t)$ and $\tilde{q}_k(t)$ are periodic for $k=i,\cdots,j$, there exist two sequences $t_m\rightarrow -\infty$ and $\tilde{t}_m\rightarrow \infty$ as $m\rightarrow \infty$ such that $e^{-\mu_jt_m}x(t_m)\rightarrow (c_j{\rm Re}\eta_j+\tilde{c}_j{\rm Im}\eta_j)$ and $e^{-\mu_i\tilde{t}_m}x(\tilde{t}_m)\rightarrow (c_i{\rm Re}\eta_i+\tilde{c}_i{\rm Im}\eta_i)$ as $m\rightarrow\infty$. By virtue of (i) of this lemma, it follows that $h=2j-1$ and $l=2i-1$. So, $2i-1=N(x(T_0))\leq N_m(x(0))\leq N_M(x(0))\leq N(x(-T_0))= 2j-1$. We have completed the proof. It is easy to see that $d_h\geq 2h$ from Lemma \[space-split\](iii), by choosing $i=1,j=h$. Suppose that $d_h>2h$; then there exists a subspace $V_1\subset \bar{K}_h$ with dim $V_1>2h$. Thus, one can choose at least $2h+1$ linearly independent column vectors $\xi_1,\cdots,\xi_{2h+1}\in V_1$. For $y=\Sigma_{i=1}^{2h+1}\gamma_i\xi_i$, since $B=(\xi_1,\cdots,\xi_{2h+1})$ is an $n\times {(2h+1)}$ matrix with ${\rm Rank}(B)=2h+1$, by choosing the $\gamma_i$ suitably we may obtain some $y$, $2h+1$ of whose components equal $1$ and $-1$ alternately. This then implies $N_m(y)\geq 2h+1$.
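The eigenstructure claimed in Lemma \[space-split\] is easy to verify numerically; the sketch below checks, for $n=4$, that $A\eta_k=\lambda_k\eta_k$ with $\lambda_k=e^{i(2k-1)\pi/n}$, and that $N=2k-1$ on a generic vector $\xi={\rm Re}\,\eta_k+0.3\,{\rm Im}\,\eta_k$ of $E_k$ (the coefficient $0.3$ is an arbitrary choice producing a vector in $\Lambda$):

```python
import cmath

def cyclic_matrix(n):
    # the matrix A of Lemma [space-split]: -1 in the upper-right corner,
    # 1 on the subdiagonal, zeros elsewhere
    A = [[0.0] * n for _ in range(n)]
    A[0][n - 1] = -1.0
    for i in range(1, n):
        A[i][i - 1] = 1.0
    return A

def eig_pair(n, k):
    # lambda_k = exp(i(2k-1)pi/n) and eta_k = (lambda^{n-1}, ..., lambda, 1)
    lam = cmath.exp(1j * (2 * k - 1) * cmath.pi / n)
    eta = [lam ** (n - 1 - i) for i in range(n)]
    return lam, eta

def eig_residual(n, k):
    # max-norm of A*eta_k - lambda_k*eta_k; should vanish up to round-off
    A = cyclic_matrix(n)
    lam, eta = eig_pair(n, k)
    Aeta = [sum(A[i][j] * eta[j] for j in range(n)) for i in range(n)]
    return max(abs(a - lam * e) for a, e in zip(Aeta, eta))

def lyapunov_N(x):
    # the function N, with delta_1 = -1, delta_i = 1 and x_0 = x_n
    delta = [-1] + [1] * (len(x) - 1)
    return sum(1 for i in range(len(x)) if delta[i] * x[i] * x[i - 1] < 0)
```

Note that ${\rm Re}\,\eta_k$ and ${\rm Im}\,\eta_k$ themselves have a vanishing coordinate for $n=4$, so they lie in $\mathcal{N}\setminus\Lambda$; the generic combination avoids this.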
On the other hand, since $y\in \bar{K}_h$ and $\mathcal{N}$ is open and dense, there exists a sequence $x_n\in \bar{K}_h\cap \mathcal{N}$ such that $N(x_n)\leq 2h-1$ and $x_n\rightarrow y$ as $n\rightarrow \infty$, which means $N_m(y)\leq 2h-1$, a contradiction. Thus, we have proved that $d_h=2h$. \[K-cone\] [By virtue of Proposition \[dim-cone\], we obtain that $\bar{K}_h$ (resp. $\bar{K}^h$), for $h=1,\cdots, \frac{\tilde{n}-1}{2}$, are cones with rank $\bar{K}_h=2h$ (resp. rank $\bar{K}^h=n-2h$). ]{} In order to generalize the Floquet theory in [@Mallet-Paret1990] for time-periodic cases to general time-dependent cases, we need the following generalized Perron-Frobenius Theorem (see e.g., [@fusco1991 Theorem 1]). \[perron-thm\] Let $K\subset \mathbb{R}^n$ be a cone of rank $d$. Assume that $L$ is a linear operator on $\mathbb{R}^n$ satisfying $L(K\setminus\{0\})\subset {\rm Int} K$. Then there exist (unique) subspaces $V_1$, $V_2$ such that - $V_1\cap V_2=\{0\}$, $\mathrm{dim}$ $V_1=d$, $\mathrm{dim}$ $V_2=n-d$, - $LV_j\subset V_j$, $j=1,2$, - $V_1\subset \{0\}\cup {\rm Int} K$, $V_2\cap K=\{0\}$. Moreover, if $\sigma_1(L)$ and $\sigma_2(L)$ are the spectra of $L$ restricted to $V_1$ and $V_2$, then between $\sigma_1(L)$ and $\sigma_2(L)$ there is a gap: $$\lambda\in \sigma_1(L),\ \mu\in\sigma_2(L)\Rightarrow |\lambda|>|\mu|.$$ Now we are ready to present the following proposition, which generalizes the Floquet theory obtained in [@Mallet-Paret1990] for time-periodic cases. \[rootspace-arra\] Let $\Phi(t)$ be a fundamental matrix of \eqref{linearity-com-system} with $\Phi(0)=I$.
Then for any fixed $t>0$, there exist subspaces $W_h$, $h=1,2,\cdots,\frac{\tilde{n}+1}{2}$, which are invariant with respect to $\Phi(t)$ and satisfy: $$\begin{aligned} {\rm dim} W_h=2,\quad h=1,\cdots,\frac{\tilde{n}-1}{2},\\ {\rm dim} W_{\frac{\tilde{n}+1}{2}}=\left\{ \begin{aligned} & 2 \quad \text{if}\ n=\tilde{n}+1\ \text{is even}, \\ & 1 \quad \text{if}\ n=\tilde{n}\ \text{is odd}, \\ \end{aligned} \right. \end{aligned}$$ and $$\mathbb{R}^n=W_1\oplus W_2\oplus\cdots \oplus W_{\frac{\tilde{n}+1}{2}}.$$ If $x\in W_i\setminus\{0\}$, then $x\in \mathcal{N}$ and $N(x)=2i-1$, for $i=1,\cdots,\frac{\tilde{n}+1}{2}$. If $x\neq 0$ and $ x\in W_h\oplus W_{h+1}\cdots \oplus W_k$, then $N_m(x)\geq 2h-1$ and $N_M(x)\leq 2k-1$. Moreover, if $\nu_i$ and $\mu_i$ are the minimum and the maximum moduli of the characteristic values of the restriction of $\Phi(t)$ to $W_i$, then $$\mu_1\geq\nu_1>\mu_2\geq\nu_2>\cdots>\mu_{\frac{\tilde{n}+1}{2}}\geq \nu_{\frac{\tilde{n}+1}{2}}.$$ For any fixed $t>0$, it follows from Proposition \[map-int\] and Remark \[K-cone\] that $\Phi(t)$ and $\bar{K}_h$ satisfy the assumptions in Lemma \[perron-thm\]. As a consequence, if we let $d_h=\text{max}\{\text{dim}V|V\ \text{a subspace},\ V\subset \bar{K}_h\}$, then there exist subspaces $V_h^1$, $V_h^2$ which are invariant under $\Phi(t)$, satisfying ${\rm dim}V_h^1=d_h$, ${\rm dim}V_h^2=n-d_h$, $\mathbb{R}^n=V_h^1\oplus V_h^2$, $V_h^1\subset \bar{K}_h$ and $V_h^2\cap\bar{K}_h=\{0\}$. Moreover, if $\sigma_h^1$ and $\sigma_h^2$ are the spectra of the restriction of $\Phi(t)$ to $V_h^1$ and $V_h^2$, then for any $\lambda^1\in \sigma_h^1$ and $\lambda^2\in \sigma_h^2$, one has $|\lambda^1|>|\lambda^2|$. Since $K_1\subset K_2\subset\cdots\subset K_{\frac{\tilde{n}+1}{2}}$, we have $V_1^1\subset V_2^1\subset\cdots\subset V_{\frac{\tilde{n}+1}{2}}^1$ and $V_1^2 \supset V_2^2\supset \cdots\supset V_{\frac{\tilde{n}+1}{2}}^2$.
Let $W_h=V_h^1\cap V_{h-1}^2$, for $h=1,\cdots,\frac{\tilde{n}+1}{2}$ (here $V_0^2=\mathbb{R}^n$). Then it is clear that all these $W_h$’s are invariant under $\Phi(t)$. Moreover, $$\text{dim} W_h=d_h-d_{h-1};\quad W_h\cap\bar{K}_{h-1}=\{0\}.$$ By Proposition \[dim-cone\], $d_h=2h$ for $h=1,\cdots,\frac{\tilde{n}-1}{2}$. It then follows that $\text{dim} W_h=2$ for $h=1,\cdots \frac{\tilde{n}-1}{2},$ and $\text{dim} W_{\frac{\tilde{n}+1}{2}}=2\textnormal{ or }\,1$ (according as $n$ is even or odd), and $$\mathbb{R}^n=W_1\oplus W_2\oplus\cdots\oplus W_{\frac{\tilde{n}+1}{2}}.$$ Note also that $W_h\oplus\cdots\oplus W_k\subset V_k^1\cap V_{h-1}^2$, $V_k^1\subset K_k$ and $V_{h-1}^2\cap\bar{K}_{h-1}=\{0\}$. Then for any nonzero vector $x\in W_h \oplus\cdots\oplus W_k$, one has $x\in K_{k}$ but $x\notin \bar{K}_{h-1}$ (here $\bar{K}_{0}=\{0\}$). So, $N_M(x)\leq 2k-1$ and $N_m(x)\geq 2h-1$. In particular, for $h=k$, $N_M(x)=N_m(x)=2h-1$. Finally, the fact that $W_h\subset V_h^1$, $W_{h+1}\subset V_h^2$ and $|\lambda^1|>|\lambda^2|$ whenever $\lambda^1\in \sigma_h^1$ and $\lambda^2\in \sigma_h^2$ implies that, if $\nu_h$ and $\mu_h$ are the minimum and maximum moduli of the characteristic values of $\Phi(t)|W_h$, then $\nu_h>\mu_{h+1}$ for $h=1,\cdots,\frac{\tilde{n}-1}{2}$. Thus, we have proved the proposition. Transversality ============== In this section, we will prove that the stable and unstable manifolds of two hyperbolic periodic solutions (or of a hyperbolic equilibrium and a hyperbolic periodic orbit) of \eqref{feedback-equation}--\eqref{feedback-condition} intersect transversely. Furthermore, we will point out that, under a certain condition, the stable and unstable manifolds of two hyperbolic equilibria also intersect transversely. Before we proceed, it is worth pointing out that a change of variables $x_i\rightarrow \mu_{i}x_i$, where the $\mu_i \in \{-1,1\}$ are appropriately chosen, yields an MCFS with negative feedback in which $\delta_1=-1$ and $\delta_i=1,2\leq i \leq n$.
Hereafter, we always assume that $\delta_1=-1$ and $\delta_i=1$ for $2\leq i \leq n$. Let $p(t)$ be an $\omega$-periodic solution of the system with $\omega>0$ and let $\Gamma$ be the orbit of $p(t)$. Consider the linearized equation of the system along $p(t)$: $$\dot{z}=Df(p(t))z,\quad t\in \mathbb{R},\quad z\in \mathbb{R}^n,$$ which is an $\omega$-periodic linear equation of the form above. $p(t)$ is called [*hyperbolic*]{} if none of its Floquet multipliers lies on the unit circle $\mathbb{S}^1\subset \mathbb{C}$ except the trivial multiplier $1$. Let $A\subset \mathbb{R}^n$ be a nonempty subset of $\mathbb{R}^n$; the distance from a point $x_0\in \mathbb{R}^n$ to $A$ is defined by $d(x_0,A)=\underset{x\in A}{{\rm inf}}{\lVert x_0-x\rVert}.$ We write $\varphi(t,x)$ for the solution of the system satisfying $\varphi(0,x)=x$. Now define the [*stable [(resp]{}. unstable[)]{} manifold*]{} $W^s(\Gamma)$ (${\rm resp}.\ W^u(\Gamma)$) of $\Gamma$ as $$\begin{split} &W^s(\Gamma)=\{x\in \mathbb{R}^n| \ \lim\limits_{t \to +\infty}{d(\varphi(t,x),\Gamma)}=0 \},\\ &W^u(\Gamma)=\{x\in \mathbb{R}^n| \ \lim\limits_{t \to +\infty}{d(\varphi(-t,x),\Gamma)}=0 \}. \end{split}$$ It is known that $W^s(\Gamma)$ and $W^u(\Gamma)$ are $C^1$-manifolds (see [@Chicon Chapter 1]). Two smooth submanifolds $M$ and $N$ of $\mathbb{R}^n$ are said to [*intersect transversely*]{} (written $M\pitchfork N$) if either $M\cap N=\emptyset$ or at each point $x\in M\cap N$ the tangent spaces $T_xM$ and $T_xN$ span $\mathbb{R}^n$. For brevity, we write $\varphi(t)=\varphi(t,x)$. Our main result in this section is the following \[transversality\] Let $\varphi(t)$ be a solution of the system which connects two hyperbolic periodic orbits $\Gamma^{-}$ and $\Gamma^{+}$. Then $$W^u(\Gamma^{-})\pitchfork W^s(\Gamma^{+}).$$ The proof of Theorem \[transversality\] can be broken into Propositions \[transversal1\] and \[transversal2\]. Before proving these propositions, we give some notation and useful lemmas.
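Numerically, hyperbolicity of a periodic solution is checked from the monodromy matrix $\Phi(\omega,0)$ of the linearized equation: the Floquet multipliers are its eigenvalues. The sketch below is only an illustration and is not part of the paper; the RK4 integrator and the toy $2\pi$-periodic coefficient matrix (shaped like a negative cyclic feedback system with $\delta_1=-1$) are our own assumptions.

```python
import numpy as np

def monodromy(A_of_t, omega, n, steps=2000):
    """Integrate Phi' = A(t) Phi, Phi(0) = I, over one period [0, omega]
    with classical RK4; returns the monodromy matrix Phi(omega, 0)."""
    Phi = np.eye(n)
    h = omega / steps
    t = 0.0
    for _ in range(steps):
        k1 = A_of_t(t) @ Phi
        k2 = A_of_t(t + h / 2) @ (Phi + h / 2 * k1)
        k3 = A_of_t(t + h / 2) @ (Phi + h / 2 * k2)
        k4 = A_of_t(t + h) @ (Phi + h * k3)
        Phi = Phi + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return Phi

# Toy 2*pi-periodic coefficient matrix with the cyclic feedback sign
# pattern delta_1 = -1, delta_2 = delta_3 = +1 (purely illustrative).
omega = 2 * np.pi
def A(t):
    a = 0.5 + 0.1 * np.cos(t)
    return np.array([[-1.0, 0.0, -a],
                     [a, -1.0, 0.0],
                     [0.0, a, -1.0]])

# Floquet multipliers = eigenvalues of the monodromy matrix; the toy
# system would count as "hyperbolic" if no multiplier lies on the unit circle.
multipliers = np.linalg.eigvals(monodromy(A, omega, 3))
print(sorted(abs(multipliers)))
```

For a genuine periodic orbit of the nonlinear system one would integrate $\dot{z}=Df(p(t))z$ along the orbit; the trivial multiplier $1$ coming from $\dot{p}(0)$ is then excluded when testing hyperbolicity.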
Hereafter, we let $Q=\{x|x=\varphi(t),\ t\in (-\infty,+\infty)\}$ with the initial value $\varphi(0)=x_0$, and let $\Gamma^{\pm}=\{x|x=p^{\pm}(t),\ t\in [0,\omega^{\pm})\}$ be the two hyperbolic periodic orbits, where $\omega^{\pm}>0$ is the minimum positive period of $p^{\pm}(t)$ in Theorem \[transversality\]. Since $\Gamma^{\pm}$ is hyperbolic, there exists a tubular neighborhood of $\Gamma^{\pm}$ and a $C^1$-fibration $\mathcal{F}^{\pm}$ (see e.g., [@Hirsch]), which is positively (or negatively) invariant under the flow of the system. The existence of such a foliation implies that $p^{\pm}(t)$ can be chosen so that: $$\label{limit-approach} \lim\limits_{t \to \pm \infty}{\|\varphi(t)-p^{\pm}(t)\|}=0.$$ \[linear-variation\] There exist $t_0>0$ and $h^+,h^-\in \{1,2,\cdots,\frac{\tilde{n}+1}{2}\}$, satisfying $h^+\le h^-$, such that $$N(\dot{\varphi}(t))=2h^+-1,\quad \text{for } t\geq t_0;\quad N(\dot{\varphi}(t))=2h^--1,\quad \text{for } t\leq -t_0.$$ Moreover, $N(\dot{p}^+(t))=2h^+-1$ and $N(\dot{p}^-(t))=2h^--1$, for all $t\in \mathbb{R}$. Clearly, $\dot{\varphi}(t)$ is a solution of the linear equation $\dot{y}=Df(\varphi(t))y$. Note that $Df(\varphi(t))$ is a coefficient matrix of the type above, so the existence of $h^+$, $h^-$ and $t_0$ is confirmed by Lemma \[zero-property\](iii). Because $\dot{p}^{\pm}(t)$ is a periodic solution of $\dot{y}=Df(p^{\pm}(t))y$, $N(\dot{p}^{\pm}(t))$ is well defined for all $t\in \mathbb{R}$ and independent of $t$. By \eqref{limit-approach}, we may assume that $\Gamma^{\pm}$ and $Q$ lie in a compact set $M\subset \mathbb{R}^n$. This, together with the uniform continuity of $f$ on $M$, implies that $$\lim\limits_{t \to \pm \infty}{\|\dot{\varphi}(t)-\dot{p}^{\pm}(t)\|}=0.$$ It then follows from the continuity of $N$ that $N(\dot{p}^{\pm}(t))=2h^{\pm}-1$. Henceforth, we let $\Phi^{\pm}(t,s)$ and $\Phi(t,s)$ $(t\ge s)$ be the solution operators of the linear equations $\dot{y}=Df(p^{\pm}(t))y$ and $\dot{y}=Df(\varphi(t))y$, respectively.
For brevity, we write $\Phi^{\pm}=\Phi^{\pm}(\omega^{\pm},0)$. Then $\dot{p}^{\pm}(0)$ is an eigenvector of $\Phi^{\pm}$ corresponding to the (simple) eigenvalue $1$. By virtue of Lemma \[rootspace-arra\], one may order the moduli of the characteristic values of $\Phi^{\pm}$ as $$\mu^{\pm}_1\geq\nu^{\pm}_1>\mu^{\pm}_2\geq\nu^{\pm}_2>\cdots>\mu^{\pm}_{\frac{\tilde{n}+1}{2}}\geq \nu^{\pm}_{\frac{\tilde{n}+1}{2}},$$ and hence, $$\label{spectrum} \nu^-_{h^--1}>1,\quad \mu^+_{h^++1}<1,$$ where $h^+$ and $h^-$ are defined in Lemma \[linear-variation\] and satisfy $h^+\leq h^-$. In the following, we consider the cases (i) $h^+<h^-$ and (ii) $h^+=h^-$ separately. \[transversal1\] Let $\Gamma^+,\ \Gamma^-,$ and $\varphi$ be defined as in Theorem \[transversality\]. If $h^+<h^-$, then $$W^u(\Gamma^-)\pitchfork W^s(\Gamma^+).$$ By Lemma \[rootspace-arra\], if we let $\Sigma^-$ be the invariant subspace of $\Phi^-$ defined as $\Sigma^-=W^-_1\oplus\cdots \oplus W^-_{h^--1}$, then \eqref{spectrum} implies that $\Sigma^-\subset T_{p^-(0)}W^u(\Gamma^-)$. Moreover, $\Sigma^-\subset K_{h^--1}$ and ${\rm dim}\,\Sigma^-=2h^--2$. Note that $W^u(\Gamma^-)$ is a smooth manifold and $K_{h^--1}\setminus\{0\}$ is an open set. Then, for a sufficiently large positive integer $j$, there is a $2h^--2$ dimensional subspace $\tilde{\Sigma}^-\subset T_{\varphi(-j\omega^-)}W^u(\Gamma^-)\cap K_{h^--1}$. Now let $\tilde{\Sigma}^-_0$ be the image of $\tilde{\Sigma}^-$ under $\Phi(0,-j\omega^-)$; then $\tilde{\Sigma}^-_0$ is a linear subspace of $\mathbb{R}^n$. It then follows from Corollary \[solution-space\] (i) that $$\text{dim } \tilde{\Sigma}^-_0=2h^--2 \,\, \text{ and } \,\, \tilde{\Sigma}^-_0\subset T_{\varphi(0)}W^u(\Gamma^-)\cap K_{h^--1}.$$ On the other hand, let $\Sigma^+=W^+_{h^-}\oplus\cdots\oplus W^+_{\frac{\tilde{n}+1}{2}}$ be the corresponding invariant subspace of $\Phi^+$. Then, by ${\mu}^+_{h^++1}<1$ and $h^+<h^-$, we have $\Sigma^+ \subset T_{p^+(0)}W^s(\Gamma^+)$.
Arguing as above, one can find a subspace $\tilde{\Sigma}^+_0$ of $\mathbb{R}^n$ such that $$\tilde{\Sigma}^+_0 \subset T_{\varphi(0)}W^s(\Gamma^+)\cap K^{h^--1}\,\, \text{ with } \,\, \text{dim }\tilde{\Sigma}^+_0=n-2h^-+2.$$ Recall that $K_{h^--1}\cap K^{h^--1}=\{0\}$; then $\tilde{\Sigma}^+_0\cap \tilde{\Sigma}^-_0=\{0\}$. Combined with the fact that $\text{dim }\tilde{\Sigma}^+_0+\text{dim }\tilde{\Sigma}^-_0=n$, this yields $\tilde{\Sigma}^+_0\oplus \tilde{\Sigma}^-_0=\mathbb{R}^n$. This completes the proof. Now we consider the second case, that is, $h^+=h^-$. Motivated by [@fusco1990], we first give the following technical lemma. \[zero-constant\] Let $\Omega=\Gamma^{+}\cup Q\cup\Gamma^{-}$. If $h^+=h^-=h$ in Lemma \[linear-variation\], then one has $$y-x\in \mathcal{N} \quad{\rm and}\ N(y-x)=2h-1$$ for any distinct $x,y\in \Omega$. Choose $x,y\in \Omega$ with $x\neq y$. We consider the following three cases. Case (i). If $x,\ y\in \Gamma^+$, then by the definition of $\Gamma^+$ there exist $r,\ s\in[0,\omega^+)$ with $r\neq s$ such that $x=p^+(r)$, $y=p^+(s)$. Let $q^+(t)=p^+(s+t)-p^+(r+t)$; then $q^+(t)$ is periodic and satisfies a linear system of the above type with $$a_{ij}(t)=\int_{0}^{1}\frac{\partial f_i}{\partial x_j}(u_{i-1}(l,t),u_i(l,t))dl,$$ where $u_j(l,t)=lp_j^+(s+t)+(1-l)p_j^+(r+t)$, $j=i-1,i$. Here we write $a_{10}(t)=a_{1n}(t)$ and $x_0=x_n$. So $q^+(t)\in \mathcal{N}$ for all $t\in \mathbb{R}$; in particular, $q^+(0)=y-x\in \mathcal{N}$. Since $\Gamma^+\times \Gamma^+$ is homeomorphic to $S^1\times S^1$, the set $(\Gamma^+\times \Gamma^+)\setminus \Delta$, where $\Delta=\{(x,x)|x\in \Gamma^+\}$, is connected. Since the map $(\Gamma^+\times \Gamma^+)\setminus \Delta\rightarrow\mathbb{R}^n;\ (x,y)\mapsto y-x$ is continuous, $M^+=\{y-x|x,y\in\Gamma^+,y\neq x\}$ is also connected. By the continuity of $N$ and the connectivity of $M^+$, $N$ is constant on $M^+$.
Note also that $$y-x=p^+(s)-p^+(r)=(s-r)\dot{p}^+(r)+o(s-r),$$ so for $|s-r|$ (hence ${\lVert y-x\rVert}$) sufficiently small, one has $N(y-x)=N(\dot{p}^+(r))$. Hence, by Lemma \[linear-variation\], $N(y-x)=2h-1$ for such $x$ and $y$, which implies that $N=2h-1$ on $M^+$. For $x,y\in \Gamma^-$, the same argument yields that $M^-=\{y-x|x,y\in\Gamma^-,y\neq x\}\subset\mathcal{N}$ and $N=2h-1$ on $M^-$. Case (ii). If $x,y\in Q$, then there exist $r,s\in (-\infty,+\infty)$ with $r\neq s$ such that $x=\varphi(r)$ and $y=\varphi(s)$. Let $q(t)=\varphi(s+t)-\varphi(r+t)$; then it follows from \eqref{limit-approach} that $$\label{approach-differnece} \lim\limits_{t \to \pm \infty}{\|q(t)-q^{\pm}(t)\|}=0,$$ where $q^{\pm}(t)=p^{\pm}(s+t)-p^{\pm}(r+t)$. If $|r-s|$ is not a multiple of $\omega^+$ or $\omega^-$, then by Lemma \[zero-property\](iii), one has $q(t)\in \mathcal{N}$ for $|t|$ large enough. Moreover, by case (i) we already know that $N(q^{\pm}(t))=2h-1$. So, \eqref{approach-differnece} implies that $N(q(t))=2h-1$ for all $t\in \mathbb{R}$. In particular, $N(y-x)=N(q(0))=2h-1$. If $|r-s|=k\omega^+$ for some positive integer $k$, we also claim that $q(0)=y-x\in \mathcal{N}$. For otherwise, it follows from Lemma \[zero-property\] (ii) that $N(q(-\varepsilon))>N(q(\varepsilon))$ for all $\varepsilon>0$ small, and hence, $$\label{q+-q-com} \textnormal{ either } N(q(-\varepsilon))\neq 2h-1, \textnormal{ or } N(q(\varepsilon))\neq 2h-1.$$ On the other hand, one can choose sequences $q^{\pm}_k=\varphi(s\pm \varepsilon+\frac{1}{k})-\varphi(r\pm \varepsilon)$. By the statement in the previous paragraph, one obtains $q^{\pm}_k\in \mathcal{N}$ and $N(q^{\pm}_k)=2h-1$ for $k$ sufficiently large, and $q^{\pm}_k\rightarrow q(\pm\varepsilon)$ as $k\rightarrow \infty$, contradicting \eqref{q+-q-com}. Thus, $q(0)\in \mathcal{N}$. Moreover, choosing $\tilde{q}_k=\varphi(s+\frac{1}{k})-\varphi(r)$, $k=1,2,\cdots$, we again obtain $\tilde{q}_k\rightarrow q(0)$ and $N(\tilde{q}_k)=2h-1$.
Consequently, $N(y-x)=N(q(0))=2h-1$. Case (iii). For general $x,y\in \Omega$: if $y-x\in \mathcal{N}$, one can choose sequences $y_n,x_n\in Q$ approaching $y$ and $x$, so, by case (ii), $N(y-x)=N(y_n-x_n)=2h-1$. If $y-x\notin \mathcal{N}$, then there always exist $\bar{x},\bar{y}\in \Omega$ with $\|\bar{x}-x\|$ and $\|\bar{y}-y\|$ sufficiently small such that $\bar{y}-\bar{x}\in \mathcal{N}$ and $N(\bar{y}-\bar{x})\neq 2h-1$, which contradicts the fact that $N(\bar{y}-\bar{x})= 2h-1$. We have proved this lemma. Now we finish the proof of Theorem \[transversality\] by proving the following Proposition \[transversal2\]. \[transversal2\] Let $\Gamma^+,\ \Gamma^-,$ and $\varphi$ be defined as in Theorem \[transversality\]. If $h^+=h^-=h$, then $$W^u(\Gamma^-)\pitchfork W^s(\Gamma^+).$$ Choose a subsequence $\{t_k\}\subset \{-l\omega^-\}_{l=1}^\infty$ and let $w^k=\frac{\varphi(t_k)-p^-(0)}{\|\varphi(t_k)-p^-(0)\|}$ for $k=1,2,\cdots$. Without loss of generality, we may assume that $w^k$ converges to $w$ as $k\to \infty$. Now, we write $a^{(k)}_{ij}(t)=\int_{0}^{1}\frac{\partial f_i}{\partial x_j}(u^{(k)}_{i-1}(l,t),u^{(k)}_i(l,t))dl$, where $u_j^{(k)}(l,t)=l\varphi_j(t+t_k)+(1-l)p_j^-(t)$, $j=i-1,i$. By \eqref{limit-approach}, $\varphi(t)$ and $p^{\pm}(t)$ are bounded and uniformly continuous on $\mathbb{R}$, and hence, the sequence of matrix-valued functions $A^{(k)}(t)=(a^{(k)}_{ij}(t))$ is equicontinuous and uniformly bounded. By the Arzelà-Ascoli theorem, there is a subsequence of $A^{(k)}(t)$, still denoted by $A^{(k)}(t)$, which converges to $Df(p^-(t))$ uniformly for $t$ on any compact interval. Let $\phi^{(k)}(t)=\frac{\varphi(t+t_k)-p^-(t)}{\|\varphi(t_k)-p^-(0)\|}$. Then, by a standard result in the theory of ordinary differential equations [@Hale Lemma 3.1, Chapter I], $\phi^{(*)}(t)=\lim\limits_{k \to \infty}{\phi^{(k)}(t)}$, uniformly for $t$ on any compact interval, is a solution of $\dot{z}=Df(p^-(t))z$ with $\phi^{(*)}(0)=w$.
We claim that $\phi^{(*)}(t)\in \mathcal{N}$ and $N(\phi^{(*)}(t))=2h-1$ for all $t\in \mathbb{R}$. Indeed, by Lemma \[zero-property\](iii), one can find a $t_0>0$ such that $\phi^{(*)}(t)\in \mathcal{N}$ and $N(\phi^{(*)}(t))=N_1$ (resp. $N_2$) for all $t\geq t_0$ (resp. $t\leq -t_0$). Fix such a $t_0$; it follows from the continuity of $N$ that $N(\phi^{(k)}(t_0))=N_1$ and $N(\phi^{(k)}(-t_0))=N_2$ for all $k$ sufficiently large. By virtue of Lemma \[zero-constant\], we obtain that $N_1=N_2=2h-1$, and hence $\phi^{(*)}(t)\in \mathcal{N}$ and $N(\phi^{(*)}(t))=2h-1$ for all $t\in \mathbb{R}$. Thus we have proved the claim. Noticing that $w^k=\phi^{(k)}(0)$, one has $w=\phi^{(*)}(0)$. Hence, the claim implies that $N(w)=2h-1$. Thus, by Lemma \[rootspace-arra\], we obtain that $w\in W^-_{h}$. Since $\varphi(t_k)\rightarrow p^-(0)$ and $\varphi(t_k)\in \mathcal{F}^-_{p^-(0)}$ for $k$ sufficiently large, $w$ is tangent to the fiber $\mathcal{F}^-_{p^-(0)}$ at $p^-(0)$. So, $w$ is linearly independent of $\dot{p}^-(0)$, and hence, $W^-_{h}=\text{span}\{w,\ \dot{p}^-(0)\}$. Moreover, since $\varphi(t_k), p^-(0)\in W^u(\Gamma^-)$, we have $w\in T_{p^-(0)}W^u(\Gamma^-)$ and $T_{p^-(0)}W^u(\Gamma^-)\supseteq W^-_{1}\oplus\cdots\oplus W^-_{h}$. On the other hand, $T_{p^-(0)}W^u(\Gamma^-)\subseteq W^-_{1}\oplus\cdots\oplus W^-_{h}$. Hence $T_{p^-(0)}W^u(\Gamma^-)= W^-_{1}\oplus\cdots\oplus W^-_{h}$. Now let $\Sigma^-=W^-_1\oplus\cdots \oplus W^-_{h}$ and $\Sigma^+=W^+_{h+1}\oplus\cdots \oplus W^+_{\frac{\tilde{n}+1}{2}}$. Recall that $\Sigma^+ \subset T_{p^+(0)}W^s(\Gamma^+)$. Then, arguing as in Proposition \[transversal1\], one obtains the transversality, which completes the proof. Now we consider the case where there is an orbit $\varphi(t)$ connecting a hyperbolic equilibrium and a hyperbolic periodic orbit, or two hyperbolic equilibria.
An equilibrium $e$ of the system is called [*hyperbolic*]{} if $Df(e)$ has no eigenvalue with zero real part. Denote by $W^s(e)$ and $W^u(e)$ the [*stable*]{} and [*unstable*]{} manifolds of $e$, respectively. Then, we have: \[hyperbolic-fixed\] Let $\varphi(t)$ be a solution of the system. Assume that $\varphi(t)$ connects two hyperbolic critical points $e^+,\ e^-$. Then: $${\rm dim}W^u(e^+)\leq {\rm dim}W^u(e^-).$$ In particular, if ${\rm dim}W^u(e^+)< {\rm dim}W^u(e^-)$, then $ W^s(e^+)\pitchfork W^u(e^-).$ By Lemma \[zero-property\](iii), there exist $h^+,\ h^-\in \{1,2,\cdots,\frac{\tilde{n}+1}{2}\}$ with $h^+\leq h^-$, and some $t_0>0$, such that $N(\dot{\varphi}(t))=2h^+-1$ (resp. $N(\dot{\varphi}(t))=2h^--1$) for all $t\ge t_0$ (resp. $t\le -t_0$). It follows from Lemma \[rootspace-arra\] that there are $\frac{\tilde{n}+1}{2}$ invariant subspaces $W^+_i,\ i=1,\cdots,\frac{\tilde{n}+1}{2}$, of the matrix $\exp\{Df(e^+)\}$, and the moduli of the corresponding eigenvalues satisfy $\mu_1^+\geq\nu_1^+>\mu_2^+\geq\nu_2^+>\cdots>\mu_{\frac{\tilde{n}+1}{2}}^+\geq \nu_{\frac{\tilde{n}+1}{2}}^+$. Let $m$ be the smallest integer such that $\nu_m^+<1$; then $$T_{e^+}W^s(e^+)\subset W^+_{m}\oplus\cdots\oplus W^+_{\frac{\tilde{n}+1}{2}}\subset K^{m}.$$ Clearly, $\lim\limits_{t \to \infty}{\varphi(t)}=e^+$ implies that $\dot{\varphi}(t)\in T_{\varphi(t)}W^s(e^+)$. Since $W^s(e^+)$ is a $C^1$ manifold and $K^{m}\setminus\{0\}$ is an open set, one obtains that $\dot{\varphi}(t)\in T_{\varphi(t)}W^s(e^+)\subset K^{m}$ for all $t>0$ sufficiently large. Recall that $N(\dot{\varphi}(t))=2h^+-1$ for all $t\ge t_0$. Then $h^+\ge m$. Note also that ${\rm dim} W^u(e^+)\leq 2m-1$. It follows that ${\rm dim}W^u(e^+)\leq 2h^+-1$. A similar argument with respect to $e^-$ yields that ${\rm dim} W^u(e^-)\geq 2h^--1$. As a consequence, $${\rm dim}W^u(e^+)\leq 2h^+-1\le 2h^--1\le {\rm dim}W^u(e^-).$$ Let $m^+=\text{dim}W^u(e^+)$ and $m^-=\text{dim}W^u(e^-)$.
If $m^+< m^-$, then one can replace $h^-$ and $h^+$ by $[\frac{m^-+1}{2}]$ and $[\frac{m^+}{2}]$ in Proposition \[transversal1\]. Note that $h^+<h^-$ in this case. Then one can repeat the proof of Proposition \[transversal1\] to obtain that $W^s(e^+)\pitchfork W^u(e^-)$. \[general-trans\] Let $\varphi(t)$ be a solution of the system. Assume that $\varphi(t)$ connects two hyperbolic critical elements $\gamma^+,\ \gamma^-$ (a fixed point or a periodic orbit). Then $$W^u(\gamma^-)\pitchfork W^s(\gamma^+)$$ provided one of the following conditions holds: One of the two hyperbolic critical elements is a periodic orbit and the other is a fixed point. $\gamma^+$ and $\gamma^-$ are fixed points and, moreover, ${\rm dim}\, W^u(\gamma^+)<{\rm dim}\,W^u(\gamma^-)$. For (i), without loss of generality, we assume that $\gamma^+$ is an equilibrium and denote it by $e^+$. Then, from Lemma \[hyperbolic-fixed\], we have ${\rm dim}\,W^s(e^+)\geq n-2h^++1$, which means that $W^+_{h^++1}\oplus\cdots\oplus W^+_{\frac{\tilde{n}+1}{2}}\subset T_{e^+}W^s(e^+)$. Let $\Sigma^-=W^-_{1}\oplus\cdots\oplus W^-_{h^+}$ and $\Sigma^+=W^+_{h^++1}\oplus\cdots\oplus W^+_{\frac{\tilde{n}+1}{2}}$; then, arguing as in Proposition \[transversal1\], we have $W^u(\gamma^-)\pitchfork W^s(\gamma^+)$. For (ii), see Lemma \[hyperbolic-fixed\]. S. B. Angenent, The Morse-Smale property for a semi-linear parabolic equation, *J. Differential Equations* **62** (1986), 427–442. M. X. Chen, X. Y. Chen and J. K. Hale, Structural stability for time-periodic one-dimensional parabolic equations, *J. Differential Equations* **96** (1992), 355–418. C. Chicone, Ordinary differential equations with applications, Springer, New York (2006). R. Czaja and C. Rocha, Transversality in scalar reaction-diffusion equations on a circle, *J. Differential Equations* **245** (2008), 692–721. M. B. Elowitz and S. Leibler, A synthetic oscillatory network of transcriptional regulators, *Nature* **403** (2000), 335–338. J. E. Ferrell, T. Y. Tsai and Q.
Yang, Modeling the cell cycle: why do certain circuits oscillate? *Cell* **144** (2011), 874–885. A. Fraser and J. Tiwari, Genetic feedback-repression. II. Cyclic genetic systems, *J. Theor. Biol.* **47** (1974), 397–412. E. Fung et al., A synthetic gene-metabolic oscillator, *Nature* **435** (2005), 118–122. G. Fusco and W. Oliva, Jacobi matrices and transversality, *Proc. Roy. Soc. Edinburgh Sect. A* **109** (1988), 231–243. G. Fusco and W. Oliva, Transversality between invariant manifolds of periodic orbits for a class of monotone dynamical systems, *J. Dynam. Differential Equations* **2** (1990), 1–17. G. Fusco and W. Oliva, A Perron theorem for the existence of invariant subspaces, *Ann. Mat. Pura Appl. (4)* **160** (1991), 63–76. B. C. Goodwin, Oscillatory behavior in enzymatic control processes, *Adv. Enzyme Regul.* **3** (1965), 425–438. J. Hale, Ordinary differential equations, Krieger Publishing Company (1980). S. P. Hastings, J. Tyson and D. Webster, Existence of periodic solutions for negative feedback cellular control systems, *J. Differential Equations* **25** (1977), 39–64. D. B. Henry, Some infinite dimensional Morse-Smale systems defined by parabolic partial differential equations, *J. Differential Equations* **59** (1985), 165–205. M. W. Hirsch and H. Smith, Monotone dynamical systems, in: Handbook of Differential Equations: Ordinary Differential Equations, vol. 2, Elsevier, Amsterdam (2005), 239–358. M. W. Hirsch, J. Palis, C. Pugh and M. Shub, Neighborhoods of hyperbolic sets, *Invent. Math.* **9** (1970), 121–134. O. A. Igoshin, A. Goldbeter, D. Kaiser and G. Oster, A biochemical oscillator explains several aspects of Myxococcus xanthus behavior during development, *Proc. Natl. Acad. Sci. U.S.A.* **101** (2004), 15760–15765. R. Joly and G. Raugel, Generic hyperbolicity of equilibria and periodic orbits of the parabolic equation on the circle, *Trans. Amer. Math. Soc.* **362** (2010), 5189–5211. R. Joly and G.
Raugel, Generic Morse-Smale property for the parabolic equation on the circle, *Ann. Inst. H. Poincaré Anal. Non Linéaire* **27** (2010), 1397–1440. M. A. Krasnoselskii, J. A. Lifschits and A. V. Sobolev, Positive linear systems, Heldermann Verlag, Berlin (1989). J. Mallet-Paret and H. Smith, The Poincaré-Bendixson theorem for monotone cyclic feedback systems, *J. Dynam. Differential Equations* **2** (1990), 367–421. J. Mallet-Paret and G. Sell, Systems of differential delay equations: Floquet multipliers and discrete Lyapunov functions, *J. Differential Equations* **125** (1996), 385–440. S. Müller, J. Hofbauer, L. Endler, C. Flamm, S. Widder and P. Schuster, A generalized model of the repressilator, *J. Math. Biol.* **53** (2006), 905–937. J. Palis and W. de Melo, Geometric theory of dynamical systems: An introduction, Springer-Verlag, Berlin (1982). H. Smith, Monotone Dynamical Systems, Amer. Math. Soc., Providence (1995). L. A. Sanchez, Cones of rank 2 and the Poincaré-Bendixson property for a new class of monotone systems, *J. Differential Equations* **216** (2009), 1170–1190. L. A. Sanchez, Existence of periodic orbits for high-dimensional autonomous systems, *J. Math. Anal. Appl.* **363** (2010), 409–418. A. Tonnelier, Cyclic negative feedback systems: what is the chance of oscillation? *Bull. Math. Biol.* **76** (2014), 1155–1193. T. Y. Tsai, Y. S. Choi, W. Ma, J. R. Pomerening, C. Tang and J. E. Ferrell, Robust, tunable biological oscillations from interlinked positive and negative feedback loops, *Science* **321** (2008), 126–129. [^1]: Partially supported by NSF of China No. 11371338 and 91130016.
--- author: - 'A. Gozar' - 'G. Blumberg' title: Collective Spin and Charge Excitations in Quantum Spin Ladders --- Sr$_{14}$Cu$_{24}$O$_{41}$: the Structure and General Properties {#sec:1} ====================================== In 1988, materials research focused on the study of high temperature superconducting copper-oxides brought about new phases of Cu-O based systems, the two-leg spin-ladders (2LL’s), with the general formula (A$_{1-x}$A’$_{x}$)$_{14}$Cu$_{24}$O$_{41}$, where A is an alkaline earth metal and A’ a trivalent (transition or lanthanoid) metal [@McCarronMRB88; @SiegristMRB88]. There were well-founded hopes that these materials could provide useful insight into the unresolved problems posed by the 2D cuprates [@DagottoScience96; @DagottoRPP99], and from this perspective two main reasons triggered the interest of the scientific community. One of them was based on a number of physical properties that are common to both ladders and high T$_{c}$’s. These include the presence of similar Cu-O-Cu antiferromagnetic (AF) correlations, which give rise to a finite spin gap and were predicted to generate $d$-wave like pairing of doped carriers [@DagottoPRB92RiceEL93], the evidence for ’pseudogap’ phenomena in optical absorption spectra [@OsafunePRL99] and, most importantly, the discovery of superconductivity under pressure evolving with hole doping in the AF environment [@UeharaJPSJ96; @MaekawaNature96]. The second reason resides in the structural similarities, more precisely the fact that one can imagine building the 2D square Cu-O lattice by gradually increasing the coupling between individual 2LL’s [@SachdevScience00], the simplicity of the latter making them more tractable for theoretical analysis. The unit cell of Sr$_{14}$Cu$_{24}$O$_{41}$ contains four formula units, 316 atoms in all, this large number of atoms being due to the presence of two nearly commensurate substructures: the CuO$_{2}$ chains and the Cu$_{2}$O$_{3}$ 2LL’s.
A better understanding of the two interacting blocks can be achieved by decomposing the chemical formula into (Sr$_{2}$Cu$_{2}$O$_{3}$)$_{7}$(CuO$_{2}$)$_{10}$: planes of CuO$_{2}$ chains are stacked alternately with planes of Cu$_{2}$O$_{3}$ ladders, and these are separated by Sr buffer layers, see Fig. \[f11\]. The lattice constants of the individual sub-systems satisfy the approximate relation $7\ c_{ladder} \approx 10\ c_{chain}$. The $b$-axis is perpendicular to the Cu-O layers which define the $(ac)$ plane, the $c$-axis being along the ladder/chain direction. A valence counting shows that Sr$^{2+}_{14}$Cu$_{24}$O$^{2-}_{41}$ is intrinsically doped, the average valence per Cu atom being $+ 2.25$. Optical conductivity [@OsafunePRL97], X-ray absorption [@NuckerPRB00], $dc$ resistivity and magnetic susceptibility [@KatoPhysicaC96] measurements, as well as evaluations of the Madelung potential [@MizunoPhysicaC97] and valence-bond-sums [@KatoPhysicaC96], support the idea that in this compound the holes reside mainly in the chain structures and that the isovalent Ca substitution for Sr in Sr$_{14-x}$Ca$_{x}$Cu$_{24}$O$_{41}$ induces a transfer of holes into the more conductive ladders. A relatively large change of the ladder carrier density due to Ca substitution, from 0.07 hole per Cu for $x = 0$ to about 0.2 for $x = 11$, was inferred from low energy optical spectral weight transfer [@OsafunePRL97], but X-ray absorption [@NuckerPRB00], while still supporting the hole migration scenario, is in favor of a less pronounced hole transfer. On the other hand, La$^{3+}$ and Y$^{3+}$ substitutions for Sr decrease the total hole concentration, down to compounds containing no holes per formula unit. As a result, the ladder systems provide the opportunity to study not only magnetism in low dimensional quantum systems, like undoped ladders, but also competing ground states and carrier dynamics in an antiferromagnetic environment.
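The decomposition and valence counting above are simple bookkeeping; as a quick sanity check (assuming fully ionic Sr$^{2+}$ and O$^{2-}$):

```python
# (Sr2Cu2O3)7 (CuO2)10 must reproduce the Sr14Cu24O41 formula unit:
sr, cu, o = 7 * 2, 7 * 2 + 10 * 1, 7 * 3 + 10 * 2
assert (sr, cu, o) == (14, 24, 41)

# Charge neutrality, 14*(+2) + 24*v_Cu + 41*(-2) = 0, gives the average
# Cu valence and the intrinsic hole count per formula unit:
v_cu = (41 * 2 - 14 * 2) / 24
holes_per_fu = 24 * (v_cu - 2)
print(v_cu, holes_per_fu)  # 2.25 6.0
```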
Data interpretation, encumbered by the presence of two interacting subsystems in the crystals, is helped by experimental realizations of other related compounds like SrCu$_{2}$O$_{3}$, which contains only 2LL planes (Fig. \[f11\]c), or Sr$_{2}$CuO$_{3}$ and SrCuO$_{2}$, which incorporate only quasi-1D Cu-O chain units with a similar coordination as in Fig. \[f11\]b. Unfortunately, doping in these latter systems has not been achieved so far. Ca substitution in Sr$_{14-x}$Ca$_{x}$Cu$_{24}$O$_{41}$ has an important impact on the transport properties because of the chain-ladder hole transfer. Indeed, while Sr$_{14}$Cu$_{24}$O$_{41}$ is an insulator showing an activation gap $\Delta \approx 2100$ K (180 meV), a crossover from insulating to metallic conduction at high temperatures takes place around $x = 11$, and for $x = 12$ the $c$-axis $dc$ resistivity has a minimum around 70 K separating quasi-linear metallic (above T = 70 K) and highly insulating behavior at low temperatures [@OsafunePRL99]. At higher Ca concentrations superconductivity under pressure has been observed; for example, a T$_{c}$ of 12 K under a pressure P = 3 GPa was found in Sr$_{14-x}$Ca$_{x}$Cu$_{24}$O$_{41}$ with x = 13.6 [@UeharaJPSJ96]. These properties, many of them common also to the 2D superconducting cuprates, underscore the potential value of the ladder systems for the understanding of superconductivity and also for the problem of identifying possibly competing order parameters in doped Mott-Hubbard systems. The plan for this chapter is to present the magnetic properties of $S = 1/2$ 2LL’s along with our Raman scattering data on the two-magnon (2M) excitation in Sr$_{14}$Cu$_{24}$O$_{41}$, showing its polarization, resonance and relaxation properties. This is followed by the analysis of Ca substitution effects on the low and high energy charge/spin degrees of freedom, our data supporting a scenario involving density-wave fluctuations as one of the competing orders for superconductivity.
Magnetic Properties of Sr$_{14}$Cu$_{24}$O$_{41}$ ================================================= Energy Scales ------------- Responsible for the magnetic properties are the Cu atoms, which carry a spin $S = 1/2$ due to a missing electron in their $3d$ shells. The AF super-exchange between them is mediated by the O ligand $2p$ orbitals. The optical absorption due to transitions across the charge-transfer gap (determined by the energy difference between the Cu$3d$ and O$2p$ orbitals) is seen to occur at around 2 eV [@OsafunePRL97]. The sign of the super-exchange as a function of the Cu-O-Cu bond angle can be qualitatively estimated semi-empirically as the balance of two terms: the first term is a relatively small, weakly bond angle dependent, ferromagnetic interaction, while the second is antiferromagnetic, large for a 180$^{\circ}$ Cu-O-Cu bond but strongly varying with the bond angle, tending to zero around 90$^{\circ}$ [@AndersonRSV1C2]. [**Cu-O chains –**]{} Nuclear magnetic/quadrupole resonance (NMR/NQR) [@TakigawaPRB98], X-ray [@FukudaPRB02] and inelastic neutron scattering (INS) [@EcclestonPRL98; @RegnaultPRB99] measurements have produced the following picture, clarifying some controversial aspects regarding charge/spin ordering in these structures. NMR/NQR data identified two Cu$_{chain}$ sites, one carrying spin $1/2$ and one non-magnetic because of Zhang-Rice (ZR) singlet formation, that is, a spin $S = 0$ state made out of an O$2p$ hole and a Cu$3d$ hole due to orbital hybridization. The multipeak structure of the NMR spectra below about 150 K suggested the existence of a superstructure [@TakigawaPRB98]. X-ray studies [@FukudaPRB02] established a five-fold charge modulation in the chains’ ground state along the $c$ direction, which exists at all temperatures below 300 K with a correlation length longer than 200 Å, confirming an ordered pattern involving AF spin dimers separated by two ZR singlets, see Fig. \[f12\].
Neutron scattering further supports such a superstructure by analyzing magnetic excitations out of the chain structures, and evaluates the dominant intra-dimer exchange to be J$_{1} \approx 10$ meV [@EcclestonPRL98; @RegnaultPRB99], which also sets the value of the spin gap in the dimerized chain. Surprisingly, the inter-dimer and inter-chain exchanges were found to be of the same order of magnitude but of different signs: J$_{2} \approx$ -1.1 meV and J$_{a} \approx$ 1.7 meV [@RegnaultPRB99] and, consistent with NMR/NQR data [@TakigawaPRB98], 2D spin correlations due to J$_{a}$ were shown to develop below a characteristic temperature of about 150 K. Notable is the fact that if the ZR complexes are effectively made of truly Cu$^{3+}$ ions, the modulation shown in Fig. \[f12\] would correspond to a Cu$_{chain}$ valence of 2.6+, meaning that all the holes are located in the chains. Residual carriers are however present in the ladders, and microwave [@KitanoEL01] and NMR/NQR [@TakigawaPRB98] data suggested the possibility of charge ordering in these systems too. [**Cu-O ladders –**]{} At low temperatures Sr$_{14}$Cu$_{24}$O$_{41}$ can be regarded as an example of a 2LL structure close to half-filling (undoped with carriers). Moreover, an individual 2LL, shown in Fig. \[f13\], is expected to incorporate the essence of the spin dynamics in this subsystem. This is because the Cu-O-Cu bonds, which are close to 180$^{\circ}$, generate strong super-exchanges J$_{\parallel}$ and J$_{\perp}$ (see Fig. \[f13\]) of the order of 130 meV ($\approx$ 1000 cm$^{-1}$). This value is about two orders of magnitude stronger than the (frustrated) ferromagnetic inter-ladder interaction, see Fig. \[f11\]. From experience with the 2D cuprates, an expected Raman signature at energies of several J’s is a two-magnon (2M) like excitation consisting of a pair of spin-flips.
Low temperature behavior seen in magnetic susceptibility and NMR data shows that, unlike in the cuprates, the low frequency spin behavior is not determined by gapless spin-wave modes, expected when one ignores small anisotropies which can create long wavelength gaps. Here there is a substantial spin-gap from the singlet ground state to the lowest triplet ($S = 1$) excitation. The gap value for Sr$_{14}$Cu$_{24}$O$_{41}$ extracted from the temperature dependent Knight shift in Cu-NMR data was $\Delta_{S} \approx 32$ meV (260 cm$^{-1}$) [@TakigawaPRB98; @MagishiPRB98], in good agreement with the gap extracted from neutron scattering data [@EcclestonPRL98] in the same material, as well as with the quasi-activated magnetization data \[$\chi(T) \propto (1/\sqrt{T})\, e^{-\Delta/k_{B}T}$, see Ref. [@DagottoScience96]\] in the 2LL SrCu$_{2}$O$_{3}$ [@AzumaPRL94]. Spin-gap determination from magnetization measurements in Sr$_{14}$Cu$_{24}$O$_{41}$ is more ambiguous due to the prominent contribution from the chains. The magnetic properties of the Sr$_{14}$Cu$_{24}$O$_{41}$ ladders are the concern of the following sections. Undoped Two-Leg Ladders: Theoretical Aspects -------------------------------------------- The starting point for the determination of the ladder excitation spectrum has been the AF nearest-neighbor isotropic Heisenberg Hamiltonian, allowing for the leg and rung couplings J$_{\parallel}$ and J$_{\perp}$, see Fig. \[f13\]a. This Hamiltonian reads: $$H = H_{\parallel} + H_{\perp} = J_{\parallel} \sum_{i,\alpha = 1,2} {\bf S}_{i,\alpha} \cdot {\bf S}_{i+1,\alpha} + J_{\perp} \sum_{i} {\bf S}_{i,1} \cdot {\bf S}_{i,2} \label{e11}$$ From the crystal structure one can anticipate that the relevant parameter range for the leg to rung super-exchange ratio is $ y = J_{\parallel}/J_{\perp} \approx 1$.
The excitation spectrum can be easily understood starting from the strong coupling limit, $J_{\parallel}/J_{\perp} \rightarrow 0$: the ground state is a simple product of singlets sitting on each rung, see Fig. \[f13\]b. Excited N-particle states (where N is the number of triplets) are highly degenerate and are obtained by exciting elementary triplets on N different rungs [@DagottoScience96; @DagottoRPP99; @DagottoPRB92RiceEL93; @BarnesPRB93]. The nature of the ground and first excited states evolves smoothly when a small J$_{\parallel}$ is present. This allows the rung triplets to propagate along the ladder, giving rise to dispersion in reciprocal space. The bandwidth is proportional to J$_{\parallel}$ and the band minimum of the one-triplet branch is at the Brillouin zone boundary, $k = \pi$ [@BarnesPRB93]. In the limit of uncoupled AF $S = 1/2$ chains, $J_{\parallel}/J_{\perp} \rightarrow \infty$, the result is also known: the ground state is characterized by an algebraic decay of magnetic correlations and the excitation spectrum is gapless, with soliton-like $S = 1/2$ excitations (spinons) [@FadeevPL81]. The picture described above is supported by theoretical calculations, and it turns out that in the general case the ’physics’ of undoped 2LL’s is dominated by the strong coupling limit.

- *The ground state* is disordered and has exponential falloff of the spin-spin correlations. A good description of the magnetic correlations is achieved within the resonance valence bond (RVB) model [@WhitePRL94]; for a pictorial representation see Fig. \[f16\]b. Although a high $J_{\parallel}/J_{\perp}$ increases singlet correlations beyond nearest neighbors, a ground state built up as a superposition of short-ranged resonating valence bonds remains a good approximation. For odd-leg ladders long-ranged singlets must be included in the ground state description [@WhitePRL94].
- *The one-particle excitations* of the ladder have a gap $\Delta_{S}$ because any finite J$_{\perp}$ confines the $S = 1/2$ spinons, binding them into an integer spin $S = 1$ ’magnon’. Results of series expansions around the Ising limit for 2LL’s at various couplings $y = J_{\parallel}/J_{\perp}$ from Ref. [@OitmaaPRB96] are shown in Fig. \[f14\]a. These results are further confirmed by exact diagonalization [@DagottoPRB92RiceEL93], numerical [@BarnesPRB93] and perturbative [@KnetterPRL01] analyses. It has also been found that the spin gap remains finite for even-leg ladders (although the gap decreases with increasing number of legs), while odd-leg ladders are gapless and have a power law fall-off of spin-spin correlations [@WhitePRL94]. This resembles the alternation between gapless and gapped spectra of isotropic AF half-integer and integer spin chains [@Haldane83]. The similarity is not accidental, since a spin $S$ chain can be described as $2S$ coupled spin $S = 1/2$ chains with appropriately chosen interchain coupling. This analogy is beautifully confirmed by the dispersion found above the Néel temperature in an experimental realization of a Haldane system, CsNiCl$_{3}$, a quasi-1D nearly isotropic $S = 1$ AF chain [@BuyersPRL86]. In Fig. \[f14\] we show for comparison the experimental elementary magnon dispersion in CsNiCl$_{3}$ along with experimental data and theoretical predictions for the 2LL.

- *The two-particle states*: The elementary magnon branch will generate a two-magnon continuum starting from $2\Delta_{S}$ at $k = 0$. This spectrum also contains magnetic bound/antibound states, i.e. states with discrete energies found below/above the two-particle continuum [@TrebstPRL00andThesis]. Bound states have been found in the singlet ($S = 0$), triplet ($S = 1$) and quintuplet ($S = 2$) sectors.
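The strong-coupling statements above (gap set by J$_{\perp}$, bandwidth set by J$_{\parallel}$, minimum at $k = \pi$) are captured by the textbook first-order dispersion $\omega(k) = J_{\perp} + J_{\parallel}\cos k$. A minimal sketch, with illustrative coupling values of our choosing:

```python
import math

def omega_triplet(k, J_perp=1.0, J_par=0.2):
    """First-order strong-coupling dispersion of the one-triplet branch:
    minimum J_perp - J_par at the zone boundary k = pi, bandwidth 2*J_par
    (energies in units of J_perp)."""
    return J_perp + J_par * math.cos(k)
```

Evaluating this at $k = 0$ and $k = \pi$ reproduces the qualitative features quoted in the bullet above; it is only the leading perturbative term, not the series-expansion result of Ref. [@OitmaaPRB96].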
A typical excitation spectrum, calculated perturbatively for isotropic coupling, $J_{\parallel}/J_{\perp} = 1$, and containing several types of the two-particle excitations discussed above, is shown in Fig. \[f15\]. A particularity of 2LL’s is the fact that the bound states ’peel off’ the continuum at finite values of $k$. The importance of higher order spin terms will be stressed in the following sections in connection with the data analysis, which will show that one has to go beyond the nearest neighbor Heisenberg Hamiltonian of Eq. (\[e11\]) in order to explain the experimental data. Regarding the question of whether the best description at all energies is in terms of fractional or integer spin excitations, it is worth noticing that, at least in the limit $J_{\parallel}/J_{\perp} \leq 1$, there is no necessity to resort to fractional spin states: a description in terms of truly bosonic excitations works well, in the sense that the spectral densities of spin ladders can be reproduced using integer spin excitations [@KnetterPRL01].

Low Temperature Two-Magnon Light Scattering in Sr$_{14}$Cu$_{24}$O$_{41}$
-------------------------------------------------------------------------

In this section we will discuss the symmetry, spectral and resonance properties of the 2M excitation in Sr$_{14}$Cu$_{24}$O$_{41}$ at T = 10 K. Figure \[f16\] shows Raman spectra in $(cc)$, $(aa)$ and $(ac)$ polarizations taken with an excitation energy $\omega_{in} = 1.84$ eV. The spectra consist of a lower energy part where phonons are observed (see caption of Fig. \[f16\]) and a sharp asymmetric peak at 3000 cm$^{-1}$ present in parallel polarizations. In both $(aa)$ and $(cc)$ polarizations the 3000 cm$^{-1}$ peak is situated at exactly the same energy; in $(ac)$ polarization this feature is absent. The energy of the 3000 cm$^{-1}$ mode, much larger than the relevant magnetic interactions in the chain structures, allows an unambiguous assignment of this excitation to the ladder systems.
A comparison with the 2D tetragonal cuprates [@GirshPRB96; @SugaiPRB90] in terms of energy scales argues for the interpretation of the 3000 cm$^{-1}$ peak in terms of ladder 2M excitations. Moreover, in the 2D cuprates the 2M feature has B$_{1g}$ symmetry, a representation which becomes the identity representation in the orthorhombic group to which the ladder structure belongs. Indeed, as can be seen from Fig. \[f16\], in Sr$_{14}$Cu$_{24}$O$_{41}$ this excitation is fully symmetric. Although for the 2D cuprates a semi-classical counting of broken magnetic bonds within a local Néel environment (see Fig. \[f16\]b) gives a good estimate ($3 J$) for the 2M energy (which more elaborate calculations place around $2.7\ J$), in 2LL’s this approach is not suitable. On one hand, any small anisotropy in the exchange parameters $J_{\parallel}$ and $J_{\perp}$ should lead to different peak energies in $(aa)$ and $(cc)$ polarizations, see Fig. \[f16\]b, which is not observed; on the other hand, even in the improbable case of an anisotropy below the 0.03% set by our energy resolution, this ’Ising counting’ yields $J \approx 200$ meV, almost 50% higher than the super-exchange in related 2D cuprates. The failure of this approach may be related to the fact that the ground state of the 2LL’s cannot be described classically. An RVB description of the ladder ground state has been proposed [@WhitePRL94]. This can be understood as a coherent superposition of ’valence bonds’, which are spin singlets, as shown in Fig. \[f16\]c. For even leg ladders the RVB states are short ranged (the singlets extend only over nearest neighbor Cu spins) and in this context, starting from an ’instantaneous configuration’ of the ground state, the 2M excitation can be visualized as a state in which two neighboring singlets are excited into a higher energy singlet made out of two triplet excitations.
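The energy-scale mismatch just described follows from simple arithmetic. In the sketch below, the assumption that the naive rung-singlet ('Ising') counting assigns a cost of about $2J$ to the two-triplet excitation is our simplification for illustration; only the 3000 cm$^{-1}$ peak energy and the $2.7\ J$ cuprate ratio come from the text:

```python
CM_TO_MEV = 1 / 8.06554         # cm^-1 -> meV

E_2M = 3000 * CM_TO_MEV         # 2M peak energy, ~372 meV

# naive rung-singlet counting (our assumed factor): two adjacent rung
# singlets promoted to triplets cost ~2 J, hence J ~ E_2M / 2
J_ising = E_2M / 2              # ~186 meV, near the ~200 meV quoted

# 2D-cuprate-style estimate E_2M ~ 2.7 J, for comparison
J_2d = E_2M / 2.7               # ~138 meV, close to the cuprate scale
```

The ~186 meV Ising-style estimate is indeed roughly 50% above the ~130 meV super-exchange of the related 2D cuprates, which is the inconsistency the text points out.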
**Symmetry –** The polarization selection rules for the 2M scattering can be explained using the effective spin Hamiltonian corresponding to the photon induced spin exchange process [@FleuryPR68; @ShastryPRL90], which reads $$H_{FL} \propto \sum_{<i,j>} ({\bf e}_{in} \cdot {\bf r}_{ij}) ({\bf e}_{out} \cdot {\bf r}_{ij}) {\bf S}_{i} \cdot {\bf S}_{j} \label{e12}$$ where **S**$_{i}$, **S**$_{j}$ are Cu spins on the lattice sites $i$ and $j$, **r**$_{ij}$ is the vector connecting these sites, and **e**$_{in}$/**e**$_{out}$ are the unit vectors corresponding to the incoming/outgoing polarizations. The polarization prefactor shows that the 2M scattering should occur only in parallel polarizations, consistent with the experimental observations. **Determination of J’s –** The problem of quantitatively estimating the magnitude of the super-exchange integrals is non-trivial, in spite of the fact that several experimental techniques have probed the magnetic excitations: neutron scattering [@EcclestonPRL98; @MatsudaJAP00], Raman [@SugaiPSS99; @GozarPRL01] and IR spectroscopy [@WindtPRL01; @NunnerPRB02]. For the latter technique, the authors claim that the strong mid-IR absorption features between 2500 and 4500 cm$^{-1}$ are due to phonon assisted 2M excitations. The main problem was to reconcile, using only the Hamiltonian of Eq. (\[e11\]), the smallness of the zone boundary spin gap $\Delta_{S} = 32$ meV [@EcclestonPRL98] with the magnitude of the one-triplet energies close to the Brillouin zone center (see Ref. [@MatsudaJAP00] and Fig. \[f14\]), which is thought to determine the position of the 2M Raman peak [@SchmidtPRL03] as well as the structure and the large energy range in which the mid-IR magnon absorption is seen [@WindtPRL01; @NunnerPRB02].
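Returning briefly to the symmetry paragraph above: the selection rule encoded in the prefactor of Eq. (\[e12\]) can be verified in a few lines. For ladder bonds, ${\bf r}_{ij}$ points along either the rung ($a$) or leg ($c$) direction, so the factor $({\bf e}_{in} \cdot {\bf r}_{ij})({\bf e}_{out} \cdot {\bf r}_{ij})$ vanishes for crossed polarizations (an illustrative sketch; the 2D vectors and function names are ours):

```python
# Unit vectors along the rung (a) and leg (c) directions, as 2D tuples
A_HAT = (1.0, 0.0)
C_HAT = (0.0, 1.0)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def fl_prefactor(e_in, e_out, r_ij):
    """Polarization prefactor (e_in . r_ij)(e_out . r_ij) of Eq. (e12)."""
    return dot(e_in, r_ij) * dot(e_out, r_ij)

# parallel polarizations pick up the bonds along their own direction,
# while the crossed (ac) configuration gives zero for both bond types,
# i.e. no 2M scattering, as observed
```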
The proposed solution to this problem was to consider, besides $J_{\parallel}$ and $J_{\perp}$, the presence of a ring exchange $J_{ring}$ [@BrehmerPRB99], a higher order spin correction whose effect can be understood as a cyclic exchange of the spins on a square plaquette determined by two adjacent ladder rungs, see Fig. \[f14\]. This interaction has the form $$H_{ring} = 2 J_{ring} [({\bf S}_{1,i} \cdot {\bf S}_{1,i+1}) ({\bf S}_{2,i} \cdot {\bf S}_{2,i+1}) + ({\bf S}_{1,i} \cdot {\bf S}_{2,i}) ({\bf S}_{1,i+1} \cdot {\bf S}_{2,i+1}) - ({\bf S}_{1,i} \cdot {\bf S}_{2,i+1}) ({\bf S}_{1,i+1} \cdot {\bf S}_{2,i})]$$ and its net effect is to renormalize the spin gap downward, so that the ratio of the magnon energy at the zone boundary to that at the zone center is decreased. The introduction of $J_{ring} \approx 0.1 J_{\perp}$ helped fit the INS data (see Ref. [@MatsudaJAP00] and Fig. \[f14\]), and an even higher ratio is able to better reproduce the experimental Raman and IR data (see Fig. \[f17\]). The parameter sets used for the quantitative analysis of the spectroscopic data have $J_{\parallel} / J_{\perp}$ between 1.25 and 1.3 and a sizeable cyclic exchange, $J_{ring} / J_{\perp}$, of about 0.25 - 0.3. The absolute value chosen for $J_{\perp}$ is 1000 - 1100 cm$^{-1}$. Both the values of $J$ and $J_{ring}$ are quantitatively consistent with those inferred for the 2D AF cuprates [@KataninPRB02]. In the latter case, the cyclic exchange was used in order to reproduce the neutron scattering findings regarding the $k$ dependence of the energy of the one-magnon excitations in the proximity of the Brillouin zone boundary [@KataninPRB02]. However, as opposed to the cuprates, the 2M seen in Fig. \[f16\] at 3000 cm$^{-1}$ cannot provide a direct determination of the super-exchange, even if no terms other than $J_{\parallel}$ and $J_{\perp}$ had to be included in the spin Hamiltonian.
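To make the structure of $H_{ring}$ concrete, one can diagonalize the smallest cluster on which it acts, a single two-rung plaquette. This is only a toy sketch (our site labeling and couplings; it does not reproduce the full-ladder renormalization obtained by the series-expansion treatments cited above):

```python
import numpy as np

# spin-1/2 operators
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def embed(op1, site, n=4):
    """Embed a single-site operator at position `site` of an n-spin cluster."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[site] = op1
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heis(i, j):
    """Heisenberg bond operator S_i . S_j."""
    return sum(embed(s, i) @ embed(s, j) for s in (sx, sy, sz))

def plaquette_gap(J_perp=1.0, J_par=1.0, J_ring=0.0):
    """Lowest excitation gap of a two-rung plaquette with ring exchange.
    Sites: rungs (0,1) and (2,3); legs (0,2) and (1,3)."""
    H = J_perp * (heis(0, 1) + heis(2, 3)) + J_par * (heis(0, 2) + heis(1, 3))
    H += 2 * J_ring * (heis(0, 2) @ heis(1, 3)
                       + heis(0, 1) @ heis(2, 3)
                       - heis(0, 3) @ heis(1, 2))
    e = np.linalg.eigvalsh(H)
    return float(e[1] - e[0])
```

For $J_{\perp} = J_{\parallel}$ and $J_{ring} = 0$ the plaquette is a uniform four-site ring whose singlet-triplet gap is exactly $J$; switching on $J_{ring}$ shifts this gap, illustrating on the smallest possible cluster how the cyclic term enters the spectrum.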
This problem is related to the fact that, in spite of the theoretical results shown in Fig. \[f17\]b, which suggest good agreement with the experiment, the spectral shape of the sharp 2M feature and its origin are still open questions; this issue will be discussed in the following. **Two-magnon relaxation –** While in the case of 2D cuprates theory has difficulty explaining the large scattering width of the 2M excitation, in 2LL’s the situation is reversed; this is one of the most interesting points made in Ref. [@GozarPRL01]. To emphasize the 2M sharpness, we compare it in Fig. \[f18\] to the corresponding excitation in Sr$_{2}$CuO$_{2}$Cl$_{2}$, which has one of the sharpest 2M features among 2D AF copper oxides [@GirshPRB96], as well as to the multi-spinon scattering from a 2LL at quarter filling (which can be mapped onto a quasi 1D $S = 1/2$ AF chain), as seen in the high temperature phase of NaV$_{2}$O$_{5}$. For Sr$_{2}$CuO$_{2}$Cl$_{2}$ the FWHM is about 800 cm$^{-1}$ [@GirshPRB96], comparable, in relative units, with the large scattering width observed for the spinon continuum. In Sr$_{14}$Cu$_{24}$O$_{41}$ the width is only about 90 cm$^{-1}$. The 2M approximation for the magnetic light scattering in 2D cuprates, while giving a good estimate for the 2M peak energy, cannot reproduce its spectral profile. This approximation makes the following three basic assumptions:

- the ground state is a fully ordered Néel state;

- the spin pair excitations consist of states which have exactly two spins flipped with respect to the Néel configuration;

- since the light wavelength is much larger than the unit cell, only combinations of $(k,-k)$ magnons are allowed.

This approach neglects quantum fluctuations, which means that the true ground state will also contain configurations of flipped spins and that the spin-pair states will be admixtures of 2, 4, 6 ... spin flips.
The narrow calculated width of the 2M was found, however, to be stable with respect to the inclusion of higher order spin interactions. Neither exact diagonalization nor Monte Carlo simulations were able to fully reproduce the 2M scattering width [@CanaliPRB92SandvikPRB98], although these calculations improved the results obtained within the 2M approximation. It has been proposed by Singh *et al.* in Ref. [@SinghPRL89] that it is the quantum fluctuation effects inherent to the $S = 1/2$ Heisenberg model which lead to the observed broadening. The importance of intrinsic inhomogeneities and the role of phonons have also been invoked in the literature. We were surprised that even in lower dimensionality (the structure determined by the 2LL’s is quasi-1D), where quantum fluctuations are expected to be stronger, the 2M Raman spectra display a narrow profile, a phenomenon which questions the importance attributed to these effects in low spin systems [@SinghPRL89]. This prominent question triggered theoretical work, part of which is shown in Fig. \[f17\]. The authors of Ref. [@SchmidtPRL03] challenged our point and claimed a resolution in terms of both the existing quasi-commensuration between the unit cell constants of the chain and ladder structures ($7\ c_{ladder} \approx 10\ c_{chain}$) and the supermodulation induced by the charge order in the chain structures, which is shown in Fig. \[f12\]. The calculation of the 2M Raman response without the modulation (lower panel in Fig. \[f17\]b) indeed reveals a broader 2M peak [@SchmidtEL01], while inclusion of the chain-ladder interaction renders a sharp 2M excitation because of the backfolding of the dispersion of the elementary triplet (Figs. 1 and 2 in Ref. [@SchmidtPRL03]). This backfolding opens gaps at the points of intersection with the supermodulation wavevectors and has drastic effects on the spectral shape because of the induced divergences in the density of states. The agreement with the experimental data in Fig.
\[f17\] is quite good; however, these claims have recently been put to rest by a Raman experiment, Ref. [@GoblingPRB03], on the undoped 2LL compound SrCu$_{2}$O$_{3}$ (which contains no chains but only undoped 2LL’s), which revealed a 2M peak as sharp as that in Sr$_{14}$Cu$_{24}$O$_{41}$. This clearly shows that the sharpness is related neither to the interaction between the two substructures in Sr$_{14}$Cu$_{24}$O$_{41}$ nor to the residual carriers in its 2LL structure; instead it is due to intrinsic 2LL effects. Two major differences between the 2LL’s and the 2D cuprates or 1D AF spin chains are that in the former the low energy relaxation channels are suppressed due to the presence of a spin gap, and that the excitation spectrum of 2LL’s supports the existence of magnetic bound states outside the continuum of excitations. Although this may be a plausible explanation, the 2M singlet bound state peels off the continuum only at finite values of $k$, see Fig. \[f15\], and besides that, its energy at $k = 0$ is too small ($2 \Delta_{S} = 64$ meV = 512 cm$^{-1}$) to account for the observed peak energy at 3000 cm$^{-1}$. If the sharpness originated in the hump-dip feature in the dispersion of the elementary triplet close to the Brillouin zone center, Fig. \[f15\]b, and the corresponding Van Hove singularities, it seems that such divergences are found only at finite values of $k$, while at $k = 0$ the spectral density is quite broad [@KnetterPRL01]. This is why we suggest here an explanation in terms of a possible spin density wave (SDW) modulation which is intrinsic to 2LL’s and would lead to a backfolding of the magnon dispersion. This effect is similar in spirit to the one proposed in Ref. [@SchmidtPRL03], but this time due to intrinsic effects. Regarding the asymmetry of the 2M feature, it would also be worth considering multi-magnon interaction effects, which may lead to the asymmetric Fano-like shape of the sharp 3000 cm$^{-1}$ feature through the interaction with the underlying magnetic continuum.
Noteworthy is the resemblance between the elementary triplet dispersion in 2LL’s and the $k$ dependence of the one-magnon excitation in La$_{2}$CuO$_{4}$ away from the Brillouin zone center. There are several articles, some of them very recent [@TranquadaNature04], which stress the failure of the spin wave models in 2D cuprates, arguing that the ’physics’ of magnetic excitations is fundamentally different at low and high energies: while semi-classical magnon theory holds at low energies, it has been argued that at short wavelengths the effect of fluctuations is more pronounced and the spin dynamics suggest an underlying structure similar to the one provided by 2LL’s, due to a SDW-like modulation in the 2D planes. Interestingly, the data in Sr$_{2}$CuO$_{2}$Cl$_{2}$ and NaV$_{2}$O$_{5}$ from Fig. \[f18\] suggest instead a more pronounced similarity to the magnetic scattering in 1D $S = 1/2$ AF chains. It seems at this point that not only the 2M profile in 2LL’s but also the one in 2D cuprates constitute open questions which have recently received renewed attention. It would be very interesting if the physics in these two systems were found to be related. **Two-magnon excitation profile –** A summary of our experimental study of the 2M dependence on the incoming photon energy is shown in Fig. \[f19\]. Like the 2D cuprates, the Cu-O based ladders are known to be charge-transfer (CT) type Mott insulators, the CT gap being determined by the energy difference between the Cu $3d$ and O $2p$ orbitals. A resonant Raman study is interesting since, along with optical absorption, it gives information about the nature of the ground state as well as of the high energy electronic states across the CT gap.
This is because the photon induced spin exchange takes place in two steps: a photoexcited state consisting of an electron-hole pair is created by the interaction of the system in its ground state with an incoming photon, and then this intermediate state collapses into an excited magnetic state characterized by broken AF bonds. One therefore expects that such a process, in which the interaction with light occurs in 2$^{nd}$ order perturbation theory, will show a strong dependence on the incoming photon energy [@GirshPRB96]. This is what we observe in Sr$_{14}$Cu$_{24}$O$_{41}$: the Raman data at T = 10 K are shown in Fig. \[f19\]a. In Fig. \[f19\]b we show the ratio of the 2M intensity in $(cc)$ polarization with respect to the $(aa)$ configuration as a function of $\omega_{in}$, and in Fig. \[f19\]c the resonant Raman excitation profile (RREP) is plotted along with the optical conductivity data provided by the authors of Ref. [@OsafunePRL97]. For both $(cc)$ and $(aa)$ polarizations the resonant enhancement has a maximum around 2.7 eV, about 0.7 eV higher than the CT edge. The intensity is small for $\omega_{in} < 2$ eV and increases monotonically as the photon energy approaches the CT gap, this increase being followed by a drop for excitation energies above about 3 eV. The intensity displays an order of magnitude variation as the incident photon energy changes across the visible spectrum. Besides the correction for the optical response of the spectrometer and detector, the ’raw’ Raman data were also corrected for the optical properties of the material at different wavelengths, using the complex refractive index derived from ellipsometry and reflectivity measurements. We observe changes in the spectral shape of the 2M as the incident frequency is changed, the 2M acquiring, in the 2LL case, sidebands on the high energy side.
These changes are more pronounced in $(aa)$ polarization, where for instance the 2.65 eV spectrum (which is close to the edge seen in the $a$-axis conductivity) shows the 2M as a gap-like onset of a continuum. While the 2M profile also changes substantially with $\omega_{in}$ in the 2D cuprates, one can notice several differences. One of them is that the RREP in 2LL’s follows more closely the edges of the optical conductivity data. Moreover, while in the case of the cuprates *two* peaks were predicted (and confirmed experimentally) to occur in the RREP of the 2M peak at $2.8 J$ [@ChubukovPRL95] (when the incoming energy is in resonance with the bottom and the top of the electron-hole continuum), in the data we show in Fig. \[f19\]c we observe, up to $\omega_{in} = 3.05$ eV, only one rather broad peak. It has been argued from numerical diagonalizations of finite clusters [@TohyamaPRL02] that this dissimilarity between the 2D cuprates and 2LL’s is due to the difference in the spin correlations characterizing the initial and final excited magnetic states, i.e. the weight of the long ranged Néel type spin-spin correlations in calculating the matrix elements of the current operator plays an important role. It also turns out that, due to the special topology of 2LL’s, a study of the 2M RREP in conjunction with the angular dependence of the 2M intensity in parallel polarization can be helpful for determining a relation between the ratio of the super-exchange integrals $J_{\parallel}$ and $J_{\perp}$ and microscopic parameters like hopping integrals and on-site Coulomb interactions [@FreitasPRB00]. Using the effective expression for the photon induced spin exchange coupling mechanism, Eq. (\[e12\]), taking into account the anisotropy of the coupling constants, denoted by $A$ and $B$ along the rung and leg directions, and using the relationship between $H_{FL}$ and the 2D Heisenberg ladder Hamiltonian from Eq.
(\[e11\]), one can derive the following angular dependence of the 2M intensity for ${\bf e}_{in} \parallel {\bf e}_{out}$: $I_{\parallel} (\omega,\theta) = I (\omega, \theta) [ \cos^{2} (\theta) - \frac{A}{B} \frac{J_{\perp}}{J_{\parallel}} \sin^{2} (\theta) ]$ [@FreitasPRB00]. From this formula, $J_{\perp} / J_{\parallel}$ can be calculated if the $A$ to $B$ ratio is known. At angles $\theta \neq 0^{\circ}, 90^{\circ}$ one has to be careful, from an experimental point of view, because the different optical properties of the ladder materials along the $a$ and $c$ axes will induce a non-negligible rotation of the polarization of the incident electric field inside the crystal [@GozarPRB02]. As we see from Fig. \[f19\]b, the value of $A / B$ is excitation energy dependent, and our data suggest that this ratio approaches a constant value in the preresonant regime. From Fig. \[f19\], and using an anisotropy ratio $y = J_{\parallel} / J_{\perp} = 1.25$ (see Fig. \[f17\]), we obtain $A / B \approx 2.5$ in the preresonant regime, which would be compatible with an anisotropic local Cu$d$-O$p$ excitation and slightly different hopping parameters along and across the ladder [@FreitasPRB00].

Effects of Temperature and Ca(La) Substitution on the Phononic and Magnetic Excitations in Sr$_{14}$Cu$_{24}$O$_{41}$
=====================================================================================================================

Temperature Dependent Electronic and Magnetic Scattering in Sr$_{14}$Cu$_{24}$O$_{41}$
--------------------------------------------------------------------------------------

The effects of temperature and Ca(La) substitution for Sr discussed in this section set the stage for the following section, in which low energy Raman, transport and soft X-ray data argue for the existence of density wave correlations in these compounds. In Fig. \[f110\]a we show the temperature dependence of the $c$-axis conductivity $\sigma_{c} (\omega)$ and in panel (b) the Raman response in Sr$_{14}$Cu$_{24}$O$_{41}$ for T = 300 and 10 K.
In both the IR and Raman data large changes are observed as the crystal is cooled from room temperature. In Fig. \[f110\]a there is a strong suppression of spectral weight below an energy scale of about 1 eV. The same figure shows two relevant energy scales of this system: one is the CT gap around 2 eV, which was discussed in connection with the resonance properties of the 2M, and the other is the activation energy inferred from the Arrhenius behavior of the $dc$ resistivity above about 150 K [@McElfreshPRB89]. As for the optical sum rule, all the weight is recovered above the CT gap, within an energy scale of $\omega_{c} \approx 3$ eV. The rapid decrease of the conductivity in the region below 1 eV is correlated with the high activation energy of about 180 meV (= 1450 cm$^{-1}$ = 2090 K). Concomitant with this suppression, which is surprisingly ’uniform’ in the 0 to 1 eV range, one observes the development of a broad mid-IR feature and also a sharpening of the phononic features below 1000 cm$^{-1}$. Interestingly, the position of the mid-IR band seems to be close to the semiconducting-like activation energy revealed by the $dc$ resistivity. Fig. \[f110\]b shows that a similarly large reduction in the overall intensity of the Raman response takes place in an energy range of at least 0.5 eV (4000 cm$^{-1}$). The features which become sharp with cooling are the single and multi-phonon excitations seen around 500, 1200 and 2400 cm$^{-1}$, as well as the 2M feature at 3000 cm$^{-1}$. In Fig. \[f111\] we show temperature dependent Raman data in two frequency regions: one below 1000 cm$^{-1}$ (panel a) and one around 3000 cm$^{-1}$, where the 2M feature lies (panel b). A different spectral shape than in Figs. \[f16\] and \[f110\] is seen due to resonantly enhanced side band structures (see Fig. \[f19\]). The 2M peak is weak and heavily damped at room temperature.
Upon cooling we notice two main features: firstly, the spectral weight increases by almost an order of magnitude, and secondly, the 2M peak sharpens from a width of about 400 cm$^{-1}$ at 300 K to 90 cm$^{-1}$ FWHM at T = 10 K. Because $J / k_{B}T$ remains a large parameter even at room temperature, the magnitude of the observed effects is surprising. For example, in 2D cuprates the 2M peak remains well defined even above 600 K [@KnollPRB90]. The side bands around 3660 and 4250 cm$^{-1}$ observed for $\omega_{in} = 2.2$ eV also gain spectral weight, in proportion to the sharp 2M feature. Fig. \[f111\]b shows that these sidebands are situated about 650 and $2 \times 650$ cm$^{-1}$ from the 3000 cm$^{-1}$ resonance. Taking into account that strong phonon scattering characteristic of O modes is found at this frequency, one may argue that these side bands are due to coupled magnon-phonon scattering and bring evidence for spin-lattice interaction in Sr$_{14}$Cu$_{24}$O$_{41}$. These energy considerations favor this scenario over one involving multi-magnon scattering, because the magnetic continuum starts lower, at $2 \Delta_{S} = 510$ cm$^{-1}$. The latter interpretation remains, however, a reasonable possibility, because in these higher order processes the spectral weight can integrate from a larger part of the Brillouin zone and the boundary of the 2M continuum is dispersive. The continuum shown in Fig. \[f111\]a also gets suppressed with cooling. Our data confirm the presence of low lying states at high temperatures, observed also in NMR and $c$-axis conductivity, Refs. [@OsafunePRL99; @EisakiPhysicaC00] and Fig. \[f110\]. We observe that there is a sharp onset of scattering around 480 cm$^{-1}$, close to twice the spin-gap energy. The 495 cm$^{-1}$ mode has been interpreted as evidence for Raman two-magnon scattering [@SugaiPSS99].
However, the temperature dependence of this mode, which follows that of the other phonons, the similar suppression with cooling seen not only below this energy but also at higher energies in the 650 to 900 cm$^{-1}$ region, and the absence of magnetic field effects contradict this proposal. The connection between the low and high energy degrees of freedom in Fig. \[f111\]a-b is presented in Fig. \[f112\]. The increase of the electronic Raman background intensity with heating is correlated with the damping of the 2M peak at 3000 cm$^{-1}$. The introduced low energy states reduce the lifetime of the magnetic excitation due to the additional relaxation channels provided by the small amount of self-doped carriers in the ladders. We note that the drastic changes with temperature take place roughly above 150 K, while below this temperature the variation is much weaker. This is the temperature at which the $dc$ resistivity changes its activation energy from 2090 K to 1345 K [@GirshScience02]. T$^{*}$ = 150 K is also the temperature at which the charge ordering in the chain structures is fully established [@FukudaPRB02; @RegnaultPRB99], suggesting an interaction between chains and ladders, possibly due to a charge transfer between these systems. It is possible that this charge transfer also takes place as a function of temperature and that it gets suppressed below T$^{*}$.

The Chain-Ladder Interaction in Sr$_{14}$Cu$_{24}$O$_{41}$: Superstructure Effects in the Phononic Spectra
----------------------------------------------------------------------------------------------------------

Raman data in Sr$_{14}$Cu$_{24}$O$_{41}$ reveal the presence of a very low energy excitation in parallel polarizations. At low temperatures this mode is found around 12 cm$^{-1}$, and we observe a softening of about 20% upon warming up to 300 K. The temperature dependence of the Raman spectra is shown in Fig. \[f113\] for both $(cc)$ and $(aa)$ polarizations.
An excitation at a similar energy is also seen in IR absorption data [@HomesPrivate], consistent with the lack of inversion symmetry in the crystal. Applied magnetic fields up to 8 T do not influence the energy of this excitation, which suggests that its origin is not magnetic. This peak is absent in the x = 8 and 12 Ca-substituted crystals, but it is present around 15 cm$^{-1}$ in the compound [@GozarPRL03]. These properties, along with the unusually low energy, make us interpret this excitation as a phononic mode associated with the superstructure determined by the chain and the ladder systems. The chain-ladder commensurability given by the approximate relation $7\ c_{ladder} = 10\ c_{chain}$ will result in a back-folding of the phononic dispersions, which in the case of the acoustic branches will lead to a low energy mode. The high effective mass oscillator is understood in this context as a collective motion involving the large number of atoms in the big unit cell of the crystal. In Fig. \[f113\]b-c we plot the temperature dependent energy and width of this low energy phonon. The crossover below a characteristic temperature of about 120 - 150 K mentioned in the previous subsection is emphasized again by these data. The energy of the peak increases rather uniformly with decreasing temperature from 300 to about 15 K, but its FWHM shows a variation with temperature which is diminished below 150 K. The behavior of the integrated intensity of this mode is different in the $(cc)$ and $(aa)$ polarizations. Fig. \[f113\]b shows that in the $(cc)$ configuration a kink appears around 150 K in the temperature dependent spectral weight, while a maximum is seen in the $(aa)$ polarized spectra around this temperature. In the scenario presented above, the presence of the low energy mode in Fig. \[f113\] is evidence of ladder-chain interaction. Such an excitation should be sensitive to disorder and even to slight modifications of the crystal structure, as happens if Sr is substituted by Ca/La.
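The quality of the $7\ c_{ladder} \approx 10\ c_{chain}$ commensuration invoked above can be checked with approximate lattice constants; the numerical values below are assumed literature values for Sr$_{14}$Cu$_{24}$O$_{41}$, not numbers quoted in this text:

```python
# Chain-ladder commensurability along the c axis.
# Lattice constants are assumed, approximate literature values:
C_LADDER = 3.93    # angstrom, ladder repeat along c
C_CHAIN = 2.75     # angstrom, chain repeat along c

SUPER_LADDER = 7 * C_LADDER     # ~27.5 angstrom
SUPER_CHAIN = 10 * C_CHAIN      # ~27.5 angstrom
MISMATCH = abs(SUPER_LADDER - SUPER_CHAIN) / SUPER_CHAIN
```

With these values the two supercells agree to well under a percent, which is why a common ~27.5 Å superstructure period, and hence backfolded acoustic branches, is a natural expectation.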
Symmetry arguments discussed in the next subsection confirm that the full crystal structure has to be considered for the phononic analysis in and that the disorder introduced by Ca substitution smears out the rich phononic spectra due to the superstructure. The absence of this mode in Ca substituted crystals thus supports our interpretation.

Disorder Induced by Ca(La) Substitution
---------------------------------------

This part deals with the effects of cation substitution between the Cu-O layers. If Sr is replaced by Ca the nominal hole concentration in does not change, but the holes may be redistributed between the chain and ladder structures [@OsafunePRL97; @NuckerPRB00]. Sr$^{2+}$ substitution by La$^{3+}$ reduces the number of holes, and in the chains and the ladders are at half filling. In analyzing the spin/charge response of the 2LL’s one therefore has to consider both the doping and the disorder effects induced by inter-layer cation replacement. An investigation of these effects is certainly worth pursuing in the context of the constraints imposed by the low dimensionality on the charge dynamics and the occurrence of superconductivity. Most of the studies in the literature have focused on the spin and charge dynamics in pure crystals, although cation substitution is also a source of a random potential. It is known that in 1D an arbitrarily weak random field localizes all electronic states [@AbrikosovAP78] and, in view of the existence of collective excitations of the charge density wave type, pinning by disorder qualitatively changes the $dc$ and the finite frequency transport properties. X-ray structural analysis shows that the ladder interatomic bonds are modulated upon Sr replacement by Ca [@OhtaJPSJ97BougerolPC00], and it was pointed out in a Raman study [@OgitaPhysicaB00] that the phononic widths increase with Ca concentration in .
Theoretical work shows that the gapped phases of 1D spin systems like 2LL’s or dimerized chains are stable against weak disorder and magnetic bond randomness [@HymanPRL96OrignacPRB98]. However, in the doped case, superconductivity in the $d$-channel was found to be destroyed by an arbitrarily small amount of disorder.

**Ca substitution and phononic scattering –** If inhomogeneous broadening plays an important role it has to be seen in all the sharp spectroscopic features. What we argue in the following is that the widths of both the cation and the Cu-O plane modes are renormalized with Ca content. Fig. \[f114\]a shows low temperature phononic Raman spectra in the 0 - 700 cm$^{-1}$ energy region. The data are taken in $(cc)$ polarization with the excitation energy $\omega_{in} = 2.57$ eV; the higher the incoming photon energy, the more pronounced the phononic resonant enhancement. For we observe a total of 22 clearly resolved phononic modes extending from 25 to 650 cm$^{-1}$. For and crystals the features characteristic of O vibrations in the $400 < \omega < 700$ cm$^{-1}$ region broaden into an unresolved band and the rich fine structure below 400 cm$^{-1}$ is smeared out. Clear evidence for the interaction between the chain and the ladder structures in can be inferred from symmetry considerations alone. If these two units were considered separately, a total of six fully symmetric phonons should be observed in $(cc)$ polarization [@PopovicPRB00]: three from the chain structure, $Amma$ ($D_{2h}^{17}$) space group, and three from the ladder structure, $Fmmm$ ($D_{2h}^{23}$) space group [@McCarronMRB88]. If one considers the full crystal structure, two ’options’ are available. The first is to take into account a small displacement of the adjacent Cu-O chains with respect to each other (see Fig. 3 in Ref. [@McCarronMRB88]) and analyze the phonons within the $Pcc2$ ($C_{2v}^{3}$) space group, which gives a total of 237 $A_{1}$ modes.
The second is to neglect this small displacement, as is the case for Sr$_{8}$Ca$_{6}$Cu$_{24}$O$_{41}$ which belongs to the $Cccm$ ($D_{2h}^{20}$) centered space group [@McCarronMRB88]; this approach yields 52 A$_{1g}$ modes. The 22 modes observed in show that one has to include the chain-ladder interaction, and that the consideration of the higher $Cccm$ symmetry is sufficient. Marked with asterisks in Fig. \[f114\] are three modes in the region between 250 and 320 cm$^{-1}$ which show a blue shift consistent with the lower mass of Ca atoms and the reduction in the lattice constants upon Ca substitution [@KatoPhysicaC96]. Based on the energy shift and on a previous phonon analysis done for the (SrCa)$_{2}$CuO$_{3}$ [@YoshidaPRB91] compound we assign these modes to Sr/Ca vibrations. The FWHM of the 255 cm$^{-1}$ phonon in is 4 cm$^{-1}$, as compared to 16 and 10 cm$^{-1}$ in the x = 8 and 12 samples respectively. We observe a similar behavior in the phononic modes originating from the Cu-O planes. Three prominent features are seen in the 550 – 600 cm$^{-1}$ region for the crystal. We assign the mode with intermediate energy, around 565 cm$^{-1}$, to an O$_{ladder}$ vibration. The lower and upper modes, around 545 and 585 cm$^{-1}$, have frequencies close to vibrations of the O atoms in the chains as observed in the (SrCa)$_{2}$CuO$_{3}$ and CuO [@PopovicPRB00; @YoshidaPRB91] compounds. Fits of the 550 cm$^{-1}$ band in SCCO crystals reveal that the FWHM of the 565 cm$^{-1}$ mode increases from 9 cm$^{-1}$ for to 27 and 22 cm$^{-1}$ for the x = 8 and 12 crystals, see the inset of Fig. \[f114\]a. This is similar to what happens to the 255 cm$^{-1}$ Ca/Sr mode, suggesting that the crystals become more homogeneous again at higher Ca substitution levels. The data for the LCCO crystal show that in this material the phonons are most strongly affected by disorder, most likely due to the large mass and atomic size of La compared to Ca or Sr atoms.

**Ca substitution and magnetic scattering –** Regarding the sharp 2M Raman resonance, Fig.
\[f114\]b, one can see dramatic changes taking place with Ca substitution at T = 10 K, and these changes also affect the 2M sidebands. In the FWHM is 90 cm$^{-1}$. Ca substitution leads to a hardening and a substantial broadening of the magnetic peak, accompanied by a drastic decrease in its scattering intensity. One Ca atom in the formula unit of increases the spectral width by 30%, see the inset of Fig. \[f114\]b. This effect can be ascribed to intrinsic inhomogeneity rather than to a marginal effect on the lattice constants or to hole transfer from the chains to the ladders [@OsafunePRL97]. The FWHM’s in x = 8 and are about the same within the error bars, which is remarkable because the latter is an undoped material, so the width of the peak seems not to be related to the presence of carriers in the ladders. Comparison of our data in and SrCu$_{2}$O$_{3}$ [@GoblingPRB03], both containing 2LL’s at half filling, shows clearly that out-of-plane inhomogeneities have a major impact on the magnetic properties of the ladders. By comparing Figs. \[f111\]b and \[f114\]b one can also note a resemblance between the effect of temperature in and that of Ca substitution in . Fig. \[f115\] shows that temperature effects in and are suppressed compared to . In this sense one could introduce an ’effective’ temperature associated with the cation substitution level. A comparison to 2D cuprates is again interesting: in the latter case the 2M is broad to start with even in pure materials, but a different number of cation types between the Cu-O layers (higher in insulating Bi$_{2}$Sr$_{2}$Ca$_{0.5}$Y$_{0.5}$Cu$_{2}$O$_{8}$ than, for instance, in La$_{2}$CuO$_{4}$) does not lead to qualitative changes in the 2M width [@SugaiPRB90]. The data in Fig.
\[f114\]b suggest that an appropriate phenomenological model to describe the ladder Hamiltonian in Ca doped crystals is $H = \sum_{leg} J^{ij}_{||} {\bf S}_i \cdot {\bf S}_j + \sum_{rung} J^{ij}_{\perp} {\bf S}_i \cdot {\bf S}_j$, where to lowest order the super-exchange integrals $J^{ij}$ have a contribution proportional to the relative local atomic displacements ${\bf u}_{ij}$ according to $J^{ij}({\bf u}) = J_{0} + (\nabla J) {\bf u}_{ij}$. The effects of thermal fluctuations on the super-exchange integrals $J_{ij}$ can be included in a similar phenomenological approach [@NoriPRL95], which could explain the strong resemblance between the effects of Ca substitution and temperature seen in Figs. \[f114\]b and \[f115\]. We expect the ratio $<J_{\perp}> / <J_{||}>$ to change with Ca content, as structural studies show that the Cu-O bonds along the rungs are less affected by Ca substitution than the Cu-O bonds parallel to the ladder legs [@OhtaJPSJ97BougerolPC00]. Also, the hardening of the magnetic peak from 3000 cm$^{-1}$ in to about 3375 cm$^{-1}$ at x = 8 is consistent with the reduction in the lattice constants at higher Ca substitution levels, which leads to a higher super-exchange $J$, a parameter very sensitive to the interatomic distances [@CooperPRB90].

Density-Wave Correlations in Doped Two-Leg Ladders
==================================================

Density Waves: Competing Ground State to Superconductivity
----------------------------------------------------------

So far we have investigated mainly the magnetic properties of 2LL’s around half filling and analyzed the effects of temperature and of substitution on the Sr site, especially in terms of their influence on the high energy 2M scattering around 3000 cm$^{-1}$. We observed that both the temperature and the isovalent cation substitution produce drastic changes in the optical and Raman spectra from the far IR up to energies of several eV.
These properties, along with the established metal-insulator transition found around 60% Ca doping, the occurrence of superconductivity and the similarities with 2D cuprates, nurture the hope that a study of the low energy physics in may reveal universal aspects related to the nature of the ground states in low dimensional correlated spin $S = 1/2$ systems. It is the purpose of this section to present evidence for the existence of density wave correlations in doped 2LL’s at all Ca substitution levels [@GozarPRL03]. Ground states with broken translational symmetry have been discussed in the context of low dimensional systems [@SachdevScience00]. Examples are states which display a long ranged oscillation of the charge and/or spin densities, as well as states which acquire a topological bond order due to modulations of the inter-atomic coupling constants, for example of the super-exchange integrals. It has indeed been found that charge density waves (CDW) and superconductivity are the predominant competing ground states, and the balance between them is ultimately determined by the microscopic parameters of the theoretical models [@DagottoScience96; @DagottoRPP99]. So, what are the low energy excitations one expects from a doped 2LL? Most of the theoretical studies of 2LL’s consist of numerical evaluations, especially exact diagonalization (ED) and density matrix renormalization group (DMRG) techniques, performed within the $t_{\parallel} - J_{\parallel}, t_{\perp} - J_{\perp}$ model, see Fig. \[f116\], but not taking into account the long range Coulomb interactions. It is interesting to discuss first the cases corresponding to only one or two holes in the ladder structure. If one hole is present on a ladder rung (Fig. \[f116\]a) it can sit on a bonding or an antibonding orbital. Hopping will lead to bands separated roughly by $2 t_{\perp}$, with a bandwidth proportional to $t_{\parallel}$ [@TroyerPRB96]. How tightly is the charge bound to the remaining free spin?
This question is connected to the problem of possible spin-charge separation. Evaluations of hole-spin correlations on a $2 \times 10$ cluster suggest that the unpaired spin remains tightly bound to the injected hole [@TroyerPRB96], so that this composite state carries both charge and spin, in this sense being similar to a quasi-particle. This is in contrast with the spin-charge separation in the 1D AF chain. If two holes are present (Fig. \[f116\]b) a property appears which seems to be very robust for 2LL’s: pairing. The following discussion can be intuitively understood starting from the strong coupling limit, but studies of finite clusters within the $t_{\parallel} - J_{\parallel}, t_{\perp} - J_{\perp}$ model show that this qualitative picture holds in the relevant isotropic limit $J = J_{\parallel} = J_{\perp}$ and $t = t_{\parallel} = t_{\perp}$. If one additional hole is injected in the ladder, it will tend to sit on the same ladder rung as the first, see Fig. \[f116\]b, in order to minimize the magnetic energy [@DagottoScience96; @DagottoRPP99]. The lowest band is generated by the coherent propagation of hole pairs and it is found in the spin singlet channel. At finite energies there are continua of electronic states generated by breaking the pairs, the singlet and the triplet states being almost degenerate when the holes are far apart [@TroyerPRB96]. Note that in the case of 2LL’s it is the spin-spin correlations which effectively lead to hole pairing, and not an explicit hole-hole attractive interaction, and also that the main energy gain due to pairing is set by the magnitude of the spin gap. The ’easy’ pairing and the kinetic energy gain of the paired holes when pairs are far apart from each other are a non-trivial difference with respect to the 2D cuprates, in the sense that in the latter case evaluations prompted by the above arguments lead to macroscopic phase separation.
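The energetic argument for rung pairing can be made explicit in the strong coupling limit $J_{\perp} \gg J_{\parallel}, t$; the estimate below is the standard textbook one, not a result of the cluster calculations cited above. Each undoped rung gains the singlet energy $-\frac{3}{4}J_{\perp}$, and a hole on a rung removes that gain, so the binding energy of two holes is

$$E_{B} = E(\mathrm{two\ separate\ rungs}) - E(\mathrm{same\ rung}) = \Big(E_{0} + 2\cdot\tfrac{3}{4}J_{\perp}\Big) - \Big(E_{0} + \tfrac{3}{4}J_{\perp}\Big) = \tfrac{3}{4}J_{\perp} > 0,$$

where $E_{0}$ is the energy of the undoped ladder. Kinetic corrections of order $t^{2}/J_{\perp}$ reduce this value away from the strong coupling limit, but the numerics quoted above indicate that the binding survives down to the isotropic point.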
Since the spin gap $\Delta_{S}$ is to some degree a measure of the hole binding energy, it is interesting to discuss its evolution with doping. In the undoped case the lowest triplet excitation is the branch with a minimum at $\pi$ shown in Fig. \[f15\], and its magnitude is governed by $J_{\perp}$. The spin gap remains substantial at isotropic coupling, relevant for experiments, where it is known numerically to be close to $J / 2$ in the model of Eq. (\[e11\]). This excitation evolves continuously with doping. For instance, calculations on a $2 \times 24$ cluster at $1/8$ doping and isotropic coupling show that the spin gap is about $0.275 J$, roughly half of the value in the undoped case [@PoilblancPRL03]. Interestingly, pairing generates a different type of singlet-triplet transition [@TroyerPRB96; @PoilblancPRL03]. This excitation, present only in the doped case, consists of breaking a singlet hole pair into two separate quasi-particles in the triplet channel. The different kinetic energy gain of the separated holes versus that of the magnon in the undoped case leads to different energies for these two types of magnons. It was argued [@TroyerPRB96] that the spin gap evolves discontinuously in 2LL’s because it is the 2$^{nd}$ type of magnon which costs less energy. Later ED and DMRG work [@PoilblancPRB00] confirmed this point and showed that in a relevant parameter range the energy of this new type of spin gap lies below the pair breaking continua, because a triplet can hybridize with a state formed by two holes (one in a bonding and one in an antibonding orbital) forming bound $S = 1$ magnon-hole states. Once the stability of the hole pair is established in the relevant ranges of the microscopic parameters, it is up to the estimation of residual interactions between the hole pairs and spins to determine what kind of ground state is chosen.
Superconducting fluctuations were probed within the $t - J$ model by evaluating numerically the pair-pair correlation function, a measure of the stability of the motion of the hole pair in the spin-gapped phase. This function, which is to be evaluated in the limit $l \rightarrow \infty$, is defined as $P(l) = \frac{1}{N} \sum_{i} <\Delta_{i}^{\dag} \Delta_{i + l}>$ where $\Delta_{i}$ is the pair destruction operator at site ’i’ given by $\Delta_{i} = \frac{1}{\sqrt{2}} ( c_{i1,\uparrow} c_{i2,\downarrow} - c_{i1,\downarrow} c_{i2,\uparrow})$ (here the ’c’ operators are defined within the subspace of no double occupancy). Early work showed an increase in the pairing tendency as the ratio $J_{\perp} / J_{\parallel}$ was increased [@DagottoPRB92RiceEL93]. It has been found for a $2 \times 30$ cluster at $n = 1/8$ doping that SC correlations are dominant and that they decay algebraically with $l$ [@HaywardPRL95]. The exponent was found to be smaller than one, while density-density correlations were observed to decrease as $l^{-2}$, implying that SC is the dominant phase. In the same system, using Green’s function techniques, the frequency and wavevector dependence of the superconducting gap [@PoilblancPRL03] was shown to have a structure with nodes, much like the $d$-wave pairing symmetry in 2D cuprates. Pairing, however, does not necessarily mean superconductivity. Another possibility is that the bound (or single) holes form a spatially ordered pattern, i.e. a CDW ground state. It has been argued from DMRG calculations that the phase diagram of isotropic $t - J$ 2LL’s, in a relevant range given for instance by $J / t \ < \ 0.4$, has as its generic phase one with gapped spin modes and a gapless charge mode [@WhitePRB02]. This ’C1S0’ phase [@BalentsPRB96] is characterized by $d$-wave like pairing and $4 k_{F}$ CDW correlations, with superconductivity being the dominant one [@WhitePRB02].
Note that this $4 k_{F}$ CDW has a wavelength which is half that of a conventional Peierls transition. Phase separation occurs roughly at values $J / t \ > \ 2.5$ [@TroyerPRB96; @WhitePRB02]. These numerics also argue that besides these two phases there are small, fully gapped regions (in both the spin and charge sectors), found generally at commensurate dopings, where a CDW occurs [@WhitePRB02]. The characteristic wavevector of this state is given by $2(k_{Fb} + k_{Fa})$, where $k_{Fb}$/$k_{Fa}$ stand for the Fermi wavevectors of the bonding/antibonding electronic orbitals discussed in the paragraph on the charge dynamics of a ladder with one hole. Interestingly, a finite spin gap is not found to be crucial for the existence of such a CDW; so, while the spin gap determines the pairing, the hole crystal can be made either out of single holes or out of hole pairs [@WhitePRB02]. On the experimental side, the study of low energy physics in is encumbered, compared to 2D cuprates, by the following ’non-intrinsic’ facts:

- The structure is quite complicated due to the presence of both chains and ladders. We found that these subsystems interact, so one expects that the supermodulation will affect the carrier dynamics.

- has a finite hole concentration in the ladder structure to start with. Ca substitution (and maybe temperature) redistributes the charges between chains and ladders, but up to now there is no accurate quantitative determination of this effect; moreover, there are conflicting views in the literature [@OsafunePRL97; @NuckerPRB00].

- The effect of O stoichiometry at the crystal surface may be important in accurately determining the carrier concentration; besides, fresh surfaces are not easy to obtain because these materials do not cleave in the $(ac)$ plane.

The problem of what happens with the spin gap in the doped ladder is an open issue from an experimental point of view.
On one hand, neutron scattering finds $\Delta_{S} = 32$ meV in both [@EcclestonPRL98] and x = 11.5 [@KatanoPRL99], which says that the spin gap does not change its value. On the other hand, from the Knight shift (proportional to the uniform susceptibility) and the spin-lattice relaxation data, NMR measurements find a decrease of the ladder spin gap by about 50% [@MagishiPRB98]. Mayaffre *et al.*, using the same technique, tried to relate directly the disappearance of the spin gap to the occurrence of superconductivity under pressure [@MayaffreScience98]. Although a finite spin gap is a central ingredient of the theories to date predicting that doped ladders are superconducting, it is still not quite clear what the origin of the discrepancy between the INS and NMR data is.

Electromagnetic Response of Charge Density Wave Systems
-------------------------------------------------------

The purpose of this section is to discuss the main properties of CDW systems and their characteristic excitations. In the CDW state a gap opens at the Fermi energy, and this is observed in $dc$ transport as a metal-insulator transition taking place at T$_{c}$. Due to the change in the lattice constant there are also new phononic modes allowed in the CDW state. In real systems, which are not strictly 1D, it is possible that not all of the Fermi surface gets gapped, so metallic behavior can continue below T$_{c}$, as is the case in NbSe$_{3}$. Since the CDW transition involves ionic motions, it can be directly probed by X-ray or neutron scattering [@GrunerBook].

[**Excitations out of the CDW state –**]{} One feature which can be seen in the optical absorption spectra is due to the excitation of electrons across the CDW gap $2 \Delta$. This belongs to the single particle channel. Since the Debye energy is much smaller than the Fermi energy, the superconducting gaps from BCS theory are typically smaller than the gap excitations in the CDW state.
For instance, in blue bronze (K$_{0.3}$MoO$_{3}$), one of the most studied quasi-1D CDW materials, this energy is found at about $2 \Delta = 125$ meV [@DegiorgiPRB91]. There are also collective excitations of the condensate, related to the space and time variations of the complex order parameter. Excitations occur due to both phase (phasons) and amplitude (amplitudons) fluctuations. The interest is in understanding the long wavelength limit of these excitations. As for the amplitude mode, its energy $\omega_{A}$ in the limit $q \rightarrow 0$ is finite. An oscillation of the gap amplitude $\delta(\Delta)$ will also lead to an oscillation of the ionic positions $\delta(u)$. The decrease in the condensation energy, $\delta(E_{cond}) = D(\epsilon_{F}) \delta(\Delta^{2}) / 2$, will be equal to the extra kinetic energy associated with the ionic displacements, $M N \omega^{2}_{A} (q = 0) \delta(u^{2}) / 2$, where $M$ and $N$ are the ionic mass and number respectively. As a result one obtains a finite value for $\omega_{A} (q \rightarrow 0)$. The situation is different for the long wavelength phase mode. Such a motion is a superposition of electronic charge oscillations along with ionic oscillations, which leads to a high ’effective mass’ $m^{*}$. In the $q \rightarrow 0$ limit it involves a translational motion of the undistorted condensate, so it costs no energy. Its dispersion in the $q \rightarrow 0$ limit is given by $\omega^{2}_{\Phi} (q) = (m / m^{*}) v^{2}_{F} q^{2}$ [@LeeSSC74]. Since phase fluctuations involve dipole fluctuations, due to the displacements of the electronic density with respect to the ions, the phason is a feature which will be seen in the real part of the optical conductivity data. The amplitude mode at $q \rightarrow 0$ does not involve such displacements, so it is expected to be a Raman active mode. Most interesting is that in the ideal case considered here the phase mode is current carrying and it can slide without friction [@Frohlich].
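The finiteness of $\omega_{A}(q \rightarrow 0)$ follows in one line from the energy balance just stated if, as in mean field theory, one assumes the gap to scale linearly with the distortion, $\delta\Delta = g\, \delta u$ (the coupling constant $g$ is introduced here only for this sketch):

$$\frac{1}{2}\, D(\epsilon_{F})\, \delta(\Delta^{2}) = \frac{1}{2}\, M N \omega^{2}_{A}(q = 0)\, \delta(u^{2}) \;\; \Longrightarrow \;\; \omega^{2}_{A}(q = 0) = \frac{D(\epsilon_{F})\, g^{2}}{M N},$$

a finite value which, to this order, is independent of $q$.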
As a result this excitation will be seen as a $\delta$ function at zero frequency. The spectral weight of this peak is given by $m / m^{*}$ and it is stolen from the single particle conductivity, which becomes an edge instead of a singularity reflecting the divergence in the density of states [@LeeSSC74], see also Figs. 8 and 9 in Ref. [@DegiorgiPRB91]. The interaction with impurities or lattice commensurabilities destroys the infinite conductivity, and the phase mode is pinned. As a result, this excitation is shifted to finite frequencies which characterize the particular impurity potential. Fig. \[f119\] shows the example of the blue bronze, the pinning mode and the gap feature being seen around 2 and 1000 cm$^{-1}$ respectively.

[**Zero frequency and microwave transport in the CDW state –**]{} The existence of a gap and of low energy collective excitations leads to several other properties which are seen in the $dc$ and finite frequency (typically microwave) conductivity. In an $I - V$ characteristic one can, roughly speaking, distinguish three regimes. At low electric fields the behavior is Ohmic and the conductivity at finite temperatures is due to electrons (normal carriers) thermally excited out of the condensate. Above a threshold field $E_{T}^{(1)}$, related to the magnitude of the pinning potentials, the contribution of the condensate sets in. The CDW starts moving as a whole, and this motion is accomplished through distortions of the phase and/or amplitude of the condensate. At high fields, above some other threshold field $E_{T}^{(2)}$, the external forces cause a fast sliding motion of the CDW which ’ignores’ the underlying pinning potentials, and the current increases very steeply (almost infinite differential conductance) for small variations of the applied voltage, see Fig. \[f119\]b. This regime is reminiscent of the ideal case where ’Fröhlich superconductivity’ should occur.
The $I - V$ curve in the 2$^{nd}$ and 3$^{rd}$ regimes is non-linear and temperature dependent. Notably, for an applied $dc$ voltage the motion of the CDW in a clean sample also leads to a finite frequency component of the current; the fundamental frequency of this oscillatory component is directly related to the wavelength of the density wave.

[**Low frequency CDW relaxation –**]{} Another low energy feature observed in many well established CDW compounds is a relaxational peak with a strongly temperature dependent energy and a damping related to the $dc$ conductivity of the material. This loss peak is seen typically in the microwave region, at energies much lower than the pinning frequency. For example, in K$_{0.3}$MoO$_{3}$ the frequency range is 10$^{4}$ - 10$^{6}$ Hz for temperatures between 50 and 100 K, while the pinned mode is roughly at $\Omega_{0} \approx 60$ GHz, see Figs. \[f120\]d and \[f119\]a respectively. In Ref. [@LittlewoodPRB87] the author proposes a scenario to reconcile the observations at low and high frequencies, a summary of the results being shown in Fig. \[f120\]a-c. The interpretation of the damped excitation is that it is a longitudinal density wave relaxational mode arising from the interaction with normal carriers. It is argued that this mode, which should not be seen in the transverse channel, appears nevertheless in the dielectric response because of the non-uniform pinning, which introduces disorder. By making the wavevector $k$, according to which the modes can be classified as transverse or longitudinal, a ’not so good quantum number’, disorder mixes the pure longitudinal and transverse character of the excitations. In other words, the breaking of the selection rules makes the longitudinal modes appear as poles, rather than zeros, of the dielectric response function. The main results of the theory in Ref. [@LittlewoodPRB87] are shown in Fig. \[f120\], where the CDW dielectric function is plotted as a function of frequency.
The distribution of pinning centers (a measure of disorder) is modeled by a function $g_{n} (x) = (n^{n + 1} / n!) x^{n} \exp(-nx)$ which is peaked at $x = 1$ and satisfies $g_{n \rightarrow \infty} (x) = \delta(x - 1)$. In Fig. \[f120\]a one can see that the disorder leads to the appearance of a mode at lower frequencies which steals spectral weight from the pinning mode situated at the average frequency $\Omega_{0}$. The stronger the disorder, the larger the spectral weight redistribution between the two modes. Panels (b) and (c) in Fig. \[f120\] show the real and the imaginary parts of the CDW dielectric function for a given $n$. They are related by Kramers-Kronig relations, so the drop in $Re(\varepsilon)$ leads to a peak in $Im(\varepsilon)$. These data are plotted for several values of the relaxational time $\tau_{1}$, which mimics (through the dependence on conductivity, see the caption of Fig. \[f120\]) a linear variation in temperature. Decreasing the temperature leads to a decrease in conductivity, hence a higher $\tau_{1}$, and to a softening of the relaxational peak, which moves away from $\Omega_{0}$.

[**CDW coupling to the uncondensed carriers –**]{} Here is a simplified version of the derivation of the longitudinal screening mode shown in Fig. \[f120\]. In this approach the CDW is modeled by an oscillator with a characteristic pinning frequency $\Omega_{0}$ and we neglect internal distortions. The only other ingredients of the model are the presence of a finite electron density corresponding to thermally activated quasi-particles and the assumption that the interaction between these two fluids is only $via$ an electromagnetic field. The calculation of the longitudinal CDW modes, as well as of their coupling to the normal, uncondensed, electrons, follows almost identically the treatment of longitudinal phonons and their coupling to plasma oscillations in metals.
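The quoted properties of $g_{n}(x)$ (normalization, a maximum at $x = 1$, and a narrowing toward $\delta(x - 1)$ with increasing $n$) are easy to verify numerically; this is only a check of the formula, not part of the model of Ref. [@LittlewoodPRB87]:

```python
from math import factorial, exp

def g(n, x):
    """Pinning-center distribution g_n(x) = (n^(n+1)/n!) x^n exp(-n x)."""
    return (n ** (n + 1) / factorial(n)) * x ** n * exp(-n * x)

def trapz(f, a, b, steps=100000):
    """Simple trapezoidal integration on [a, b]."""
    h = (b - a) / steps
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps)))

for n in (2, 8, 32):
    norm = trapz(lambda x: g(n, x), 0.0, 30.0)   # should be ~1 for each n
    xs = [i / 1000 for i in range(1, 5000)]
    x_peak = max(xs, key=lambda x: g(n, x))      # should sit at x = 1
    print(n, round(norm, 4), round(x_peak, 3))
```

The peak height $g_{n}(1)$ grows with $n$ while the normalization stays at unity, which is the narrowing toward $\delta(x - 1)$ used in the text.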
In the following, $\vec{u}$ is a uniform displacement of the CDW (in a real crystal this holds within a volume determined by the longitudinal and transverse correlation lengths), $\rho_{c}$ and $m^{*}$ are the CDW charge and mass densities, and $\gamma_{0}$ is an intrinsic damping coefficient. The time derivatives for oscillations at a given frequency $\omega$ are replaced by $\partial / \partial t \rightarrow -i \omega$. The derivation can be made using the general relations of the Born and Huang model [@BornHuang]: $$\begin{aligned} - \omega^{2} \vec{u} = - \Omega_{0}^{2} \vec{u} + i \omega \gamma_{0} \vec{u} + \frac{\rho_{c}}{m^{*}} \vec{E} \label{e14} \\ \vec{P} = \rho_{c} \vec{u} + \frac{\varepsilon_{\infty} - 1}{4 \pi} \vec{E} \label{e15}\end{aligned}$$ Here $\varepsilon_{\infty}$ accounts for the background contribution arising from interband transitions. In the absence of carriers, neglecting the damping and using the electrostatic approximation ($\nabla \times \vec{E} = 0$, which means that the field is purely longitudinal, so $\vec{E} = \vec{E}_{L}$), these equations determine the characteristic transverse and longitudinal frequencies. The equation $- \omega^{2} \vec{u}_{T} = - \Omega_{0}^{2} \vec{u}_{T}$ (because $\vec{E}_{T} = 0$) allows the identification $\Omega_{0} = \Omega_{T}$, i.e. the frequency of the transverse mode. The longitudinal modes generate a finite electrostatic field. Eq. (\[e15\]) and Gauss’ law $\nabla (\vec{E} + 4 \pi \vec{P}) = 0$ lead to $\nabla (4 \pi \rho_{c} \vec{u}_{L} + \varepsilon_{\infty} \vec{E}) = 0$, so $\vec{E} = - 4 \pi \rho_{c} \vec{u}_{L} / \varepsilon_{\infty}$. Plugging this relation into Eq.
(\[e14\]) one obtains $- \omega^{2} \vec{u}_{L} = - \Omega_{0}^{2} \vec{u}_{L} - (4 \pi \rho_{c}^{2} / \varepsilon_{\infty} m^{*}) \vec{u}_{L}$, which gives the frequency of the longitudinal mode $\Omega_{L} = \sqrt{\Omega_{0}^{2} + \Omega_{p}^{2} / \varepsilon_{\infty}}$, where the plasma frequency is given by $\Omega_{p}^{2} = 4 \pi \rho_{c}^{2} / m^{*}$. What is the dynamics of the CDW in an external field $E_{0}$ of frequency $\omega$? In the transverse channel the force on the right hand side of Eq. (\[e14\]) is $\rho_{c} E_{0} / m^{*}$, leading to $\vec{u}_{T} = [(\rho_{c} / m^{*}) / (- \omega^{2} + \Omega_{0}^{2} - i \omega \gamma_{0})] \vec{E}_{0}$. Using Eq. (\[e15\]), the relation $\varepsilon = 1 + 4 \pi \chi$ with $\chi = P / E$, as well as $\varepsilon (\omega) = 1 + 4 \pi i \sigma / \omega$, one obtains for the collective contribution to the dielectric function and to the real part of the conductivity: $$\varepsilon_{CDW} (\omega) = \frac{\Omega_{p}^{2}}{\Omega_{0}^{2} - i \omega \gamma_{0} - \omega^{2}} \ \ \ \ \ \ \ \ \sigma_{CDW} (\omega) = \frac{1}{4 \pi} \frac{- i \omega \Omega_{p}^{2}}{\Omega_{0}^{2} - i \omega \gamma_{0} - \omega^{2}} \label{e16}$$ These equations yield a peak at the pinning frequency $\Omega_{0}$ in both $\varepsilon (\omega)$ and $\sigma (\omega)$. We deal now with the dynamics of the longitudinal modes in the presence of carriers. In this case one has to worry about the associated internal fields and screening effects. One can derive a relation between the CDW displacement $\vec{u}_{L}$ and the local field, which should reduce to $\vec{E} = - 4 \pi \rho_{c} \vec{u}_{L} / \varepsilon_{\infty}$ in the limit of zero $dc$ conductivity. The only difference now is that the first Maxwell equation changes to $\nabla (\vec{E} + 4 \pi \vec{P}) = 4 \pi \rho_{qp}$, where $\rho_{qp}$ is the quasi-particle charge density.
The continuity equation $- i \omega \rho_{qp} + \nabla \vec{j} = 0$ and Ohm’s law $\vec{j} = \sigma_{qp} \vec{E}$ lead to the relation $i \omega \rho_{qp} = \sigma_{qp} \nabla \vec{E}$ so, using Gauss’ law, one obtains $\nabla (4 \pi \sigma_{qp} \vec{E} - i \omega \vec{E} - 4 \pi i \omega \vec{P}) = 0$. Inserting the expression for the polarization from Eq. (\[e15\]) and taking into account that we deal with longitudinal fields one obtains: $$\vec{E} = \frac{4 \pi i \omega \rho_{c}}{4 \pi \sigma_{qp} - i \omega \varepsilon_{\infty}} \vec{u}_{L} \label{e17}$$ Obviously, for $\sigma_{qp} = 0$, Eq. (\[e17\]) gives the result obtained in the previous paragraph in the absence of carriers. For calculating the longitudinal response, one has thus to replace $\vec{E}$ in (\[e14\]) with the sum of the external field $\vec{E}_{0}$ and the polarization field given by (\[e17\]), obtaining a linear relation between $\vec{u}_{L}$ and $\vec{E}_{0}$. Using (\[e15\]) one obtains the CDW contribution to the longitudinal dielectric function $\varepsilon_{L}$, which is relevant for Raman scattering, as: $$\varepsilon_{L} (\omega) = \frac{\Omega_{p}^{2}}{\Omega_{0}^{2} - \omega^{2} - i \gamma_{0} \omega - \frac{i \omega \Omega_{p}^{2}}{4 \pi \sigma_{qp} - i \omega \varepsilon_{\infty}}} \label{e18}$$ In the limit of high frequencies this function has a pole at $\sqrt{\Omega_{0}^{2} + \Omega_{p}^{2} / \varepsilon_{\infty}}$, corresponding to the CDW plasmon, which is the energy of the longitudinal collective mode. In the limit of low frequencies and neglecting the intrinsic damping $\gamma_{0}$, Eq.
(\[e18\]) reduces to the following relaxational mode: $$\varepsilon_{L} (\omega) = \frac{A}{1 - i \omega \tau} \ \ \ \mathrm{with} \ \ A = \frac{\Omega_{p}^{2}}{\Omega_{0}^{2}} \ \ \ \mathrm{and} \ \ \ \Gamma = \frac{1}{\tau} = 4 \pi \sigma_{qp} \frac{\Omega_{0}^{2} }{\Omega_{p}^{2}} = 4 \pi \sigma_{qp} \frac{1}{\varepsilon_{0} - \varepsilon_{\infty}} \label{e19}$$ Equations (\[e16\]) and (\[e19\]) describe the features seen in Fig. \[f120\]. The proportionality in (\[e19\]) between $\Gamma$ and the $dc$ conductivity is the result of normal carrier backflow, which screens the collective polarization and dissipates energy through lattice momentum relaxation.

Density Waves in 
-----------------

### Low energy transport and Raman

In Fig. \[f121\]a we show the components of the dielectric response $\varepsilon = \varepsilon_{1} + i \varepsilon_{2}$ as a function of frequency (on a log scale) for several temperatures [@GirshScience02]. The imaginary part shows strongly damped, inhomogeneously broadened peaks whose energies are temperature dependent. These relaxational modes lead to variations in the real part of the dielectric function $\varepsilon_{1}$ up to 300 K and even above. These data resemble the dielectric response measured in the CDW compound K$_{0.3}$MoO$_{3}$, which is shown in Fig. \[f120\]. Fig. \[f121\]b shows Raman data in a higher temperature range. Similarly to Fig. \[f121\]a, we observe an overdamped feature which moves to lower frequencies with cooling. Below about T = 200 K this excitation disappears below our low energy cut-off of about 1.5 cm$^{-1}$ (equivalent to 45 GHz or 0.185 meV). The Raman response function can be well fitted with the expression: $$\chi'' (\omega, T) = A(T) \frac{\omega \Gamma}{\omega^{2} + \Gamma^{2}} \label{e110}$$ The temperature dependence of the peak intensity is shown in the inset of Fig. \[f121\]. $A(T)$ decreases by about 60% from 300 to 640 K.
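A small consistency check, in Python: the imaginary part of the Debye form of Eq. (\[e19\]) is identically equal to the fit function of Eq. (\[e110\]) with the same $\Gamma$ (the amplitude $A$ and rate $\Gamma$ below are illustrative, not fitted values):

```python
import numpy as np

# Im[A / (1 - i*omega*tau)] = A*omega*Gamma / (omega^2 + Gamma^2) with tau = 1/Gamma
A, Gamma = 2.0, 0.3       # illustrative amplitude and relaxation rate
tau = 1.0 / Gamma

w = np.linspace(0.01, 10.0, 1000)
eps_L = A / (1 - 1j * w * tau)                 # relaxational mode, Eq. (e19)
chi2_fit = A * w * Gamma / (w**2 + Gamma**2)   # Raman fit function, Eq. (e110)

max_dev = np.max(np.abs(eps_L.imag - chi2_fit))  # zero up to rounding
```

This is why the same $\Gamma$ extracted from the Raman fits can be compared directly with the relaxation rate of the screened longitudinal mode.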
The temperatures shown in this figure include laser heating effects and were determined from the ratio of the Stokes to anti-Stokes spectra at each temperature. The data in Fig. \[f121\]a also allow the extraction of a characteristic transport relaxational time $\tau(T)$ at every temperature by a fit to a relaxational type behavior. Using this result, in the entire temperature range the dielectric response between 20 Hz and 10$^{6}$ Hz from Fig. \[f121\] can be scaled onto a universal generalized Debye relaxational curve given by: $$\varepsilon(\omega) = \varepsilon_{\infty} + \frac{\varepsilon_{0} - \varepsilon_{\infty}}{1 + [i \omega \tau(T)]^{1 - \alpha}} \label{e111}$$ The parameter $\alpha$ characterizes the width of the distribution of relaxation times; conventional Debye relaxation corresponds to $\alpha = 0$. The fit to Eq. (\[e111\]) is shown in Fig. \[f122\], where the real and imaginary parts of $\varepsilon$ are plotted as a function of the dimensionless parameter $\omega \tau$. The parameter $\alpha$ determined from the fit is $\alpha = 0.42$. The temperature dependences of the relaxational frequencies extracted from the Raman data, $\Gamma(T)$, and from the microwave conductivity data, $\tau^{-1} (T)$, are plotted as a function of inverse temperature in Fig. \[f123\]. On the same plot we show the Arrhenius behavior of the $dc$ conductivity. The $dc$ conductivity in this figure shows activated behavior and the break around $T^{*} = 150$ K points to the existence of two regimes. At high temperatures the activation energy we obtained is $\Delta_{dc}^{T > T^{*}} = 2078$ K, consistent with previous results [@McElfreshPRB89]. A value $\Delta_{dc}^{T < T^{*}} = 1345$ K is obtained at low temperatures. In this figure we observe that the relaxational frequencies have an activated behavior and that the corresponding activation energies match those of the conductivity both above T$^{*}$ (the Raman data) and below T$^{*}$ (the microwave transport data).
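The effect of the exponent $\alpha$ in Eq. (\[e111\]) can be made concrete with a short Python sketch (the values of $\varepsilon_{0}$ and $\varepsilon_{\infty}$ are illustrative): for $\alpha = 0$ the loss peak of a Debye relaxer has the textbook full width at half maximum of about 1.14 decades in $\omega\tau$, while $\alpha = 0.42$ broadens it to roughly 2.4 decades:

```python
import numpy as np

def eps_gd(wt, alpha, eps0=1.0e6, eps_inf=10.0):
    """Generalized Debye relaxation, Eq. (e111); wt = omega*tau."""
    return eps_inf + (eps0 - eps_inf) / (1 + (1j * wt)**(1 - alpha))

wt = np.logspace(-4, 4, 8001)                 # omega*tau, 0.001 decade steps
loss_debye = np.abs(eps_gd(wt, 0.0).imag)     # conventional Debye, alpha = 0
loss_cc    = np.abs(eps_gd(wt, 0.42).imag)    # alpha = 0.42 from the fit

def fwhm_decades(loss):
    """Full width at half maximum of the loss peak, in decades of omega*tau."""
    half = loss >= 0.5 * loss.max()
    return np.log10(wt[half][-1] / wt[half][0])

w_debye = fwhm_decades(loss_debye)   # ~1.14 decades (half-max at wt = 2 +/- sqrt(3))
w_cc = fwhm_decades(loss_cc)         # substantially broader distribution
```

Both loss curves peak at $\omega\tau = 1$; only the width of the underlying distribution of relaxation times changes with $\alpha$.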
This characteristic temperature at which the $dc$ activation changes was also discussed at the end of section 1.3.1, where we noted that it is related to the increase of the electronic Raman continuum, to the variation of the 2M scattering width and also to the temperature dependent intensity of the chain superstructure peaks seen by X-ray scattering. The inset in Fig. \[f123\] shows the $dc$ conductivity as a function of the applied field. The arrows mark two threshold fields. Below $E_{T}^{(1)} \approx 0.2$ V/cm the conductivity obeys Ohm’s law and has the Arrhenius temperature dependence shown in the main panel. For electric fields above $E_{T}^{(1)}$ the $I - V$ characteristics change from linear to approximately quadratic. At much higher fields, above 50 V/cm, there is a second threshold which marks a very sharp rise of the current. The differential conductivity in this regime is very high, more than 10$^{5}$ $\Omega^{-1} cm^{-1}$, an estimate limited by contact effects and most likely carried by inhomogeneous filamentary conduction. We turn now to the interpretation of the data shown in Figs. \[f121\], \[f122\] and \[f123\]. We remark that the energy range of the relaxational peaks seen in Fig. \[f121\] is much lower than the thermal energy or the magnetic and $dc$ activation gaps. This is therefore incompatible with single-particle type excitations and suggests that the low energy charge dynamics is driven by correlated collective behavior. We identify this strongly temperature dependent feature as a CDW relaxational mode in the longitudinal channel, screened due to the interaction with thermally excited quasiparticles, as described in the previous section. We note that electronic Raman scattering can probe directly the longitudinal channel [@MilesBook] because the Raman response function, $\chi''(\omega)$, is proportional to Im$[1 / \varepsilon (\omega)]$, a quantity governed by $\varepsilon_{L}$ from Eq. (\[e18\]).
In what follows we support this assignment by a quantitative comparison with this simple two-fluid model and by the results of the non-linear conductivity measurements as a function of electric field. Microwave and millimeter wave spectroscopy [@KitanoEL01] supports our assignment. At the end of this chapter, we discuss recent (and direct) evidence for the existence of CDW correlations in provided by X-ray measurements [@PeteNature04]. The immediate question prompted by our claim, which essentially ascribes a common origin to our observations in Fig. \[f121\] and the properties of K$_{0.3}$MoO$_{3}$ (an established CDW material) shown in Figs. \[f119\] and \[f120\], is: If we observe a property related to the pinning of an existing CDW, where is the pinned phase mode? A microwave experiment performed by Kitano *et al.* reported a relatively small and narrow peak between 30 and 70 GHz in the $c$-axis conductivity which was observed up to moderately high temperatures [@KitanoEL01]. The authors attributed this resonance to a collective excitation and speculated about a possible CDW origin. It turns out that our data, along with the results of Kitano *et al.* and reflectivity measurements, form a basis on which this question can be addressed quantitatively. Fig. \[f124\] shows the main result of Ref. [@KitanoEL01] and the plot of -Im$[1 / \varepsilon (\omega)]$ obtained by our Kramers-Kronig analysis of ’raw’ reflectivity data, see Ref. [@OsafunePRL97]. We believe that the microwave resonance in the 30 to 70 GHz range in Fig. \[f124\]a corresponds to the average pinning frequency of the CDW in . Along with a plasma edge $\Omega_{p} \approx 3300$ cm$^{-1}$ extracted from the loss function (see Fig. \[f124\]b) and using Eq. (\[e16\]), which gives $\varepsilon_{0} - \varepsilon_{\infty} = \Omega_{p}^{2} / \Omega_{0}^{2}$, one obtains for the low frequency dielectric function values of the order of 10$^{6}$, consistent with the experimental observations in Fig.
\[f121\]. The two-fluid model described in the previous section, see Eq. (\[e19\]), predicts that the relaxational energy is proportional to the activated $dc$ conductivity. Indeed, the Arrhenius behavior of the relaxational energies, extracted both from the Raman and transport measurements in Fig. \[f121\], yields from fits with $e^{-\Delta /k_{B} T}$ activation energies similar to those of the $dc$ conductivity. Moreover, the agreement is not only up to a proportionality factor: the values of $\tau^{-1} (T)$ calculated according to Eq. (\[e19\]) using the measured values of $\varepsilon_{0}$ and $\sigma_{qp}$ are in agreement with the experiment. This can be seen in Fig. \[f123\], where the calculated values (the shaded area, whose thickness takes into account the error bars in the determination of the $dc$ value of the dielectric function $\varepsilon_{1}$) match the measured $\tau^{-1}$ (blue dots). The non-linear transport data shown in the inset of Fig. \[f124\] for T = 100 K further confirm the existence of density wave correlations in . The three regimes observed are typical for systems in which the CDW is pinned by impurities [@GrunerBook; @GrunerRMP88]. Below $E_{T}^{(1)}$ the pinned CDW does not contribute to transport and $\sigma_{dc}$ is governed by the quasiparticle response. Around this value of the field there is an onset of the CDW conductivity due to the relatively slow sliding of the condensate. In this 2$^{nd}$ regime the predominant damping mechanism is the screening of internal electric fields, produced by local CDW deformations, by backflow quasi-particle currents. The 3$^{rd}$ regime, for fields $E > E_{T}^{(2)}$, is one of free CDW sliding, the Fröhlich superconductivity also observed in K$_{0.3}$MoO$_{3}$, see Fig. \[f19\]. In this case the velocity of the condensate is so high that it does not feel the background quasi-particle damping.
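The order-of-magnitude estimate of the static dielectric constant quoted above follows from Eq. (\[e16\]) at $\omega = 0$; a back-of-the-envelope check in Python, using $\Omega_{p} \approx 3300$ cm$^{-1}$ from the loss function, the 30-70 GHz pinning window reported by Kitano *et al.*, and the conversion 1 cm$^{-1} \approx 30$ GHz:

```python
def delta_eps(Omega_p_cm1, Omega_0_GHz):
    """eps_0 - eps_inf = (Omega_p / Omega_0)^2, from Eq. (e16) at omega = 0."""
    CM1_TO_GHZ = 29.9792458   # 1 cm^-1 expressed in GHz
    return (Omega_p_cm1 * CM1_TO_GHZ / Omega_0_GHz) ** 2

# Pinning anywhere in the reported 30-70 GHz window gives a static
# dielectric constant of order 10^6-10^7, as observed at low frequency
estimates = {f_GHz: delta_eps(3300.0, f_GHz) for f_GHz in (30.0, 50.0, 70.0)}
```

A pinning frequency of 50 GHz, in the middle of the window, gives $\varepsilon_{0} - \varepsilon_{\infty} \approx 4 \times 10^{6}$.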
The overall consistency among the measured temperature dependences of the dielectric function, $dc$ conductivity and relaxational energies demonstrates the applicability of the hydrodynamic model description for the low energy carrier dynamics in a CDW ground state. However, there are several issues which have to be mentioned. One difference with respect to what happens in well established CDW systems is that the observed relaxational peak in the Raman response is at higher energies than the pinned mode at $\Omega_{0}$. This may be because there is a broad distribution of pinning frequencies and the origin of the Raman relaxational peak is in the high energy side of this distribution. To date there are no measurements of the pinned phase mode at or above 300 K. Another issue is that although the absolute values of $\tau^{-1} (T)$ calculated according to Eq. (\[e19\]) are in agreement with the experiment, the same is not true for the Raman relaxation frequencies $\Gamma (T)$. The calculated values are about 50 times smaller than the measured ones. A reduction in the density wave amplitude, as suggested by the decrease in the peak intensity, inset of Fig. \[f121\]b, would produce a concomitant increase in $\Gamma$. Further enhancement in the scattering rate may come from additional relaxational channels due to low lying states which are seen at temperatures higher than about 150 K by magnetic resonance [@TakigawaPRB98], c-axis conductivity (Fig. \[f110\]) or Raman scattering (Fig. \[f111\]). The existence of density wave correlations in at temperatures of the order of 650 K gives this compound a distinctive property compared to classical CDW systems. These high temperatures suggest that in this case it is not the phonons which support the CDW, but rather that the strong magnetic exchange $J \approx 1300$ K plays an important role in the charge and spin dynamics.
One aspect mentioned in the previous section was that hole pairing in 2LL’s is a robust feature due to the AF exchange correlations. In this respect, an interesting question is: What is the fundamental current carrying object? Is the current due to single or to paired electrons? It would be helpful in this regard to measure current oscillations and interference effects (for a description see Chapter 11 in Ref. [@GrunerBook]). In fact this is probably the only prominent ’classical’ transport signature of a CDW state which has not been checked yet in and it would be an interesting project.

### Soft X-ray scattering from 

The most direct way to measure CDW ordering is by neutron or X-ray scattering, because these probes directly measure superlattice peaks associated with the distortions of the lattice or electronic clouds. In conventional CDW materials this is the case, and the electron-phonon interaction causes atomic displacements and local electronic density modulations of the order of the atomic numbers. However, to date, conventional hard X-ray experiments (using photons with typical energies of the order of tens of keV) have failed to detect carrier ordering in the ladder structure of compounds. Is there any way to observe weak charge modulations which do not involve detectable distortions in the structural lattice? One way to enhance the scattering amplitude from the doped holes is by exploiting those changes in the optical properties of the materials which occur as a result of doping. This often involves, as is the case for cuprates, using incident photons with energies about two orders of magnitude smaller than in conventional X-ray experiments. A real space charge modulation will lead to a proportional change in the Fourier transformed density, which in turn is proportional to the dielectric susceptibility of the material, $\chi(k, \omega)$. The X-ray scattering amplitude is determined by the electronic density and as a result will scale proportionally to $\chi(k, \omega)$.
It turns out that in 2D cuprates [@PeteScience02] and ladders [@NuckerPRB00] there are features seen in the X-ray absorption spectra (XAS) which arise directly as a result of hole doping. The situation is simpler in 2D cuprates and it can be illustrated for La$_{2}$CuO$_{4 + \delta}$: For the insulating compounds the oxygen K-edge around 540 eV (which marks the beginning of a continuum of excitations consisting of electron removals from O$1s$ orbitals) also has a prepeak at 538 eV which, due to hybridization, corresponds to intersite O$1s$ $\rightarrow$ Cu$3d$ transitions. If holes enter O$2p$ orbitals, another prepeak appears at 535 eV due to the fact that additional O valence states are available to be filled by the excited O$1s$ electron. The spectral weight of this carrier induced feature is stolen from the 538 eV prepeak. It is clear that the opening of a new absorption channel at 535 eV will change the optical properties at this energy, in particular the susceptibility $\chi (k, \omega)$. This also means that the X-ray scattering amplitude for 535 eV incident photons will be enhanced with respect to the non-resonant case by a factor proportional to the ’susceptibility contrast’, which can be defined as the percentage change of the susceptibility in the doped versus undoped case [@PeteScience02]. Note that this enhancement applies only to the signal from the doped carriers. In the XAS spectra have the same general characteristics but the situation is more complicated because the mobile carrier absorption feature is split into chain and ladder features [@NuckerPRB00]. However, these excitations can be resolved and they are shown in Fig. \[f125\]b. This figure shows the characteristic energies of the oxygen K-edge.
The carrier prepeaks are resolved by using different polarizations of the incoming photon fields and one can see that the ladder absorption at 528.6 eV occurs at about 0.5 eV higher energy than the corresponding feature in the chains, consistent with the XAS study in Ref. [@NuckerPRB00]. A 2D scan in reciprocal space for incident photon energies of 528.6 eV is shown in Fig. \[f125\]a. In this figure the momentum transfer $Q = (2 \pi / a \ H, 2 \pi / b \ K, 2 \pi / c_{L} \ L_{L})$ is in ladder reciprocal units along the $c$-axis. The vertical line is due to specular reflection from the surface and the displacement from $H = 0$ is due to crystal miscut, the normal to the surface making a finite angle with respect to the $c$-axis. A superlattice reflection at $(0, 0, 0.2)$ indicates a charge modulation with a period of 5 ladder units. In terms of the large crystal structure this momentum transfer corresponds to $L = (c / c_{L}) \ L_{L} = 1.4$, where $c$ and $c_{L}$ are the lattice constants corresponding to the big unit cell and the ladder unit cell, satisfying $c = 7 c_{L} = 27.3$ Å [@McCarronMRB88]. This Bragg reflection is a true superlattice peak since it does not have the periodicity of the 27.3 Å unit cell, and it should not be confused with the five-fold modulation in the chain structures [@FukudaPRB02]. The $(0, 0, 0.2)$ reflection has an unusual excitation profile. The resonance is shown in Fig. \[f125\]b, where the energy dependence is plotted along with the absorption spectra. One can notice that this reflection is seen only in resonance with the ladder absorption at 528.6 eV, being absent for all other energies, including the oxygen K-edge. This proves two main points: the Bragg peak arises solely from the doped *ladder* holes, and it cannot be due to any structural modulation, which would track *all* the features in the O$K$ absorption.
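The reciprocal-space bookkeeping for the $(0, 0, 0.2)$ reflection can be verified in a few lines (Python; lattice constants from the text):

```python
# The (0, 0, 0.2) superlattice reflection, indexed in ladder units,
# corresponds to a real-space period of 5 ladder cells and to L = 1.4
# in the reciprocal units of the 27.3 A supercell (c = 7 * c_L).
c = 27.3                      # A, large unit cell along the c axis
c_L = c / 7.0                 # A, ladder subcell (= 3.9 A)
L_ladder = 0.2                # peak position in ladder reciprocal units

period = c_L / L_ladder               # real-space modulation period, in A
L_supercell = (c / c_L) * L_ladder    # = 1.4: non-integer, a true superlattice peak
```

The non-integer $L = 1.4$ is exactly why the peak cannot be a Bragg reflection of the 27.3 Å structural cell.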
The superlattice peak width in $k$ space gives the correlation lengths $\xi_{c} = 255$ Å and $\xi_{a} = 274$ Å, indicating that the order is two dimensional. This observation is very interesting given the fact that the magnetic properties, due to the different exchange parameters (Cu-O-Cu bonds making 90$^\circ$ or 180$^\circ$ along the $a$ and $c$ axes respectively, see Fig. \[f11\]), as well as the $dc$ transport remain anisotropic, highlighting the importance of inter-ladder Coulomb interactions. This X-ray scattering study confirms the transport data shown in the previous section in establishing the existence of charge density modulations in doped 2LL’s. The findings are consistent with the predictions of a crystalline order of ladder holes as a competing state to superconductivity [@DagottoPRB92RiceEL93; @WhitePRB02]. The absence of structural distortions argues that it is not conventional electron-phonon interactions, but many-body electronic effects which drive the transition. One question to address is whether the CDW correlations exist in Ca substituted crystals. This is the topic of the next section where, based on the similarities with the Raman data in we argue that fluctuations of the density wave order persist at high Ca concentrations and high temperatures.

Signatures of Collective Density Wave Excitations in Doped . Low Energy Raman Data.
-----------------------------------------------------------------------------------

In Fig. \[f126\]a we show the low frequency Raman response in x = 12 at several temperatures. The $(cc)$ polarized spectra above 300 K are dominated by a quasi-elastic peak, very similar to the one in , see Fig. \[f121\]. The solid lines are fits using the same Eq. (\[e110\]) as in Fig. \[f121\]. A small contribution of the background, as shown in the inset, was subtracted. The polarization and doping dependence of this relaxational feature are shown in Fig. \[f126\]b-e.
We note that the quasi-elastic feature is present only in $(cc)$ polarization and we find it in for all Ca concentrations studied (x = 0, 8 and 12). This low energy excitation is absent however in , which contains no holes per formula unit, confirming the fact that it is due to the presence of doped carriers. We also confirmed that there is no influence of magnetic fields either on this feature or on the modes seen in panels (a) and (c) at 12 and 15 cm$^{-1}$ respectively. This supports the assignment of these modes, shown also in Fig. \[f113\] for , to phonons. Interestingly, it turns out that the extracted temperature dependent relaxational energy $\Gamma (T)$ for x = 12 reveals, similarly to in Fig. \[f123\], an activated behavior of the form $\Gamma (T) \propto \exp(-\Delta /k_{B} T)$. Moreover, the activation energies are found to be about the same: $\Delta \approx 2100$ and $2070$ K in and x = 12 , respectively, see Fig. \[f127\]c. While this energy is close to the activation energy of the $dc$ conductivity in , in x = 12 the temperature dependence of the conductivity is far from exponential, as can be seen by comparing panels (a) and (b) of Fig. \[f127\]. In fact, the behavior shown in panel (b) is very similar to the one in underdoped 2D cuprates: there is a low temperature insulating and a high temperature metallic regime, with the resistivity in this latter regime growing linearly with temperature [@OsafunePRL99; @BalakirevCM98]. In the previous paragraphs we argued that the quasi-elastic Raman scattering in is a signature of collective CDW dynamics. The main argument in this respect was the Arrhenius behavior of the scattering rate with the activation given by the $dc$ transport. The low energy scale and the strong similarity between the Raman results in $x = 0$ compared to $x = 8$ and 12 allow us to claim that collective density wave excitations are also present at all Ca substitution levels.
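The Arrhenius analysis used to extract the activation energies above can be sketched as follows (Python; a synthetic $\Gamma(T)$ with an assumed, illustrative prefactor, only to demonstrate the fitting procedure):

```python
import numpy as np

# Generate activated data Gamma(T) = Gamma0 * exp(-Delta / T) with
# Delta = 2100 K and recover the gap from a linear fit of ln(Gamma) vs 1/T.
Delta = 2100.0                       # K, activation energy (value from the text)
Gamma0 = 1.0e3                       # cm^-1, illustrative prefactor (assumption)
T = np.linspace(300.0, 650.0, 8)     # K, temperature grid of the measurements
Gamma = Gamma0 * np.exp(-Delta / T)

slope, intercept = np.polyfit(1.0 / T, np.log(Gamma), 1)
Delta_fit = -slope                   # recovered activation energy, in K
```

With real data, the scatter of $\ln\Gamma$ about the fitted line sets the uncertainty on $\Delta$.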
Confirmation of this scenario comes also from more recent transport and optical conductivity data of Vuletić *et al.* [@VuleticPRL03] who observe the persistence of the microwave relaxational mode in x = 3 and 9 . The authors of this work argue however that Ca substitution suppresses the CDW phase and long range order does not exist above x = 10. In this respect we argue that the feature observed in the Raman data in Fig. \[f126\] at quite high temperatures in x = 12 is due to local fluctuations of the CDW order. How can one reconcile the observation of the same activation energy for $\Gamma (T)$ with the fact that in the insulating regime $\sigma_{dc}$ in x = 12 is not activated and, moreover, it turns metallic at high temperatures, a behavior clearly not consistent with the prediction of Eq. (\[e19\])? One possible explanation suggested by the $c$-axis optical conductivity data is the following: in one can observe a relatively broad mid-IR peak with an onset around 140 meV, see Fig. \[f110\] and Refs. [@OsafunePRL97; @EisakiPhysicaC00]. In this peak continues to be present [@OsafunePRL97] and remains a distinct feature although there is a large spectral weight transfer to low energies. We propose that the common mid-IR feature is responsible for the similarly activated behavior of the relaxation parameter $\Gamma (T)$ and observe that the energy scale of this peak (which is also seen in high T$_{c}$ cuprates) is set by the ladder AF exchange energy of about 135 meV. In this perspective, a speculative explanation for non-Fermi-liquid like metallic $dc$ conductivity at high Ca substitution levels could be based on a collective density wave contribution. Ca substitution introduces disorder that could lead to a much broader distribution of pinning frequencies which may extend to very low energies, towards the $dc$ limit, rendering a Fröhlich type component contributing to $\sigma_{dc}$. 
Intuitively one can imagine that the current carrying objects are not quasi-particles but (because of a small CDW correlation length) ’patches’ of holes organized in a density wave order. Another more conventional scenario for the metallic behavior in x = 12 could be based on an anisotropic and partially gapped Fermi surface in the context of the higher dimensionality of the electronic system. The soft X-ray study described before, see Fig. \[f125\], shows that the CDW correlations are two dimensional in and recent low frequency dielectric response measurements [@VuleticCM04] were able to track down the relaxational peak in a configuration with the electric field parallel not only to the ladder legs but also to the rung direction. One should keep in mind however that the transport along the rung and leg directions is different, as is proven by the ratio of the $a$ to $c$-axis resistivities, $\rho_{a} / \rho_{c} \approx 10$, for a large range of Ca dopings. This can also be related to the fact that we do not observe in Fig. \[f127\]a the screened longitudinal CDW relaxational mode in $(aa)$ polarization, although the hole ordering is two dimensional. Additional support for this conjecture comes from an angle resolved photoemission study [@TakahashiPRB97], which shows that while for the gap is finite, for Sr$_{5}$Ca$_{9}$Cu$_{24}$O$_{41}$ the density of states rises almost to the chemical potential, and from the fact that the low energy optical spectral weight transfer is enhanced with further increase in Ca substitution [@OsafunePRL97]. In this picture, the insulating behavior in x = 12 below 70 K can be understood in terms of carrier condensation in the density wave state, which leads to a completely gapped Fermi surface. In order to explain the similar relaxation rates $\Gamma (T)$ for and x = 12 one has to invoke however a strongly momentum dependent scattering rate and coupling of the condensate to normal carriers.
Irrespective of the exact microscopic model, the low energy properties of crystals bring challenging and unresolved aspects. Moreover, the proof for the existence of CDW correlations, along with the strong similarities between the local structural units and transport properties in Cu-O based ladders and underdoped high-T$_{c}$ materials, suggests that carrier dynamics in 2D Cu-O sheets at low hole concentration could also be governed by a collective density wave response.

**Summary**
============

In this chapter we focussed on the magnetic and electronic properties of two-leg ladder materials. We observed at high frequencies (3000 cm$^{-1}$) in the compound a two-magnon (2M) resonance characteristic of an undoped ladder, which we analyzed in terms of symmetry, relaxation and resonance properties. Our findings regarding the spectral properties of this excitation were contrasted to 2M Raman measurements in other magnetic crystals and existing theoretical calculations, emphasizing the sharpness of the 2M peak in the context of increased quantum fluctuations in one dimension. This comparison led us to suggest that the spin-spin correlations in an undoped two leg ladder may have a modulated component besides the exponential decay characteristic of a spin liquid ground state. We found that the 2M peak resonates with the Mott gap determined by O$2p$ $\rightarrow$ Cu$3d$ transitions, following the behavior of the optical conductivity in the 2-3 eV region. Interplane Sr substitution for Ca in introduces strong disorder leading to inhomogeneous broadening of the 2M resonance in the undoped system. The doped holes in the spin liquid ground state further dilute the magnetic correlations, suppressing considerably the spectral weight of this excitation. crystals at high Ca concentrations are superconducting under pressure and hole pairing was proposed to be a robust feature of doped ladders.
The measured dielectric response in the microwave region, the low energy Raman data, and the non-linear transport properties, along with soft X-ray scattering, allowed us to conclude that the ground state in for a wide range of Ca concentrations ($x \leq 12$) is characterized by charge density wave correlations. This state seems to be driven not by phonons but by Coulomb forces and many-body effects. We highlighted the similarity in the finite frequency Raman response, as opposed to the very different behavior of the $dc$ resistivity, between undoped and doped ladders. We found that at high Ca concentrations, although the resistivity shows a crossover between an insulating and a linear-in-temperature metallic regime, the carrier relaxation is characterized by the same large activation energy ($\approx 2000$ K) which determines the Arrhenius behavior of the CDW compound . This observation prompted us to suggest an unconventional metallic transport driven by a collective electronic response. [**Acknowledgments –**]{} We acknowledge collaboration with P. Abbamonte, B. S. Dennis, M. V. Klein, P. Littlewood, A. Rusydi, and T. Siegrist. The ladder crystals were provided by H. Eisaki, N. Motoyama, and S. Uchida. [99.]{} E-mail: girsh@bell-labs.com E. M. McCarron [*et al.*]{}, Mater. Res. Bull. [**23**]{} (1988) 1355. T. Siegrist [*et al.*]{}, Mater. Res. Bull. [**23**]{} (1988) 1429. E. Dagotto and T. M. Rice, Science [**271**]{} (1996) 618. E. Dagotto, Rep. Prog. Phys. [**62**]{} (1999) 1525. E. Dagotto, J. Riera, and D. Scalapino, Phys. Rev. B [**45**]{} (1992) 5744; T. M. Rice, S. Gopalan, and M. Sigrist, Europhys. Lett. [**23**]{} (1993) 445. T. Osafune [*et al.*]{}, Phys. Rev. Lett. [**82**]{} (1999) 1313. M. Uehara [*et al.*]{}, J. Phys. Soc. Jpn. [**65**]{} (1996) 2764. S. Maekawa, Science [**273**]{} (1996) 1515. S. Sachdev, Science [**288**]{} (2000) 475. T. Osafune [*et al.*]{}, Phys. Rev. Lett. [**78**]{} (1997) 1980. N. Nücker [*et al.*]{}, Phys. Rev.
B [**62**]{} (2000) 14384. M. Kato [*et al.*]{}, Physica C [**258**]{} (1996) 284. Y. Mizuno [*et al.*]{}, Physica C [**282**]{} (1997) 991. P. W. Anderson, Exchange in insulators, Ch. 2 in Magnetism, Vol. 1, ed. Rado and Suhl, Academic Press (1963). M. Takigawa [*et al.*]{}, Phys. Rev. B [**57**]{} (1998) 1124. T. Fukuda [*et al.*]{}, Phys. Rev. B [**66**]{} (2002) 012104. R. S. Eccleston [*et al.*]{}, Phys. Rev. Lett. [**81**]{} (1998) 1702. L. P. Regnault [*et al.*]{}, Phys. Rev. B [**59**]{} (1999) 1055. H. Kitano [*et al.*]{}, Europhys. Lett. [**56**]{} (2001) 434. K. Magishi [*et al.*]{}, Phys. Rev. B [**57**]{} (1998) 11533. M. Azuma [*et al.*]{}, Phys. Rev. Lett. [**73**]{} (1994) 3463. T. Barnes [*et al.*]{}, Phys. Rev. B [**47**]{} (1993) 3196. L. D. Faddeev and L. A. Takhtajan, Phys. Lett. [**85A**]{} (1981) 375. S. R. White [*et al.*]{}, Phys. Rev. Lett. [**73**]{} (1994) 886. J. Oitmaa [*et al.*]{}, Phys. Rev. B [**54**]{} (1996) 1009. M. Matsuda [*et al.*]{}, J. Appl. Phys. [**87**]{} (2000) 6271. W. J. Buyers [*et al.*]{}, Phys. Rev. Lett. [**56**]{} (1986) 371. C. Knetter [*et al.*]{}, Phys. Rev. Lett. [**87**]{} (2001) 167204. F. D. M. Haldane, Phys. Lett. [**93A**]{} (1983) 464; Phys. Rev. Lett. [**50**]{} (1983) 1153. S. Trebst [*et al.*]{}, Phys. Rev. Lett. [**85**]{} (2000) 4373; S. Trebst, PhD Thesis, Bonn University (2002). G. Blumberg [*et al.*]{}, Phys. Rev. B [**53**]{} (1996) R11930. S. Sugai [*et al.*]{}, Phys. Rev. B [**42**]{} (1990) 1045. P. A. Fleury and R. Loudon, Phys. Rev. [**166**]{} (1968) 514. B. S. Shastry and B. I. Shraiman, Phys. Rev. Lett. [**65**]{} (1990) 1068; B. S. Shastry and B. I. Shraiman, Int. J. of Mod. Phys. B [**5**]{} (1991) 365. S. Sugai and M. Suzuki, Phys. Status Solidi (b) [**215**]{} (1999) 653. A. Gozar [*et al.*]{}, Phys. Rev. Lett. [**87**]{} (2001) 197202. M. Windt [*et al.*]{}, Phys. Rev. Lett. [**87**]{} (2001) 127002. T. Nunner [*et al.*]{}, Phys. Rev. B [**66**]{} (2002) 180404. K. P.
Schmidt [*et al.*]{}, Phys. Rev. Lett. [**90**]{} (2003) 167201. S. Brehmer [*et al.*]{}, Phys. Rev. B [**60**]{} (1999) 329. A. A. Katanin and A. P. Kampf, Phys. Rev. B [**60**]{} (2002) R100403 and references therein. C. M. Canali and S. M. Girvin, Phys. Rev. B [**45**]{} (1992) 7127; A. W. Sandvik [*et al.*]{}, Phys. Rev. B [**57**]{} (1998) 8478. R. R. P. Singh [*et al.*]{}, Phys. Rev. Lett. [**62**]{} (1989) 2736. K. P. Schmidt, C. Knetter and G. S. Uhrig, Europhys. Lett. [**56**]{} (2001) 877. A. Gößling [*et al.*]{}, Phys. Rev. B [**67**]{} (2003) 052403. J. M. Tranquada [*et al.*]{}, Nature [**429**]{} (2004) 534. A. V. Chubukov and D. M. Frenkel, Phys. Rev. Lett. [**74**]{} (1995) 3057; A. V. Chubukov and D. M. Frenkel, Phys. Rev. B [**52**]{} (1995) 9760. T. Tohyama [*et al.*]{}, Phys. Rev. Lett. [**89**]{} (2002) 257405; H. Onodera, T. Tohyama and S. Maekawa, Physica C [**392-396**]{} (2003) 203. P. J. Freitas and R. R. P. Singh, Phys. Rev. B [**62**]{} (2000) 14113. A. Gozar, Phys. Rev. B [**65**]{} (2002) 176403. H. Eisaki [*et al.*]{}, Physica C [**341-348**]{} (2000) 363. M. W. McElfresh [*et al.*]{}, Phys. Rev. B [**40**]{} (1989) 825. P. Knoll [*et al.*]{}, Phys. Rev. B [**42**]{} (1990) 4842. C. Homes, private communications. A. Gozar [*et al.*]{}, Phys. Rev. Lett. [**91**]{} (2003) 087401. G. Blumberg [*et al.*]{}, Science [**297**]{} (2002) 584. A. A. Abrikosov and I. A. Ryzhkin, Adv. Phys. [**27**]{} (1978) 147. T. Ohta [*et al.*]{}, J. Phys. Soc. Jpn. [**66**]{} (1997) 3107; C. Bougerol-Chaillout [*et al.*]{}, Physica C [**341-348**]{} (2000) 479. N. Ogita [*et al.*]{}, Physica B [**281&282**]{} (2000) 955. E. Orignac [*et al.*]{}, Phys. Rev. B [**57**]{} (1998) 5812; R. A. Hyman [*et al.*]{}, Phys. Rev. Lett. [**76**]{} (1996) 839. Z. V. Popović [*et al.*]{}, Phys. Rev. B [**62**]{} (2000) 4963. M. Yoshida [*et al.*]{}, Phys. Rev. B [**44**]{} (1991) 11997. F. Nori [*et al.*]{}, Phys. Rev. Lett. [**75**]{} (1995) 553. S. L. 
Cooper [*et al.*]{}, Phys. Rev. B [**42**]{} (1990) R10785. M. Troyer, H. Tsunetsugu and T. M. Rice, Phys. Rev. B [**53**]{} (1996) 251. D. Poilblanc, D. J. Scalapino and S. Capponi, Phys. Rev. Lett. [**91**]{} (2003) 137203 and references therein. D. Poilblanc [*et al.*]{}, Phys. Rev. B [**62**]{} (2000) R14633. D. Poilblanc [*et al.*]{}, Phys. Rev. Lett. [**75**]{} (1995) 926. S. R. White, I. Affleck and D. J. Scalapino, Phys. Rev. B [**65**]{} (2002) 165122. L. Balents and M. P. A. Fisher, Phys. Rev. B [**53**]{} (1996)12133. S. Katano [*et al.*]{}, Phys. Rev. Lett. [**82**]{} (1999) 636. H. Mayaffre [*et al.*]{}, Science [**279**]{} (1998) 345. G. Grüner, Density waves in solids. Perseus, Cambridge, MA (1994). H. Fröhlich, Proc. Roy. Soc. London [**A223**]{} (1954) 296. L. Degiorgi [*et al.*]{}, Phys. Rev. B [**44**]{} (1991) 7808. P. A. Lee, T. M. Rice and P. W. Anderson, Solid State Commun. [**14**]{} (1974) 703. G. Grüner, Rev. Mod. Phys. [**60**]{} (1988) 1129. R. J. Cava [*et al.*]{}, Phys. Rev. B [**30**]{} (1984) 3228. P. B. Littlewood, Phys. Rev. B [**36**]{} (1987) 3108. M. Born and K. Huang, Dynamical theory of crystal lattices. Oxford (1954). M. V. Klein, Chap. 4 in Light Scattering in Solids I, (Ed. M. Cardona) Springer-Verlag (1983). Abbamonte P [*et al.*]{} Nature [**431**]{} (2004) 1078. P. Abbamonte [*et al.*]{}, Science [**297**]{} (2002) 581 and references therein. Balakirev F F [*et al.*]{} (1998) http://xxx.lanl.gov/abs/cond-mat/9808284 preprint. T. Vuletić [*et al.*]{}, Phys. Rev. Lett. [**90**]{} (2003) 257002. Vuletić T [*et al.*]{} (2004) http://xxx.lanl.gov/abs/cond-mat/0403611 preprint. T. Takahashi, Phys. Rev. B [**56**]{} (1997) 7870.
--- author: - 'V. Vialov, T. Shilkin[^1]' title: | Estimates of Solutions\ to the Perturbed Stokes System --- *Dedicated to the 90th anniversary of Olga Alexandrovna Ladyzhenskaya* Introduction ============ Let $B^+_R :=\{ ~x\in \mathbb R^n:~ |x|<R, x_n>0~\}$ be a half-ball in $\mathbb R^n$, $n\ge 2$, and set $Q_R^+= B^+_R\times (-R^2, 0)$. For any $x\in \Bbb R^n$, $x=(x_1, \ldots, x_{n-1}, x_n)$, we denote by $x'\in \Bbb R^{n-1}$ the vector $x':=(x_1, \ldots, x_{n-1})$. Denote $S_R := \{~x'\in \Bbb R^{n-1}: |x'|<R~\}$ and assume $\ph: \bar S_R\to \Bbb R$ is a sufficiently smooth function. In this paper we obtain local estimates for the following system, which we call the Perturbed Stokes system: $$\left\{ \quad \gathered \partial_t v \ - \ \hat \Delta_\ph v \ + \ \hat \nabla_\ph p \ = \ f \\ \hat \nabla_\ph \cdot v \ = \ 0 \endgathered\right. \qquad\mbox{in} \quad Q^+_R. \\ \label{Perturbed_Stokes}$$ Here $v$, $f:Q^+_R\to \mathbb R^n$ are vector fields, $p:Q_R^+\to \Bbb R$ is a scalar function, and $\hat \Delta_\ph$ and $\hat \nabla_\ph$ are differential operators with variable coefficients defined via the function $\ph$ by the formulas $$\gathered \hat \Delta_\ph v \ := \ \Delta v- 2v_{, {\alpha}n}\ph_{,{\alpha}} + v_{, nn} |\nabla'\ph|^2 - v_{,n} \Delta'\ph, \\ \hat \nabla_\ph \cdot v \ := \ {\mathop{\mathrm{div }}}v - v_{{\alpha}, n} \ph_{,{\alpha}}, \\ \hat\nabla_\ph p \ := \ \nabla p - p_{,n} \left( \begin{array}c \nabla'\ph \\ 0 \end{array}\right). \endgathered \label{Definition_of_operators}$$ Here we assume summation from 1 to $n-1$ over repeated Greek indices, and $\nabla'$ and $\Delta'$ denote the gradient and Laplacian with respect to the $(x_1, \ldots, x_{n-1})$ variables.
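These operators encode the ordinary Laplacian and gradient after the boundary-flattening change of variables $x=\psi(y)=(y', y_n-\ph(y'))$ introduced below. As a sanity check (an illustration only, not part of the paper; the boundary graph and the test field are arbitrary choices), a short sympy computation verifies the identity $\Delta u(y)=\hat\Delta_\ph v(x)$ for $n=2$:

```python
import sympy as sp

y1, y2, x1, x2 = sp.symbols('y1 y2 x1 x2')

# Concrete test data (arbitrary choices, for illustration only):
ph = y1**2 / 2                        # boundary graph phi with phi(0) = 0, phi'(0) = 0
v = sp.sin(x1) * x2**3 + x1**2 * x2   # smooth scalar field in flattened coordinates

# u(y) = v(psi(y)) with psi(y) = (y1, y2 - phi(y1))
u = v.subs({x1: y1, x2: y2 - ph}, simultaneous=True)

# left-hand side: ordinary Laplacian of u in the y-variables
lap_u = sp.diff(u, y1, 2) + sp.diff(u, y2, 2)

# right-hand side: hat-Delta_phi v (n = 2, so alpha = 1), evaluated at x = psi(y)
hat_lap = (sp.diff(v, x1, 2) + sp.diff(v, x2, 2)
           - 2 * sp.diff(v, x1, x2) * sp.diff(ph, y1)
           + sp.diff(v, x2, 2) * sp.diff(ph, y1)**2
           - sp.diff(v, x2) * sp.diff(ph, y1, 2))
hat_lap_at_psi = hat_lap.subs({x1: y1, x2: y2 - ph}, simultaneous=True)

print(sp.simplify(lap_u - hat_lap_at_psi))  # 0
```

An analogous computation verifies the formulas for $\hat\nabla_\ph\cdot v$ and $\hat\nabla_\ph p$.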
We will also make use of the differential operator $$\hat \nabla_\ph v = \nabla v -v_{,n}\otimes \left( \begin{array}c \nabla'\ph \\ 0 \end{array}\right), \label{New_grad}$$ where for any $a$, $b\in \Bbb R^n$ the symbol $a\otimes b$ denotes the $n\times n$-matrix with components $(a_ib_j)$, $i$, $j=1, \ldots, n$. In this paper we study the problem assuming $v$ satisfies the no-slip (homogeneous Dirichlet) boundary condition on the plane $\{ x_n=0\}$: $$v|_{x_n=0}=0. \label{Dir_BC}$$ The Perturbed Stokes system arises as a reduction of the usual Stokes system in a domain near a point belonging to the curved part of the boundary if the latter is the graph of $\ph$. Namely, assume $(u,q, \tilde f)$ satisfy the Stokes system in ${\Omega}_R\times (-R^2,0)$, $$\left\{ \quad \gathered \gathered {\partial}_t u - \Delta u +\nabla q \ = \ \tilde f \\ {\mathop{\mathrm{div }}}u = 0 \endgathered \quad \mbox{in}\quad {\Omega}_R\times (-R^2,0). \\ \endgathered\right. \label{Stokes_system-1}$$ We assume ${\Omega}_R$ is described in an appropriate Cartesian coordinate system by the relations $${\Omega}_R\ = \ \Big\{ ~y\in \mathbb R^n: |y'|<R, \ \ph(y')< y_n<\ph(y')+\sqrt{R^2-|y'|^2} \ \Big\},$$ and we impose the no-slip boundary condition on $u$: $$u|_{x_n=\ph(x')} =0. \label{Dir_BC-1}$$ In this paper we assume $\ph$ is of class $W^3_\infty$ (i.e. its second derivatives are Lipschitz continuous) and the Cartesian coordinate system is chosen in such a way that the following relations hold: $$\ph(0)=0, \qquad \nabla \ph(0)=0, \qquad \| \ph\|_{W^3_\infty( S_R)}\le \mu. \label{mu}$$ Now we apply the diffeomorphism flattening the boundary, or, in other words, we introduce new coordinates $x=\psi(y)$ by the formulas $$\psi: {\Omega}_R \to B^+_R, \qquad x= \psi(y) \ = \ \left( \begin{array}c y' \\ y_n-\ph(y') \end{array} \right), \label{Definition_of_psi}$$ $$y\in {\Omega}_R \quad \Longleftrightarrow \quad x\in B_R^+.$$ Denote $$\gathered v:= u\circ \psi^{-1} , \qquad p:= q\circ \psi^{-1} .
\endgathered$$ Then for $x=\psi(y)$ we have relations $$\gathered \nabla q(y) = \hat \nabla_\ph p(x), \quad \Delta u (y) = \hat \Delta_\ph v (x), \quad {\mathop{\mathrm{div }}}u(y) = (\hat\nabla_\ph \cdot v)(x). \endgathered$$ Hence the Stokes system , in ${\Omega}_R\times (-R^2,0)$ in $y$-variables transfers to the Perturbed Stokes system , in $Q^+_R$ in $x$-variables. Now we introduce some functional spaces: assume $1\le s,l<+\infty $. Assume ${\Omega}\subset \Bbb R^n$, $Q_T ={\Omega}\times (0,T)$ and let $L_{s,l}(Q_T)$ be the anisotropic Lebesgue space equipped with the norm $$\|f\|_{L_{s,l}(Q_T)}:= \Big(\int_0^T\Big(\int_{\Omega}|f(x,t)|^s~dx\Big)^{l/s}dt\Big)^{1/l} ,$$ and denote $$\gathered W^{1,0}_{s,l}(Q_T)\equiv L_l(0,T; W^1_s({\Omega}))= \{ \ u\in L_{s,l}(Q_T): ~\nabla u \in L_{s,l}(Q_T) \ \}, \\ W^{2,1}_{s,l}(Q_T) = \{ \ u\in W^{1,0}_{s,l}(Q_T): ~\nabla^2 u, \ {\partial}_t u \in L_{s,l}(Q_T) \ \}. \\ \endgathered$$ We equip these spaces with the following norms: $$\gathered \| u \|_{W^{1,0}_{s,l}(Q_T)}= \| u \|_{L_{s,l}(Q_T)}+ \|\nabla u\|_{L_{s,l}(Q_T)}, \\ \| u \|_{W^{2,1}_{s,l}(Q_T)}= \| u \|_{W^{1,0}_{s,l}(Q_T)}+ \| \nabla^2 u \|_{L_{s,l}(Q_T)}+\|{\partial}_t u\|_{L_{s,l}(Q_T)}. \\ \endgathered$$ We also denote by $W^{-1}_{s}({\Omega})$ the conjugate space to $\overset{\circ}{W}{^1_{s'}}({\Omega})$ equipped with the norm $$\| f\|_{W^{-1}_s({\Omega})} = \sup\limits_{w\in \overset{\circ}{W}{^1_{s'}}({\Omega}), \ \| w\|_{W^1_{s'}}({\Omega})\le 1 } |\langle f, w\rangle |$$ and we denote by $L_l(0,T; W^{-1}_s({\Omega}))$ the space of measurable functions $f:[0,T]\to W^{-1}_{s}({\Omega})$ such that the following norm is finite: $$\| f\|_{L_l(0,T; W^{-1}_s({\Omega}))} = \Big( ~\int_0^T \| f(\cdot, t)\|_{W^{-1}_s({\Omega})}^l~dt ~\Big)^{1/l}.$$ \[Strong\_Solutions\] Assume $1<s,l <+\infty$ and $f\in L_{s,l}(Q^+_R)$. 
We say that the functions $(v,p)$ are [*a strong solution*]{} of the problem , , if they belong to the spaces $$v\in W^{2,1}_{s,l}(Q^+_R), \qquad p\in W^{1,0}_{s,l}(Q^+_R),$$ satisfy the equations a.e. in $Q^+_R$ and satisfy the boundary conditions in the sense of traces. \[Generalized\_Solutions\] Assume $1< s,l <+\infty$ and $f\in L_l(-R^2, 0; W^{-1}_s(B^+_R))$. We say that the functions $(v,p)$ are [*a generalized solution*]{} of the problem , , if they belong to the spaces $$v\in W^{1,0}_{s,l}(Q^+_R), \qquad p\in L_{s,l}(Q^+_R),$$ $(v,p)$ satisfy in the sense of distributions and $v$ satisfies the boundary condition in the sense of traces. Note that though $\hat \Delta_\ph$ and $\hat \nabla_\ph$ are operators with variable coefficients, the function $\ph$ is independent of $x_n$ and thus these operators possess the properties $$\gathered \int\limits_{B^+_R} \hat \Delta_\ph v\cdot w ~dx = - \int\limits_{B^+_R} \hat \nabla_\ph v: \hat \nabla_\ph w ~dx = \int\limits_{B^+_R} v\cdot \hat \Delta_\ph w ~dx, \\ \int\limits_{B^+_R} \hat \nabla_\ph p \cdot w~dx = - \int\limits_{B^+_R} p \hat \nabla_\ph \cdot w~dx \endgathered$$ for any $v\in W^2_s(B^+_R)$, $p\in W^1_s(B^+_R)$, $w\in C_0^\infty(B^+_R)$. Hence, despite the variable coefficients, solutions “in the sense of distributions” can be defined in the usual way (as for PDEs with constant coefficients) by moving all the differential operators $\hat \Delta_\ph$ and $\hat \nabla_\ph$ onto a smooth test function. Remark also that if $(v,p)$ is a generalized solution to , then the following identity holds in $\mathcal D'(Q^+_R)$ (i.e. in the sense of distributions): $$\gathered {\partial}_t v \ = \ f \ + \ {\mathop{\mathrm{div }}}\Big( \nabla v -p\Bbb I\Big) \ + \\ + \ \frac {{\partial}} {{\partial}x_n}\Big( -2v_{,{\alpha}}\ph_{,{\alpha}}+v_{,n}|\nabla'\ph|^2 - v\Delta'\ph + p\Big( \begin{array}c \nabla'\ph \\ 0 \end{array}\Big)\Big).
\endgathered$$ This identity implies that ${\partial}_t v \in L_l(-R^2,0;W^{-1}_{s}(B^+_R))$ and that the estimate $$\gathered \| {\partial}_t v\|_{L_l(-R^2, 0; W^{-1}_s(B^+_R))} \ \le \\ \le \ C~\Big( \| f\|_{L_{l}(-R^2, 0; W^{-1}_s(B^+_R))}+ \| v\|_{W^{1,0}_{s,l}(Q^+_R)}+ \| p\|_{L_{s,l}(Q^+_R)}\Big) \endgathered \label{Negative_time_derivative}$$ holds. In particular, it follows that one can choose the representative of $v$ so that $$\forall~w\in \overset{\circ}{W}{^1_{s'}}({\Omega})\quad t \ \mapsto \ \int\limits_{B^+_R} v(x,t)\cdot w(x)~dx \ \mbox{is continuous on} \ [-R^2,0].$$ Hence we can assume that every generalized solution $(v,p)$ satisfies the integral identity $$\gathered \int\limits_{B^+_R} v(x,t)\cdot \eta(x,t)~dx~\Big|_{t=-R^2}^{t=0} \ + \ \int\limits_{Q^+_R}( -v\cdot {\partial}_t \eta + \hat \nabla_\ph v: \hat \nabla_\ph \eta)~dxdt \ = \\ = \ \int\limits_{-R^2}^0 \langle f (t), \eta(t)\rangle ~dt \ + \ \int\limits_{Q^+_R} p \hat\nabla_\ph \cdot \eta~dxdt \endgathered \label{Integral_Identity}$$ for any $\eta\in C^\infty(\bar Q_R^+)$ such that $\eta|_{{\partial}B^+_R\times (-R^2,0)}=0$. In the paper we use the following notation: - ${\partial}{\Omega}$ is the boundary of a domain ${\Omega}\subset \Bbb R^n$ - ${\partial}'Q^+_R = ({\partial}B^+_R\times (-R^2,0)) \cup (B^+_R \times \{ t=-R^2\}) $ - We assume summation from 1 to $n$ over repeated Latin indices and summation from 1 to $n-1$ over repeated Greek indices. - Indices after a comma denote derivatives with respect to the corresponding spatial variables. - $a\cdot b= a_ib_i$ is the scalar product of vectors $a$, $b\in \Bbb R^n$ - $A: B= A_{ij}B_{ij}$ is the scalar product of matrices $A$, $B\in \Bbb M^{n\times n}$ Main Results ============ In this section we formulate four theorems which are the main results of the present paper. At the end of this section we give some comments on these results.
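Before turning to the theorems, the anisotropic norm $\|\cdot\|_{L_{s,l}(Q_T)}$ from the previous section can be discretized directly from its definition. The following numerical sketch (illustrative only; the grid and the test function are arbitrary choices, not from the paper) checks it on $f\equiv 1$ over $Q_T=(0,1)\times(0,1)$, where the norm equals $|{\Omega}|^{1/s}\,T^{1/l}=1$ for every $s$ and $l$:

```python
import numpy as np

# Discrete version of the anisotropic norm
#   ||f||_{L_{s,l}(Q_T)} = ( int_0^T ( int_Omega |f|^s dx )^{l/s} dt )^{1/l}
# on Q_T = (0,1) x (0,1); a numerical sketch, not code from the paper.
def norm_sl(f, dx, dt, s, l):
    inner = (np.abs(f) ** s).sum(axis=1) * dx        # int_Omega |f(., t)|^s dx per time slice
    return ((inner ** (l / s)).sum() * dt) ** (1.0 / l)

nt, nx = 500, 400
f = np.ones((nt, nx))                                # f == 1 on Q_T

# For f == 1 the norm is |Omega|^{1/s} T^{1/l} = 1, independently of s and l
print(round(norm_sl(f, 1.0 / nx, 1.0 / nt, s=3, l=2), 6))  # 1.0
```

For $s=l$ the same formula reduces to the usual Lebesgue norm on the cylinder, which is a convenient consistency check.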
\[Theorem1\] Assume ${\Omega}\subset \Bbb R^n$ is a bounded domain with smooth boundary which is diffeomorphic to a ball and denote $Q_T={\Omega}\times (0,T)$. Suppose $s$, $l \in (1,\infty)$. There is a positive constant $\mu_1$ (depending on ${\Omega}$, $T$, $s$, $l$, $n$) such that for any function $\ph\in W^3_\infty( {\Omega})$ which is independent of the $x_n$-variable and satisfies the condition $$\| \ph\|_{W^3_\infty( {\Omega})} \le \mu_1 \label{Smallness_mu}$$ and for any $f$ and $g$ satisfying the conditions $$f\in L_{s, l}(Q_T), \label{External force}$$ $$g \in W^{1,0}_{s,l}(Q_T), \label{Assumptions_1}$$ $${\partial}_t g \in L_{s,l}(Q_T), \label{Assumptions_2}$$ $$\int\limits_{{\Omega}} g(x,t)~dx =0 ,\qquad \mbox{a.e. } t\in(0,T), \qquad g(\cdot,0)=0, \label{Assumptions_3}$$ the problem $$\left\{ \quad \gathered \gathered {\partial}_t u - \hat{\Delta}_{\ph} u + \hat{\nabla}_{\ph} q = f \\ \hat{\nabla}_{\ph} \cdot u = g\\ \endgathered \quad \text{in} \quad Q_T, \\ u|_{{\partial}{\Omega}\times (0,T)} = 0, \quad u|_{t=0} = 0, \endgathered \right. \label{Nonzero_div}$$ has a unique solution $ u \in W^{2,1}_{s,l}(Q_T)$, $q \in W^{1,0}_{s,l}(Q_T)$, $\int_{\Omega}q(x,t)~dx=0$ for a.e. $t\in (0,T)$, and the estimate $$\gathered \| u \|_{W^{2,1}_{s,l} (Q_T)} + \| \nabla q\|_{L_{s,l}(Q_T)} \le \\ \le C_*\left(\|f\|_{L_{s,l}(Q_T)}+ \| g \|_{W^{1,0}_{s,l}(Q_T)} + \| {\partial}_t g\|_{L_{s,l}(Q_T)}^{1/s } \| {\partial}_t g \|_{L_{l}(0, T; W^{-1}_s({\Omega}))}^{1/s'}\right) \endgathered \label{Perturbed_Stokes_Nonzero}$$ holds with some constant $C_*>0$ depending only on ${\Omega}$, $T$, $n$, $s$, $l$. \[Theorem\_Local\_Estimate\] Suppose $s$, $l \in (1,\infty)$, and $0<r<R$ are fixed.
There exists a positive constant $\mu_2$ (depending only on $n$, $s$, $l$, $r$, $R$) such that if $\ph\in W^3_\infty( S_{R}) $ satisfies with $\mu\le \mu_2$, then for any $f\in L_{s,l}(Q^+_R)$ and any strong solution $v \in W^{2,1}_{s,l}(Q^+_R)$, $p \in W^{1,0}_{s,l}(Q^+_R)$ to the system , in $Q^+_R$, the following local estimate holds: $$\gathered \| v \|_{W^{2,1}_{s,l} (Q^+_r)} + \| \nabla p \|_{L_{s,l} (Q^+_r)} \le \\ \le C \left(\| f \|_{L_{s,l}(Q^+_R)}+ \| \nabla v \|_{L_{s,l} (Q^+_R)} + \inf\limits_{b\in L_l(-R^2,0)}\| p- b \|_{L_{s,l} (Q^+_R)} \right), \endgathered \label{Local_Estimate}$$ where $b$ is a function of the $t$-variable and the constant $C$ depends only on $n$, $s$, $l$, $r$, $R$. \[Theorem2\] Suppose $s$, $l \in (1,\infty)$, and $0<r<R$ are fixed. There exists a positive constant $\mu_3$ (depending only on $n$, $s$, $l$, $r$, $R$) such that if $\ph\in W^3_\infty( S_{R}) $ satisfies with $\mu\le \mu_3$, then for any $f\in L_{s,l}(Q^+_R)$ and any generalized solution $ v \in W^{1,0}_{s,l}(Q^+_R)$, $p \in L_{s,l}(Q^+_R)$ to the system , in $Q^+_R$ the following inclusions hold: $v \in W^{2,1}_{s,l}(Q^+_r)$, $p \in W^{1,0}_{s,l}(Q^+_r)$. \[Theorem3\] Suppose $s$, $l$, $m \in (1,\infty)$, $m\ge s$ and $0<r<R$ are fixed.
There exists a positive constant $\mu_4$ (depending only on $n$, $s$, $l$, $r$, $m$, $R$) such that if $\ph\in W^3_\infty( S_{R}) $ satisfies with $\mu\le \mu_4$, then for any $f\in L_{m,l}(Q^+_R)$ and any generalized solution $v \in W^{2,1}_{s,l}(Q^+_R)$, $p \in W^{1,0}_{s,l}(Q^+_R)$ to the system , in $Q^+_R$ we have the inclusions $ v \in W^{2,1}_{m,l}(Q^+_r)$, $\nabla p \in L_{m,l}(Q^+_r) $ and the following local estimate holds: $$\gathered \| v \|_{W^{2,1}_{m,l} (Q^+_r)} + \| \nabla p \|_{L_{m,l} (Q^+_r)} \le \\ \le C \left(\| f \|_{L_{m,l}(Q^+_R)}+ \| \nabla v \|_{L_{s,l} (Q^+_R)} + \inf\limits_{b\in L_l(-R^2,0)}\| p- b \|_{L_{s,l} (Q^+_R)} \right) \endgathered \label{Local_Estimate2}$$ with some constant $C$ depending only on $n$, $s$, $l$, $m$, $r$, $R$. [**Remark.**]{} The constants $\mu_i$ controlling the smallness of the $W^3_\infty$–norm of the function $\ph$ in Theorems \[Theorem\_Local\_Estimate\]—\[Theorem3\] depend on the domain (or on the size of the half-cylinders $Q^+_r$ and $Q^+_R$). Nevertheless, for applications to the investigation of the Stokes and the Navier-Stokes systems near a point on the curved part of the boundary this is not a serious obstacle (in contrast with the smoothness assumption that $\ph$ is of class $W^3_\infty$) because of the following scaling property of the Perturbed Stokes system: if $(v,p, f, \ph)$ satisfy in the cylinder $Q^+_R$ with $\ph$ satisfying , then the functions $$\gathered v^R(x,t)= Rv(Rx, R^2t), \quad p^R(x,t)=R^2p(Rx,R^2t), \\ f^R(x,t) = R^3f(Rx,R^2t), \quad \ph^R(x')=\frac 1R\ph(Rx') \endgathered \label{Scaling}$$ satisfy the Perturbed Stokes system in $Q^+$, and from the Taylor expansion of the function $\ph^R$ one can obtain for $R\le 1$ $$\ph^R(0)=0, \qquad \nabla'\ph^R(0) = 0, \qquad \| \ph^R\|_{W^3_\infty( S_1)} \ \le \ \mu R.$$ Hence, one can take a canonical domain (say, $Q_R^+=Q_1^+$, $Q_r^+=Q_{1/2}^+$) and compute the constants $\mu^*_i = \mu_i$ for these particular domains.
We emphasize that $\mu_i^*$ are constants depending only on $n$, $s$, $l$, $m$. Then we consider the Stokes system , near a point of the $W^3_\infty$–smooth boundary without any restrictions on the curvature of the boundary (i.e. the constant $\mu$ in can be arbitrarily large). After that we choose $R$ in so small that the following estimates hold: $$\mu R \le \mu^*_i, \qquad i= 2,3,4. \label{mu_star}$$ Making the change of variables we obtain functions $(v,p,f,\ph)$ which satisfy the Perturbed Stokes system , in $Q^+_R$. At this step our Perturbed Stokes system is not a small perturbation of the usual Stokes system (i.e. so far the smallness conditions of Theorems \[Theorem\_Local\_Estimate\]–\[Theorem3\] are not satisfied). Then we make the scaling and obtain functions $(v^R, p^R, f^R, \ph^R)$ which satisfy the Perturbed Stokes system in $Q^+$ and also satisfy the smallness assumptions . So, we can apply the results of Theorems \[Theorem\_Local\_Estimate\]–\[Theorem3\] to the functions $(v^R, p^R, f^R, \ph^R)$. Then we recover information about the original functions $(v,p)$. Now we give some comments on Theorems \[Theorem1\]–\[Theorem3\]. Theorem \[Theorem1\] in the case of the Stokes system (i.e. for $\ph\equiv 0$) was proved in [@FilShil]. The generalization to the case of a “small perturbation” of the Stokes system is straightforward. The proof is presented in Section \[Section\_T\_1\]. Theorem \[Theorem\_Local\_Estimate\] presents a local estimate for strong solutions to the Perturbed Stokes system. In the case of the usual Stokes system near a plane part of the boundary such estimates were originally proved in [@Seregin_ZNS271]. In [@Solonnikov_ZNS288] the same estimates were proved for solutions to the Stokes system near a curved part of the boundary. In our approach Theorem \[Theorem\_Local\_Estimate\] follows from Theorem \[Theorem1\] by arguments presented in [@FilShil]. We reproduce these arguments in Section \[Section\_Local\_Estimate\] just for completeness.
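The scaling used in the Remark above can be checked symbolically: if $v$ solves the parabolic part $\partial_t v-\Delta v=f$, then $v^R$ solves the same equation with forcing $f^R$. A one-dimensional sympy sketch (illustrative only, not from the paper; the heat operator stands in for the full system and the test function is an arbitrary choice):

```python
import sympy as sp

x, t, R = sp.symbols('x t R', positive=True)

v = sp.sin(x) * t                       # arbitrary smooth test function
f = sp.diff(v, t) - sp.diff(v, x, 2)    # forcing for the 1-D heat operator

vR = R * v.subs({x: R * x, t: R**2 * t}, simultaneous=True)     # v^R(x,t) = R v(Rx, R^2 t)
fR = R**3 * f.subs({x: R * x, t: R**2 * t}, simultaneous=True)  # f^R(x,t) = R^3 f(Rx, R^2 t)

# v^R solves the same equation with forcing f^R, for every R > 0
print(sp.simplify(sp.diff(vR, t) - sp.diff(vR, x, 2) - fR))  # 0
```

The pressure scaling $p^R(x,t)=R^2p(Rx,R^2t)$ is consistent in the same way, since $\nabla p^R = R^3(\nabla p)(Rx,R^2t)$.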
In Theorem \[Theorem2\] we prove that any generalized solution is actually a strong one. In the case of the Stokes system this result was originally proved in [@Seregin_ZNS370]. In Section \[Section\_T2\], to obtain a similar result for the Perturbed Stokes system, we use a new approach based on the estimates obtained in Theorem \[Theorem1\]. This section arguably contains the main novelty of the present paper. Finally, in Section \[Section\_T3\] we obtain an improved local estimate for solutions to the Perturbed Stokes system. The estimate in Theorem \[Theorem3\] turns out to be a crucial step in the investigation of boundary regularity of solutions to the nonlinear Navier-Stokes system, see [@Seregin_JMFM] and [@SSS]. For the Stokes system this estimate was originally obtained in [@Seregin_ZNS271] in the case of a plane boundary, and after that in [@Solonnikov_ZNS288] in the case of a curved boundary. In our approach we obtain the corresponding estimate for solutions to the Perturbed Stokes system (under certain conditions that guarantee smallness of the “perturbation”) as a direct consequence of our Theorems \[Theorem2\] and \[Theorem\_Local\_Estimate\]. Proof of Theorem \[Theorem1\] {#Section_T_1} ============================= We will derive Theorem \[Theorem1\] from the following result. \[Theorem1-1\] Suppose $s$, $l \in (1,\infty)$. Assume $a_{ijkl}$, $b_{ijk}$, $c_{ij}$, $d_{ij}\in L_\infty( Q_T)$ and consider the problem $$\left\{ \quad \gathered \gathered \partial_t w_i - a_{ijkm} w_{j,km} + b_{ijk} w_{j,k} + c_{ij}w_j+ \ d_{ij} q_{,j} \ = \ f_i, \\ {\mathop{\mathrm{div }}}w \ = \ 0, \endgathered \qquad\mbox{in}\quad Q_T, \\ w|_{t=0}=0,\qquad w|_{{\partial}{\Omega}\times(0,T)}=0. \qquad\quad \endgathered \right.
\label{Perturbed_Stokes_Nonzero-1}$$ There is a constant $\mu_0>0$ (depending on ${\Omega}$, $T$, $s$, $l$, $n$) such that if the coefficients $a_{ijkl}$, $b_{ijk}$, $c_{ij}$, $d_{ij}$ satisfy the estimate $$\sup\limits_{z\in \bar Q_T} \Big(~ |a_{ijkm}(z)- \dl_{ij}\dl_{km}| + |d_{ij}(z)-\dl_{ij}| + |b_{ijk}(z)| + |c_{ij}(z)|~\Big) \ \le \ \mu_0, \label{Smallness}$$ for all $i,j, k, m=1, \ldots, n$, then for any $f$ satisfying conditions the problem (\[Perturbed\_Stokes\_Nonzero-1\]) has a unique solution $ w \in W^{2,1}_{s,l}(Q_T)$, $q \in W^{1,0}_{s,l}(Q_T)$, $\int\limits_{\Omega}q~dx =0$ a.e. $t\in (0,T)$, and the estimate $$\gathered \| w \|_{W^{2,1}_{s,l} (Q_T)} + \| q\|_{W^{1,0}_{s,l}(Q_T)} \ \le \ C ~\|f\|_{L_{s,l}(Q_T)} \endgathered \label{Estimate_1-1}$$ holds with some constant $C>0$ depending only on ${\Omega}$, $T$, $n$, $s$, $l$. [**Proof of Theorem \[Theorem1-1\]:**]{} Denote by $$\gathered \mathcal H := \Big\{ ~(w,q)\in W^{2,1}_{s,l}(Q_T)\times W^{1,0}_{s,l}(Q_T): \ {\mathop{\mathrm{div }}}w=0 \ \mbox{a.e. in }Q_T, \\ w|_{{\partial}{\Omega}\times (0,T)}=0, \ w|_{t=0}=0, \ \int\limits_{\Omega}q~dx=0 \ \mbox{a.e. }t\in (0,T)~\Big\} \endgathered$$ the Banach space equipped with the norm $$\| (w,q)\|_{\mathcal H} \ := \ \| w\|_{W^{2,1}_{s,l}(Q_T)}+\| q\|_{W^{1,0}_{s,l}(Q_T)}.$$ For any $f\in L_{s,l} (Q_T)$ denote by $(w, q)\in \mathcal H$ the unique strong solution to the Stokes system: $$\left\{ \quad \gathered {\partial}_t w - \Delta w +\nabla q = f, \\ {\mathop{\mathrm{div }}}w=0, \\ w|_{t=0}=0, \qquad w|_{{\partial}{\Omega}\times(0,T)} = 0, \endgathered \right. \label{Stokes}$$ and consider the bijective operator $$\mathcal A_0: \mathcal H \to L_{s,l}(Q_T), \qquad \mathcal A_0(w,q):=f.$$ Then we know (see [@Solonnikov_UMN], Theorem 1.1) that there is a positive constant $C_*$ such that $$\gathered C_*~\| (w,q)\|_{\mathcal H} \ \le \ \| \mathcal A_0(w,q)\|_{L_{s,l}(Q_T)} \endgathered$$ for any $(w,q)\in \mathcal H$.
Hence the linear operator $\mathcal A_0$ is invertible and its inverse operator is bounded from $L_{s,l}(Q_T)$ to $\mathcal H$. Consider now the operator $$\mathcal A_1: \mathcal H \to L_{s,l}(Q_T)$$ determined by the system . The system can be reduced to the system with the right-hand side $\tilde f$, where $$\tilde f_i = f_i+ (a_{ijkl} - \dl_{ij}\dl_{kl})w_{j,kl}- b_{ijk}w_{j,k}-c_{ij}w_j -(d_{ij}-\dl_{ij})q_{,j},$$ and due to conditions $$\| \tilde f - f\|_{L_{s,l}(Q_T)} \ \le C\mu_0 \| (w,q)\|_{\mathcal H}, \label{Est_f}$$ Then for every $f\in L_{s,l}(Q_T)$ we have $$\gathered \| (\mathcal A_1 -\mathcal A_0)\mathcal A_0^{-1}f\|_{L_{s,l}(Q_T)} = \|\tilde f-f \|_{L_{s,l}(Q_T)} \ \le \\ \le \ C\mu_0 \| (w,q)\|_{\mathcal H} \ \le \ \frac{C\mu_0}{C_*} \|f\|_{L_{s,l}(Q_T)}. \endgathered$$ Choosing now $\mu_0< \frac{C_*}{2C}$ we obtain $\| (\mathcal A_1-\mathcal A_0)\mathcal A_0^{-1}\|_{L_{s,l}(Q_T)\to L_{s,l}(Q_T)}\le \frac 12$ and hence there exists $\mathcal A_1^{-1}:L_{s,l}(Q_T)\to \mathcal H$ which is a bounded operator. Theorem \[Theorem1-1\] is proved.  $\square$ [**Proof of Theorem \[Theorem1\]:**]{} Let ${\EuScript{L}}_\ph = \nabla \psi$ where $\psi$ is introduced in . Note that ${\EuScript{L}}_\ph$ is a smooth matrix and it is non-degenerate. Denote $w := {\EuScript{L}}_\ph u$. Then $$\hat \nabla_\ph \cdot u \ = \ {\mathop{\mathrm{div }}}w \quad \mbox{a.e. in}\quad Q_T$$ and the system can be reduced to the form $$\left\{ \quad \gathered \gathered \partial_t w - {\EuScript{L}}_\ph \hat\Delta_\ph ({\EuScript{L}}^{-1}_\ph w) + {\EuScript{L}}_\ph \hat \nabla_\ph q \ = \ {\EuScript{L}}_\ph f \\ {\mathop{\mathrm{div }}}w \ = \ g \endgathered \qquad \mbox{in}\quad Q_T, \\ w|_{t=0}=0, \qquad w|_{{\partial}{\Omega}\times(0,T)} = 0. \qquad\quad \endgathered \right. \label{L_krivoe}$$ Note that this is a system of type but with non-zero divergence. 
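The inversion of $\mathcal A_1$ in the proof above is the standard Neumann-series argument: writing $\mathcal A_1=(\mathcal I+(\mathcal A_1-\mathcal A_0)\mathcal A_0^{-1})\mathcal A_0$, the bracket differs from the identity by an operator of norm at most $\frac12$ and is therefore invertible. A finite-dimensional numerical sketch (the matrices are illustrative stand-ins, not the operators of the paper):

```python
import numpy as np

# Finite-dimensional sketch of the Neumann-series argument (the matrices are
# illustrative stand-ins for A_0, A_1; nothing here comes from the paper).
rng = np.random.default_rng(0)
n = 5
A0 = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # invertible reference operator
A1 = A0 + 0.01 * rng.standard_normal((n, n))         # small perturbation of A0

E = (A1 - A0) @ np.linalg.inv(A0)                    # E = (A1 - A0) A0^{-1}
print(np.linalg.norm(E, 2) < 0.5)                    # smallness condition holds

# A1 = (I + E) A0, so A1^{-1} = A0^{-1} (I + E)^{-1} = A0^{-1} sum_k (-E)^k
S, term = np.eye(n), np.eye(n)
for _ in range(60):
    term = term @ (-E)
    S = S + term
print(np.allclose(np.linalg.inv(A0) @ S, np.linalg.inv(A1)))  # True
```

The geometric convergence of the series is exactly what the bound $\|(\mathcal A_1-\mathcal A_0)\mathcal A_0^{-1}\|\le\frac12$ guarantees in the infinite-dimensional setting.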
The coefficients $a_{ijkm}$, $b_{ijk}$, $c_{ij}$, $d_{ij}$ arising in this system depend on the derivatives of $\ph$ of the first, second and third orders, due to the condition they are bounded and satisfy the conditions . Using the result of paper [@FilShil] (see section 4, estimate (4.1) in the cited paper) we can find the function $W\in W^{2,1}_{s,l}(Q_T)$ such that $$\gathered {\mathop{\mathrm{div }}}W \ = \ g\quad\mbox{a.e. in}\quad Q_T, \\ W|_{{\partial}{\Omega}\times (0,T)}=0, \qquad W|_{t=0}=0, \\ \| W\|_{W^{2,1}_{s,l}(Q_T)} \ \le \ C\Big(\| g\|_{W^{1,0}_{s,l}(Q_T)}+ \| {\partial}_t g\|_{L_{s,l}(Q_T)}^{1/s}\|{\partial}_t g\|_{L_l(0,T; W^{-1}_s({\Omega}))}^{1/s'}\Big). \endgathered$$ Then we consider the problem $$\left\{ \quad \gathered \gathered \partial_t \tilde w - {\EuScript{L}}_\ph \hat\Delta_\ph ({\EuScript{L}}^{-1}_\ph \tilde w) + {\EuScript{L}}_\ph \hat \nabla_\ph q \ = \ \tilde f \\ {\mathop{\mathrm{div }}}\tilde w \ = \ 0 \endgathered \qquad \mbox{in}\quad Q_T, \\ \tilde w|_{t=0}=0, \qquad \tilde w|_{{\partial}{\Omega}\times(0,T)} = 0, \qquad\quad \\ \tilde f \ := \ {\EuScript{L}}_\ph f - \Big( {\partial}_t W - {\EuScript{L}}_\ph \hat\Delta_\ph ({\EuScript{L}}^{-1}_\ph W)\Big) \ \in \ L_{s,l}(Q_T), \endgathered \right.$$ which has the unique solution $(\tilde w, q) \in W^{2,1}_{s,l}(Q_T)\times W^{1,0}_{s,l}(Q_T)$ due to Theorem \[Theorem1-1\]. Now we take  $w := \tilde w + W $ and see that $(w,q)\in W^{2,1}_{s,l}(Q_T)\times W^{1,0}_{s,l}(Q_T) $ satisfy all equations in . The uniqueness of this solution follows from Theorem \[Theorem1-1\], and the estimate $$\gathered \| w\|_{W^{2,1}_{s,l}(Q_T)}+\| q\|_{W^{1,0}_{s,l}(Q_T)} \ \le \\ \le \ C\Big(\| f\|_{L_{s,l}(Q_T)}+ \| g\|_{W^{1,0}_{s,l}(Q_T)}+ \| {\partial}_t g\|_{L_{s,l}(Q_T)}^{1/s}\|{\partial}_t g\|_{L_l(0,T; W^{-1}_s({\Omega}))}^{1/s'}\Big) \endgathered$$ follows from the corresponding estimates of $\tilde w$ and $W$. 
From this estimate taking into account $u= {\EuScript{L}}^{-1}_\ph w$ and $W^3_\infty$–smoothness of $\ph$ we obtain . Note that only here we need $W^3_\infty$–smoothness for the function $\ph$. Theorem \[Theorem1\] is proved.   $\square$ Proof of Theorem \[Theorem\_Local\_Estimate\] {#Section_Local_Estimate} ============================================= The estimate follows from the estimate by the arguments used in the paper [@FilShil]. We reproduce this proof here just for the sake of completeness. Within this section $C$ denotes positive constants which can depend only on $n$, $r$, $R$, $s$, $l$ and can be different from line to line. Take arbitrary $\rho_1$, $\rho_2$ such that $$\begin{array}c r \le\rho_1<\rho_2\le R- \frac{1}{10}(R-r). \end{array}$$ Consider a cut-off function $\zeta\in C_0^\infty( Q^+_R)$ such that $$\gathered 0\le \zeta\le 1 \ \mbox{ in } \ Q^+_R, \quad\zeta\equiv 1 \ \mbox{ in } \ Q^+_{\rho_1}, \quad \zeta\equiv 0 \ \mbox{ in } \ Q^+_R\setminus Q^+_{\rho_2}, \\ \| \nabla^k\zeta \|_{L_\infty(Q^+_R)}\le \frac {C}{(\rho_2-\rho_1)^{k}},\quad k=1,2, \\ \| {\partial}_t\zeta \|_{L_\infty(Q^+_R)}\le \frac {C}{\rho_2-\rho_1}, \quad \| {\partial}_t\nabla \zeta \|_{L_\infty(Q^+_R)}\le \frac {C}{(\rho_2-\rho_1)^2}. \endgathered$$ Let $(v,p)$ be a solution to the system , . Fix arbitrary function $b\in L_l(-R^2, 0)$ of $t$–variable and denote $\bar p:=p-b$. Let ${\Omega}$ be a smooth domain such that  $B^+_{R-\frac{1}{10}(R-r)}\subset {\Omega}\subset B^+_R$. Consider functions $u:=\zeta v$, $q:=\zeta \bar p$. 
Then $(u,q)$ is a solution to the initial-boundary problem of type , but in domain ${\Omega}\times (-R^2,0)$ instead of ${\Omega}\times (0,T)$ and with “right hand sides” $f$, $g$ in equal to $\tilde f$, $\tilde g$, where $$\tilde f= \zeta f+ v ({\partial}_t \zeta- \hat\Delta_\ph \zeta) - 2(\hat \nabla_\ph v)\hat \nabla_\ph \zeta+ \bar p\hat \nabla_\ph \zeta , \quad\tilde g=v \cdot \hat \nabla_\ph \zeta .$$ Applying the estimate to the functions $(u,q,\tilde f, \tilde g)$ and taking into account that $\zeta\equiv 1$ on $Q_{\rho_1}^+$, $\frac 1{\rho_2-\rho_1}\ge C$, we obtain $$\gathered \| v \|_{W_{s,l}^{2,1} (Q_{\rho_1}^+)}^s \le C\| f\|_{L_{s,l}(Q^+_R)}^s + \frac C{(\rho_2-\rho_1)^{2s}} \Big(\| v \|_{W_{s,l}^{1,0}(Q^+_R)}^s + \|\bar p\|_{L_{s,l}(Q^+_R)}^s\Big) + \\ +C\Big( \| \nabla (v\cdot\hat \nabla_\ph \zeta) \|_{L_{s,l}(Q^+_R)}^s + \| {\partial}_t (v\cdot\hat \nabla_\ph \zeta) \|_{L_{s,l}(Q^+_R)}\| {\partial}_t (v\cdot\hat \nabla_\ph \zeta) \|_{L_l(-R^2,0; W^{-1}_s(B^+_R))}^{s-1}\Big) . 
\endgathered$$ Taking into account estimates $$\gathered \| \nabla (v\cdot\hat \nabla_\ph \zeta) \|_{L_{s,l}(Q^+_R)}^s\le \frac C{(\rho_2-\rho_1)^{2s}}\|v \|_{W^{1,0}_{s,l}(Q^+_R)}^s, \\ \| {\partial}_t (v\cdot\hat \nabla_\ph \zeta) \|_{L_{s,l}(Q^+_R)}\le \frac C{(\rho_2-\rho_1)^2} \Big(\| {\partial}_t v \|_{L_{s,l}(Q^+_{\rho_2})} + \| v \|_{L_{s,l}(Q^+_R)}\Big), \\ \| {\partial}_t (v\cdot\hat \nabla_\ph \zeta) \|_{L_l(-R^2,0; W^{-1}_s(B^+_R))}^{s-1}\le \frac C{(\rho_2-\rho_1)^{2s-2} } \Big(\| {\partial}_t v \|_{L_l(-R^2,0; W^{-1}_s(B^+_R))}^{s-1} + \| v \|_{L_{s,l}(Q^+_R)}^{s-1}\Big), \endgathered$$ we get $$\gathered \| v \|_{W_{s,l}^{2,1} (Q_{\rho_1}^+)}^s \le C\| f\|_{L_{s,l}(Q^+_R)}^s \\ + \frac C{(\rho_2-\rho_1)^{2s}} \Big(\| v \|_{W^{1,0}_{s,l}(Q^+_R)}^s + \|\bar p\|_{L_{s,l}(Q^+_R)}^s+ \| {\partial}_t v \|_{L_l(-R^2,0; W^{-1}_s(B^+_R))}^{s}\Big) \\ + \frac C{(\rho_2-\rho_1)^{2s}} \| {\partial}_t v \|_{L_{s,l}(Q^+_{\rho_2})}\Big( \| {\partial}_t v \|_{L_l(-R^2,0; W^{-1}_s(B^+_R))}^{s-1}+ \| v \|_{L_{s,l}(Q^+_R)}^{s-1}\Big) . \endgathered \label{To_chto_nado}$$ Estimating the last term in the right-hand side of via the Young inequality $ab\le {\varepsilon}a^s+C_{\varepsilon}b^{s'}$ we obtain the estimate $$\gathered \frac C{(\rho_2-\rho_1)^{2s}} \| {\partial}_t v \|_{L_{s,l}(Q^+_{\rho_2})}\Big( \| {\partial}_t v \|_{L_l(-R^2,0; W^{-1}_s(B^+_R))}^{s-1}+ \| v \|_{L_{s,l}(Q^+)}^{s-1}\Big) \le \\ \le {\varepsilon}\| {\partial}_t v \|_{L_{s,l}(Q^+_{\rho_2})}^s + \frac{C_{\varepsilon}}{(\rho_2-\rho_1)^{2ss'}}\Big( \| {\partial}_t v \|_{L_l(-R^2,0; W^{-1}_s(B^+_R))}^{s}+ \| v \|_{L_{s,l}(Q^+_R)}^{s}\Big) , \endgathered$$ where the constant ${\varepsilon}>0$ can be chosen arbitrary small. 
Therefore, $$\begin{gathered} \| v \|_{W_{s,l}^{2,1} (Q_{\rho_1}^+)}^s \le C\| f\|^s_{L_{s,l}(Q^+_R)}+ {\varepsilon}\| {\partial}_t v \|_{L_{s,l}(Q^+_{\rho_2})}^s + \\ + \frac {C_{\varepsilon}}{(\rho_2-\rho_1)^{2ss'}} \Big(\| v \|_{W^{1,0}_{s,l}(Q^+_R)}^s + \|\bar p\|_{L_{s,l}(Q^+_R)}^s+ \| {\partial}_t v \|_{L_l(-R^2,0; W^{-1}_s(B^+_R))}^{s}\Big) ,\end{gathered}$$ and by virtue of $$\gathered \| v \|_{W_{s,l}^{2,1} (Q_{\rho_1}^+)}^s \le {\varepsilon}\| {\partial}_t v \|_{L_{s,l}(Q^+_{\rho_2})}^s \\ + \frac {C_{\varepsilon}}{(\rho_2-\rho_1)^{2ss'}} \Big(\| f\|_{L_{s,l}(Q^+_R)}^s + \| v \|_{W^{1,0}_{s,l}(Q^+_R)}^s + \|\bar p\|_{L_{s,l}(Q^+_R)}^s \Big) . \endgathered \label{inequality this implies}$$ Now let us introduce the monotone function $\Psi(\rho) := \| v \|_{W_{s,l}^{2,1} (Q_\rho^+)}^s$ and the constant $$A:=C_{\varepsilon}\left(\| f\|_{L_{s,l}(Q^+_R)}^s + \| v \|_{W^{1,0}_{s,l}(Q^+_R)}^s + \|\bar p\|_{L_{s,l}(Q^+_R)}^s\right).$$ The inequality implies that $$\begin{array}c \Psi (\rho_1) \le {\varepsilon}\Psi(\rho_2)+\frac {A} {(\rho_2-\rho_1)^{\alpha}}, \qquad \forall ~\rho_1, \ \rho_2: \quad R_1 \le \rho_1<\rho_2\le R_0, \end{array} \label{Giaquinta's lemma}$$ for ${\alpha}=2s s' $ and for $R_1=r$, $R_0=R-\frac 1{10}(R-r)$. Now we shall take advantage of the following lemma (which can easily be proved by iteration if one takes $\rho_k:=R_0-2^{-k}(R_0-R_1)$): Assume $\Psi$ is a nondecreasing bounded function which satisfies the inequality for some ${\alpha}>0$, $A>0$, and ${\varepsilon}\in (0,2^{-{\alpha}})$.
Then there exists a constant $B$ depending only on ${\varepsilon}$ and ${\alpha}$ such that $$\Psi(R_1)\le \frac {B\, A}{(R_0-R_1)^{\alpha}} .$$ \[Giaquinta\_lemma\] Fixing ${\varepsilon}= 2^{-4 ss'}$ in and applying Lemma \[Giaquinta\_lemma\] to our function $\Psi$, we obtain the estimate $$\| v \|_{W_{s,l}^{2,1} (Q_{r}^+)}^s \le C_* \Big(\| f\|_{L_{s,l}(Q^+_R)}^s+\| v \|_{W^{1,0}_{s,l}(Q^+_R)}^s + \|\bar p\|_{L_{s,l}(Q^+_R)}^s \Big).$$ Then from we obtain that $\hat \nabla_\ph p\in L_{s,l}(Q_r^+)$. Taking into account and $\|\ph\|_{W^3_\infty(S_R)}\le \mu_2$ we get $$\nabla p\in L_{s,l}(Q_r^+), \quad \|\nabla p\|_{L_{s,l}(Q_r^+)} \ \le \ c\Big( \| v \|_{W_{s,l}^{2,1} (Q_{r}^+)} + \| f\|_{L_{s,l} (Q_{r}^+)}\Big).$$ Theorem \[Theorem\_Local\_Estimate\] is proved. $\square$ Proof of Theorem \[Theorem2\] {#Section_T2} ============================= For convenience of presentation we fix $R=1$ and $r =\frac 12$. The extension of our proof to the case of general $0<r<R$ is straightforward. Let $\rho_m \to +0$ be an arbitrary sequence. Extend all functions $v$, $p$, $f$ from $Q^+$ to the set $B^+\times \Bbb R$ by zero. For any extended function $v$ denote by $v^m$ the mollification of the function $v$ with respect to the $t$ variable: $$v^m(x,t) := (\om_{\rho_m}* v)(x,t) \equiv {\int\limits}_{{\mathbb{R}}} \omega_{\rho_m} ( t - \tau ) v(x,\tau)\,d\tau,$$ where $ \omega_{\rho}(t) = \frac1{\rho}\omega(t/\rho )$, and $\omega \in C^\infty_0(-1, 1)$ is a smooth kernel normalized by the identity $\int_0^1 \om(t)dt = 1$. As $v \in W^{1,0}_{s,l}( Q^+)$, $p \in L_{s,l}( Q^+)$, $f \in L_{s,l}( Q^+)$ we have $$\gathered v^m \to v \quad \text{ in } W^{1,0}_{s,l}( Q^+), \quad p^m \to p \quad \text{ in } L_{s,l}( Q^+),\\ f^m \to f \quad \text{ in } L_{s,l}( Q^+). \endgathered \label{wsl2}$$ Let us fix arbitrary $\dl\in (0,\frac 1{12})$.
Then for any $\rho_m< \dl$ and any $\eta\in C^\infty(\bar Q^+)$ $${\partial}_t(\om_{\rho_m}* \eta) (x,t) = (\om_{\rho_m}* {\partial}_t \eta)(x,t), \quad \forall~x\in B^+, \ t\in (-1+\dl, -\dl).$$ Let us take in $\eta= \om_{\rho_m}* \tilde \eta$ where $\tilde \eta\in C^\infty(\bar Q^+)$ is an arbitrary function vanishing on ${\partial}B^+\times (-1,0)$ and on $B^+\times (-1, -1+\dl)$ and $B^+\times (-\dl, 0)$. Using the property of convolution $$\gathered \int\limits_{Q^+} g\cdot (\om_{\rho_m} * h)~dxdt \ = \ \int\limits_{Q^+} (\om_{\rho_m} *g)\cdot h~dxdt, \\ \forall~\rho_m<\dl, \ g\in L_1(Q^+), \ h\in C^\infty(\bar Q^+): \ \supp h\subset \bar B^+\times [-1+\dl, -\dl], \endgathered$$ and taking into account the fact that convolution with respect to $t$ commutes with the differential operators $\Delta_\ph$, $\nabla_\ph$, we obtain the identity $$\gathered -~ \int\limits_{Q^+} v^m\cdot ({\partial}_t \tilde \eta + \hat \Delta_\ph \tilde \eta)~dxdt \ = \ \int\limits_{Q^+} (f^m\cdot \tilde \eta + p^m \hat\nabla_\ph \cdot \tilde \eta)~dxdt \endgathered \label{Integral_Identity-1}$$ which holds for all $\tilde \eta\in C^\infty(\bar Q^+)$ such that $\tilde \eta|_{x_n=0}=0$ and $\tilde \eta$ vanishes on $B^+\times (-1, -1+\dl)$, $B^+\times (-\dl, 0)$, and near the set ${\partial}'B^+\times (-1,0)$, where ${\partial}'B^+:=\{ x\in \Bbb R^n: |x|=R, x_n>0\}$. Let $\zeta \in C^\infty(\bar{Q}^+)$ be a cut-off function vanishing in $ Q^+ \setminus Q^+_{5/6}$ and such that $\zeta \equiv 1$ in $Q^+_{2/3}$. Denote $u^m := \zeta v^m$, $q^m := \zeta p^m$. Then from we obtain that $(u^m, q^m)$ satisfy the integral identity $$\gathered -~ \int\limits_{B^+\times (-1,-\dl)} u^m\cdot ({\partial}_t \eta + \hat \Delta_\ph \eta)~dxdt \ = \ \int\limits_{B^+\times (-1,-\dl)} (f_0^m\cdot \eta + q^m \hat\nabla_\ph \cdot \eta)~dxdt \endgathered$$ for any $\eta\in C^\infty(\bar B^+\times [-1,-\dl])$ such that $\eta|_{{\partial}B^+\times (-1, -\dl )}=0$ and $\eta|_{B^+\times \{t=-\dl \}}=0$.
Here by $f_0^m$ we denote the expression $$\gathered f_0^m = f^m \zeta - v^m {\partial}_t \zeta + v^m \hat{\Delta}_{\ph} \zeta - 2 \hat{\nabla}_{\ph} v^m \hat{\nabla}_{\ph} \zeta - p^m \hat{\nabla}_{\ph} \zeta. \endgathered \label{f_0^m}$$ Moreover, $u^m$ also satisfies the identity $$\hat \nabla_\ph \cdot u^m \ = \ g^m \quad \mbox{a.e. in}\quad Q^+,$$ where we denote $$g^m = v^m \cdot \hat{\nabla}_{\ph} \zeta.$$ Assume ${\Omega}\subset \Bbb R^3$ is a smooth domain such that $B^+_{5/6}\subset {\Omega}\subset B^+$ and denote $\tilde Q^+:={\Omega}\times (-1,0)$. As $v^m$ is smooth with respect to the $t$ variable for each fixed $m \in {\mathbb{N}}$, the functions $f^m_0$, $g^m$ possess the properties $$f^m_0 \in L_{s,l}(\tilde Q^+), \quad g^m \in W^{1,0}_{s,l}(\tilde Q^+), \quad {\partial}_t g^m \in L_{s,l}(\tilde Q^+), \quad \int\limits_{{\Omega}} g^m(x,t)~dx=0.$$ From Theorem \[Theorem1\] we obtain that for any $m\in \Bbb N$ there exists a strong solution $\tilde u^m\in W^{2,1}_{s,l}(\tilde Q^+)$, $\tilde q^m\in W^{1,0}_{s,l}(\tilde Q^+)$ to the problem $$\left\{ \quad \gathered \gathered {\partial}_t \tilde u^m - \hat{\Delta}_{\ph} \tilde u^m + \hat{\nabla}_{\ph} \tilde q^m = f_0^m \\ \hat{\nabla}_{\ph} \cdot \tilde u^m = g^m\\ \endgathered \quad \text{ in } \tilde Q^+, \\ \tilde u^m|_{{\partial}' \tilde Q^+} = 0. \endgathered \right. \label{wsl3}$$ Note that as $\zeta\equiv 1$ in $Q^+_{2/3}$, we have the identity $g^m\equiv 0$ in $Q^+_{2/3}$.
So, the functions $(\tilde u^m, \tilde q^m)$ satisfy all assumptions of Theorem \[Theorem\_Local\_Estimate\] in $Q^+_{2/3}$ and hence by Theorem \[Theorem\_Local\_Estimate\] with $r=\frac 12$, $R=\frac 23$ we obtain the estimate $$\gathered \| \tilde{u}^m \|_{W^{2,1}_{s,l}(Q^+_{1/2})} + \| \nabla \tilde{q}^m \|_{L_{s,l}(Q^+_{1/2})} \ \le \\ \le \ C~ {\left}( \| f^m_0 \|_{L_{s,l}(Q^+_{2/3})} + \| \tilde u^m \|_{W^{1,0}_{s,l}(Q^+_{2/3})} + \| \tilde q^m -b \|_{L_{s,l}(Q^+_{2/3})} {\right}) \endgathered \label{wsl4}$$ where the constant $C$ depends neither on $m$ nor on $\dl$, and $b\in L_l(-\frac 49,0 )$ is arbitrary. As every strong solution of the Perturbed Stokes system is a generalized one, from we obtain that $(\tilde u^m, \tilde q^m)$ satisfy the integral identity $$\gathered -~ \int\limits_{\tilde Q^+}\tilde u^m\cdot ({\partial}_t \eta + \hat \Delta_\ph \eta)~dxdt \ = \ \int\limits_{\tilde Q^+} (f_0^m\cdot \eta + \tilde q^m \hat\nabla_\ph \cdot \eta)~dxdt \endgathered$$ for all $\eta\in C^\infty(\overline{ \tilde Q^+})$ such that $\eta|_{{\partial}{\Omega}\times (-1, 0 )}=0$ and $\eta|_{{\Omega}\times \{t=0 \}}=0$. Hence the differences $w^m:=u^m-\tilde u^m$, $\pi^m:=q^m-\tilde q^m$ form a generalized solution to the Perturbed Stokes system in ${\Omega}\times (-1, -\dl)$ satisfying the integral identity $$\gathered -~ \int\limits_{{\Omega}\times (-1,-\dl)} w^m\cdot ({\partial}_t \eta + \hat \Delta_\ph \eta)~dxdt \ = \ \int\limits_{{\Omega}\times (-1,-\dl)} \pi^m \hat\nabla_\ph \cdot \eta~dxdt, \\ \hat \nabla_\ph \cdot w^m =0 \quad \mbox{a.e. in}\quad {\Omega}\times (-1,-\dl) \endgathered \label{Integral_Identity-2}$$ for any $\eta\in W^{2,1}_{s', l'}({\Omega}\times (-1,-\dl))$ such that $\eta|_{{\partial}{\Omega}\times (-1, -\dl )}=0$ and $\eta|_{{\Omega}\times \{t=-\dl \}}=0$. Denote $\ka=\min\{s,l\}>1$.
As $u^m$, $\tilde u^m\in L_{s,l}(\tilde Q^+)$ and $q^m$, $\tilde q^m\in L_{s,l}(\tilde Q^+)$ we have $w^m=u^m-\tilde u^m\in L_\ka(\tilde Q^+)$ and $\pi^m =q^m-\tilde q^m \in L_\ka(\tilde Q^+)$. Hence $|w^m|^{\ka-2}w^m\in L_{\ka'}(\tilde Q^+)$, and using Theorem \[Theorem1\] we can find functions $\eta\in W^{2,1}_{\ka'}({\Omega}\times (-1,-\dl))$ and $\kappa \in W^{1,0}_{\ka'}({\Omega}\times (-1,-\dl))$ such that $$\left\{\quad \gathered \gathered {\partial}_t \eta + \hat{\Delta}_{\ph} \eta + \hat{\nabla}_{\ph} \kappa = |w^m|^{\ka-2} w^m,\\ \hat{\nabla}_{\ph} \cdot \eta = 0, \endgathered \quad \text{in} \quad {\Omega}\times (-1, -\dl), \\ \eta|_{{\partial}{\Omega}\times (-1,-\dl)}=0, \qquad \eta|_{t=-\dl}= 0. \endgathered \right.$$ Substituting this $\eta$ as a test function into the identity we obtain $w^m =0$ in ${\Omega}\times (-1, -\dl)$. Hence $u^m =\tilde u^m\in W^{2,1}_{s,l}({\Omega}\times (-1, -\dl))$. Hence from we obtain $$\int\limits_{{\Omega}\times (-1,-\dl)} \pi^m \hat\nabla_\ph \cdot \eta~dxdt \ = \ 0, \quad \forall ~\eta\in L_{l'}((-1,-\dl); \overset{\circ}{W}{^1_{s'}}({\Omega})). \label{Pressure}$$ Correcting, if necessary, the function $\tilde q^m$ by a constant, we can assume that $\int\limits_{\Omega}\pi^m~dx =0$ for a.e. $t\in (-1,-\dl)$. As $\pi^m \in L_\ka({\Omega})$ for a.e. $t\in (-1,-\dl)$, we have $|\pi^m|^{\ka-2}\pi^m \in L_{\ka'}({\Omega})$ for a.e. $t\in (-1,-\dl)$. Taking into account the identity $\hat\nabla_\ph \cdot \eta = {\mathop{\mathrm{div }}}({\EuScript{L}}_\ph\eta)$ where ${\EuScript{L}}_\ph$ is a smooth invertible matrix and using the results of [@Bogovskii], for a.e. $t$ we can find $\eta(\cdot,t) \in \overset{\circ}{W}{^1_{\ka'}}({\Omega})$ such that $$\left\{ \quad \gathered {\mathop{\mathrm{div }}}({\EuScript{L}}_\ph\eta) = |\pi^m|^{\ka-2}\pi^m -(|\pi^m|^{\ka-2}\pi^m)_{\Omega}, \quad \mbox{a.e. } t\in (-1, -\dl), \\ \| \eta\|_{W^1_{\ka'}({\Omega})} \le C\|\pi^m\|_{L_{\ka}({\Omega})}^{\ka-1}.
\endgathered \right.$$ From the last estimate we see that $\eta \in L_{\ka'}((-1,-\dl); \overset{\circ}{W}{^1_{\ka'}}({\Omega}))\subset L_{l'}((-1,-\dl); \overset{\circ}{W}{^1_{s'}}({\Omega}))$. Substituting this $\eta$ into the identity, we obtain $\pi^m = 0$. This implies $q^m = \tilde q^m + \mathrm{const}$ and we obtain the inclusion $q^m\in W^{1,0}_{s,l}({\Omega}\times (-1, -\dl))$. Moreover, from we obtain $$\gathered \| {u}^m \|_{W^{2,1}_{s,l}(B^+_{1/2}\times (-\frac {1}{4}, -\dl))} + \| \nabla {q}^m \|_{L_{s,l}(B^+_{1/2}\times (-\frac {1}{4},-\dl))} \ \le \\ \le \ C~ {\left}( \| f^m_0 \|_{L_{s,l}(Q^+_{2/3})} + \| u^m \|_{W^{1,0}_{s,l}(Q^+_{2/3})} + \| q^m -b \|_{L_{s,l}(Q^+_{2/3})} {\right}) \endgathered$$ where $C$ is independent of $m$ and $\dl$. Using the identities $u^m=\zeta v^m$, $q^m=\zeta p^m$, $\zeta\equiv 1$ on $Q^+_{2/3}$ and the expression for $f_0^m$ we arrive at the estimate $$\gathered \| v^m \|_{W^{2,1}_{s,l}(B^+_{1/2}\times (-\frac {1}{4}, -\dl))} + \| \nabla p^m \|_{L_{s,l}(B^+_{1/2}\times (-\frac {1}{4},-\dl))} \ \le \\ \le \ C~ {\left}( \| f^m \|_{L_{s,l}(Q^+_{2/3})} + \| v^m \|_{W^{1,0}_{s,l}(Q^+_{2/3})} + \| p^m \|_{L_{s,l}(Q^+_{2/3})} {\right}). \endgathered$$ Making use of we obtain $$\begin{array}c v\in W^{2,1}_{s,l}\left(B^+_{1/2}\times (-\frac {1}{4}, -\dl)\right), \quad p\in W^{1,0}_{s,l}\left(B^+_{1/2}\times (-\frac {1}{4},-\dl)\right), \end{array}$$ and the estimate $$\gathered \| v \|_{W^{2,1}_{s,l}(B^+_{1/2}\times (-\frac {1}{4}, -\dl))} + \| \nabla p \|_{L_{s,l}(B^+_{1/2}\times (-\frac {1}{4},-\dl))} \ \le \\ \le \ C~ {\left}( \| f \|_{L_{s,l}(Q^+_{2/3})} + \| v \|_{W^{1,0}_{s,l}(Q^+_{2/3})} + \| p \|_{L_{s,l}(Q^+_{2/3})} {\right}) \endgathered$$ holds for any $\dl\in (0, \frac 1{12})$ with $C$ independent of $\dl$. The last inequality provides the required properties of $(v,p)$. Theorem \[Theorem2\] is proved.
$\square$ Proof of Theorem \[Theorem3\] {#Section_T3} ============================= As usual, for convenience of presentation we fix $R=1$ and $r =\frac 12$. For any $k=0,1,\ldots$ denote $s_k=\frac{ns}{n-ks}$ if $n>ks$ and $\frac{ns}{n-ks}<m$ and $s_k=m$ otherwise. Denote also $N = \min\{ k\in \Bbb N: s_k = m\}$ and $\rho_k = \frac 12 + \frac 1{2^{k+1}}$. Using Theorem \[Theorem\_Local\_Estimate\] and Theorem \[Theorem2\] we see that if $$(v,p)\in W^{1,0}_{s_k, l}(Q^+_{\rho_k})\times L_{s_k,l}(Q^+_{\rho_k})$$ is a generalized solution of the problem , in $Q^+_{\rho_k}$, then $(v,p)\in W^{2,1}_{s_k, l}(Q^+_{\rho_{k+1}})\times W^{1,0}_{s_{k},l}(Q^+_{\rho_{k+1}})$ and the following estimate holds: $$\gathered \| v \|_{W^{2,1}_{s_k, l}(Q^+_{\rho_{k+1}})} + \| \nabla p \|_{L_{s_k, l}(Q^+_{\rho_{k+1}})} \ \le \\ \le \ C~\Big( \| f\|_{L_{m ,l}(Q^+)} + \| v \|_{W^{1,0}_{s_k, l}(Q^+_{\rho_k})} + \| p - b \|_{L_{s_k, l}(Q^+_{\rho_k})} \Big), \endgathered \label{1}$$ where $b\in L_l(-1,0)$ is an arbitrary function of the $t$-variable. Moreover, due to the imbedding $W^1_{s_k}(B^+_{\rho_{k+1}})\hookrightarrow L_{s_{k+1}}(B^+_{\rho_{k+1}})$ we obtain the estimate $$\gathered \| v \|_{W^{1,0}_{s_{k+1}, l}(Q^+_{\rho_{k+1}})} + \| p \|_{L_{s_{k+1}, l}(Q^+_{\rho_{k+1}})} \ \le \\ \le \ C~\Big(\| v \|_{W^{2,1}_{s_k, l}(Q^+_{\rho_{k+1}})} + \| p \|_{W^{1,0}_{s_k, l}(Q^+_{\rho_{k+1}})}\Big). \endgathered \label{2}$$ Iterating and from $k=0$ to $k=N$ we finally obtain the estimate $$\gathered \| v \|_{W^{2,1}_{s_N, l}(Q^+_{1/2})} + \|\nabla p \|_{L_{s_N, l}(Q^+_{1/2})} \ \le \\ \le \ C^N~ \Big( \| f\|_{L_{m ,l}(Q^+)} + \| v \|_{W^{1,0}_{s_0, l}(Q^+)} + \| p - b \|_{L_{s_0, l}(Q^+)} \Big). \endgathered$$ This estimate is equivalent to . Theorem \[Theorem3\] is proved.  $\square$ [99]{} , [*On solution of some problems of vectoral analysis related to ${\mathop{\mathrm{div }}}$ and $\operatorname{grad}$ operators*]{}, Proc. of S.L. Sobolev Seminar [**1**]{} (1980), 5-40.
, [*On the Stokes problem with non-zero divergence*]{}, Zap. Nauchn. Semin. POMI [**370**]{} (2009), 184-202. , [*Some estimates near the boundary for solutions to the non-stationary linearized Navier-Stokes equations*]{}, Zap. Nauchn. Semin. POMI [**271**]{} (2000), 204-223. , [*Local regularity of suitable weak solutions to the Navier-Stokes equations near the boundary*]{}, Journal of Mathematical Fluid Mechanics [**4**]{} (2002) no.1, 1-29. , [*A note on local boundary regularity for the Stokes system*]{}, Zap. Nauchn. Semin. POMI [**370**]{} (2009), 151-159. , [*Boundary partial regularity for the Navier-Stokes equations*]{}, Zap. Nauchn. Semin. POMI [**310**]{} (2004), 158-190. , [*Estimates of solutions of the Stokes equations in Sobolev spaces with a mixed norm*]{}, Zap. Nauchn. Semin. POMI [**288**]{} (2002), 204-231. , [*On the estimates of solutions of nonstationary Stokes problem in anisotropic S.L. Sobolev spaces and on the estimate of resolvent of the Stokes problem*]{}, Uspekhi Matematicheskih Nauk, [**58**]{} (2003) no.2 (350) 123-156. [^1]: The second author is supported by RFBR, grant 11-01-00324
--- abstract: 'Based on the Ginzburg-Landau approach we generalize the Kittel theory and derive an interpolation formula for the temperature evolution of a multi-domain polarization profile $\mathbf{P}(x,z)$. We resolve the long-standing problem of the near-surface polarization behavior in ferroelectric domains and demonstrate the vanishing of the polarization instead of the usually assumed fractal domain branching. We propose an effective scaling approach to compare the properties of different domain-containing ferroelectric plates and films.' author: - 'Igor A. Luk’yanchuk' - Laurent Lahoche - Anaïs Sené title: Universal Properties of Ferroelectric Domains --- Design of ferroelectric devices necessitates taking into account such finite size effects as the formation of polarization-induced surface charges that, in turn, produce energy-consuming electrostatic depolarizing fields (see Ref.[@2005_Dawber] for review). As a result, regular periodic structures of $180^{\circ}$ domains that alternate the surface charge distribution, first proposed by Landau and Lifshitz[@1935_Landau; @Landau8] and by Kittel[@1946_Kittel] for ferromagnetic systems, can be formed in uniaxial easy-axis (natural or stress-induced) ferroelectric plates or films as an effective mechanism to confine the depolarization field to the near-surface layer and reduce its energy (Fig.\[Variat\_Fig1\]a).
The energy balance between the field-penetration depth ($\sim $ domain width $d$) and the domain wall (DW) concentration ($\sim d^{-1}$) leads to the famous square-root Kittel dependence of $d$ on the film thickness $2a_{f}$ [@1946_Kittel; @2000_Bratkovsky; @Guerville; @2007Catalan]: $$d=\sqrt{\gamma \,(\epsilon _{\perp }/\epsilon _{\parallel })^{1/2}\,(2a_{f}\,\xi _{0x})}\,,\quad \gamma ={\frac{2\sqrt{2}\pi ^{3}}{21\zeta (3)}}\simeq 3.53, \label{Kittel}$$where $\epsilon _{\parallel }$ and $\epsilon _{\perp }$ are the longitudinal and transversal dielectric constants and $\xi _{0x}$ is the transverse coherence length (roughly equal to the DW thickness). a) ![(a) Multi-domain texture of ferroelectric polarization in a uniaxial ferroelectric film, sandwiched by two paraelectric (dead) layers. The emerging depolarization electric field is provided by alternating polarization-induced surface charges and is confined to the near-surface layer of thickness comparable with the domain width $d$. (b) Elliptical functions $y=\mathrm{sn}(x,m)$ for different parameters $m$ that we use to model the domain profile at different $t$. (c) Phase diagram of domain states as a function of sample thickness $2a_f$ and reduced critical temperature $t=T/T_{c0}-1$. Polarization profiles of hard and soft domains were obtained by numerical solution of equations (\[Equations\])-(\[perd\]). We assume that $\varkappa _{\parallel }\simeq 500$, $\varepsilon _{\perp }\simeq 100$, $\varepsilon_p\ll \varepsilon _{\perp },\varkappa _{\parallel }$, $\xi_{0x}\simeq 1nm$ and $a_p\simeq30nm$. []{data-label="Variat_Fig1"}](Variat_Fig1a "fig:"){width="3.8cm"} b) ![](Variat_Fig1b "fig:"){width="3.2cm"} c) ![](Variat_Fig1c "fig:"){width="6.3cm"}
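For orientation, Eq. (\[Kittel\]) is easy to evaluate numerically. The sketch below takes $\gamma\simeq 3.53$ as quoted, $\epsilon_\perp\simeq 100$, $\epsilon_\parallel\simeq\varkappa_\parallel/2\simeq 250$ and $\xi_{0x}\simeq 1$ nm (the representative values used for Fig. \[Variat\_Fig1\]); the film thicknesses are illustrative choices:

```python
import math

GAMMA = 3.53  # numerical value of gamma quoted in Eq. (Kittel)

def kittel_width(thickness_nm, eps_perp=100.0, eps_par=250.0, xi0x_nm=1.0):
    # d = sqrt( gamma * (eps_perp/eps_par)^{1/2} * (2 a_f) * xi_0x )
    return math.sqrt(GAMMA * math.sqrt(eps_perp / eps_par)
                     * thickness_nm * xi0x_nm)

for two_a_f in (10.0, 100.0, 1000.0):  # film thickness 2a_f in nm, illustrative
    print(f"2a_f = {two_a_f:6.0f} nm  ->  d ~ {kittel_width(two_a_f):5.1f} nm")
```

For $2a_f=100$ nm this gives $d\approx 15$ nm, which lies inside the window $\xi_{0x}<d<a_p\simeq 30$ nm, and the square-root law means that quadrupling the thickness doubles the domain width.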
Consider the standard geometry [@2000_Bratkovsky] in which the uniaxial ferroelectric film is sandwiched by electroded paraelectric passive layers of width $a_{p}$ and permittivity $\varepsilon _{p}$. The multi-domain state should exist in certain intervals of film thickness $2a_{f}$, as shown in the phase diagram in Fig.\[Variat\_Fig1\]c and defined by the condition that delineates the applicability of Eq.(\[Kittel\]) and of our further consideration: $$\xi _{0x}<d(2a_{f})<a_{p} \label{condition}$$with the dependence $d(2a_{f})$ given by (\[Kittel\]). We also assume the most realistic case $\varepsilon _{p}\ll \varepsilon _{_{\perp }}<\varepsilon _{_{\parallel }}$, which gives $d\ll a_{f}$. At this stage the properties of the domain structure do not depend on $a_{p}$, $\varepsilon _{p}$ and the electrodes. For thicker films, when $d(2a_{f})$ approaches $a_{p}$, the emergent depolarizing field interacts with the screening electrodes, Eq.(\[Kittel\]) is not valid anymore, $d$ grows exponentially with $a_{p}^{-2}$ [@2000_Bratkovsky] and the domains practically emerge from the sample. However, in a free-standing electrodeless sample ($a_p\rightarrow \infty$) Kittel domains can exist in a wider interval of $2a_f$, until another restricting mechanism, the screening by internal free charges, comes into play. For thinner films we enter the little-studied region of atomic-size (microscopic) domains [@Bratkovsky2006]. Although domain structures should play a crucial role in the properties of thin ferroelectric films, only a few analytical theoretical studies of their temperature dependence have been performed.
In particular, the most commonly used Kittel approach [@Landau8; @1946_Kittel; @2000_Bratkovsky], in which the domain texture is considered as a set of up- and down-oriented (hard) domains having a flat polarization profile $\mathbf{P}(x,z)=\pm P_{0}$, the DWs are supposed to be infinitely thin and the boundary effects at the ferroelectric-paraelectric interface are neglected, is valid only far below the transition temperature $T_c$. Although the more general consideration proposed by Chensky and Tarasenko (CT)[@1982_Chensky] (see also [@Guerville; @2004_Stephanovich]), based on Ginzburg-Landau equations coupled with electrostatic equations, is valid in the whole temperature interval, only the solution close to $T_c$ was found. It is the objective of the present communication to establish an approach that permits modeling the temperature evolution of the domain structure. Starting from the CT equations we derive the analytical expression (\[varsolut\]) for the domain polarization profile that is valid in the whole temperature interval and includes the Kittel (at low $T$) and CT (at $T=T_c$) solutions as particular cases. Then, we deduce universal scaling relations between the parameters of the multi-domain state that should be useful in the treatment of experimental data. Our approach is complementary to the frequently used first-principles simulations (see e.g. [@Bo-Kuai_2006]), which reproduce the domain structure but give no general picture of the parameter dependence of the results. In deducing the CT equations we rely on the Euler-Lagrange variational formalism, which also permits obtaining the correct boundary conditions as the variation of surface terms.
The generating energy functional is written as [@Landau8]: $$F=\int \widetilde{\Phi }(\mathbf{P},\mathbf{E})dxdz,\quad \widetilde{\Phi }(\mathbf{P},\mathbf{E})=\widetilde{\Phi }(\mathbf{P},0)-\mathbf{EP}-\frac{1}{8\pi }\mathbf{E}^{2} \label{Functional}$$where $\mathbf{E}=(E_{x},E_{z})$, $\mathbf{P}=(P_{x},P_{z})$ and the field-independent part $$\widetilde{\Phi }(\mathbf{P},0)=\frac{4\pi }{\varepsilon _{\perp }}\frac{1}{2}P_{x}^{2}+\frac{4\pi }{\varepsilon _{i\parallel }}\frac{1}{2}P_{zi}^{2}+\frac{4\pi }{\varkappa _{\parallel }}f(P)$$includes the transversal $P_{x}$ and non-polar longitudinal $P_{zi}$ noncritical contributions ($\varepsilon _{\perp }$,$\varepsilon _{i\parallel }\gg 1$). The nonlinear Ginzburg-Landau energy depends on the spontaneous $z$-oriented polarization $P$ (assuming that $P_{z}=P_{zi}+P$) and is written as: $$f(P)=\frac{t}{2}P^{2}+\frac{1}{4}P_{0}^{-2}P^{4}+\frac{\xi _{0x}^{2}}{2}\left( \partial _{x}P\right) ^{2}+\frac{\xi _{0z}^{2}}{2}\left( \partial _{z}P\right) ^{2} \label{GLD}$$where the reduced temperature $t$ is expressed via the bulk critical temperature as $t=T/T_{c0}-1$, the parameter $\varkappa _{\parallel }$ is expressed via the paraelectric Curie constant $C$ and via the longitudinal zero-temperature permittivity $\varepsilon _{\parallel }$ in (\[Kittel\]) as $\varkappa _{\parallel }=C/T_{c0}\simeq 2 \varepsilon _{\parallel }$, and the coefficient $P_0$ is roughly equal to the saturated bulk polarization at $T\ll T_c$. The variation of (\[Functional\]) with respect to $P$ and the electrostatic potential $\varphi $ ($\mathbf{E}=-\nabla \varphi $), together with the exclusion of the non-essential variables $P_{x}$ and $P_{zi}$, gives the system of required equations that describe the ferroelectric transition taking into account the depolarizing field: $$\begin{gathered} (t-\,\xi _{0x}^{2}\partial _{x}^{2}-\xi _{0z}^{2}\partial _{z}^{2})P+(P/P_{0})^{2}P=-\frac{\varkappa _{\parallel }}{4\pi }\partial _{z}\varphi , \label{Equations} \\ (\varepsilon
_{i\parallel }\partial _{z}^{2}+\varepsilon _{\perp }\partial _{x}^{2})\varphi =4\pi \partial _{z}P. \notag\end{gathered}$$These equations should be completed by the Poisson equation for the paraelectric media in which the ferroelectric film is embedded: $$(\partial _{z}^{2}+\partial _{x}^{2})\varphi ^{(p)}=0, \label{fip}$$and by the boundary conditions at the para-ferro interface$$\varepsilon _{i\parallel }\partial _{z}\varphi -\varepsilon _{p}\partial _{z}\varphi ^{(p)}=4\pi P,\quad \varphi =\varphi ^{(p)},\quad \partial _{z}P=0. \label{bcfi2}$$which are also obtained as a result of the variation of (\[Functional\]) [@remark]. Periodic conditions $$P(x,z)=P(x+2d,z)\quad \varphi (x,z)=\varphi (x+2d,z) \label{perd}$$with the variational parameter $d$ are imposed to describe the periodicity of the domain structure. A simplification can be achieved if we present the initial functional (\[Functional\]) using the dimensionless (primed) variables: $$\begin{aligned} z &=&a_{f}\,z^{\prime },\quad x=\tau ^{-1/2}\xi _{0x}\,x^{\prime },\quad t=\tau \,t^{\prime }, \label{sc1} \\ P &=&\tau ^{1/2}P_{0}\,P^{\prime },\quad \varphi =\frac{1}{\varkappa _{\parallel }}\,\tau ^{3/2}\,a_{f}P_{0}\,\varphi ^{\prime }, \notag \\ F &=&\frac{a_{f}\,\xi _{0x}}{\varkappa _{\parallel }}\tau ^{3/2}\,P_{0}{}^{2}\,F^{\prime }\,\,\,\,\,\,\,\,\,\, \notag\end{aligned}$$with $$\tau =\left( \frac{\varkappa _{\parallel }}{\varepsilon _{\perp }}\right) ^{\frac{1}{2}}\frac{\xi _{0x}}{a_{f}}\ll 1$$in the truncated form $$\begin{aligned} F^{\prime } &=&\int [4\pi \left( \frac{1}{2}t^{\prime }P^{\prime 2}+\frac{1}{4}P^{\prime 4}+\frac{1}{2}\left( \partial _{x}^{\prime }P^{\prime }\right) ^{2}\right) \notag \label{sfn} \\ &&-\frac{1}{8\pi }(\partial _{x}^{\prime }\varphi ^{\prime })^{2}+P^{\prime }\partial _{z}^{\prime }\varphi ^{\prime }]dx^{\prime }dz^{\prime }\end{aligned}$$that was obtained after neglecting the small terms $$\widehat{A}_{1}=(\frac{\varepsilon _{\perp }}{\varkappa _{\parallel }})^{1/2}\frac{\xi
_{0z}}{a_{f}}(\partial _{z}^{\prime }P^{\prime })^{2},\,\,\,\,\widehat{A}_{2}=\frac{\varepsilon _{i\parallel }}{\varkappa _{\parallel }}(\frac{\varkappa _{\parallel }}{\varepsilon _{\perp }})^{1/2}\frac{\xi _{0x}}{a_{f}}(\partial _{z}^{\prime }\varphi ^{\prime })^{2} \label{small}$$(justification is given in the Appendix) and minimizing over $P_{x},P_{zi}$. The Euler-Lagrange variation of (\[sfn\]) over $P^{\prime }$ and $\varphi ^{\prime }$ gives the corresponding dimensionless equations: $$\begin{gathered} (t^{\prime }-\,\partial _{x}^{\prime 2})P^{\prime }+P^{\prime 3}=-\frac{1}{4\pi }\,\partial _{z}^{\prime }\varphi ^{\prime }, \label{ne1} \\ \partial _{x}^{\prime 2}\varphi ^{\prime }=4\pi \,\partial _{z}^{\prime }P^{\prime }, \label{ne2}\end{gathered}$$and boundary conditions at $z^\prime_-=0$ and at $z^\prime_+=2a_f^\prime=2$: $$P^{\prime }=0,\quad \varphi ^{\prime }=\varphi ^{\prime (p)}. \label{nbc}$$ which are simpler than conditions (\[bcfi2\]) since the order of (\[Equations\]) was reduced by neglecting (\[small\]). We stress here that these conditions are *derived* from the functional (\[sfn\]) as variational surface terms. Passage to dimensionless variables is a powerful tool that permits studying various properties of ferroelectric domains even without solving the differential equations. Note first that equations (\[ne1\],\[ne2\]) contain only one driving variable, the dimensionless temperature $t^{\prime }$. Therefore the “master” temperature dependence of any physical parameter calculated from (\[ne1\],\[ne2\]) can be re-scaled for any other ferroelectric sample, using the relations (\[sc1\]). We now derive such a “master” variational solution of equations (\[ne1\],\[ne2\]) for the domain profile $P^{\prime }(x^{\prime },z^{\prime },t^{\prime })$ valid in the whole temperature interval.
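To give a feeling for the magnitude of the rescaling (\[sc1\]), here is a short numerical sketch; the parameter values are the representative ones from the caption of Fig. \[Variat\_Fig1\], while the half-thickness $a_f=50$ nm is an illustrative choice:

```python
import math

kappa_par = 500.0   # varkappa_parallel, representative value
eps_perp = 100.0    # epsilon_perp, representative value
xi0x_nm = 1.0       # transverse coherence length xi_0x in nm
a_f_nm = 50.0       # half film thickness (2a_f = 100 nm), illustrative

# small parameter tau = (kappa_par/eps_perp)^{1/2} * xi_0x / a_f
tau = math.sqrt(kappa_par / eps_perp) * xi0x_nm / a_f_nm

# transverse lengths scale as tau^{-1/2} * xi_0x, temperatures as tau
x_scale_nm = xi0x_nm / math.sqrt(tau)
print(f"tau = {tau:.4f} (must satisfy tau << 1); "
      f"transverse length unit = {x_scale_nm:.2f} nm")
```

So for these numbers $\tau\approx 0.045\ll 1$, and one dimensionless unit of $x^{\prime}$ corresponds to a few nanometers.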
Note first that these equations can be solved analytically close to the transition to a multi-domain ferroelectric state [@1982_Chensky; @2004_Stephanovich] that occurs at: $$t_{c}^{\prime }=-\pi ,\qquad t_{c}=-2\pi \sqrt{\frac{\varkappa _{\parallel }}{\varepsilon _{\perp }}}{\frac{\xi _{0x}}{2a_{f}}} \label{tcc}$$(in dimensionless and dimensional variables), when the polarization has the sinusoidal (soft) distribution: $$P^{\prime }(x^{\prime },z^{\prime })=A(t^{\prime })\sin {\frac{\pi x^{\prime }}{d_{c}^{\prime }}}\sin {{\pi z^{\prime }}} \label{sinprof}$$with the half-period $d_{c}^{\prime }=\sqrt{2\pi }$ (which in dimensional variables is expressed as (\[Kittel\]) but with $\gamma =\pi $ and $\epsilon _{\parallel }=\varkappa _{\parallel }/2$). Below $t_{c}$ the domain walls become sharper due to the admixture of higher harmonics; at still lower temperatures the domains recover the (hard) Kittel-like profile. To account for both of these cases by a unique interpolation formula we shall exploit the periodic elliptic sine function $y=\mathrm{sn}(x,m)=\mathrm{sn}(x+4K,m)$ depicted in Fig. \[Variat\_Fig1\]b, frequently used to describe incommensurate phases [@Sannikov]. One quarter of its period is given by the tabulated first-kind elliptic integral $K(m)$ [@Abramowitz]. The useful property of $\mathrm{sn}(x,m)$ is that, depending on the parameter $0<m<1$, it recovers all the domain regimes described above: from the soft one (\[sinprof\]) at $m=0$, when $\mathrm{sn}(x,m)\rightarrow \sin x$, to the hard (Kittel-like) one at $m\sim 1$, when $\mathrm{sn}(x,m)$ tends to a step-wise function.
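The quoted limiting behaviour of $\mathrm{sn}(x,m)$ is easy to verify numerically without special-function libraries, since $y=\mathrm{sn}(u,m)$ solves $y''=-(1+m)y+2my^{3}$ with $y(0)=0$, $y'(0)=1$, while the quarter period $K(m)$ follows from the arithmetic-geometric mean; the integration steps below are illustrative choices:

```python
import math

def K(m):
    # complete elliptic integral of the first kind,
    # K(m) = pi / (2 * agm(1, sqrt(1 - m)))
    a, b = 1.0, math.sqrt(1.0 - m)
    while abs(a - b) > 1e-15:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return math.pi / (2.0 * a)

def sn(u, m, h=1e-3):
    # integrate y'' = -(1+m) y + 2 m y^3, y(0)=0, y'(0)=1 by classical RK4
    def rhs(y, v):
        return v, -(1.0 + m) * y + 2.0 * m * y ** 3
    n = max(1, int(u / h))
    dt = u / n
    y, v = 0.0, 1.0
    for _ in range(n):
        k1y, k1v = rhs(y, v)
        k2y, k2v = rhs(y + 0.5 * dt * k1y, v + 0.5 * dt * k1v)
        k3y, k3v = rhs(y + 0.5 * dt * k2y, v + 0.5 * dt * k2v)
        k4y, k4v = rhs(y + dt * k3y, v + dt * k3v)
        y += dt / 6.0 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y

# m = 0: sn reduces to the ordinary sine
assert abs(sn(math.pi / 2, 0.0) - 1.0) < 1e-6
# quarter period K(m): sn(K, m) = 1; half period: sn(2K, m) = 0
assert abs(sn(K(0.9), 0.9) - 1.0) < 1e-6
assert abs(sn(2 * K(0.9), 0.9)) < 1e-5
# m -> 1: the profile flattens toward a step-wise (Kittel-like) shape
assert sn(0.5 * K(0.99), 0.99) > 0.9 > math.sin(math.pi / 4)
```

The last assertion makes the soft-to-hard crossover visible: at the same fraction of the quarter period the sine is still at $\approx 0.71$, while $\mathrm{sn}$ with $m$ close to 1 has almost saturated.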
After some algebra (justification is given in the Appendix) we arrive at the following variational expression: $$P^{\prime }=A(t^{\prime })\,\,\mathrm{sn}\left[ \frac{4K_{1}(t^{\prime })}{2d^{\prime }(t^{\prime })}\,x^{\prime },\,m_{1}(t^{\prime })\right] \,\mathrm{sn}\left[ K_{2}(t^{\prime })\,z^{\prime },m_{2}(t^{\prime })\right] \label{varsolut}$$where the temperature dependencies of the parameters $m_{1}(t)$ and $m_{2}(t)$, the elliptic integrals $K_{1}(t)$ and $K_{2}(t)$, the amplitude $A(t)$ and the domain lattice half-period $d(t)$ are presented in Fig.\[FigParam\] and for practical use are approximated as:$$\begin{gathered} A^{\prime }(t^{\prime })\simeq \sqrt{\,t^{\prime }\,\tanh 0.35(t^{\prime }-t_{c}^{\prime })},\quad d^{\prime }(t^{\prime })\simeq 2.6 \label{approx} \\ K_{12}(t^{\prime })\simeq 0.85\sqrt{-t^{\prime} },\quad m_{12}(t^{\prime }) \simeq \tanh 0.27(t_{c}^{\prime }-t^{\prime }) \notag\end{gathered}$$ a) ![Temperature dependencies of the parameters of Eq.(\[varsolut\]): (a) elliptic arguments $m_{1}$ and $m_{2}$, elliptic integrals $K_{1}$ and $K_2$, (b) domain amplitude $A$ and domain lattice period $d^\prime$. All the variables are dimensionless.[]{data-label="FigParam"}](Variat_Fig2b "fig:"){width="5cm"} b) ![](Variat_Fig2c "fig:"){width="5cm"} Formula (\[varsolut\]) satisfies the boundary conditions $$P^{\prime }(x^{\prime },z^{\prime })=P^{\prime }(x^{\prime }+2d^{\prime },z^{\prime }),\quad P^{\prime }(x^{\prime },0)=P^{\prime }(x^{\prime },2)=0, \label{obc}$$recovers the soft domain structure (\[sinprof\]) at $t_{c}^{\prime }$, when $m_{12}(t_{c}^{\prime })=0$ and $A(t^{\prime })\sim (t_{c}^{\prime }-t^{\prime })^{1/2}$, and the Kittel-like structure at low $t^{\prime }$, when $m_{12}(t^{\prime })\rightarrow 1$ and $A(t^{\prime })\simeq (-t^{\prime })^{1/2}$, and gives the domain profile at arbitrary $t^{\prime }$. The parameters $K_{12}(t^{\prime })$ determine the space scale of the polarization variation: in dimensional variables the characteristic domain wall thickness is $\xi_{x}(t)=\xi_{0x}/(-t)^{1/2}$, whereas the thickness of the near-surface layer where $P(z)$ restores its equilibrium value is $\sim d/(-t)^{1/2}\cdot(\varkappa_\parallel/\epsilon_\perp)^{1/2}$ (i.e. $\sim d$ at low $t$). The variation and vanishing of the polarization at the sample surface modifies the initial assumption of the Kittel model that the polarization is constant inside a domain and resolves the long-standing paradox [@Landau8; @StrukovLevanyuk] according to which the permanent domain polarization should be reoriented close to the sample surface by its own depolarization field that exists in the near-surface layer. As follows from our calculations, the nonuniform distribution of polarization pumps the depolarization charge $\rho(r) \sim \text{div} \mathbf{P}$ from the sample surface into the near-surface layer $\sim d$, reducing the unfavorable depolarization field (justification is given in the Appendix) and its energy $\mathcal{E}_d\sim{E}^2/4\pi\sim4\pi{P}^2$. The price for this, the loss of the condensation energy $\mathcal{E}_c\sim 4\pi{P}^2/\epsilon_\parallel$, is not so high because $\epsilon_\parallel \gg 1$.
We therefore believe that the near-surface vanishing of the polarization is a more effective mechanism for overcoming the Kittel paradox in ferroelectrics and reducing the near-surface depolarization energy than the usually assumed [@Landau8; @StrukovLevanyuk] but rarely observed fractal branching of alternately oriented permanent-polarization domains near the sample surface. The polarization decay at the surface is a consequence of the boundary condition $P^\prime=0$ of the simplified equations (\[ne1\])-(\[nbc\]). The validity of this effect is illustrated in Fig. \[FigDomains\], where we compare the numerical solution of the simplified equations (\[ne1\])-(\[nbc\]) (Fig. \[FigDomains\]b) with that for the complete set of CT equations (Fig. \[FigDomains\]a). Clearly, the tendency of the polarization to vanish is preserved for the general solution in Fig. \[FigDomains\]a, although the “real” boundary condition $\partial _{z}P=0$ (\[bcfi2\]) is satisfied exactly at the surface. It is interesting to note that a precursor of the competing surface domain branching is also seen in Fig. \[FigDomains\]a,b as ripples at the domain end-points. The corresponding variational solution (\[varsolut\]) in Fig. \[FigDomains\]c is smoother, but correctly represents the properties of the numerical profile. a)![Polarization of Kittel domain. (a) Numerical solution of complete CT equations (\[Equations\])-(\[perd\]). (b) Numerical solution of simplified equations (\[ne1\])-(\[nbc\]). (c) Interpolation formula (\[varsolut\]) []{data-label="FigDomains"}](Variat_Fig3a "fig:"){width="2.5cm"} b)![Polarization of Kittel domain. (a) Numerical solution of complete CT equations (\[Equations\])-(\[perd\]). (b) Numerical solution of simplified equations (\[ne1\])-(\[nbc\]). (c) Interpolation formula (\[varsolut\]) []{data-label="FigDomains"}](Variat_Fig3b "fig:"){width="2.5cm"}c)![Polarization of Kittel domain. (a) Numerical solution of complete CT equations (\[Equations\])-(\[perd\]).
(b) Numerical solution of simplified equations (\[ne1\])-(\[nbc\]). (c) Interpolation formula (\[varsolut\]) []{data-label="FigDomains"}](Variat_Fig3c "fig:"){width="2.5cm"} We now present several remarkable conclusions about the physical properties of the multi-domain state which can be obtained from the scaling properties (\[sc1\]) alone, without solving the CT equations (\[Equations\])-(\[perd\]). \(i) Any transverse length parameter scales as $\tau ^{-1/2}\xi _{0x}$. This, in particular, justifies the Kittel formula (\[Kittel\]) for the domain width $d$ even beyond the flat domain approximation. A convincing demonstration of the validity of this scaling law was reported recently for various ferroelectric and ferromagnetic materials [@2007Catalan]. The temperature dependence $d(t)$ can be incorporated into (\[Kittel\]) as a dependence $\gamma=\gamma(t)$. Meanwhile, the results shown in Fig. \[FigParam\]b, as well as finite-element simulations [@Guerville], indicate that the dependence $d(t)$ is very weak, and hence one can extend the parameter $\gamma \simeq 3.53$ from (\[Kittel\]) to any temperature. This, in particular, implies low-temperature hysteresis related to the motion of domain walls. \(ii) The temperature $t$ scales as $\tau $. Thus, to compare the domain-related physical properties of different plates or films (even made from different materials), it is instructive to trace their temperature dependencies using the rescaled coordinate $t/\tau $. \(iii) All the domain-related properties and, in particular, the transition temperature $t_{c}$ (\[tcc\]) and the soft-to-hard domain crossover temperature $t^{\ast }\sim 10t_{c}$ scale as $1/2a_{f}$ with the plate (film) width, as illustrated in Fig. \[Variat\_Fig1\]c.
The temperature interval for the existence of soft domains, $\Delta t=t_{c}-t^{\ast }$, grows dramatically with decreasing film thickness, and one can expect that for thin films with $2a_{f}<100\,$nm only soft domains with a gradual polarization distribution are possible. Summarizing, we conclude that the domains in *any* ferroelectric sample and at *any* temperature can be easily obtained from the interpolation formulas (\[varsolut\]), (\[approx\]) by applying the scaling relations (\[sc1\]). This can be especially helpful for treating experimental data that probe the local distribution of polarization inside domains, such as ESR, Raman spectroscopy, TEM domain imaging, etc. We have demonstrated that, depending on the temperature and sample width, domains can have a soft (gradual) or hard (Kittel) profile. In either case the polarization has the tendency to vanish at the sample surface. Based on the universal scaling relations (\[sc1\]) we have demonstrated how the physical properties of different multi-domain films can be compared and mapped onto each other. We hope that this method will provide a powerful tool for the analysis and systematization of the numerous experimental data on thin ferroelectric films. This work was supported by the Region of Picardy, France, by STREP “Multiceral” (NMP3-CT-2006-032616) and by the FP7 IRSES program “Robocon”. We thank Prof. M. G. Karkut for useful discussions. **APPENDIX** (EPAPS document) We present here the technical derivation of (i) the simplified equations and corresponding boundary conditions from the generating Euler-Lagrange functional, (ii) the interpolation formula for the domain polarization, and (iii) the justification of the simplification of the generating functional. We use the dimensionless variables defined in the article, omitting the prime index.
*(i) Derivation of the simplified equations and boundary conditions from the Euler-Lagrange functional* The Euler-Lagrange variation of the simplified dimensionless functional (\[sfn\]) that describes the ferroelectric phase in an infinite thin plate (film) located at $0<z<2$, taken over the polarization $P$ and the potential of the electric field $\varphi $, gives: $$\begin{gathered} \delta F=\int \left[ \begin{array}{c} 4\pi \left( tP\delta P+P^{3}\delta P+\left( \partial _{x}P\right) \left( \partial _{x}\delta P\right) \right) \\ -\frac{1}{4\pi }(\partial _{x}\varphi )(\partial _{x}\delta \varphi )+\delta P\,\partial _{z}\varphi +P\,\partial _{z}\delta \varphi \end{array} \right] dxdz \\ = \begin{array}{c} 4\pi \int \delta P\left( tP+P^{3}-\partial _{x}^{2}P+\frac{1}{4\pi }\partial _{z}\varphi \right) dxdz \\ -\int \delta \varphi \left( \frac{1}{4\pi }\partial _{x}^{2}\varphi -\partial _{z}P\right) dxdz+\left[ \int P\delta \varphi dx\right] _{z=0}^{z=2} \end{array} =0\end{gathered}$$The first two (volume) terms provide the corresponding dimensionless equations (\[ne1\])-(\[ne2\]), whereas the third (surface) term gives the boundary condition (\[nbc\]), which should be supplemented by the condition of continuity of the potential at $z=0$ and at $z=2$.
*(ii) Derivation of the interpolation formula for the domain polarization* Although the nonlinear equations (\[ne1\])-(\[nbc\]) cannot be solved exactly, we shall look for their $x$-periodic domain solution in the variational form $$\begin{aligned} P &=&f(z)\,\mathrm{sn}\left[ \frac{4K(m_{1})}{2d}\,x,\,m_{1}\right] , \label{variat} \\ \qquad f(0) &=&f(2)=0,\text{ \ }P(x,z)=P(x+d,z) \notag\end{aligned}$$considering $m_{1}$, $d$ and the function $f(z)$ as variational parameters that minimize (\[sfn\]). Substitution of (\[variat\]) back into (\[sfn\]) and integration over the domain period gives: $$\begin{aligned} &&\int [-\frac{1}{8\pi }(\partial _{x}\varphi )^{2}+P\partial _{z}\varphi ]dxdz \\ &&\overset{(\ref{ne2})}{=}\int [-\frac{1}{8\pi }(\partial _{x}\varphi )^{2}-% \frac{1}{4\pi }\varphi \partial _{x}^{2}\varphi ]dxdz \notag \\ &&\overset{(\ref{ne2})}{=}2\pi \int \left( \partial _{z}f\right) ^{2}\,(2d)^{2}\,\delta (m_{1})dz \notag\end{aligned}$$and $$\begin{aligned} &&\int 4\pi \left( \frac{1}{2}tP^{2}+\frac{1}{4}P^{4}+\frac{1}{2}\left( \partial _{x}P\right) ^{2}\right) dxdz \\ &=&4\pi \int \left( \begin{array}{c} \frac{1}{2}tf(z)^{2}\,\alpha (m_{1})+\frac{1}{4}f(z)^{4}\,\eta (m_{1}) \\ +\frac{1}{2}f(z)^{2}\,\frac{4K(m_{1})}{\left( 2d\right) ^{2}}\beta (m_{1})% \end{array}% \right) dz\end{aligned}$$Now the functional depends only on the variable $z$: $$F=4\pi \int \left[ \begin{array}{c} \frac{1}{2}\,\left( \alpha (m_{1})t+\,\frac{4K(m_{1})}{\left( 2d\right) ^{2}}% \beta (m_{1})\right) f(z)^{2} \\ +\frac{1}{4}\,\eta (m_{1})f(z)^{4}+\frac{1}{2}\delta (m_{1})\,(2d)^{2}\left( \partial _{z}f\right) ^{2}% \end{array}% \right] dz \label{Fz}$$where the coefficients are expressed via the complete elliptic integrals of the first and second kind, $K(m)$ and $E(m)$, as:$$\begin{aligned} \alpha (m) &=&<\mathrm{sn}^{2}(x,m)> \\ &=&\frac{1}{m}\left[ 1-\frac{E(m)}{K(m)}\right] \notag \\ \eta (m) &=&<\mathrm{sn}^{4}(x,m)> \\ &=&\frac{1}{3m}\left[ 2\left( 1+m\right) \alpha (m)-1\right] \notag \\ \delta
(m) &=&\frac{\left\langle \mathrm{S}^{2}\left( x,m\right) \right\rangle }{\left[ 4K(m)\right] ^{2}} \\ &=&\frac{8}{m\left[ 4K(m)\right] ^{2}}\sum_{l=1,3,5}^{\infty }\left[ \frac{1% }{l}\frac{q^{l/2}(m)}{1-q^{l}(m)}\right] ^{2} \notag \\ \beta (m) &=&4K(m)\left\langle \left( \mathrm{sn}^{\prime }u\right) ^{2}\right\rangle \\ &=&4K(m)\frac{1}{3}\left[ 2-\left( 1+m\right) \alpha (m)\right] \notag\end{aligned}$$Here $q(m)=e^{-\frac{K(1-m)}{K(m)}\pi }$, $\mathrm{S}\left( x,m\right) =\int^{x}\mathrm{sn}\left( u,m\right) du$ and $\left\langle \ldots \right\rangle $ is the average over the period. The dependencies $\alpha (m)$, $\beta (m)$, $\eta (m)$ and $\delta (m)$ are presented in Fig. \[Variat\_Fig4\]. ![Coefficients $\protect\alpha(m)$, $\protect\beta(m)$, $\protect\eta(m)$ and $\protect\delta(m)$ that enter into the variational functional (\[Fz\])[]{data-label="Variat_Fig4"}](Variat_Fig4){width="5cm"} The variational Euler-Lagrange minimum of (\[Fz\]) is given by the function:$$f(z)=A(t,m_{1},m_{2})\,\mathrm{sn}\left[ K(m_{2})\,z,m_{2}\right] \label{ff}$$with $$A(t,m_{1},m_{2})=2d\left( t,m_{1},m_{2}\right) K(m_{2})\,\sqrt{2\frac{\delta (m_{1})}{\eta (m_{1})}m_{2}} \label{Amp}$$that matches the boundary conditions $f(0)=f(2)=0$ provided that the dependence $d(t,m_{1},m_{2})$ is fixed by the biquadratic equation: $$\begin{array}{c} (2d)^{4}\delta (m_{1})(1+m_{2})K^{2}(m_{2})+(2d)^{2}\alpha (m_{1})t \\ +4K(m_{1})\beta (m_{1})% \end{array}% =0$$Substitution of (\[ff\]) back into (\[Fz\]) gives:$$\begin{aligned} F(m_{1},m_{2}) &=&-4\pi \frac{1}{4}\eta (m_{1})\int f^{4}(z)\,dz \label{last} \\ &=&-\frac{1}{2}\pi \eta (m_{1})A^{4}(t,m_{1},m_{2})\eta (m_{2}) \notag\end{aligned}$$Collecting all the results, we present the final variational solution (\[varsolut\]). *(iii) Justification of the simplification of the generating functional* The simplified functional (\[sfn\]) was obtained by neglecting the terms (\[small\]).
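The closed-form averages $\alpha (m)$, $\eta (m)$ and $\beta (m)$ entering (\[Fz\]) are easy to cross-check numerically. The following Python sketch is an illustrative check added here, not part of the original derivation; scipy's parameter convention for $m$ is assumed, and $\left\langle \ldots \right\rangle$ is the average over one period $4K(m)$ of $\mathrm{sn}$, with $\mathrm{sn}^{\prime }=\mathrm{cn}\,\mathrm{dn}$.

```python
import numpy as np
from scipy.special import ellipj, ellipk, ellipe

def alpha(m):
    # <sn^2(x,m)> = (1/m) [1 - E(m)/K(m)]
    return (1.0 - ellipe(m) / ellipk(m)) / m

def eta(m):
    # <sn^4(x,m)> = (1/3m) [2(1+m) alpha(m) - 1]
    return (2.0 * (1.0 + m) * alpha(m) - 1.0) / (3.0 * m)

def beta(m):
    # 4K(m) <(sn' u)^2> = 4K(m) (1/3) [2 - (1+m) alpha(m)]
    return 4.0 * ellipk(m) * (2.0 - (1.0 + m) * alpha(m)) / 3.0

def averages_numeric(m, n=4096):
    """Direct averages of sn^2, sn^4 and (cn dn)^2 over one period 4K(m)."""
    K = ellipk(m)
    u = np.linspace(0.0, 4.0 * K, n, endpoint=False)  # uniform grid over one period
    sn, cn, dn, _ = ellipj(u, m)
    a = float(np.mean(sn**2))
    e = float(np.mean(sn**4))
    b = 4.0 * K * float(np.mean((cn * dn) ** 2))
    return a, e, b
```

For a smooth periodic integrand the uniform-grid average converges spectrally, so the agreement with the closed forms is essentially to machine precision.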
Using the profile $P^{\prime }(x^{\prime },z^{\prime })$ from (\[varsolut\]) we can now justify that the contribution of these terms is indeed small by noting that their action is concentrated in the near-surface layer of thickness $\xi _{r}\sim 1/K_{2}(t)\sim |t|^{-1/2}$. We will consider only the Kittel regime far from $t_c=-\pi$. The soft regime close to $t_c$ was already considered in [@Stephano]. The relative contribution of the first term to $F$ is estimated as $$\int \widehat{A}_{1}dx\,dz/F\sim (\frac{\varepsilon _{\perp }}{% \varkappa _{\parallel }})^{1/2}\frac{\xi _{0z}}{a_{f}}\xi _{r}\sim (\frac{d}{% a_{f}})^{2}|t|^{-1/2}\ll 1$$which is small for Kittel domains with $d\ll a_{f}$. Note, however, that this criterion is not satisfied for the monodomain polarization profile that is formally achieved when $d\rightarrow \infty $. This means that the dimensionless equations (\[ne1\],\[ne2\]) cannot be applied to the monodomain, $x$-independent solution, which, however, is unstable towards domain formation anyway. The other term, $\widehat{A}_{2}$, is related to the energy of the depolarizing electric field $E_{z}$. According to (\[ne2\]), this field can be calculated from the polarization profile (\[varsolut\]) as: $$E_{z}(x,z)=-\partial _{z}\varphi =-4\pi \partial _{z}^{2}\int^{x}\!\!\int^{x_{1}}P(x_{2},z)\,dx_{2}dx_{1}. \label{Fld}$$It follows that the depolarization field $E_{z}$ alternates periodically in the $x$-direction, in antiphase with $P$, and is located in the near-surface layer of thickness $\xi _{r}$. It vanishes at the surface and in the bulk.
Estimating the maximal value of $E_{z}$ at $x\sim \xi _{r}/2$ as $E_{z\max }\sim A$ we have: $$\int \widehat{A}_{2}dxdz/F\sim \frac{\varepsilon _{i\parallel }}{\varkappa _{\parallel }}(\frac{\varkappa _{\parallel }}{\varepsilon _{\perp }})^{1/2}% \frac{\xi _{0x}}{a_{f}}\xi _{r}\sim \frac{\varepsilon _{i\parallel }}{% \varkappa _{\parallel }}(\frac{d}{a_{f}})^{2}|t|^{-1/2}\ll 1.$$ The physical meaning of this estimate is discussed in the main text of the article. [99]{} M. Dawber, K. M. Rabe and J. F. Scott, Rev. Mod. Phys. **77**, 1083 (2005) L. Landau and E. Lifshitz, Phys. Z. Sowjet. **8**, 153 (1935) L. D. Landau and E. M. Lifshitz, *Electrodynamics of Continuous Media* (Elsevier, New York, 1985) C. Kittel, Phys. Rev. **70**, 965 (1946) A. M. Bratkovsky and A. P. Levanyuk, Phys. Rev. Lett. **84**, 3177 (2000) F. De Guerville, I. Lukyanchuk, L. Lahoche, and M. El Marssi, Mat. Sci. and Eng. B **120**, 16 (2005) G. Catalan, J. F. Scott, A. Schilling and J. M. Gregg, J. Phys.: Condens. Matter **19**, 022201 (2007) A. M. Bratkovsky and A. P. Levanyuk, Integrated Ferroelectrics **84**, 3 (2006) E. V. Chensky and V. V. Tarasenko, Sov. Phys. JETP **56**, 618 (1982) \[Zh. Eksp. Teor. Fiz. **83**, 1089 (1982)\] V. A. Stephanovich, I. A. Luk’yanchuk and M. G. Karkut, Phys. Rev. Lett. **94**, 047601 (2005) Bo-Kuai Lai, I. Ponomareva, I. I. Naumov et al., Phys. Rev. Lett. **96**, 137602 (2006) D. G. Sannikov, “Phenomenological Theory of the Incommensurate-Commensurate Phase Transition”, p. 43 in *Incommensurate Phases in Dielectrics I. Fundamentals*, ed. by R. Blinc and A. P. Levanyuk (Elsevier Sci. Publ., Amsterdam, 1986) M. Abramowitz and I. A. Stegun (eds.), *Handbook of Mathematical Functions* (10th ed., NBS, 1972) B. A. Strukov and A. P.
Levanyuk, *Ferroelectric Phenomena in Crystals* (Springer, Berlin, 1998) The frequently used more general boundary condition $\partial _{z}P=\lambda^{-1} P$ is obtained when the polarization is constrained by an additional surface contribution $\sim \lambda^{-1} \int P^2 dx$ to the free energy (\[Functional\]). We neglect this term here. V. A. Stephanovich et al., Phys. Rev. Lett. **94**, 047601 (2005) A. M. Bratkovsky and A. P. Levanyuk, Appl. Phys. Lett. **186**, 171 (2006)
--- abstract: 'This is a survey paper on the theory of scattered spaces in Galois geometry and its applications.' address: | Michel Lavrauw\ Università degli Studi di Padova, Italy, [[michel.lavrauw@unipd.it](michel.lavrauw@unipd.it)]{} author: - Michel Lavrauw date: 'January 27, 2016' title: Scattered Spaces in Galois Geometry --- Introduction and motivation {#lavrauw:sec:introduction} =========================== Given a set $\Omega$ and a set $S$ of subsets of $\Omega$, a subset $U\subset \Omega$ is called [*scattered*]{} with respect to $S$ if $U$ intersects each element of $S$ in at most one element of $\Omega$. In the context of Galois Geometry this concept was first studied in 2000 [@BaBlLa2000], where the set $\Omega$ was the set of points of the projective space ${\mathrm{PG}}(11,q)$ and $S$ was a $3$-spread of ${\mathrm{PG}}(11,q)$. The terminology of a [*scattered space*]{}[^1] was introduced later in [@BlLa2000]. The paper [@BaBlLa2000] was motivated by the theory of blocking sets, and it was shown that there exists a $5$-dimensional subspace, whose set of points $U \subset \Omega$ is scattered with respect to $S$, which then led to an interesting construction of a $(q+1)$-fold blocking set in ${\mathrm{PG}}(2,q^4)$. The notion of “being scattered" has turned out to be a useful concept in Galois Geometry. This paper is motivated by the recent developments in Galois Geometry involving scattered spaces. The first part aims to give an overview of the known results on scattered spaces (mainly from [@BaBlLa2000], [@BlLa2000], and [@Lavrauw2001]) and the second part gives a survey of the applications. Notation and terminology ======================== A [*$t$-spread*]{} of a vector space $V$ is a partition of $V\setminus \{0\}$ by subspaces of constant dimension $t$. Equivalently, a $(t-1)$-spread of ${\mathrm{PG}}(V)$ (the projective space associated to $V$) is a set of $(t-1)$-dimensional subspaces partitioning the set of points of ${\mathrm{PG}}(V)$. 
Sometimes the shorter term [*spread*]{} is used, when the dimension is irrelevant or clear from the context. Two spreads $S_1$, $S_2$ of ${\mathrm{PG}}(V)$ (respectively $V$) are called [*equivalent*]{} if there exists a collineation $\alpha$ of ${\mathrm{PG}}(V)$ (respectively, an element $\alpha$ of ${\mathrm{\Gamma L(V)}}$) such that $S_1^\alpha = S_2$. A standard construction of a spread (going back to Segre [@Segre1964]) is the following. We sketch the construction in the context of the larger framework of [*field reduction*]{} techniques on which we will elaborate in Section \[subsec:linear\_sets\]. Consider any ${\mathbb{F}}_q$-vector space isomorphism ${\mathbb{F}}_{q^t}\rightarrow {\mathbb{F}}_q^t$, and extend this to an isomorphism $\varphi$ between the ${\mathbb{F}}_q$-vector spaces ${\mathbb{F}}_{q^t}^r$ and ${\mathbb{F}}_q^{rt}$. For each nonzero vector $v\in V(r,q^t)$ consider the vector space $S_v=\{\varphi(\lambda v)~:~\lambda \in {\mathbb{F}}_{q^t}\}$. One easily verifies that the set ${\mathcal{D}}_{r,t,q}:=\{S_v~:~ v \in V(r,q^t)\setminus \{0\}\}$ defines a $t$-spread in $V(rt,q)$. A spread $S$ is called [*Desarguesian*]{} if $S$ is equivalent to ${\mathcal{D}}_{r,t,q}$ for some $r$, $t$ and $q$. Let $D$ be any set of subspaces in $V(n,q)$. A subspace $W$ of $V$ is called [*scattered with respect to $D$*]{} if $W$ intersects each element of $D$ in at most a $1$-dimensional subspace. Equivalently, a subspace of a projective space ${\mathrm{PG}}(V)$ is called scattered with respect to a set of subspaces $D$ of ${\mathrm{PG}}(V)$ if it intersects each element of $D$ in at most a point. In this paper $D$ will typically be a spread. We note that we will often switch between vector spaces and projective spaces, assuming that the reader is familiar with both terminologies.
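For small parameters the field-reduction construction can be carried out explicitly. The following Python sketch is an illustrative computation added to this survey text: it builds ${\mathcal{D}}_{2,2,2}$, modelling ${\mathbb{F}}_4={\mathbb{F}}_2[x]/(x^2+x+1)$ with elements encoded as coefficient pairs, mapping each nonzero $v\in V(2,4)$ to the ${\mathbb{F}}_2$-subspace $S_v$ of $V(4,2)$, and checking that the resulting $2$-spread partitions the $15$ nonzero vectors of ${\mathbb{F}}_2^4$ into $5$ spread elements.

```python
from itertools import product

# F_4 = F_2[x]/(x^2 + x + 1); the element a0 + a1*x is encoded as the pair (a0, a1)
def f4_mul(a, b):
    a0, a1 = a
    b0, b1 = b
    # (a0 + a1 x)(b0 + b1 x) mod (x^2 + x + 1), coefficients mod 2;
    # the reduction x^2 = x + 1 feeds a1*b1 into both coordinates
    c0 = (a0 * b0 + a1 * b1) % 2
    c1 = (a0 * b1 + a1 * b0 + a1 * b1) % 2
    return (c0, c1)

F4 = [(0, 0), (1, 0), (0, 1), (1, 1)]

def phi(v):
    """Field reduction F_4^2 -> F_2^4: concatenate the coefficient pairs."""
    return v[0] + v[1]  # tuple concatenation gives a 4-bit vector

def spread_element(v):
    """Nonzero vectors of S_v = { phi(lambda * v) : lambda in F_4 }."""
    return frozenset(phi((f4_mul(lam, v[0]), f4_mul(lam, v[1])))
                     for lam in F4 if lam != (0, 0))

# proportional vectors v give the same S_v, so the set below has one
# element per point of the projective line PG(1, 4)
spread = {spread_element(v) for v in product(F4, F4) if v != ((0, 0), (0, 0))}
```

Each spread element carries $q^t-1=3$ nonzero vectors, and $(q^{rt}-1)/(q^t-1)=5$ elements cover all of $V(4,2)\setminus \{0\}$.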
To avoid overcomplicating the notation, we will use the same symbol (for instance $D$) for a subset of the subspaces of a projective space and its associated object in the underlying vector space. We will make sure that there is no ambiguity concerning vector space dimension and projective dimension. If $D$ is any set of subspaces of a vector space or a projective space, and $W$ is a subspace of the same space, then by ${\mathcal{B}}_D(W)$ we denote the set of elements of $D$ which have a nontrivial intersection with $W$. If there is no confusion possible, then we also use the simplified notation ${\mathcal{B}}(W)$. Scattered spaces {#lavrauw_section:basics} ================ We start this section with a number of examples illustrating some of the difficulties that arise in the study of scattered spaces with respect to spreads. \[lavrauw\_example:easy\] 1. If $D$ is a spread of lines in ${\mathrm{PG}}(3,q)$ then every line not contained in the spread is scattered w.r.t. $D$. Also, since $|D|=q^2+1$ and a plane contains $q^2+q+1$ points, no plane of ${\mathrm{PG}}(3,q)$ is scattered w.r.t. $D$. 2. If $D$ is a spread of planes in ${\mathrm{PG}}(5,q)$ then every line not contained in an element of the spread is scattered w.r.t. $D$. Also, since $|D|=q^3+1$ and a solid (a 3-dimensional projective space) contains $q^3+q^2+q+1$ points, no solid of ${\mathrm{PG}}(5,q)$ is scattered w.r.t. $D$. The existence of a scattered plane is not immediately clear, but this will follow from one of the results in the next sections (Theorem \[general lower bound\]). 3. If $D$ is a spread of lines in ${\mathrm{PG}}(5,q)$ then it is easy to see that the dimension of a scattered subspace of ${\mathrm{PG}}(5,q)$ w.r.t. $D$ cannot exceed 3 (a solid). Also any such spread allows a scattered line (trivial), and a scattered plane (Theorem \[general lower bound\]). On the other hand, the existence of a scattered solid depends on the spread.
We will see that a Desarguesian spread does not allow scattered solids (by Theorem \[Desarguesian upper bound\]), but there are spreads which do (by Theorem \[scattering spread\]). Maximally vs maximum scattered {#lavrauw_section:max_vs_max} ------------------------------ It is important to observe the distinction between the following two definitions. A subspace $U$ is called [*maximally scattered*]{} w.r.t. a spread $D$ if $U$ is not contained in a larger scattered space. A subspace $U$ is called [*maximum scattered*]{} w.r.t. a spread $D$ if any scattered space $T$ w.r.t. $D$ satisfies $\dim T \leq \dim U$. As we will see in the following example, there exist maximally scattered spaces which are not maximum scattered. Consider the irreducible polynomial $f(x)=x^6+x^4+x^3+x+1\in {\mathbb{F}}_2[x]$, put ${\mathbb{F}}_{2^6}={\mathbb{F}}_2[x]/(f(x))$ and consider the set of subspaces $D=\{S_{u,v}~:~ u,v \in {\mathbb{F}}_2^6, (u,v)\neq (0,0)\}$ of $V(12,2)$, where $S_{u,v}=\langle (M^k u, M^k v)~:~k\in \{0,1,\ldots, 5\}\rangle$ and $M$ is the companion matrix of $f(x)$. This is the standard construction (by field reduction, see Section \[subsec:linear\_sets\]) of a Desarguesian spread, in this case a Desarguesian $6$-spread in $V(12,2)$. The subspace $U_5$ spanned by the rows of the matrix $$\left [ \begin{array}{cccccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &1 &1 & 0 \\ 0 &1 & 0 & 0 & 0 & 0 & 0 & 0 &1 & 0 & 0 & 0 \\ 0 & 0 &1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 &1 &1 \\ 0 & 0 & 0 &1 & 0 &1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 &1 &1 &1 &1 &1 &1 &1 & 0 \\ \end{array} \right ]$$ is a maximally scattered 5-dimensional subspace of $V(12,2)$. If $\varphi: {\mathbb{F}}_{2^6}\rightarrow {\mathbb{F}}_2^6$ is any vector space isomorphism, then the subspace $U_6:=\{(\varphi(\alpha),\varphi(\alpha^2))~:~ \alpha \in {\mathbb{F}}_{2^6}\}$ is maximum scattered. It follows that $U_5$ is maximally scattered but not maximum scattered.
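The scatteredness of $U_6$ can also be checked directly in a few lines of code (the following Python sketch is an independent illustration added here). Encoding ${\mathbb{F}}_{2^6}={\mathbb{F}}_2[x]/(f(x))$ as $6$-bit integers, the Desarguesian spread element through a nonzero point $(\alpha,\alpha^2)$ of $U_6$ is $\{(\lambda \alpha,\lambda \alpha^2)~:~\lambda \in {\mathbb{F}}_{2^6}\}$, so scatteredness amounts to the condition that $(\lambda \alpha)^2=\lambda \alpha^2$ forces $\lambda \in \{0,1\}$.

```python
F_POLY = 0b1011011  # f(x) = x^6 + x^4 + x^3 + x + 1 over F_2, bit i = coefficient of x^i

def gf64_mul(a, b):
    """Carry-less multiplication in F_{2^6} = F_2[x]/(f(x))."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000000:   # degree 6 reached: reduce by f(x)
            a ^= F_POLY
    return r

def gf64_sq(a):
    return gf64_mul(a, a)

def intersection_size(alpha):
    """Number of vectors that the spread element through (alpha, alpha^2)
    shares with U_6, i.e. the lambda with lambda*alpha^2 = (lambda*alpha)^2."""
    return sum(1 for lam in range(64)
               if gf64_sq(gf64_mul(lam, alpha)) == gf64_mul(lam, gf64_sq(alpha)))
```

For every nonzero $\alpha$ the count is $2$ (the zero vector and the point itself), i.e. each spread element meets $U_6$ in at most a $1$-dimensional subspace, confirming that $U_6$ is scattered.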
This example was constructed using the GAP-package FinInG (see [@gap], [@fining]). A lower bound on the dimension of maximally scattered spaces {#lavrauw_section:lower} ------------------------------------------------------------ It is obvious that every line of a projective space, which is not contained in an element of a spread $D$, is scattered with respect to $D$, so a maximally scattered space has vector dimension at least $2$. The following theorem gives a lower bound on the dimension of a maximally scattered space in terms of the dimension of the space and the dimension of the spread elements. Its proof (see [@BlLa2000]) is purely combinatorial and gives a method to extend a scattered space in case the bound is not attained. \[general lower bound\][[@BlLa2000 Theorem 2.1]]{}\ If $U$ is a maximally scattered subspace w.r.t. a $t$-spread in $V(rt,q)$, then $\dim U \geq \lceil (rt-t)/2\rceil +1$. Theorem \[general lower bound\] implies the existence of a scattered plane w.r.t. a plane spread in ${\mathrm{PG}}(5,q)$, answering one of the questions from Example \[lavrauw\_example:easy\]. An upper bound on the dimension of scattered spaces {#lavrauw_section:upper} --------------------------------------------------- Let $S$ be a $(t-1)$-spread in $\mathrm{PG}(rt-1,q)$. The number of spread elements is $(q^{rt}-1)/(q^t -1)$ $ = q^{(r-1)t} + q^{(r-2)t}+ \ldots + q^t +1$. Since a scattered subspace can contain at most one point of every spread element, the number of points in a scattered space must be less than or equal to the number of spread elements. This gives the following trivial upper bound. \[general upper bound\][[@BlLa2000 Theorem 3.1]]{}\ If $U$ is scattered w.r.t. a $t$-spread in $V(rt,q)$, then $\dim U \leq rt-t$. Note that for a line spread in $\mathrm{PG}(3,q)$ the upper and lower bound coincide. But this is quite exceptional. 
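Both bounds are trivial to tabulate. A quick Python sketch (illustrative only) of the vector-space dimensions in Theorems \[general lower bound\] and \[general upper bound\] confirms that, for $r,t\geq 2$, they coincide exactly for $(r,t)\in \{(2,2),(2,3)\}$:

```python
from math import ceil

def lower_bound(r, t):
    # Theorem [general lower bound]: dim U >= ceil((rt - t)/2) + 1 (vector dimension)
    return ceil((r * t - t) / 2) + 1

def upper_bound(r, t):
    # Theorem [general upper bound]: dim U <= rt - t (vector dimension)
    return r * t - t

# the bounds pin down the maximum dimension exactly iff they coincide
tight = [(r, t) for r in range(2, 8) for t in range(2, 8)
         if lower_bound(r, t) == upper_bound(r, t)]
```

For a line spread of ${\mathrm{PG}}(3,q)$ this gives vector dimension $2$ (a line), and for a plane spread of ${\mathrm{PG}}(5,q)$ vector dimension $3$ (a plane), matching the discussion above.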
In fact, excluding trivial cases we may assume that $t$ and $r$ are both at least 2, and it follows that the exact dimension of a maximum scattered space is determined by the lower and upper bounds only for $(r,t) \in \{ (2,2),(2,3)\}$, i.e., a line spread in $\mathrm{PG}(3,q)$ and a plane spread in $\mathrm{PG}(5,q)$. The projective dimension of a maximum scattered space in these cases is respectively 1, the dimension of a line, and 2, the dimension of a plane. There is a large variety of spreads and there is not much one can say about the possible dimension of a scattered subspace with respect to an arbitrary spread (see Section \[subsec:scattering spreads\]). This is one of the reasons to consider scattered spaces with respect to a Desarguesian spread (see Section \[subsec:desarguesian spreads\]). Another reason is the correspondence between the elements of a Desarguesian spread and the points of a projective space over an extension field (so-called [*field reduction*]{}, see Section \[subsec:linear\_sets\]). However before we proceed, in the following section we show that the upper bound from Theorem \[general upper bound\] cannot be improved without restrictions on the spread. Scattering spreads with respect to a subspace {#subsec:scattering spreads} --------------------------------------------- A spread $D$ is called [*a scattering spread with respect to a subspace*]{} $U$, if this subspace $U$ is scattered with respect to the spread $D$. \[scattering spread\][[@BlLa2000 Theorem 3.2]]{}\ If $W$ is an $(rt-t-1)$-dimensional subspace of $\mathrm{PG}(rt-1,q)$, $r\geq 2$, then there exists a scattering $(t-1)$-spread $\mathcal S$ with respect to $W$. This theorem shows that there is no room for improvement of Theorem \[general upper bound\] without assuming some extra properties on the spread. Up to now, the only spreads that have been investigated in detail are the Desarguesian spreads (see Section \[subsec:desarguesian spreads\]). Scattered spaces w.r.t. 
Desarguesian spreads {#subsec:desarguesian spreads} -------------------------------------------- Let $S$ be a $(t-1)$-spread of ${\mathrm{PG}}(rt-1,q)$ and consider the following incidence structure. First embed ${\mathrm{PG}}(rt-1,q)$ as a hyperplane $H$ in $\Sigma={\mathrm{PG}}(rt,q)$. Denote the set of points of $\Sigma \setminus H$ by $\mathcal P$ and the set of $t$-dimensional subspaces of $\Sigma$ which intersect $H$ in an element of $S$ by $\mathcal L$. Define an incidence relation $\mathcal I$ on $({\mathcal{P}}\times {\mathcal{L}}) \cup ({\mathcal{L}}\times {\mathcal{P}})$ by symmetric containment. Then the incidence structure ${\mathcal{D}}(S)=({\mathcal{P}},{\mathcal{L}},{\mathcal{I}})$ is a design with parallelism, also called [*Sperner space*]{} or [*S-space*]{} (see e.g. [@BaCo1974]). More precisely, ${\mathcal{D}}(S)$ is a $2-(q^{rt},q^t,1)$ design such that for each anti-flag $(x,U)$, there exists exactly one element of $\mathcal L$ which is incident with $x$ and parallel to $U$ (two elements of $\mathcal L$ are called [*parallel*]{} if their intersection is an element of $S$). Moreover, the design ${\mathcal{D}}(S)$ is a Desarguesian affine space if and only if $S$ is a Desarguesian spread (see [@BaCo1974 Theorem 2]). This correspondence is crucial in the proof of the following theorem and is of central importance for many of the applications of scattered spaces. We remind the reader that ${\mathcal{D}}_{r,t,q}$ denotes the Desarguesian $t$-spread in $V(rt,q)$. \[Desarguesian upper bound\] [[@BlLa2000 Theorem 4.3]]{}\ If $U$ is a scattered subspace w.r.t. ${\mathcal{D}}_{r,t,q}$, then $\dim U \leq rt/2$. It was also shown in [@BlLa2000] that this upper bound is tight whenever $r$ is even. [[@BlLa2000]]{} If $r$ is even, then there exists a scattered subspace w.r.t. ${\mathcal{D}}_{r,t,q}$ in $V(rt,q)$ of dimension $rt/2$. For $r$ odd, the exact dimension of a maximum scattered space is in general not known. 
The following theorem gives a lower bound on the dimension of a maximum scattered space. \[thm:scattered\_existence\][[@BlLa2000]]{} The dimension of a maximum scattered subspace w.r.t. ${\mathcal{D}}_{r,t,q}$ in $V(rt,q)$ is at least $r'k$ where $r' | r$, $(r',t)=1$, and $r'k$ is maximal such that $$k<(rt-t+3)/2~\mbox{for $q=2$ and $r'=1$}$$ and $$r'k<(rt-t+r'+3)/2~\mbox{otherwise.}$$ We conclude this section with an overview of the values of $r,t,q$, with $r$ odd and $t$ even, for which maximum scattered spaces w.r.t. ${\mathcal{D}}_{r,t,q}$ have been constructed. Note that for $t=2$, the existence of a scattered $r$-space w.r.t. ${\mathcal{D}}_{r,2,q}$ easily follows by considering an appropriate maximal subspace lying on the Segre variety $S_{r,2}(q)$, or equivalently a subspace ${\mathbb{F}}_q^r\otimes v$ of ${\mathbb{F}}_q^r\otimes {\mathbb{F}}_q^2$, for some nonzero vector $v\in {\mathbb{F}}_q^2$ (see e.g. Section 1.6 and in particular Theorem 1.6.4 of [@Lavrauw2001] for more details). For $t=4$ and $r=3$, a 6-dimensional scattered space w.r.t. ${\mathcal{D}}_{3,4,q}$ was constructed in [@BaBlLa2000] (see more on this in Section \[subsec:blocking\_sets\]). A much more general result (including the construction from [@BaBlLa2000]) was recently obtained by Bartoli et al. in [@BaGiMaPoPrep]. They constructed scattered linear sets of rank $rt/2$ (see Section \[subsec:linear\_sets\] below for definitions) in ${\mathrm{PG}}(r-1,q^t)$ for many parameters $r$, $t$ and $q$. As a corollary one obtains the existence of maximum scattered spaces w.r.t. ${\mathcal{D}}_{r,t,q}$ in the following cases. [(From [@BaGiMaPoPrep])]{} There exist scattered spaces w.r.t. ${\mathcal{D}}_{r,t,q}$, $t$ even, of dimension $rt/2$ in the following cases: (i) $q=2$ and $t\geq 4$; (ii) $q\geq 2$ and $t\not \equiv 0$ mod $3$; (iii) $q\equiv 1$ mod $3$ and $t \equiv 0$ mod $3$.
Apart from the computational examples from [@BaGiMaPoPrep], for $t=6$ and $q\in \{3,4,5\}$, the existence of scattered spaces of dimension $rt/2$ w.r.t. ${\mathcal{D}}_{r,t,q}$, with $r$ odd and $t$ even, remains open for $t\equiv 0 \mod 3$, $q \not \equiv 1 \mod 3$, and $q>2$. Applications ============ Translation hyperovals ---------------------- We start the section with one of the earliest applications of scattered spaces: hyperovals of translation planes. We assume the reader is familiar with the notion of a projective plane of order $q$. All necessary background (and much more) can be found in, for instance, [@Dembowski1968] or [@HuPi1973]. A [*hyperoval*]{} in a projective plane of order $q$ is a set $\mathcal H$ of $q+2$ points, no three of which are collinear, i.e. no line of the plane contains three points of $\mathcal H$. The existence of a hyperoval in a projective plane $\pi$ implies that the order $q$ of $\pi$ is even. Let $\pi$ be a projective plane. A [*perspectivity*]{} of $\pi$ is a collineation $\alpha$ for which there exists a point-line pair $(x, \ell)$ such that $\alpha$ fixes each line on $x$ and each point on $\ell$; $\alpha$ is then also called an [*$(x,\ell)$-perspectivity*]{}. If $x$ is on $\ell$ then $\alpha$ is called an [*$(x,\ell)$-elation*]{}, otherwise $\alpha$ is called an [*$(x,\ell)$-homology*]{}. The point $x$ is the [*center*]{} and the line $\ell$ is the [*axis*]{} of $\alpha$. A [*translation plane*]{} is a projective plane which contains a line $\ell_\infty$, such that for each point $x$ on $\ell_\infty$ the group of elations with center $x$ and axis $\ell_\infty$ acts transitively on the points of $m\setminus \{x\}$ for each line $m$ on $x$. Equivalently, the automorphism group $G$ of the plane is [*$(x,\ell_\infty)$-transitive*]{} for each point $x$ on $\ell_\infty$.
A hyperoval $\mathcal H$ is called a [*translation hyperoval*]{} if there exists a group $G$ of $q$ elations with common axis $\ell$, fixing $\mathcal H$. The line $\ell$ is a 2-secant, called the [*translation line*]{} of $\mathcal H$, and the group $G$ acts transitively on the points of ${\mathcal{H}}\setminus \ell$. Translation hyperovals in the Desarguesian projective plane ${\mathrm{PG}}(2,q)$ were classified by Payne in 1971 [@Payne1971]. For non-Desarguesian projective planes, and in particular for translation planes, the classification remains open. Denniston [@Dennistion1979] and Korchmáros [@Korchmaros1986] constructed translation ovals in non-Desarguesian translation planes, while Jha and Johnson [@JhJo1992] proved that for each non-prime integer $N>3$, there exists a non-Desarguesian translation plane of order $2^N$ admitting a translation hyperoval. Note that not every projective plane admits translation hyperovals since some planes (e.g. Figueroa planes) don’t have enough translations. The study of translation planes is equivalent to the study of spreads of a vector space whose dimension is twice the dimension of the elements of the spread. This correspondence is due to the André-Bruck-Bose (ABB) construction of a translation plane from a spread and vice versa. In fact, the ABB-construction is a special case of the construction of the design with parallelism ${\mathcal{D}}(S)$ mentioned above (with $r=2$). In this case, ${\mathcal{D}}(S)$ is a $2-(q^{2t},q^t,1)$ design with parallelism, i.e. an affine plane. The corresponding projective plane is denoted by $\pi(S)$. The equivalence between spreads and translation planes was obtained in 1954 by André [@Andre1954], using a group-theoretic point of view. The geometric construction as presented above was published by Bruck and Bose [@BrBo1964] in 1964. 
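Before turning to the general correspondence, the hyperoval condition itself is easy to verify by machine in the smallest case. The hyperconic $\{(1,t,t^2)~:~t\in {\mathbb{F}}_4\}\cup \{(0,1,0),(0,0,1)\}$ in ${\mathrm{PG}}(2,4)$, i.e. a conic together with its nucleus, is the classical example of a translation hyperoval; this specific check is supplied here for illustration and is not a construction from this survey. The Python sketch below tests that no three of its $q+2=6$ points are collinear, via $3\times 3$ determinants over ${\mathbb{F}}_4$, where in characteristic $2$ addition and subtraction are both XOR.

```python
from itertools import combinations

# F_4 encoded as {0, 1, 2, 3} with 2 = x and 3 = x + 1 in F_2[x]/(x^2 + x + 1)
def f4_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b100:       # degree 2 reached: reduce by x^2 + x + 1
            a ^= 0b111
    return r

def det3(p, q, r):
    """Determinant of a 3x3 matrix over F_4 (characteristic 2: signs are irrelevant)."""
    m = (p, q, r)
    return (f4_mul(m[0][0], f4_mul(m[1][1], m[2][2]) ^ f4_mul(m[1][2], m[2][1]))
            ^ f4_mul(m[0][1], f4_mul(m[1][0], m[2][2]) ^ f4_mul(m[1][2], m[2][0]))
            ^ f4_mul(m[0][2], f4_mul(m[1][0], m[2][1]) ^ f4_mul(m[1][1], m[2][0])))

# hyperconic: the conic {(1, t, t^2)} together with (0, 1, 0) and (0, 0, 1)
hyperoval = [(1, t, f4_mul(t, t)) for t in range(4)] + [(0, 1, 0), (0, 0, 1)]
no_three_collinear = all(det3(p, q, r) != 0 for p, q, r in combinations(hyperoval, 3))
```

Three points of ${\mathrm{PG}}(2,4)$ are collinear exactly when the determinant of their coordinate vectors vanishes, so the single boolean `no_three_collinear` certifies the hyperoval property.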
The following theorem shows the equivalence between translation hyperovals in translation planes (sharing the same axis) and scattered spaces, and is comparable to [@JhJo1992 Theorem 5], although the terminology in [@JhJo1992] is quite different. \[thm:scattered-hyperoval\] A translation plane $\pi(S)$ of order $2^{t}$ with translation line $\ell_\infty$ contains a translation hyperoval with translation line $\ell_\infty$ if and only if the spread $S$ in $V(2t,2)$ admits a scattered space of dimension $t$. We prove the first part of the theorem in a projective setting. Let $S$ be a $(t-1)$-spread in $\Sigma={\mathrm{PG}}(2t-1,2)$, and $U$ a scattered $(t-1)$-space w.r.t $S$. Embed $\Sigma$ as a hyperplane in $\Sigma^*$ and choose a $t$-dimensional subspace $K_U$ of $\Sigma^*$ with $K_U\cap \Sigma=U$. Let $\mathcal H$ denote the set of points in $K_U\setminus U$ together with the two points (call them $x$ and $y$) at infinity corresponding to the spread elements which are disjoint from $U$. We claim that $\mathcal H$ is a hyperoval of $\pi(S)$. To prove this claim, consider a line $\ell$ in $\pi(S)$. If $\ell$ contains $x$ (respectively $y$) then the $t$-space of $\Sigma^*$ corresponding to $\ell$ intersects $K_U$ in exactly one point $z$. In this case, the line $\ell$ contains exactly two points of $\mathcal H$, namely $z$ and $x$ (respectively $y$). If $\ell$ does not contain $x$ or $y$, then the $t$-space $\bar{\ell}$ of $\Sigma^*$ corresponding to $\ell$ intersects $\Sigma$ in an element $R\in S$ which intersects $U$ in a point $u$. There are two possibilities: either $\bar{\ell}$ intersects $K_U$ only in the point $u$, or $\bar{\ell}$ intersects $K_U$ in a line $\{u,u',u''\}$. In the first case, $\ell$ is external to $\mathcal H$. In the latter case, the line $\ell$ intersects $\mathcal H$ in exactly two points $u'$ and $u''$. 
This shows that no three points of $\mathcal H$ are collinear in $\pi(S)$ (note that the line $\ell_\infty$ meets $\mathcal H$ in the two points $x$ and $y$). It follows that $\mathcal H$ is a set of $2^t+2$ points, no three of which are collinear, i.e. $\mathcal H$ is a hyperoval. The existence of the group $H$ of $2^t$ elations, with the line at infinity as the common axis, immediately follows from the construction, as $H$ corresponds to the translation group stabilising $K_U\setminus U$. Conversely, suppose that $\pi(S)$ contains a translation hyperoval $\mathcal H$. Let $T$ denote the translation group of $\pi(S)$ and $H\leq T$ the translation group of $\mathcal H$. If $T_z\leq T$ denotes the group of $(z,\ell_\infty)$-elations, then the spread $S$ coincides with the set $\{T_z~:~z \in \ell_\infty\}$. All groups $T$, $H$ and $T_z \in S$ are elementary abelian 2-groups, and we will consider them as ${\mathbb{F}}_2$-vector spaces. Note that $|T|=2^{2t}$, $|H|=2^t$, and $|T_z|=2^t$. Let $x$ and $y$ be the two points of $\mathcal H$ on the line $\ell_\infty$. Then, for $z\in \ell_\infty \setminus \{x,y\}$, it follows that $|T_z\cap H|=2$, since a non-trivial $(z,\ell_\infty)$-elation fixing $\mathcal H$ is necessarily an involution. This suffices to conclude that $H$ intersects each element of $S\setminus \{T_x,T_y\}$ in a one-dimensional subspace, and therefore $H$ is a scattered space with respect to $S$ of dimension $t$ in $V(2t,2)$. Translation caps in affine spaces {#subsec:caps} --------------------------------- The next application is a generalisation to higher-dimensional spaces of the correspondence between translation hyperovals and scattered spaces. This comes from recent work [@BaGiMaPoPrep], to which we refer for the details. Here we only give a sketch of this correspondence.
As mentioned above, the ABB construction is a special case, with $r=2$, of the more general construction of the $2-(q^{rt},q^t,1)$ design ${\mathcal{D}}(S)$, and when the spread $S$ is Desarguesian, the design ${\mathcal{D}}(S)$ is an affine space ${\mathrm{AG}}(r-1,q^t)$. Generalising the previous construction of a translation hyperoval from a scattered space in ${\mathrm{PG}}(2t-1,2)$, starting now from a scattered space $U$ w.r.t. ${\mathcal{D}}_{r,t,2}$ in ${\mathrm{PG}}(rt-1,2)$, one obtains a set of points ${\mathcal{K}}=K_U\setminus U$ in the affine space ${\mathrm{AG}}(r-1,2^t)$. Again this set of points satisfies the property that no three of them are collinear. Such a set is called a [*cap*]{}, and this particular construction gives a [*translation cap*]{}. Translating this correspondence from [@BaGiMaPoPrep] into our terminology gives the following. A scattered subspace w.r.t. ${\mathcal{D}}_{r,t,2}$, $t>1$, corresponds to a translation cap in ${\mathrm{AG}}(r-1,2^t)$ and vice versa. This correspondence leads to the existence of complete caps whose cardinality is close to the theoretical lower bound for complete caps. See [@BaGiMaPoPrep] for further details. Linear sets {#subsec:linear_sets} ----------- Linear sets have many interesting aspects, and it would take us too far afield to elaborate on all of them; we refer to [@LaVa2015] and [@Polverino2010] for surveys on the topic. Before we explain some of the applications of scattered spaces to the theory of linear sets, we briefly introduce the notion of a linear set using the notation and terminology of field reduction, which was formalised in [@LaVa2015]. The technique called “field reduction” is based on the well understood concept of subfields in a finite field, and, maybe surprisingly, has proved to be a very powerful tool in Galois Geometry. Consider the [*field reduction map*]{} ${\mathcal{F}}_{r,t,q}$ as in [@LaVa2015] from ${\mathrm{PG}}(r-1,q^t)$ to ${\mathrm{PG}}(rt-1,q)$.
Points of ${\mathrm{PG}}(r-1,q^t)$ are mapped onto $(t-1)$-spaces of ${\mathrm{PG}}(rt-1,q)$, and in particular the image of the set $\mathcal P$ of points of ${\mathrm{PG}}(r-1,q^t)$ forms a Desarguesian $(t-1)$-spread ${\mathcal{D}}_{r,t,q}$ of ${\mathrm{PG}}(rt-1,q)$. If $U$ is a subspace of ${\mathrm{PG}}(rt-1,q)$ then by ${\mathcal{B}}(U)$ we denote the set of points corresponding to the spread elements which have non-trivial intersection with $U$, i.e. $$\begin{aligned} {\mathcal{B}}(U)=\{x \in {\mathcal{P}}~:~{\mathcal{F}}_{r,t,q}(x)\cap U \neq \emptyset\}.\end{aligned}$$ Here the set ${\mathcal{B}}(U)$ is considered as a set of points in ${\mathrm{PG}}(r-1,q^t)$, but using the one-to-one correspondence between $\mathcal P$ and ${\mathcal{D}}_{r,t,q}$ given by the field reduction map ${\mathcal{F}}_{r,t,q}$, sometimes ${\mathcal{B}}(U)$ is also considered as a subset of ${\mathcal{D}}_{r,t,q}$, consistent with the notation we used in the previous sections. This just means that the sets ${\mathcal{B}}(U)$ and ${\mathcal{F}}_{r,t,q}({\mathcal{B}}(U))$ are sometimes identified. The context (i.e. the ambient space) should always clarify if ${\mathcal{B}}(U)$ is considered as a subset of $\mathcal P$ or as a subset of ${\mathcal{D}}_{r,t,q}$. A set of points $L$ in ${\mathrm{PG}}(r-1,q^t)$ is called an [*${\mathbb{F}}_q$-linear set*]{} if there exists a subspace $U$ in ${\mathrm{PG}}(rt-1,q)$ such that $L={\mathcal{B}}(U)$. An ${\mathbb{F}}_q$-linear set ${\mathcal{B}}(U)$ is said to have [*rank $m$*]{} if $U$ has projective dimension $m-1$. These definitions immediately lead to the following proposition. An ${\mathbb{F}}_q$-linear set ${\mathcal{B}}(U)$ in ${\mathrm{PG}}(r-1,q^t)$ has maximal size (w.r.t. its rank) if and only if $U$ is scattered w.r.t. ${\mathcal{D}}_{r,t,q}$. An ${\mathbb{F}}_q$-linear set $L$ of rank $m$ has at most $(q^m-1)/(q-1)$ points and if this bound is reached then $L$ is called a [*scattered linear set*]{}. 
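These definitions can be illustrated in the smallest interesting case $r=2$, $t=3$, $q=2$: field reduction turns the points of ${\mathrm{PG}}(1,8)$ into a Desarguesian plane spread of ${\mathrm{PG}}(5,2)$, and the rank-3 subspace $U=\{(x,x^2):x\in{\mathbb{F}}_8\}$ gives a scattered linear set. The sketch below checks this by counting the spread elements met by $U$ and their weights; the concrete representation of ${\mathrm{GF}}(8)$ as ${\mathbb{F}}_2[x]/(x^3+x+1)$ is an assumption of the example.

```python
from collections import Counter

MOD = 0b1011  # GF(8) = F_2[x]/(x^3 + x + 1); addition of field elements is XOR

def gmul(a, b):
    # multiply in GF(8): carry-less product, then reduce mod x^3 + x + 1
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):
        if (r >> i) & 1:
            r ^= MOD << (i - 3)
    return r

def ginv(a):
    return next(b for b in range(1, 8) if gmul(a, b) == 1)

# U = {(x, x^2) : x in GF(8)}: an F_2-subspace of GF(8)^2 of rank 3,
# since squaring is F_2-linear in characteristic 2
U = [(x, gmul(x, x)) for x in range(8)]

# a nonzero vector (x, y) with x != 0 lies in the spread element
# corresponding to the point (1 : y/x) of PG(1, 8)
weights = Counter(gmul(y, ginv(x)) for x, y in U if x != 0)

assert len(weights) == 7             # (2^3 - 1)/(2 - 1) points: the maximum for rank 3
assert set(weights.values()) == {1}  # 2^1 - 1 = 1 nonzero vector per spread element
print("B(U) is scattered of rank 3:", len(weights), "points, all of weight 1")
```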
If ${\mathcal{B}}(U)$ is an ${\mathbb{F}}_q$-linear set in ${\mathrm{PG}}(r-1,q^t)$ and $U$ is maximum (respectively maximally) scattered w.r.t. ${\mathcal{D}}_{r,t,q}$, then ${\mathcal{B}}(U)$ is called a [*maximum*]{} (respectively [*maximally*]{}) [*scattered ${\mathbb{F}}_q$-linear set*]{}. In [@Polverino2010] Polverino introduced the notion of the dual linear set. If $\beta$ is a non-degenerate sesquilinear form on ${\mathbb{F}}_{q^t}^r$ and $Tr$ denotes the trace map from ${\mathbb{F}}_{q^t}$ to ${\mathbb{F}}_q$, then $Tr\circ \beta$ defines a non-degenerate form from ${\mathbb{F}}_{q^t}^r$ to ${\mathbb{F}}_q$. If $\perp$ denotes the corresponding polarity in ${\mathrm{PG}}(rt-1,q)$, and ${\mathcal{B}}(U)$ is an ${\mathbb{F}}_q$-linear set in ${\mathrm{PG}}(r-1,q^t)$, then ${\mathcal{B}}(U^\perp)$ is called the [*dual linear set with respect to $\beta$*]{}. If ${\mathcal{B}}(U)$ has rank $m$ then ${\mathcal{B}}(U^\perp)$ has rank $rt-m$. For maximum scattered linear sets we have the following theorem. [@Polverino2010 Theorem 3.5] If $rt$ is even and ${\mathcal{B}}(U)$ is a maximum scattered ${\mathbb{F}}_q$-linear set of ${\mathrm{PG}}(r-1,q^t)$, then the dual linear set with respect to any polarity of ${\mathrm{PG}}(r-1,q^t)$ is a maximum scattered ${\mathbb{F}}_q$-linear set as well. One of the important questions regarding linear sets is the [*equivalence problem*]{}. Contrary to linear subspaces (which are equivalent if and only if they have the same dimension), two linear sets of the same rank are not necessarily equivalent under the action of the projective group or the collineation group of the ambient projective space. This is of course not surprising, since two linear sets of the same rank might even have different cardinalities. Only in a few cases has the equivalence problem been solved.
[@LaVa2010] All scattered ${\mathbb{F}}_q$-linear sets of rank $3$ in ${\mathrm{PG}}(1,q^3)$, respectively ${\mathrm{PG}}(1,q^4)$, are equivalent under ${\mathrm{PGL}}(2,q^3)$, respectively ${\mathrm{PGL}}(2,q^4)$. All scattered ${\mathbb{F}}_q$-linear sets of rank $3$ in ${\mathrm{PG}}(1,2^5)$ are equivalent under ${\mathrm{P\Gamma L}}(2,2^5)$. The equivalence of maximum scattered ${\mathbb{F}}_q$-linear sets in ${\mathrm{PG}}(1,q^3)$ can be generalised to all projective spaces of odd dimension over ${\mathbb{F}}_{q^3}$. The following was shown for $n=2$ in [@MaPoTr2007 Proposition 2.7] and for general $n$ in [@LaVa2013 Theorem 4]. All maximum scattered ${\mathbb{F}}_q$-linear sets in ${\mathrm{PG}}(2n-1,q^3)$ are ${\mathrm{P\Gamma L}}$-equivalent. This equivalence does not, however, generalise to maximum scattered ${\mathbb{F}}_q$-linear sets of odd-dimensional projective spaces over extension fields of degree $>3$. For instance, in [@LuMaPoTr2014] it was shown that in ${\mathrm{PG}}(2n-1,q^t)$, $q>3$, $t\geq4$, there exist inequivalent maximum scattered linear sets. Another interesting question concerning linear sets is the [*intersection problem*]{}. Given two linear sets $L_1$ and $L_2$ of given rank in a given projective space, what are the possibilities for the intersection $L_1\cap L_2$? Again the answer is trivial for subspaces, and the question has also been answered for subgeometries (see [@DoDu2008 Theorem 1.3]), but for linear sets the problem is much more complicated and only a few results are known. The following theorem gives an answer to the intersection problem for an ${\mathbb{F}}_q$-linear set of rank $k$ and a scattered ${\mathbb{F}}_q$-linear set of rank two (i.e. an ${\mathbb{F}}_q$-subline).
[@LaVa2010 Theorems 8 and 9] An $\mathbb{F}_q$-subline intersects an ${\mathbb{F}}_q$-linear set of rank $k$ of ${\mathrm{PG}}(1,q^h)$ in $0,1,\ldots,\min\{q+1,k\}$ or $q+1$ points. Moreover, for every subline $L\cong{\mathrm{PG}}(1,q)$ of ${\mathrm{PG}}(1,q^h)$ and every $0\leq j\leq k$, there is a linear set $S$ of rank $k$, with $k\leq h$ and $k\leq q+1$, intersecting $L$ in exactly $j$ points. In [@DoDu2014 Proposition 5.2] the authors determined the intersection of two scattered ${\mathbb{F}}_q$-linear sets of rank $t+1$ in ${\mathrm{PG}}(2,q^t)$. Further results on the intersection of (not necessarily scattered) linear sets can be found in [@LaVa2013] and [@Pepe2011]. Two-intersection sets --------------------- A two-intersection set w.r.t. $k$-dimensional spaces in ${\mathrm{PG}}(V)$ is a set $\Omega$ of points such that the size of the intersection of $\Omega$ with a $k$-space takes only two different values, say $m_1$ and $m_2$. The numbers $m_1$ and $m_2$ are called the [*intersection numbers*]{} of the set $\Omega$. A fundamental result which makes scattered spaces particularly interesting is the following. [[@BlLa2000]]{} If $U$ is a scattered space w.r.t. ${\mathcal{D}}_{r,t,q}$ with $\dim U=rt/2$, then ${\mathcal{B}}(U)$ is a two-intersection set w.r.t. hyperplanes in ${\mathrm{PG}}(r-1,q^t)$, with intersection numbers $$m_1=\frac{q^{\frac{rt}{2}-t} -1}{q -1} ~\mbox{and}~ m_2=\frac{q^{\frac{rt}{2}-t+1} -1}{q -1}.$$ If $t$ is even, then this set has the same parameters as the union of $(q^{t/2}-1)/(q-1)$ pairwise disjoint Baer subgeometries isomorphic to ${\mathrm{PG}}(r-1,q^{t/2})$ (call such a set of [*type I*]{}). If $t$ is odd, then this set has the same parameters as the union of $(q^t-1)/(q-1)$ elements of an $(r/2-1)$-spread in ${\mathrm{PG}}(r-1,q^t)$. We call these two-intersection sets of [*type II*]{}.
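The intersection numbers can be verified by brute force in the smallest case $r=4$, $t=2$, $q=2$, where $m_1=3$ and $m_2=7$. The sketch below uses the type-I set with these parameters, the Baer subgeometry ${\mathrm{PG}}(3,2)$ of ${\mathrm{PG}}(3,4)$, and checks that every hyperplane of ${\mathrm{PG}}(3,4)$ meets it in $3$ or $7$ points; the representation of ${\mathrm{GF}}(4)$ as ${\mathbb{F}}_2[x]/(x^2+x+1)$ is a choice made for this example.

```python
from functools import reduce
from itertools import product

def m4(a, b):
    # multiplication in GF(4) = F_2[x]/(x^2 + x + 1); addition is XOR
    r = 0
    if b & 1:
        r ^= a
    if b & 2:
        r ^= a << 1
    if r & 4:
        r ^= 0b111
    return r

def normalize(v):
    # scale a nonzero vector so that its first nonzero coordinate is 1
    lead = next(c for c in v if c)
    inv = next(s for s in range(1, 4) if m4(lead, s) == 1)
    return tuple(m4(inv, c) for c in v)

def dot(h, p):
    return reduce(lambda s, t: s ^ t, (m4(a, b) for a, b in zip(h, p)))

points = {normalize(v) for v in product(range(4), repeat=4) if any(v)}
baer = {p for p in points if set(p) <= {0, 1}}  # PG(3,2) inside PG(3,4)
assert len(points) == 85 and len(baer) == 15

# hyperplanes of PG(3,4) are also parametrised by normalized nonzero functionals
counts = {sum(1 for p in baer if dot(h, p) == 0) for h in points}
assert counts == {3, 7}  # m1 = (q^2 - 1)/(q - 1), m2 = (q^3 - 1)/(q - 1) for q = 2
print("intersection numbers:", sorted(counts))
```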
It was proved for $r=3$ and $t=4$ in [@BaBlLa2000] and for general $rt$ even in [@BlLa2002] that the two-intersection sets obtained from scattered spaces are not of these types. [[@BlLa2002]]{}\[thm:nonequiv\] A scattered ${\mathbb{F}}_q$-linear set of rank $rt/2$ in ${\mathrm{PG}}(r-1,q^t)$ is inequivalent to the two-intersection sets of type I or type II. Two-weight codes ---------------- An ${\mathbb{F}}_q$-linear $[n,k]$-code $C$ is a $k$-dimensional subspace of ${\mathbb{F}}_q^n$. Vectors belonging to $C$ are called [*codewords*]{} and the [*weight*]{} ${\mathrm{wt}}(c)$ of a codeword is the number of nonzero coordinates of $c$ with respect to some fixed basis of ${\mathbb{F}}_q^n$. The [*distance*]{} $d(c_1,c_2)$ between two codewords is the number of positions in which they have different coordinates and is thus equal to ${\mathrm{wt}}(c_1-c_2)$. If the minimum distance of $C$ is $d$, then the code is called an [*${\mathbb{F}}_q$-linear $[n,k,d]$-code*]{}. In this case the code $C$ is an $e$-error correcting code with $e=\lfloor \frac{1}{2}(d-1)\rfloor$. Given a two-intersection set ${\mathcal{B}}(U)$ from a maximum scattered space $U$ in ${\mathrm{PG}}(rt-1,q)$, we can obtain a two-weight code as follows. We briefly sketch the construction and refer to Calderbank and Kantor [@CaKa1986] for further details. Put $n=|{\mathcal{B}}(U)|$ and define the code $C_U$ as the subspace generated by the columns of the $(n\times r)$-matrix $M_U$ whose rows are the coordinates of the points of ${\mathcal{B}}(U)$ with respect to some fixed frame of ${\mathrm{PG}}(r-1,q^t)$. Then $C_U$ has length $n$ and dimension $r$ (for this we use that $U$ is maximum scattered). Since ${\mathcal{B}}(U)$ has intersection numbers $m_1$ and $m_2$, the code $C_U$ is a two-weight code with weights $n-m_1$ and $n-m_2$. Hence we have the following theorem. [[@BlLa2000]]{} If $U$ is a scattered space of dimension $m=rt/2$ w.r.t.
${\mathcal{D}}_{r,t,q}$, then $C_U$ is an ${\mathbb{F}}_q$-linear $[(q^m-1)/(q-1),r]$-code with weights $$q^{m-t}\left ( \frac{q^t-1}{q-1}\right ) ~~\mbox{and}~~q^{m-t+1}\left ( \frac{q^{t-1}-1}{q-1}\right ).$$ Blocking sets {#subsec:blocking_sets} ------------- Blocking sets have received a tremendous amount of attention in the past decades, and it is through research in this area that scattered spaces came into the spotlight. In particular, the result by Blokhuis et al. [@BlStSz1999], showing that an $s$-fold blocking set in ${\mathrm{PG}}(2,q^4)$ of size $s(q^4+1)+c$, with $s$ and $c$ small enough, contains the union of $s$ disjoint Baer subplanes, motivated the paper by Ball et al. [@BaBlLa2000] from 2000. In the latter paper the authors constructed a scattered linear set of rank 6, thus obtaining a $(q+1)$-fold blocking set of size $(q+1)(q^4+q^2+1)$ in ${\mathrm{PG}}(2,q^4)$, and they proved that it is not the union of Baer subplanes (see also Theorem \[thm:nonequiv\]). A more general result on scattered spaces and blocking sets is the following theorem from [@BlLa2000], which shows that scattered spaces which are not maximum also give rise to blocking sets. [[@BlLa2000]]{}\ A scattered subspace $U$ of dimension $m$, with respect to a Desarguesian $t$-spread, in ${\mathrm V}(rt,q)$ induces a $\left ( \theta_{k-1}(q) \right )$-fold blocking set ${\mathcal{B}}(U)$, with respect to $(\frac{rt-m+k}{t}-1)$-dimensional subspaces in $\mathrm{PG}(r-1,q^t)$, of size $\theta_{m-1}(q)$, where $1\leq k \leq m$ is such that $t~|~(m-k)$ and $\theta_{i}(q)=(q^{i+1}-1)/(q-1)$ denotes the number of points of ${\mathrm{PG}}(i,q)$. Embeddings of Segre varieties ----------------------------- The [*Segre variety $S_{t,t}(q)$*]{} is an algebraic variety in ${\mathrm{PG}}(t^2-1,q)\cong {\mathrm{PG}}({\mathbb{F}}_q^t\otimes {\mathbb{F}}_q^t)$, whose points correspond to the fundamental tensors in ${\mathbb{F}}_q^t\otimes {\mathbb{F}}_q^t$.
The geometry of points and lines lying on the Segre variety $S_{t,t}(q)$ is a [*semilinear space*]{} (also called a [*product space*]{}) representing the product ${\mathrm{PG}}(t-1,q)\times {\mathrm{PG}}(t-1,q)$. A *projective embedding* of a semilinear space is an injective map into a projective space mapping lines into lines. So, $S_{t,t}(q)$ is a projective embedding of the product space ${\mathrm{PG}}(t-1,q)\times {\mathrm{PG}}(t-1,q)$ in ${\mathrm{PG}}(t^2-1,q)$. By [@Zanella1996], any embedded product space is an injective projection of a Segre variety. Since the image of any embedding of ${\mathrm{PG}}(t-1,q)\times {\mathrm{PG}}(t-1,q)$ into a projective space ${\mathrm{PG}}(m,F)$ contains two disjoint $(t-1)$-subspaces, it holds that $m\ge 2t-1$. Therefore, a projective embedding of ${\mathrm{PG}}(t-1,q)\times {\mathrm{PG}}(t-1,q)$ into ${\mathrm{PG}}(2t-1,F)$ is called a *minimum embedding*. In [@LaShZa2015] a construction is given of such a minimum embedding using a maximum scattered subspace w.r.t. a Desarguesian spread. [[@LaShZa2015]]{} If $U$ is a maximum scattered subspace w.r.t. ${\mathcal{D}}_{2,t,q}$, then ${\mathcal{B}}(U)\subset {\mathrm{PG}}(2t-1,q)$ is a minimum embedding of the Segre variety $S_{t,t}(q)$. The smallest non-trivial example of a Segre variety $S_{t,t}(q)$ is the hyperbolic quadric in ${\mathrm{PG}}(3,q)$, obtained for $t=2$. In [@LaShZa2015], it is shown that there exists an embedding ${\mathcal{B}}(U)$ of $S_{t,t}(q)$ which is also a hypersurface of degree $t$ in ${\mathrm{PG}}(2t-1,q)$, extending the properties of the hyperbolic quadric in ${\mathrm{PG}}(3,q)$. By construction this embedding is covered by two systems of maximum subspaces (in this case $(t-1)$-dimensional).
However, unlike the Segre variety, it turns out that the embedding ${\mathcal{B}}(U)$ contains $t$ systems of maximum subspaces, and hence for $t>2$, contrary to what one might expect, there exist systems of maximum subspaces which are not the image of maximum subspaces of the Segre variety, see [@LaShZa2015 Theorem 6]. Pseudoreguli ------------ The concept of a pseudoregulus is a generalisation of the concept of a regulus. If $A$, $B$, $C$ are three pairwise disjoint $(n-1)$-dimensional subspaces contained in a common $(2n-1)$-space, then through each point of any of these three subspaces there is exactly one line intersecting each of the spaces $A$, $B$ and $C$. Such a line is called a [*transversal line*]{} w.r.t. $A$, $B$, and $C$. If each transversal line $\ell$ is given coordinates with respect to the frame $A\cap \ell$, $B\cap \ell$, and $C\cap \ell$, then the points on all the transversals with the same coordinates form an $(n-1)$-dimensional subspace. The set of these subspaces is called the [*regulus $R(A,B,C)$ determined by $A$, $B$ and $C$*]{}, and the transversal lines w.r.t. $A$, $B$ and $C$ are also called [*transversal lines of the regulus $R(A,B,C)$*]{}. A regulus consisting of $(n-1)$-dimensional subspaces is also called an [*$(n-1)$-regulus*]{}. Equivalently, the regulus $R(A,B,C)$ is the family of maximal subspaces containing $A$, $B$, and $C$ of the unique Segre variety $S_{2,n}(q)$ containing $A$, $B$ and $C$. The transversal lines form the other family of maximal subspaces of $S_{2,n}(q)$. If $n=2$ these become the two families of $q+1$ lines lying on a hyperbolic quadric in ${\mathrm{PG}}(3,q)$. In 1980, Freeman constructed a set of $q^2+1$ lines in ${\mathrm{PG}}(3,q^2)$ which have exactly 2 transversal lines, called a [*pseudoregulus*]{}. The $q^2+1$ lines are the extended lines of a Desarguesian spread in a Baer subgeometry ${\mathrm{PG}}(3,q)$, and the transversal lines are the two conjugate lines defining the spread.
This idea was extended by Marino et al. in [@MaPoTr2007], to a set of $q^3+1$ lines in ${\mathrm{PG}}(3,q^3)$ using a maximum scattered ${\mathbb{F}}_q$-linear set (of rank 6), obtaining the following. [[@MaPoTr2007]]{} To any scattered ${\mathbb{F}}_q$-linear set $L$ of rank 6 of ${\mathrm{PG}}(3, q^3)$ is associated an ${\mathbb{F}}_q$-pseudoregulus consisting of all $(q^2 + q + 1)$-secant lines of $L$. Instead of a Baer subgeometry in ${\mathrm{PG}}(3,q^2)$ the authors of [@MaPoTr2007] considered a subgeometry ${\mathrm{PG}}(5,q)$ in ${\mathrm{PG}}(5,q^3)$ and a Desarguesian plane spread ${\mathcal{D}}_{2,3,q}$ of ${\mathrm{PG}}(5,q)$ together with the three conjugate lines $\ell$, $\ell^\omega$ and $\ell^{\omega^2}$ defining the spread ${\mathcal{D}}_{2,3,q}$. Projecting the subgeometry from $\ell$ onto the 3-dimensional space $\langle \ell^\omega,\ell^{\omega^2}\rangle$ gives a scattered linear set of rank 6. The transversals to the associated ${\mathbb{F}}_q$-pseudoregulus are the lines $\ell^\omega$ and $\ell^{\omega^2}$. Note that in ${\mathrm{PG}}(3,q^3)$ the pseudoregulus can thus be reconstructed from the scattered linear set of rank 6, simply by taking all the $(q^2+q+1)$-secants. This was not the case in the previous situation in ${\mathrm{PG}}(3,q^2)$, where the pseudoregulus cannot be reconstructed from the Baer subgeometry ${\mathrm{PG}}(3,q)$ (or equivalently a maximum scattered linear set of rank 4): in that case any Desarguesian spread of the Baer subgeometry gives a pseudoregulus. These ideas were further developed in [@LaVa2013] in higher dimensional projective spaces, and the following theorem was proved. [[@LaVa2013]]{} If $L$ is a scattered ${\mathbb{F}}_q$-linear set of rank $3n$ in ${\mathrm{PG}}(2n-1,q^3)$, $n\geq 2$, then a line of ${\mathrm{PG}}(2n-1,q^3)$ meets $L$ in $0,1,q+1$ or $q^2+q+1$ points and every point of $L$ lies on exactly one $(q^2+q+1)$-secant to $L$.
Two different $(q^2+q+1)$-secants to $L$ are disjoint and there exist exactly two $(n-1)$-spaces, meeting each of the $(q^2+q+1)$-secants in a point. So in this case, the transversals are no longer lines, but $(n-1)$-dimensional subspaces, and again the pseudoregulus is uniquely determined by the maximum scattered linear set. This leads to the following questions. Does every pseudoregulus (uniquely) determine a scattered subspace? And how do we recognise a pseudoregulus, given a set of mutually disjoint lines? The following theorem gives a geometric characterisation of a regulus and pseudoregulus in ${\mathrm{PG}}(3,q^3)$, giving a partial answer to the second question. In the following results, $\tilde{{\mathcal{L}}}$ denotes the set of points contained in the lines of the set $\mathcal L$. [[@LaVa2013 Theorem 24]]{} Let ${\mathcal{L}}$ be a set of $q^3+1$ mutually disjoint lines in ${\mathrm{PG}}(3,q^3)$, $q>2$. If each subline defined by three collinear points of $\tilde{{\mathcal{L}}}$ is contained in $\tilde{{\mathcal{L}}}$, then ${{\mathcal{L}}}$ is a regulus or a pseudoregulus. Also the first question was answered in [@LaVa2013]: it is possible to reconstruct the maximum scattered ${\mathbb{F}}_q$-linear set from an ${\mathbb{F}}_q$-pseudoregulus in ${\mathrm{PG}}(2n-1,q^3)$, but this scattered linear set is not unique. [[@LaVa2013]]{} Let $q > 2$, $n\geq 2$. Let ${\mathcal{L}}$ be a pseudoregulus in ${\mathrm{PG}}(2n-1, q^3)$, let $P$ be a point of $\tilde{{\mathcal{L}}}$, on the line $\ell$ of ${{\mathcal{L}}}$, not lying on one of the transversal spaces to ${{\mathcal{L}}}$. Let $T = \{\ell_1,\ell_2,\ldots\}$ be the set of $(q + 1)$-secants through $P$ to $\tilde{{\mathcal{L}}}$, let $P(T)$ be the set of points on the lines of $T$ in $\tilde{{\mathcal{L}}}$. Let $\pi_i$ be the plane $\langle \ell,\ell_i\rangle$, and let $D_i$ be the set of directions on $\ell$, determined by the intersection of $\pi_i$ with $\tilde{{\mathcal{L}}}$. 
Then $D_i = D_1$, for all $i$, and $P(T)$, together with the points of $D_1$, form a scattered ${\mathbb{F}}_q$-linear set of rank $3n$ determining the pseudoregulus ${{\mathcal{L}}}$. It follows from the proof of the above theorem in [@LaVa2013], that the scattered linear set is not uniquely determined by the pseudoregulus. In fact we have the following. Let $q > 2$. If ${{\mathcal{L}}}$ is a pseudoregulus in ${\mathrm{PG}}(2n-1, q^3)$, then there are $q-1$ scattered ${\mathbb{F}}_q$-linear sets having ${{\mathcal{L}}}$ as associated pseudoregulus. The property that any maximum scattered ${\mathbb{F}}_q$-linear set in ${\mathrm{PG}}(2n-1,q^t)$ gives rise to a pseudoregulus no longer holds for $t>3$. This was further investigated in [@LuMaPoTr2014], introducing [*maximum scattered linear sets of pseudoregulus type*]{}. Also, all maximum scattered ${\mathbb{F}}_q$-linear sets in ${\mathrm{PG}}(2n-1,q^t)$ are no longer (projectively) equivalent for $t>3$. Indeed the following was proved in [@LuMaPoTr2014]. [[@LuMaPoTr2014]]{} For $n\geq 2$ and $t>3$, (i) there exist maximum scattered ${\mathbb{F}}_q$-linear sets which are not of pseudoregulus type, and (ii) there are $\varphi(t)/2$ orbits of maximum scattered ${\mathbb{F}}_q$-linear sets of ${\mathrm{PG}}(2n-1,q^t)$ of pseudoregulus type under the action of the collineation group ${\mathrm{P\Gamma L}}(2n,q^t)$. In the statement of this theorem $\varphi(t)$ denotes Euler’s totient function, i.e. the number of integers smaller than $t$ and relatively prime to $t$. Also the maximum scattered ${\mathbb{F}}_q$-linear sets in the theorem are always of rank $nt$, and all maximum scattered ${\mathbb{F}}_q$-linear sets of pseudoregulus type are of the form $L_{\rho,f}$, see [@LuMaPoTr2014]. The authors of [@LuMaPoTr2014] also introduced maximum scattered ${\mathbb{F}}_q$-linear sets of pseudoregulus type on a projective line, and these were further investigated in [@CsZaPrep]. 
Of course there is no longer a pseudoregulus associated to such a linear set, but the definition is motivated by the algebraic form of the maximum scattered ${\mathbb{F}}_q$-linear sets $L_{\rho,f}$ of pseudoregulus type in higher dimensions. Semifield theory {#subsec:semifields} ---------------- The term “semifield” was introduced by Knuth in 1965 [@Knuth1965] as a short name for a non-associative division algebra. The algebraic structures themselves were first studied by Dickson in 1906 [@Dickson1906]. We restrict ourselves to the finite case. A [*finite semifield*]{} $\mathbb S$ is an algebra with at least two elements, and two binary operations $+$ and $\circ$, satisfying the following axioms. - (S1) $({\mathbb{S}},+)$ is a group with identity element $0$. - (S2) $x\circ(y+z) =x\circ y + x\circ z$ and $(x+y)\circ z = x\circ z + y \circ z$, for all $x,y,z \in {\mathbb{S}}$. - (S3) $x\circ y =0$ implies $x=0$ or $y=0$. - (S4) $\exists 1 \in {\mathbb{S}}$ such that $1\circ x = x \circ 1 = x$, for all $x \in {\mathbb{S}}$. An algebra satisfying all of the axioms of a semifield except (S4) is called a [*pre-semifield*]{}. An important example of a finite pre-semifield is the Generalized Twisted Field (GTF) due to Albert [@Albert1961], obtained by defining the multiplication $x \circ y=xy-cx^\alpha y^\beta$ on the finite field ${\mathbb{F}}_{q^n}$, where $\alpha, \beta \in {\mathrm{Aut}}({\mathbb{F}}_{q^n})$ with $Fix(\alpha)=Fix(\beta)={\mathbb{F}}_q$, and $c\in {\mathbb{F}}_{q^n}$ with $N_{{\mathbb{F}}_{q^n}/ {\mathbb{F}}_q}(c)\neq 1$. Finite semifields have been studied intensively by a variety of mathematicians and they have many connections with structures in Galois Geometry. We refer to [@Knuth1965], [@Kantor2006], [@Lavrauw2013] and to the chapter [@LaPo2011] and its references for the necessary background, an overview of the results, connections, and further reading.
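Albert's construction is easy to test computationally. The sketch below, assuming the representation ${\mathrm{GF}}(9)={\mathbb{F}}_3[x]/(x^2+1)$ and the choices $\alpha=\beta$ the Frobenius map and $c=1+x$ (so that $N_{{\mathbb{F}}_9/{\mathbb{F}}_3}(c)=c^4\neq 1$), verifies the pre-semifield axiom (S3), the absence of zero divisors, by brute force.

```python
from itertools import product

def m9(a, b):
    # multiplication in GF(9) = F_3[x]/(x^2 + 1), elements as coefficient pairs
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 - a1 * b1) % 3, (a0 * b1 + a1 * b0) % 3)

def sub9(a, b):
    return ((a[0] - b[0]) % 3, (a[1] - b[1]) % 3)

def frob(a):
    # a -> a^3 generates Gal(GF(9)/GF(3)) and fixes exactly GF(3)
    return m9(m9(a, a), a)

c = (1, 1)  # c = 1 + x; its norm c * c^3 = c^4 must differ from 1
assert m9(c, frob(c)) != (1, 0)

def gtf(x, y):
    # Albert's twisted multiplication x∘y = xy - c x^α y^β with α = β = Frobenius
    return sub9(m9(x, y), m9(c, m9(frob(x), frob(y))))

nonzero = [a for a in product(range(3), repeat=2) if a != (0, 0)]
# axiom (S3): x∘y = 0 forces x = 0 or y = 0
assert all(gtf(x, y) != (0, 0) for x in nonzero for y in nonzero)
print("x∘y = xy - c x^3 y^3 on GF(9) has no zero divisors")
```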
A semifield ${\mathbb{S}}$ defines a projective plane $\pi({\mathbb{S}})$, called a [*semifield plane*]{}, and isomorphism classes of semifield planes correspond to [*isotopism classes*]{} of semifields, a result of Albert [@Albert1960]. We restrict ourselves here to what we call scattered semifields, which we define as follows. Suppose ${\mathbb{S}}$ is a semifield which is $l$-dimensional over its left nucleus and $ls$-dimensional over its center ${\mathbb{F}}_q$. Consider the set $R({\mathbb{S}})$ of endomorphisms of ${\mathbb{F}}_{q^s}^l$ corresponding to right multiplication in the semifield ${\mathbb{S}}$. This set is called a [*semifield spread set*]{}, and $R({\mathbb{S}})$ is an ${\mathbb{F}}_q$-linear set of rank $ls$ in the projective space ${\mathrm{PG}}(l^2-1,q^s)$. Now if $R({\mathbb{S}})$ is a scattered ${\mathbb{F}}_q$-linear set, then ${\mathbb{S}}$ is called a [*scattered semifield*]{}. It is known that $R({\mathbb{S}})$ is disjoint from the $(l-2)$-th secant variety $\Omega(S_{l,l}(q^s))$ of a Segre variety $S_{l,l}(q^s)$, and this connection also gives a nice interpretation of the isotopism classes. \[thm:spreadequivalent\] The isotopism class of a semifield ${\mathbb{S}}$ corresponds to the orbit of $R({\mathbb{S}})$ under the action of the group ${\mathcal{H}}\leq {\mathrm{PGL}}(l^2,q^s)$ preserving the two systems of maximal subspaces contained in the Segre variety $S_{l,l}(q^s)$ in ${\mathrm{PG}}(l^2-1,q^s)$. It follows from Theorem \[thm:spreadequivalent\] that being “scattered” is an isotopism invariant, and this makes it a useful tool to investigate the isotopism problem, which is usually a very hard problem: given two semifields ${\mathbb{S}}_1$ and ${\mathbb{S}}_2$, decide whether they are isotopic or not.
The power of this geometric approach is illustrated in [@CaPoTr2006] ($l=s=2$), and [@MaPoTr2007] ($l=2$, $s=3$), and by the recent results from [@LaMaPoTr2015a] and [@LaMaPoTr2015b], where the structure of $R({\mathbb{S}})$ is used to solve the isotopism problem regarding the semifields constructed by Dempwolff in [@Dempwolff2013]. Splashes of subgeometries ------------------------- Given a subgeometry $\pi_0$ and a line $l_\infty$ in a projective space $\pi$, by extending the hyperplanes of $\pi_0$ to hyperplanes of $\pi$ and intersecting these with the line $l_\infty$, one obtains a set of points on the projective line $l_\infty$. Precisely, if we denote the set of hyperplanes of a projective space $\pi$ by ${\mathcal{H}}(\pi)$, and $\overline{U}$ denotes the extension of a subspace $U$ of the subgeometry $\pi_0$ to a subspace of $\pi$, then we obtain the set of points $\{ l_\infty \cap \overline{H} ~:~H \in {\mathcal{H}}(\pi_0)\}$. These sets have been studied in [@LaZa2015] generalising the initial studies in [@BaJa15] where the [*splash of $\pi_0$ on $l_\infty$*]{} was introduced for Desarguesian planes and cubic extensions, i.e. for a subplane $\pi_0\cong{\mathrm{PG}}(2,q)$ in $\pi\cong{\mathrm{PG}}(2,q^3)$. If $l_\infty$ is tangent to (respectively, disjoint from) $\pi_0$, then a splash is called the [*tangent splash*]{} (respectively, [*external*]{} or [*exterior splash*]{}) [of $\pi_0$ on $l_\infty$]{}. Note that when $l_\infty$ is secant to $\pi_0$, the splash of $\pi_0$ on $l_\infty$ is just a subline. One of the main results of [@LaZa2015] shows the equivalence between splashes and linear sets on a projective line. Let $r,n>1$. If $S=S(\pi_0,l_\infty)$ is the splash of the $q$-subgeometry $\pi_0$ of ${\mathrm{PG}}(r-1,q^n)$ on the line $l_\infty$, then $S$ is an ${\mathbb{F}}_q$-linear set of rank $r$. 
Conversely, if $S$ is an ${\mathbb{F}}_q$-linear set of rank $r$ on the line $l_\infty\cong{\mathrm{PG}}(1,q^n)$, then there exists an embedding of $l_\infty$ in ${\mathrm{PG}}(r-1,q^n)$ and a $q$-subgeometry $\pi_0$ of ${\mathrm{PG}}(r-1,q^n)$ such that $S=S(\pi_0,l_\infty)$. Also, it is shown that the number of hyperplanes through a point determines the weight of that point in the linear set. This leads to the following characterisation of scattered linear sets. Let $S$ be the splash of a subgeometry $\pi_0\cong{\mathrm{PG}}(r-1,q)$ of $\pi\cong{\mathrm{PG}}(r-1,q^n)$ on $l_\infty\cong{\mathrm{PG}}(1,q^n)$. Then $S$ is a scattered linear set if and only if $S$ is an external splash, where every point of $S$ is on exactly one hyperplane of $\pi_0$. MRD-codes --------- A [*rank metric code*]{} is a set of $(m\times n)$-matrices over some field where the distance between two codewords is defined as the matrix rank of their difference. A [*maximum rank distance code*]{} ([*MRD*]{} code) is a rank metric code of maximum size with respect to its distance. Precisely, if $C\subset M_{m\times n}({\mathbb{F}}_q)$, with $m\leq n$ and minimum distance $d$, then $C$ is an MRD code if and only if $|C|=q^{nk}$ with $k=m-d+1$. Such codes were constructed by Delsarte [@Delsarte1978] and Gabidulin [@Gabidulin1985] for every $m$, $n$, and $d$. Recently, Sheekey constructed new families of MRD codes in [@Sheekey2015]. They generalise the previously known Gabidulin codes, and are called [*twisted Gabidulin codes*]{} because of their analogy with the generalized twisted fields constructed by Albert (see Section \[subsec:semifields\]). In the same paper [@Sheekey2015], the author gives an interesting connection between maximum scattered linear sets on a projective line ${\mathrm{PG}}(1,q^t)$ and MRD-codes of dimension $2t$ over ${\mathbb{F}}_q$ and minimum distance $t-1$.
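This connection can be made concrete in the smallest case $q=2$, $t=3$ with $f(x)=x^2$, a choice known to yield a maximum scattered subspace (of pseudoregulus type). The sketch below, assuming the representation ${\mathrm{GF}}(8)={\mathbb{F}}_2[x]/(x^3+x+1)$, checks that every nonzero ${\mathbb{F}}_2$-linear map $x\mapsto ax+bx^2$ on ${\mathrm{GF}}(8)$ has rank at least $t-1=2$, so the $2^{2t}=64$ such maps form a code meeting the MRD bound for minimum rank distance $t-1$.

```python
MOD = 0b1011  # GF(8) = F_2[x]/(x^3 + x + 1)

def gmul(a, b):
    # multiply in GF(8): carry-less product, then reduce mod x^3 + x + 1
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):
        if (r >> i) & 1:
            r ^= MOD << (i - 3)
    return r

def rank(a, b):
    # rank of the F_2-linear map x -> a*x + b*x^2 on GF(8),
    # computed as 3 minus the F_2-dimension of its kernel
    ker = sum(1 for x in range(8) if gmul(a, x) ^ gmul(b, gmul(x, x)) == 0)
    return 3 - (ker.bit_length() - 1)  # the kernel size is a power of 2

ranks = [rank(a, b) for a in range(8) for b in range(8) if (a, b) != (0, 0)]
assert min(ranks) == 2  # every nonzero map has rank >= t - 1 = 2: U is scattered
# |C_U| = 2^6 = 64 matches the MRD bound q^{t(t-d+1)} with d = t - 1 = 2
print("C_U is a 64-element MRD code with minimum rank distance", min(ranks))
```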
If $U$ is a $t$-dimensional ${\mathbb{F}}_q$-subspace of ${\mathbb{F}}_{q^t}^{2}$, then $U$ can be represented by $\{(x,f(x))~:~x \in {\mathbb{F}}_{q^t}\}$ for some ${\mathbb{F}}_q$-linearized polynomial $f(Y)\in {\mathbb{F}}_{q^t}[Y]$. Now $U$ is scattered w.r.t. the Desarguesian spread $D_{2,t,q}$ if for each $a\in {\mathbb{F}}_{q^t}^*$, the intersection of $U$ with $D_a=\{(x,ax)~:~x \in {\mathbb{F}}_{q^t}\}$ has dimension at most 1. This is equivalent to the condition that the rank of the ${\mathbb{F}}_q$-linear map $x\mapsto ax+bf(x)$ is at least $t-1$, for all $a,b \in {\mathbb{F}}_{q^t}$, $(a,b)\neq (0,0)$. Then $C_{U}=\{\varphi(ax+bf(x))~:~a,b \in {\mathbb{F}}_{q^t}\}$ is an ${\mathbb{F}}_q$-subspace of the vector space of $(t\times t)$-matrices, where $\varphi$ is an isomorphism associating a $(t\times t)$-matrix $M_g$ to each linearized polynomial $g(x)$ w.r.t. some fixed basis. [[@Sheekey2015]]{} If $U$ is a maximum scattered space w.r.t. ${\mathcal{D}}_{2,t,q}$, then $C_{U}$ is an ${\mathbb{F}}_q$-linear MRD-code of dimension $2t$ and minimum distance $t-1$, and conversely. This one-to-one correspondence is particularly nice since it preserves equivalence. [[@Sheekey2015]]{} Two scattered ${\mathbb{F}}_q$-linear sets ${\mathcal{B}}(U)$ and ${\mathcal{B}}(U')$ of rank $t$ are equivalent in ${\mathrm{PG}}(1,q^t)$ if and only if $C_U$ and $C_{U'}$ are equivalent MRD-codes. We refer to [@Sheekey2015] for examples and further details. #### Acknowledgment The author thanks the members of the Algebra group of Sabanci University for their hospitality and the anonymous referees for their comments and suggestions. [99]{} André, J. Über nicht-Desarguessche Ebenen mit transitiver Translationsgruppe. (German) Math. Z. 60 (1954), 156–186. Albert, A. A. Finite division algebras and finite planes. 1960 Proc. Sympos. Appl. Math., Vol. 10 pp. 53–70 American Mathematical Society, Providence, R.I. Albert, A. A. Generalized twisted fields. Pacific J. Math.
11 (1961) 1–8. Ball, S.; Blokhuis, A.; Lavrauw, M. Linear $(q+1)$-fold blocking sets in ${\mathrm{PG}}(2,q^4)$. Finite Fields Appl. 6 (2000), no. 4, 294–301. Barlotti, A.; Cofman, J. Finite Sperner spaces constructed from projective and affine spaces. Abh. Math. Sem. Univ. Hamburg 40 (1974), 231–241. Bartoli, D.; Giulietti, M.; Marino, G.; Polverino, O. Maximum scattered linear sets and complete caps in Galois spaces. Preprint arXiv:1512.07467v1. Barwick, S. G.; Jackson, W. An investigation of the tangent splash of a subplane of ${\mathrm{PG}}(2,q^3)$. Des. Codes Cryptogr. 76 (2015), no. 3, 451–468. Blokhuis, A.; Lavrauw, M. Scattered spaces with respect to a spread in ${\mathrm{PG}}(n,q)$. Geom. Dedicata 81 (2000), no. 1-3, 231–243. Blokhuis, A.; Lavrauw, M. On two-intersection sets with respect to hyperplanes in projective spaces. J. Combin. Theory Ser. A 99 (2002), no. 2, 377–382. Blokhuis, A.; Storme, L.; Szőnyi, T. Lacunary polynomials, multiple blocking sets and Baer subplanes. J. London Math. Soc. (2) 60 (1999), no. 2, 321–332. Bruck R.H.; Bose R.C. The construction of translation planes from projective spaces. J. Algebra 1 (1964) 85–102. Calderbank, R.; Kantor, W. M. The geometry of two-weight codes. Bull. London Math. Soc. 18 (1986), no. 2, 97–122. Cardinali, I.; Polverino, O.; Trombetti, R. Semifield planes of order $q^4$ with kernel ${\mathbb{F}}_{q^2}$ and center ${\mathbb{F}}_q$. European J. Combin. 27 (2006), no. 6, 940–961. Csajbók, B.; Zanella C. On the equivalence of linear sets. arXiv:1501.03441 Csajbók, B.; Zanella C. On scattered linear sets of pseudoregulus type in $\mathrm{PG}(1,q^t)$. arXiv:1506.08875 Delsarte, Ph. Bilinear forms over a finite field, with applications to coding theory. J. Combin. Theory Ser. A 25 (1978), no. 3, 226–241. Dembowski, P. Finite geometries. Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 44 Springer-Verlag, Berlin-New York 1968 xi+375 pp. Dempwolff, U. 
More translation planes and semifields from Dembowski-Ostrom polynomials, Des. Codes Cryptogr. 68(1–3) (2013), 81–103. Denniston, R. H. F. Some non-Desarguesian translation ovals. Ars Combin. 7 (1979), 221–222. Dickson, L. E. Linear algebras in which division is always uniquely possible. Trans. Amer. Math. Soc. 7 (1906), no. 3, 370–390. Donati, G.; Durante, N. On the intersection of two subgeometries of ${\mathrm{PG}}(n,q)$, Des. Codes Cryptogr., vol. 46, no. 3 (2008), 261–267. Donati, G.; Durante, N. Scattered linear sets generated by collineations between pencils of lines, J. Algebr. Comb., vol. 40, no. 4 (2014), 1121–1134. FinInG – a [GAP]{} package, version 1.0, 2014. Bamberg, J.; Betten, A.; Cara, Ph.; De Beule, J.; Lavrauw, M. and Neunhöffer, M. The GAP Group, GAP – Groups, Algorithms, and Programming, Version 4.7.8; 2015 (http://www.gap-system.org) Gabidulin, È. M. Theory of codes with maximum rank distance. (Russian) Problemy Peredachi Informatsii 21 (1985), no. 1, 3–16. Hughes, D. R.; Piper, F. C. Projective planes. Graduate Texts in Mathematics, Vol. 6. Springer-Verlag, New York-Berlin, 1973. x+291 pp. Jha, V.; Johnson, N. L. On the ubiquity of Denniston-type translation ovals in generalized André planes. Combinatorics ’90 (Gaeta, 1990), 279–296, Ann. Discrete Math., 52, North-Holland, Amsterdam, 1992. Kantor, W. M. Finite semifields. In: Hulpke, A., Liebler, R., Penttila, T., Seress, Á. (eds.) Finite Geometries, Groups, and Computation, pp. 103–114. Walter de Gruyter GmbH & Co. KG, Berlin (2006). Knuth, D. E. Finite semifields and projective planes. J. Algebra 2 (1965), 182–217. Korchmáros, G. Inherited arcs in finite affine planes. J. Combin. Theory Ser. A 42 (1986), no. 1, 140–143. Lavrauw, M. Scattered spaces with respect to spreads, and eggs in finite projective spaces. Dissertation, Technische Universiteit Eindhoven, Eindhoven, 2001. viii+115 pp. Lavrauw, M.
Finite semifields with a large nucleus and higher secant varieties to Segre varieties. Adv. Geom. 11 (2011), no. 3, 399–410. Lavrauw, M. Finite semifields and nonsingular tensors. Des. Codes Cryptogr. 68 (2013), no. 1-3, 205–227. Lavrauw, M.; Marino, G.; Polverino, O.; Trombetti, R. Solution to an isotopism question concerning rank 2 semifields. J. Combin. Des. 23 (2015), no. 2, 60–77. Lavrauw, M.; Marino, G.; Polverino, O.; Trombetti, R. The isotopism problem of a class of 6-dimensional rank 2 semifields and its solution. Finite Fields Appl. 34 (2015), 250–264. Lavrauw, M.; Polverino, O. Finite semifields. Chapter in [*Current Research Topics in Galois Geometry (Editors J. De Beule and L. Storme)*]{} Nova Academic Publishers (2011), pp. 131-160. Lavrauw, M.; Zanella, C. Segre embeddings and finite semifields. Finite Fields Appl. 25 (2014), 8–18. Lavrauw, M.; Sheekey, J.; Zanella, C.. On embeddings of minimum dimension of ${\mathrm{PG}}(n,q)\times {\mathrm{PG}}(n,q)$. Des. Codes Cryptogr. 74 (2015), no. 2, 427–440. Lavrauw, M.; Van de Voorde, G. On linear sets on a projective line. [*Des. Codes Cryptogr.*]{} 56 (2-3) (2010), 89–104. Lavrauw, M.; Van de Voorde, G. Scattered linear sets and pseudoreguli. Electron. J. Combin. 20 (2013), no. 1, Paper 15, 14 pp. Lavrauw, M.; Van de Voorde, G. Field reduction and linear sets in finite geometry. Topics in finite fields, 271–293, Contemp. Math., 632, Amer. Math. Soc., Providence, RI, 2015. Lavrauw, M.; Zanella, C. Subgeometries and linear sets on a projective line. Finite Fields Appl. 34 (2015), 95–106. Lunardon, G.; Marino, G.; Polverino, O.; Trombetti, R. Maximum scattered linear sets of pseudoregulus type and the Segre variety $S_{n,n}$. J. Algebraic Combin. 39 (2014), no. 4, 807–831. Marino, G.; Polverino, O.; Trombetti, R. On $F_q$-linear sets of ${\mathrm{PG}}(3,q^3)$ and semifields. J. Combin. Theory Ser. A 114 (2007), no. 5, 769–788. Payne, S. E. 
A complete determination of translation ovoids in finite Desarguesian planes, Atti Acad. Naz. Lincei Rend., 51 (1971), 328–331. Pepe, V. On the algebraic variety $V_{r,t}$. Finite Fields Appl. 17 (2011), no. 4, 343–349. Polverino, O. Linear sets in finite projective spaces. Discrete Math. 310 (2010), no. 22, 3096–3107. Segre, B. Teoria di Galois, fibrazioni proiettive e geometrie non desarguesiane. (Italian) Ann. Mat. Pura Appl. (4) 64 (1964), 1–76. Sheekey, J. A new family of linear maximum rank distance codes, arXiv:1504.01581. Zanella, C. Universal properties of the Corrado Segre embedding. Bull. Belg. Math. Soc. Simon Stevin 3 (1996), no. 1, 65–79. [^1]: This notion of scattered spaces is not to be confused with scattered spaces in Topology, where a space $A$ is called [*scattered*]{} if every non-empty subset $T\subset A$ contains a point isolated in $T$. [^2]: It was pointed out in [@CsZa2015] that one of the conditions of Theorem 3 in [@LaVa2010] is not necessary for the equivalence of two linear sets. The condition is however sufficient, and hence does not affect the equivalences stated here. [^3]: A relation between the isotopism classes of semifields and equivalence classes of [*embeddings*]{} of Segre varieties can be found in [@LaZa2014].
--- abstract: 'A large number of deep learning architectures use spatial transformations of CNN feature maps or filters to better deal with variability in object appearance caused by natural image transformations. In this paper, we prove that spatial transformations of CNN *feature maps* cannot align the feature maps of a transformed image to match those of its original, for general affine transformations, unless the extracted features are *themselves invariant*. Our proof is based on elementary analysis for both the single- and multi-layer network case. The results imply that methods based on spatial transformations of CNN feature maps or filters cannot replace image alignment of the input and *cannot enable invariant recognition* for general affine transformations, specifically not for scaling transformations or shear transformations. For rotations and reflections, spatially transforming feature maps or filters can enable invariance but only for networks with learnt or hardcoded rotation- or reflection-invariant features.' author: - Ylva Jansson - Maksim Maydanskiy - Lukas Finnveden - Tony Lindeberg bibliography: - 'bib/yjdeepl.bib' - 'bib/stn\_extra.bib' title: Inability of spatial transformations of CNN feature maps to support invariant recognition --- Introduction ============ Convolutional neural networks (CNNs) that are *invariant* to certain groups of image transformations have fewer parameters, can learn from smaller datasets and enable *generalization outside the training distribution*. A number of current methods use spatial transformations of CNN feature maps or filters to enhance the ability of CNNs to handle different types of image transformations [@ChoGwaSavSil-NIPS2016; @LiCheCaiDav-arXiv2017; @KimLinJeoMin-NIPS2018; @zheng2018pedestrian; @HeZhaXia-ECCV2014; @yuarXiv2015; @DaiQiXio-arXiv2017; @JadSimZisKav-NIPS2015]. 
For example, *spatial transformer networks* (STNs) [@JadSimZisKav-NIPS2015] were designed to enable CNNs to learn invariance to image transformations by transforming *CNN feature maps* as well as input images. Clearly, if a network learns to align transformed input images to a common pose, this can enable invariant recognition. The original work [@JadSimZisKav-NIPS2015], however, simultaneously claims the ability of STNs to learn invariance from data and that the spatial transformer layers (STs) can be inserted into the network “anywhere" (i.e. at any depth). There is no mention of whether the key motivation for the framework - the ability to learn invariance - is still supported when transforming feature maps deeper in the network. This seems to have left some confusion about whether spatially transforming CNN feature maps can support invariant recognition. A number of subsequent works advocate image alignment by *transforming feature maps* [@ChoGwaSavSil-NIPS2016; @LiCheCaiDav-arXiv2017; @KimLinJeoMin-NIPS2018; @zheng2018pedestrian], including e.g. pose alignment of pedestrians [@zheng2018pedestrian] and use of a spatial transformer to mimic the kind of patch normalization done in SIFT [@ChoGwaSavSil-NIPS2016]. Other commonly used methods that are based on transforming CNN feature maps or filters are spatial pyramid pooling [@HeZhaXia-ECCV2014], dilated convolutions [@yuarXiv2015] and deformable convolutions [@DaiQiXio-arXiv2017]. Such methods are often motivated by the need for CNNs to better deal with variability in object pose. There is, however, no discussion about the difference between pose normalizing the input image and spatially transforming feature maps, or the implications this choice has for the ability to achieve e.g. affine or scale invariance [@HeZhaXia-ECCV2014; @yuarXiv2015; @DaiQiXio-arXiv2017; @JadSimZisKav-NIPS2015]. 
Here, we elucidate under what conditions it is possible to achieve invariance to affine image transformations by means of *purely spatial transformations* of CNN feature maps. These conditions turn out to be very restrictive, implying network filters or features that are *already invariant* to the relevant image transformations. This implies that spatial transformations of CNN feature maps *cannot*, in general, align the feature maps of a transformed image with those of an original and thus not enable affine-invariant recognition. The exception is translations, where the translation covariance of CNNs does imply that translations and feature extraction do commute. We do not claim much mathematical novelty of these facts, which are in some sense intuitive, and, in the single-layer case, have some parallels with the work in [@cohen2016group] and [@CohGeiWei-NIPS2019]. Our contribution is to present an alternative proof based on elementary analysis for the special case of purely spatial transformations of CNN feature maps (as opposed to more general transformations that might mix information between the different feature channels). Since we only consider spatial transformations, we can give a more direct proof. We also provide an analysis of the general multi-layer case, without relying on any covariance assumptions about the individual layers. Our results have straightforward implications for STNs and other methods that perform spatial transformations of CNN feature maps or filters. An experimental evaluation of the practical consequences of our result in the context of *spatial transformer networks*, together with a short intuitive version of the proof presented here, has been presented in [@FinJanLin-arXiv2020]. Preliminaries ============= Images and image transformations -------------------------------- We work with a continuous model of the image space. 
We consider both an **image** ${f}$ and a convolutional **filter** ${\lambda}$ to be a map from ${\mathbb{R}}^N$ to ${\mathbb{R}}$. We use notation $V$ for the function space to which the images $f$ belong, and $\Vk$ for the space of maps that have each of their $k$ components in $V$. We are somewhat lax about specifically what class of functions $\lambda$ and $f$ should belong to. We need that the convolution operator $${\Lambda}_{\lambda}f(x) {\vcentcolon=}(f\star {\lambda})(x)=\int_{{\mathbb{R}}^N} f(y){\lambda}(x-y) dy=\int_{{\mathbb{R}}^N} {\lambda}(y) f(x-y) dy$$ is defined and has output that lies in the same space, and that applying a Lipschitz continuous point-wise non-linearity $\sigma$ to an image also produces an image in the same space. This will hold for example if ${\lambda}$ is integrable and compactly supported (we’ll write ${\lambda}\in L^1_{comp}$) and the images $f$ are locally integrable ($f\in L^1_{loc}$). Hence, when necessary we will assume $V$ to be the space of locally integrable functions (with the corresponding $L^1_{loc}$ topology). To avoid possible confusion, we denote the zero function by $0$ and the point $0 \in {\mathbb{R}}^N$ by $\overline{0}$. Continuous model of a CNN {#sec:CNN} ------------------------- Let ${\Lambda}: V \to V^{M_k}$ denote a *continuous CNN* with $k$ layers and $M_k$ feature channels in the final layer and let $\theta^{(i)}$ represent the transformation between layers $i-1$ and $i$ such that $$(\Lambda f)_c(x) = (\theta^{(k)} \theta^{(k-1)} \cdots \theta^{(2)} \theta^{(1)} f)_c(x) \label{eq:phi-def},$$ where $c \in \{1,2, \dots M_k\}$ denotes the feature channel. Let further $\Lambda^{(i)} f$ refer to the output from layer $i$ (with $M_i$ feature channels and ${\Lambda}^{(0)} f = f$) $$\begin{aligned} {\Lambda^{(i)}}f &= \theta^{(i)} \theta^{(i-1)} \cdots \theta^{(2)} \theta^{(1)} f.
\label{eq:phi_i-def}\end{aligned}$$ We model the transformation $\theta^{(i)}$ between two adjacent layers $\Lambda^{(i-1)}f$ and $\Lambda^{(i)}f$ as a convolution followed by the addition of a bias term $b_{i,c} \in {\mathbb{R}}$ and the application of a pointwise non-linearity $\sigma_i:{\mathbb{R}}\to {\mathbb{R}}$: $$\begin{gathered} ({\Lambda^{(i)}}f)_c (x)=\sigma_i \left( \sum_{m=1}^{M_{i-1}} \int_{y \in {\mathbb{R}}^N } (\Lambda^{(i-1)}f)_m (x-y)\, \lambda^{(i)}_{m,c}(y) \, dy + b_{i,c} \right), \label{eq:CNN}\end{gathered}$$ where $\lambda^{(i)}_{m,c} \in L^1_{comp}$ denotes the convolution kernel that propagates information from feature channel $m$ in layer $i-1$ to output feature channel $c$ in layer $i$. A final fully connected classification layer with compact support can also be modelled as a convolution combined with a non-linearity $\sigma_k$ that represents *a softmax operation* over the feature channels. We note that since a convolution with ${\lambda}\in L^1_{comp}$ is a continuous operator from $V$ to $V$ (recall that we are using the $L^1_{loc}$ topology, so the continuity follows from the $L^1$ norm inequality for convolutions, see [@stein2009real], Chapter 2, Exercise 21 d), we conclude that when the $\sigma_i$s are Lipschitz continuous functions the resulting ${\Lambda}: V \to V^{M_k}$ is a continuous operator. Transformations of images and feature maps ------------------------------------------ We will consider the group of *affine image transformations*, which here corresponds to a collection of linear maps[^1] ${T_h}: {\mathbb{R}}^N\to {\mathbb{R}}^N$.
For each such map, we have a corresponding operator ${{\cal{T}}_h}^k:V^k\to V^k$, defined by the “contragradient" representation, that is by precomposing with $T_h^{-1}$, as follows: \[def:op-Th\] We define ${{\cal{T}}_h}^k: V^k \to V^k$, first for input images, by setting $$\label{eg:Th-def} ({{\cal{T}}_h}^1 f) (x)= f(T_h^{-1} x)$$ and then on feature maps as $$\label{eg:Th-def-k} ({{\cal{T}}_h}^{k} {\Lambda}f)_c (x)= ({\Lambda}f)_c(T_h^{-1} x),$$ where $k$ denotes the number of feature channels. Note how this definition implies purely spatial transformations of feature maps. Although the ${{\cal{T}}_h}^k$’s are, technically, different operators for different values of $k$ we often refer to all these operators as ${{\cal{T}}_h}$ to simplify the notation. \[def:op-D\] We define the **translation operator** $\calD_{\delta}$, with $\delta \in {\mathbb{R}}^N$ for input images by $$(\calD_\delta f) (x) = f(x - \delta)$$ and then for feature maps by $$(\calD_\delta^k {\Lambda}f) (x) = ({\Lambda}f)(x - \delta).$$ We will again use the single notation $\calD_\delta$ for all operators $\calD_\delta^k:V^k\to V^k$. Invariance and covariance ------------------------- Consider a general (possibly non-linear) feature extractor $\Lambda: V \to \Vk$ such as e.g. the continuous analog of a CNN described in Section \[sec:CNN\]. \[def:covariance\] We define an operator $\Lambda$ to be **covariant** to an operator ${\mathcal{O}}$ if there exists an input-independent operator ${\mathcal{O}}'$ such that we can express a commutative relation over $\Lambda$ of the form (see also Figure \[fig:commdia\]) $$\Lambda {\mathcal{O}}f = {\mathcal{O}}' \Lambda f. \label{eq:covariance1}$$ If such an operator exists and is in addition invertible, then it is possible to “undo" the action of ${\mathcal{O}}$ after feature extraction. (In the invariant neural networks literature, covariance is also often referred to as equivariance.)
$$\begin{CD} {\Lambda} \, f @>{{\mathcal{O}}'}>> \Lambda {\mathcal{O}}f \\ \Big\uparrow\vcenter{\rlap{$\scriptstyle{{\Lambda}}$}} & & \Big\uparrow\vcenter{\rlap{$\scriptstyle{{\Lambda}}$}} \\ f @>{{\mathcal{O}}}>> {{\mathcal{O}}f} \end{CD}$$ We here consider operators ${{\cal{T}}_h}^k$ corresponding to affine transformations of the spatial image domain that do not mix information between the feature channels (Definition \[def:op-Th\]), which leads us to study (restricted) covariance relations of the form: $$ \Lambda \calT_h f = ({{\cal{T}}_g}^{k})^{-1} \Lambda f. \label{eq:covariance2}$$ We ask the question if and under what conditions such (restricted) covariance relations exist for CNNs. \[def:translation-covariance\] We define an operator ${\Lambda}$ to be **translation covariant** if for every $\delta$ we have $$\begin{aligned} \label{eq:tr-covar} {\Lambda}\calD_{\delta}=\calD_{\delta} {\Lambda}. \end{aligned}$$ \[def:invariance\] We define an operator $\Lambda$ to be **invariant** to an operator ${{\cal{T}}_h}$ if the feature representation of a transformed image is *equal to* the feature representation of the original image $$\Lambda \calT_h f = \Lambda f \label{eq:invariance}$$ for all $f \in V$. If this is true for all $h$ in a transformation group $H$, we say that $\Lambda$ is invariant to $H$. \[lemma:conv-trans-covar\] The convolution operator is translation covariant $$\calD_{\delta}{\Lambda}_{\lambda}={\Lambda}_{\lambda}\calD_{\delta} ={\Lambda}_{\calD_{\delta}\lambda}.$$ The proof is given in Appendix \[app:single-layer-covariance\]. \[prop:CNN-covar\] A CNN as defined in Section \[sec:CNN\] is a translation-covariant operator. Since each convolution operation is translation covariant by Lemma \[lemma:conv-trans-covar\] and the nonlinearities act on the values returned as output from the convolutions, all the operators $\Lambda^{(i)}$ are translation covariant. Formal proof is by induction on $i$ (see Appendix \[app:prop-CNN-proof\]). 
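Lemma \[lemma:conv-trans-covar\] and Proposition \[prop:CNN-covar\] can be checked numerically in a discretized setting, where periodic boundary conditions make discrete translations exact. The following sketch is our own sanity check, not part of the formal argument:

```python
# Sanity check of translation covariance (Lambda D_delta f = D_delta Lambda f)
# on a discrete periodic domain, where translation is an exact grid operation.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
f = rng.integers(0, 10, size=(16, 16))     # "image" (integer-valued, so exact)
lam = rng.integers(-3, 4, size=(3, 3))     # filter

def conv(g):
    # circular convolution, so the discrete D_delta commutes exactly
    return convolve2d(g, lam, mode='same', boundary='wrap')

shift = (5, 2)                             # delta
lhs = conv(np.roll(f, shift, axis=(0, 1)))          # Lambda D_delta f
rhs = np.roll(conv(f), shift, axis=(0, 1))          # D_delta Lambda f
print(np.array_equal(lhs, rhs))  # True
```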
\[lem:conv-lin\] Translation and general linear operators (c.f. (\[eg:Th-def\])) have the following commutation relation: $$\label{eq:commutator1} {{\cal{T}}_h}\calD_\delta = \calD_{({T_h}\delta)} {{\cal{T}}_h}$$ or equivalently $$\label{eq:commutator2} \calD_\delta {{\cal{T}}_h}= {{\cal{T}}_h}\calD_{({T_h}^{-1} \delta)}.$$ Applying both sides to $f$ we compute $$\begin{aligned} ({{\cal{T}}_h}\calD_\delta f)(x)=(\calD_\delta f)(T_h^{-1}(x))=f(T^{-1}_h(x)-\delta), \end{aligned}$$ $$\begin{aligned} (\calD_{({T_h}\delta)} {{\cal{T}}_h}f)(x) = ({{\cal{T}}_h}f)(x-{T_h}\delta)=f(T^{-1}_h(x-{T_h}\delta))=f(T^{-1}_h(x)-\delta). \end{aligned}$$ ![*An inverse spatial transformation of a *CNN feature map* cannot, in general, align the feature maps of a transformed image with those of its original*. Here, the network ${\Lambda}$ has two feature channels “W” and “M”, and $T_g$ corresponds to a 180$^\circ$ rotation. Since different *feature channels* respond to the rotated image as compared to the original image, it is not possible to align the respective feature maps with a spatial rotation. In fact, *spatially transforming feature maps* can, in most cases, not eliminate differences related to object pose and can thus not enable invariant recognition.[]{data-label="fig:tiny-proof"}](tiny_proof_for_proof){width="50.00000%"} ![*For any transformation that includes a scaling component, the field of view of a feature extractor with respect to an object will differ between an original and rescaled image.* Consider e.g. a simple linear model that performs template matching with a single filter. When applied to the original image, the filter matches the size of the object that it has been trained to recognize and thus responds strongly. When applied to a rescaled image, the filter never covers the full object of interest, and thus the response cannot be guaranteed to take even *the same set of values* for a rescaled image and its original. 
[]{data-label="fig:tiny-proof-scale"}](tiny_proof_scale_for_proof){width="70.00000%"} Intuition and outline of proof ============================== A spatial transformation of *an input image* can clearly support invariant recognition by applying the inverse transformation to a transformed input: $${\Lambda}\, {{\cal{T}}_h}^{-1} {{\cal{T}}_h}f = {\Lambda}f.$$ The key question is whether it is possible, in a similar way, to undo a transformation of an input image *after feature extraction*. Is there a spatial transformation ${{\cal{T}}_g}^k$ dependent on ${{\cal{T}}_h}$ such that at a certain depth in the network $${{\cal{T}}_g}^k {\Lambda}^{(i)} {{\cal{T}}_h}f \stackrel{?}{=} {\Lambda}^{(i)} f \label{eq:feature_alignment}$$ holds for all $f$? Note that this would imply that ${\Lambda}^{(i)}$ is (restricted) covariant to ${{\cal{T}}_h}$. Remember that, since we consider spatial transformations of feature maps, the same transformation is applied in each feature channel $$({{\cal{T}}_g}^k {\Lambda}^{(i)} {{\cal{T}}_h}f)_c(x) = ({\Lambda}^{(i)} {{\cal{T}}_h}f)_c(T_g^{-1} x).$$ Clearly, if (\[eq:feature\_alignment\]) holds then transformations of feature maps could enable invariant recognition in a similar way as for input images. The feature maps of transformed images could be aligned at a certain depth, and the rest of the network could work on data without any variability stemming from differences in object pose. Note that the question of *how to know* which transformation to apply for each image, something which is e.g. learned from data for STNs, is not the topic here. We simply show that even with perfect information about the pose of the input image, invariance cannot be achieved by a spatial transformation of the feature map.
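The failure is easy to observe already for the simplest non-trivial case, scaling, with a one-dimensional matched-filter toy experiment (our own illustration; the pattern, signal, and sizes are arbitrary choices): the maximal filter response is not preserved when the pattern appears at a different scale, since the filter never covers the whole object (cf. Figure \[fig:tiny-proof-scale\]).

```python
# Toy illustration: a matched filter's response is not preserved under scaling.
import numpy as np

pattern = np.array([1., -1., 1., -1., 1., -1.])    # "object" at training scale
template = pattern / np.linalg.norm(pattern)        # matched filter

signal = np.zeros(40)
signal[10:16] = pattern                             # original image
stretched = np.zeros(40)
stretched[10:22] = np.repeat(pattern, 2)            # same object, rescaled by 2

resp_orig = np.correlate(signal, template, mode='valid').max()
resp_scaled = np.correlate(stretched, template, mode='valid').max()
print(resp_orig > resp_scaled)  # True: the response drops for the rescaled object
```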
Intuition --------- The key intuitions why a spatial transformation of CNN feature maps cannot, in the general case, align feature maps of a transformed image with those of an original image, and thus not enable invariant recognition, are as follows: (i) The natural way to align the feature maps of a transformed image with those of its original would be to apply *the inverse spatial transformation* to the feature maps of the transformed image i.e. $${{\cal{T}}_h}^{-1}({\Lambda^{(i)}}{{\cal{T}}_h}f)_c(x) = ({\Lambda^{(i)}}{{\cal{T}}_h}f)_c({T_h}x). \label{eq:inverse_alignment}$$ For example, to align the feature maps of an original and a rescaled image, we would, after feature extraction, apply the inverse scaling to the feature maps. We will show that using ${{\cal{T}}_h}^{-1}$ is, in fact, *a necessary condition* for (\[eq:feature\_alignment\]) to hold. The reason for this is that the features for corresponding spatial positions after alignment will otherwise be computed from not fully overlapping image regions in the original image, in which case the output can clearly not be guaranteed to be equal. (ii) When transforming an input image, this typically causes not only *a spatial shift* in its feature map representation but also *a shift in the channel dimension* of the feature maps. This is illustrated in Figure \[fig:tiny-proof\] for the case of rotations, but a similar reasoning holds for a large range of spatial transformations. A *purely spatial transformation* of the feature maps cannot correct for a change in e.g. which channels respond most strongly at a specific spatial position. Thus, a spatial transformation is not enough to align the feature maps of a transformed image with those of its original. (iii) *The receptive fields*, i.e. 
the regions in the input that influence the response, of the features extracted in a neural network (for a single layer, this corresponds to the support of the convolutional filters) are typically *not invariant* to the relevant transformation group. Indeed, any finite support region will not be invariant to shears or to transformations that contain a uniform or non-uniform scaling component. For example, for a scaling transformation, a filter applied to a rescaled image might never cover the full object of interest, and thus the feature response cannot be guaranteed to take even *the same set of values* for a rescaled image and its original. This is illustrated in Figure \[fig:tiny-proof-scale\]. Since a purely spatial transformation cannot align the feature maps of a transformed image with those of its original, spatially transforming feature maps will not enable invariant recognition. The exception is if the features in the specific network layer are *themselves invariant* to the relevant transformation. An example of this would be a network built from rotation-invariant filters ${\lambda}$, where ${\lambda}(x) = {\lambda}({T_h}x)$ for all ${\lambda}$. For such a network, or a network with more complex (learned or hardcoded) rotation-invariant features in a certain layer, invariant recognition could be enabled by spatial transformations of the feature maps. One might, however, note that such invariant features in intermediate layers are in many cases not desirable (especially not early in the network), since they discard too much information about object pose. For example, rotation-invariant edge detectors would lose information about the edge orientations, which tend to be important for subsequent tasks. Outline of proof ---------------- ### Single-layer case We first consider the case of *a single convolutional layer* and show that the requirement that it should be possible to align feature maps implies very strict conditions on the filters.
Lemma \[lemma:equiv1\] shows that *inversely transforming the feature maps* of a transformed image is equivalent to applying *transformed filters* to the original image: $$ {{\cal{T}}_h}^{-1}{\Lambda}_{\lambda}{{\cal{T}}_h}f ={\Lambda}_{(\det {T_h}) {{\cal{T}}_h}^{-1} ({\lambda})}f.$$ Lemma \[lemma:noRot\] is the key to seeing that ${{\cal{T}}_h}^{-1}$ is the only possible candidate to align the feature maps of a transformed image with those of its original, since otherwise features at corresponding spatial positions are computed from different parts of the original image. Finally, we discuss the conditions on the filters under which invariance is possible, where Lemma \[id\] implies that we can give quite detailed conditions on the filters and transformations, since it says that if two single-layer networks compute the same function they must have the same filter/filters. ### Multi-layer case We then consider a more general non-linear feature extractor such as the *multi-layer convolutional network* defined in Section \[sec:CNN\] and show that similar strict conditions hold in this case. We first isolate two key features shared by single convolution operators and CNNs: *translation covariance* and *semi-locality*. These features underpin most of the proofs for the single-layer case and allow these proofs to be extended to the multi-layer case. *Semi-locality* (Definition \[def:semi-local\]) is an extension of the concept of an operator with *compact support*. The reason to define the concept of semi-locality, instead of considering operators with compact support, is that we wish to include operators that output a constant for the input $f=0$, as CNNs with non-zero biases or non-linearities that do not map zero to zero (or both) do. We then show that the multi-layer continuous neural network (\[eq:CNN\]) is a translation-covariant, semi-local operator. Since it is not possible to give explicit conditions for individual filters (e.g.
symmetries imply that the same function can be implemented by more than one set of filters), we will instead consider conditions that need to hold for the non-linear features extracted in a specific network layer ${\Lambda^{(i)}}$, to enable aligning CNN feature maps of a transformed image with those of an original image at depth $i$. A key step in our proof is to note that any translation-covariant operator ${\Lambda}$ is captured by a map $\mu_{\Lambda}: V \to {\mathbb{R}}$ defined by (equation (\[eq:mu-def\])) $$\mu_{\Lambda}(f):=({\Lambda}f) ({\overline{0}}),$$ which we refer to as *the generator*. The generator can be seen as a non-linear analog of a convolutional filter (evaluated at the origin for a single-layer network). Lemma \[lem:conj\] and Lemma \[multi-equiv\] then establish the relationship between the inversely transformed feature maps of a transformed image and the feature maps of the original image, showing that $${{\cal{T}}_h}^{-1}{\Lambda}{{\cal{T}}_h}f={\Lambda}f$$ implies that $\mu_{\Lambda}({{\cal{T}}_h}f ) = \mu_{\Lambda}(f)$. That is, the network features must themselves *already be invariant* to the relevant image transformation. Lemma \[lemma:noRot-multi\] shows that, as for the single-layer case, ${{\cal{T}}_h}^{-1}$ is the only possible candidate to align the feature maps of a transformed image with those of its original.
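The channel-permutation effect of Figure \[fig:tiny-proof\] is exact in a discrete setting for $180^\circ$ rotations. In the sketch below (our own illustration), a one-layer network has two channels whose filters are $180^\circ$ rotations of each other; inversely rotating the feature maps of a rotated image reproduces the original feature maps only *with the channels swapped*, which no purely spatial transformation can undo:

```python
# Illustration of the "W"/"M" channel shift: a 180-degree rotation of the input
# permutes the feature channels, which a spatial transform cannot correct.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
f = rng.integers(1, 5, size=(8, 8))    # strictly positive, so maps surely differ

lam_W = np.array([[1, 0, 1],
                  [1, 0, 1],
                  [0, 1, 0]])          # a "W"-like filter
lam_M = np.rot90(lam_W, 2)             # its 180-degree rotation, an "M"

def features(img):
    return [convolve2d(img, k, mode='full') for k in (lam_W, lam_M)]

orig = features(f)
rot = features(np.rot90(f, 2))                 # feature maps of the rotated image
aligned = [np.rot90(ch, 2) for ch in rot]      # inverse spatial rotation

# the spatially aligned maps equal the ORIGINAL maps with channels swapped,
# but differ from them channel-by-channel:
print(np.array_equal(aligned[0], orig[1]), np.array_equal(aligned[0], orig[0]))
```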
Covariance and invariance in the single-layer case {#sec:single} ================================================== Consider a single-channel convolutional neural network with the filter ${\lambda}$ $$\label{eq:single-layer-cnn} {\Lambda}_{\lambda}f(x) {\vcentcolon=}(f\star {\lambda})(x)=\int_{{\mathbb{R}}^N} f(y){\lambda}(x-y) dy=\int_{{\mathbb{R}}^N} {\lambda}(y) f(x-y) dy.$$ Can precomposing with ${{\cal{T}}_h}$ be undone after the convolution step by postcomposing with some other ${{\cal{T}}_g}$: $$\label{eq:alignment} {{\cal{T}}_g}{\Lambda}_{\lambda}{{\cal{T}}_h}\stackrel{?}{=} {\Lambda}_{\lambda}.$$ We will see that this is not possible. Note that since a spatial transformation of feature maps never mixes information between different channels, it is enough to show this for a network with a single feature channel. Covariance relations of convolution operators --------------------------------------------- We begin by showing the following lemma, expressing the naturality of convolution. \[lemma:equiv1\] $$\label{eq:equiv1} {{\cal{T}}_h}^{-1}{\Lambda}_{\lambda}{{\cal{T}}_h}={\Lambda}_{(\det {T_h}) {{\cal{T}}_h}^{-1} ({\lambda})}$$ We compute, using the change of variables $u={T_h}^{-1} y$, $du=(\det {T_h}^{-1}) \, dy$: $$\begin{aligned} ({\Lambda}_{\lambda}{{\cal{T}}_h}f)(x) &= \int_{{\mathbb{R}}^N} f({T_h}^{-1} y){\lambda}(x-y) dy \nonumber \\ &= \int_{{\mathbb{R}}^N} f(u){\lambda}({T_h}({T_h}^{-1}x-{T_h}^{-1}y)) \det {T_h}du \nonumber\\ &= \int_{{\mathbb{R}}^N} f(u){\lambda}({T_h}({T_h}^{-1}x-u)) \det {T_h}du \nonumber\\ &= ({\Lambda}_{(\det {T_h}) {{\cal{T}}_h}^{-1} {\lambda}} f) ({T_h}^{-1}x) \nonumber\\ &= ( {{\cal{T}}_h}{\Lambda}_{(\det {T_h}) {{\cal{T}}_h}^{-1} {\lambda}} f) (x) \end{aligned}$$ Applying ${{\cal{T}}_h}^{-1}$ to both sides yields the lemma. Thus, inversely transforming *the feature maps* of a transformed image will not yield the same feature maps as for the original image.
Instead, this is equivalent to extracting features from the original image with *transformed filters*. Using ${{\cal{T}}_g}={{\cal{T}}_h}^{-1}$ is a necessary condition to align feature maps --------------------------------------------------------------------------------------- The following lemma will be the key to seeing that a necessary condition for being able to align the feature maps of a transformed image with those of its original is using ${{\cal{T}}_h}^{-1}$. \[lemma:noRot\] If for two compactly supported filters ${\lambda}_1\neq 0$ and ${\lambda}_2\neq 0$ we have ${\Lambda}_{{\lambda}_1}={{\cal{T}}_h}{\Lambda}_{{\lambda}_2}$ then ${T_h}=\operatorname{Id}$. Since ${\lambda}_1\neq 0$, we can pick a compactly supported $f$ such that $({\Lambda}_{{\lambda}_1} f) ({\overline{0}})\neq 0$ (pick any $f$ with ${\Lambda}_{{\lambda}_1} f\neq 0$, translate it to make ${\Lambda}_{{\lambda}_1} f({\overline{0}})\neq 0$, and, if needed, multiply by a bump function equal to one on a sufficiently large ball to make it compactly supported). Suppose $f$ is supported on a ball of radius $r(f)$ around the origin and ${\lambda}_2$ on a ball of radius $r({\lambda}_2)$ around the origin. If $T_h\neq \operatorname{Id}$ we can pick $p$ such that $|T^{-1}_h(p)-p| >r(f)+ r({\lambda}_2)+1$. Let $\hat{f}(x)=f(x+p)$, i.e. $\hat{f}=\calD_{-p}f$.
Then using Lemma \[lemma:conv-trans-covar\] we have $$(\calD_p {\Lambda}_{{\lambda}_1} \hat{f} )({\overline{0}})=({\Lambda}_{{\lambda}_1} \calD_p \hat{f} )({\overline{0}}) =({\Lambda}_{{\lambda}_1} f )({\overline{0}}) \neq 0$$ but $$\begin{aligned} (\calD_p {{\cal{T}}_h}{\Lambda}_{{\lambda}_2} \hat{f} )({\overline{0}})=( {{\cal{T}}_h}\calD_{T^{-1}_h p} {\Lambda}_{{\lambda}_2} \hat{f})({\overline{0}})=( {{\cal{T}}_h}{\Lambda}_{{\lambda}_2} \calD_{T^{-1}_h p -p} f)({\overline{0}}) \nonumber \\ =({\Lambda}_{{\lambda}_2} \calD_{T^{-1}_h p -p} f) (T_h^{-1}({\overline{0}}))=({\Lambda}_{{\lambda}_2} \calD_{T^{-1}_h p -p} f)({\overline{0}})=0, \end{aligned}$$ where the first equality follows from Lemma \[lem:conv-lin\], the second from Lemma \[lemma:conv-trans-covar\], and the last from the fact that $\tilde{f}=\calD_{T^{-1}_h p -p} f$ is supported on a ball of radius $r(f)$ around $T^{-1}_h p -p$, which is disjoint from the ball of radius $r({\lambda}_2)$ around the origin on which ${\lambda}_2$ is supported; this means that in the convolution integral $( {\Lambda}_{{\lambda}_2}\tilde{f})({\overline{0}})= \int \tilde{f}(y) {\lambda}_2 (-y) dy$ the integrand is zero at every point $y$, thus yielding the zero result, as wanted. Convolution determines the filter --------------------------------- We now show that if two single-layer networks compute the same function, their filters must be equal. \[id\] If ${\Lambda}_{{\lambda}_1}={\Lambda}_{{\lambda}_2}$ then ${\lambda}_1={\lambda}_2$. Letting ${\lambda}={\lambda}_1-{\lambda}_2$, we just need to show that ${\Lambda}_{\lambda}=0$ implies ${\lambda}=0$. Let $f_n$ be a sequence of mollifiers converging to the delta function at the origin (that is, a sequence of non-negative smooth functions, each with integral equal to 1, with their supports on balls of radii converging to $0$).
Then (see for example [@stein2009real], Chapter 3, Theorem 2.3) we have ${\Lambda}_{\lambda}f_n \to {\lambda}$ (in $L^1$), so that if ${\Lambda}_{\lambda}$ is the zero operator, then ${\lambda}$ is zero. This lemma implies that we can give more specific conditions on the filters in a single-layer network for which it is possible to achieve invariance by aligning CNN feature maps. Conclusions in the single-layer case ------------------------------------ We can now conclude that the only admissible operator to align CNN feature maps is ${{\cal{T}}_h}^{-1}$ and that alignment is only possible if the convolutional filters are themselves invariant to the relevant transformation: If $ {{\cal{T}}_g}{\Lambda}_{\lambda}{{\cal{T}}_h}={\Lambda}_{\lambda}$, this implies that ${{\cal{T}}_g}= {{\cal{T}}_h}^{-1} $ and that ${\lambda}= (\det {T_h}) {{\cal{T}}_h}^{-1} ({\lambda})$. Writing ${{\cal{T}}_g}={\mathcal{T}_H}({{\cal{T}}_h})^{-1}$ and ${\lambda}_h=(\det {T_h}) {{\cal{T}}_h}^{-1} ({\lambda})$, we see that $${{\cal{T}}_g}{\Lambda}_{\lambda}{{\cal{T}}_h}={\mathcal{T}_H}({{\cal{T}}_h})^{-1}{\Lambda}_{\lambda}{{\cal{T}}_h}={\mathcal{T}_H}{\Lambda}_{{\lambda}_h}.$$ Thus, if (\[eq:alignment\]) holds, by Lemma \[lemma:noRot\] we must have $T_H=\operatorname{Id}$ and ${{\cal{T}}_g}={{\cal{T}}_h}^{-1} $. Then, by Lemma \[lemma:equiv1\] and Lemma \[id\] we must have $$\label{eq:equivI}{\lambda}= (\det {T_h}) {{\cal{T}}_h}^{-1} ({\lambda}).$$ This means that up to rescaling by $\det {T_h}$, the filter ${\lambda}$ is *invariant* under the linear transformation ${T_h}$. Observe that this implies that ${\lambda}$ is invariant under all integer powers of ${T_h}$. If we further wish to have a network invariant to all transformations in a group $H$, then this also needs to hold for all $h \in H$. The equality (\[eq:equivI\]) is impossible for bounded non-zero ${\lambda}$ unless ${|\det {T_h}|=1}$.
We have $\sup |(\det {T_h}) {{\cal{T}}_h}^{-1} ({\lambda})|=|\det {T_h}|\sup |{\lambda}|$, so if $\sup |{\lambda}|\neq 0$, $\sup |{\lambda}|\neq \infty$ and (\[eq:equivI\]) holds, then we must have $|\det {T_h}|=1$. One may be prepared to ignore intensity rescaling, instead considering $$\label{eq:equivII}{\lambda}= C {{\cal{T}}_h}^{-1} ({\lambda})$$ for some $C\in {\mathbb{R}}$. Even with this relaxation, this invariance can only hold for severely limited kinds of filters and transformations: The equality (\[eq:equivII\]) is impossible for ${\lambda}$ with support on a set of finite but non-zero measure, unless ${|\det {T_h}|=1}$. If ${\lambda}$ has support of measure $m$, then ${{\cal{T}}_h}^{-1} ({\lambda})$ has support of measure $|\det {T_h}^{-1}|m$. If (\[eq:equivII\]) holds then $|\det {T_h}^{-1}|m=m$ and so if $m$ is finite and non-zero we must have $|\det {T_h}^{-1}|=1$, i.e. $|\det {T_h}|=1$. More strongly, in the case when the image domain is ${\mathbb{R}}^2$, one can use the classification of 2D real matrices by Jordan canonical form to study the behavior of iterations of ${T_h}$, as done, for example, in Chapter 3.1 of [@hasselblatt2003first] (a very similar analysis is possible in higher dimensions). Using this, we can analyze further even the cases where $|\det {T_h}|=1$, as follows. \[pro:single-layer\] The equality (\[eq:equivII\]) can hold for ${\lambda}$ with support on a set of finite but non-zero measure only if ${T_h}$ is conjugate to some rotation or, if $T_h$ is orientation reversing, a reflection matrix; and in those cases only if (i) ${T_h}^n=\operatorname{Id}$ for some $n$ and ${\lambda}$ is symmetric with respect to this finite set of transforms, or (ii) ${\lambda}$ is constant on a collection of concentric ellipses along which ${T_h}$ acts as a rotation. We first treat the special cases in which all the eigenvalues of ${T_h}$ are real and have absolute value 1.
Then, either ${T_h}^2=\operatorname{Id}$, in which case $\lambda$ simply has to have a 2-fold symmetry (this includes the cases when ${T_h}$ is the reflection around the origin or a reflection through a line); or ${T_h}$ has Jordan form $\begin{pmatrix} 1&1\\0&1\end{pmatrix}$ or $\begin{pmatrix} -1&1\\0&-1\end{pmatrix}$ and ${T_h}^n=B^{-1}\begin{pmatrix} 1&n\\0&1\end{pmatrix}B$ or ${T_h}^n=B^{-1}\begin{pmatrix} (-1)^n&n(-1)^{n-1}\\0&(-1)^n\end{pmatrix}B$, respectively, for some fixed basis change matrix $B$. We see that the eigenspace of the eigenvalue 1 (respectively $-1$) is preserved, but everything else moves out to infinity, so an invariant $\lambda$ would have to be supported on this (1D) eigenspace (which would imply that the only possible invariant filter corresponds to a ${\Lambda}_{\lambda}$ which is zero). Similarly, if $|\det {T_h}|=1$ but ${T_h}$ has distinct real eigenvalues $d_1, d_2$ (this happens precisely when $\operatorname{tr}^2 {T_h}-4 \det {T_h}>0$), of size not equal to 1, $|d_1|>1>|d_2|$, then ${T_h}$ has Jordan form $\begin{pmatrix} d_1&0\\0&d_2\end{pmatrix}$ and ${T_h}^n=B^{-1}\begin{pmatrix} d_1^n&0\\0&d_2^n\end{pmatrix}B$ for some fixed basis change matrix $B$; everything not in the $d_2$ eigenspace moves out to infinity under positive iterations and everything not in the $d_1$ eigenspace under negative ones (in the new coordinates the motion is along the hyperbolas $y=1/x$, and this is why such a ${T_h}$ is called *hyperbolic*), so an invariant $\lambda$ would have to be supported only at the origin. Further, *the only remaining case*, $|\det {T_h}|=1$ but $\operatorname{tr}^2 {T_h}-4 \det {T_h}<0$ (i.e. $\det {T_h}=1$, but $|\operatorname{tr}{T_h}| <2$), gives, up to a change of basis, *a rotation matrix*. In the new basis, concentric circles around the origin are preserved by the rotation; in the original basis these are “concentric” ellipses (this is the reason ${T_h}$ is called *elliptic* in this case).
If the rotation is by an irrational multiple of $\pi$, the orbit of any point is dense in the corresponding ellipse (see, for example, [@hasselblatt2003first], Proposition 4.1.1) and equality (\[eq:equivII\]) would still imply that ${\lambda}$ is constant on each of these ellipses. On the other hand, the ${T_h}$s where the rotation is by a rational multiple of $\pi$ are precisely the ones with ${T_h}^n=\operatorname{Id}$ for some $n$. Thus, we conclude that for a single-layer network, aligning the feature maps of a transformed image with those of its original is only possible for transformations that *correspond to rotations or reflections* in some basis, and in that case only if the filters are themselves rotation/reflection invariant. Notably, such alignment is not possible for general affine transformations, scaling transformations or shears, since there do not exist any non-trivial affine-, scale- or shear-invariant filters with compact support. Covariance and invariance in the multi-layer case {#sec:multilayer} ================================================= We now give an analogous proof for a more general non-linear, semi-local, translation-covariant feature extractor ${\Lambda}$ (semi-locality is defined below). We are specifically interested in continuous multi-layer CNNs (Section \[sec:CNN\]) but the proof is valid for any such operator. We ask whether equation (\[eq:alignment\]) $${{\cal{T}}_g}{\Lambda}{{\cal{T}}_h}\stackrel{?}{=} {\Lambda}\label{eq:post-pre-compose}$$ could be true for such operators and, if so, under what conditions. Note that for the case of a multi-layer convolutional neural network, it is enough to consider a single feature channel at a certain depth, since a spatial transformation never mixes information between the channels. For simplicity, we will refer to a feature map $({\Lambda^{(i)}}f)_c$ at depth $i$ as ${\Lambda}f$. Two key features are shared by single convolution operators and CNNs: translation covariance and semi-locality.
These features underpin most of the proofs for the single-layer case and allow these proofs to be extended to the multi-layer case. Commutators and conjugation of translation-covariant operators -------------------------------------------------------------- Recall that by Proposition \[prop:CNN-covar\] the multi-layer CNN is a translation-covariant operator. We further note that translation covariance holds also when one changes coordinates on both input and output using $T_h$, i.e. when conjugating $\Lambda$ with the operator ${{\cal{T}}_h}$. If ${\Lambda}$ is translation covariant, then so is ${{\cal{T}}_h}^{-1} {\Lambda}{{\cal{T}}_h}$. Using Lemma \[lem:conv-lin\] and Definition \[def:translation-covariance\] we compute: $$\begin{aligned} \calD_{x} {{\cal{T}}_h}^{-1} {\Lambda}{{\cal{T}}_h}=&\nonumber\\ =& {{\cal{T}}_h}^{-1} \calD_{T_h x} {\Lambda}{{\cal{T}}_h}={{\cal{T}}_h}^{-1} {\Lambda}\calD_{T_h x} {{\cal{T}}_h}={{\cal{T}}_h}^{-1} {\Lambda}{{\cal{T}}_h}\calD_{T_h^{-1}(T_h x)}=\nonumber\\&\hspace{6.5cm} ={{\cal{T}}_h}^{-1} {\Lambda}{{\cal{T}}_h}\calD_{ x} \end{aligned}$$ Generators of translation-covariant operators --------------------------------------------- A key step in the multi-layer proof is to note that any translation-covariant operator ${\Lambda}:V \to V$ is captured by a map $\mu_{\Lambda}: V \to {\mathbb{R}}$ defined by $$\mu_{\Lambda}(f):=({\Lambda}f) ({\overline{0}}) \label{eq:mu-def}.$$ We call this $\mu_{\Lambda}$ **the generator** of ${\Lambda}$ (sometimes denoted simply by $\mu$ when the relevant ${\Lambda}$ is clear from the context). Since we have $$({\Lambda}f) (x)=(\calD_{-x} {\Lambda}f) ({\overline{0}})=({\Lambda}\calD_{-x} f)({\overline{0}})=\mu (\calD_{-x} f) \label{eq:lambda-nonlinear-def},$$ we can, conversely, given $\mu$ define a translation-covariant operator ${\Lambda}_\mu$ by $$({\Lambda}_\mu f)(x) := \mu (\calD_{-x} f). 
\label{eq:lambda-mu-def}$$ Clearly the operations in (\[eq:mu-def\]) and (\[eq:lambda-mu-def\]) are inverses of each other. The generator $\mu_{\Lambda}$ can be seen as a non-linear analog of a convolutional filter in the single-layer case. The following lemma is the analog of Lemma \[lemma:equiv1\] in the single-layer case. \[lem:conj\] The generator of ${{\cal{T}}_h}^{-1} {\Lambda}_\mu {{\cal{T}}_h}$ is $\mu_h$, defined by $\mu_h(f):=\mu({{\cal{T}}_h}f)$. $$\begin{aligned} & ({{\cal{T}}_h}^{-1} {\Lambda}_\mu {{\cal{T}}_h}f) ({\overline{0}}) = \text{\{definition of ${{\cal{T}}_h}$ (\ref{eg:Th-def})\}} \nonumber \\ & = ({\Lambda}_\mu {{\cal{T}}_h}f) ({T_h}{\overline{0}})\nonumber \\ & = ({\Lambda}_\mu {{\cal{T}}_h}f) ({\overline{0}}) = \text{\{definition of ${\Lambda}_\mu$ (\ref{eq:lambda-mu-def}) \}} \nonumber\\ & = \mu( {{\cal{T}}_h}f) \label{eq:lambda-postcomposed} \end{aligned}$$ Thus, also in the case of a non-linear, translation-covariant feature extractor, inversely transforming *the feature maps* of a transformed image will not yield the same feature maps as for the original image. Instead, it corresponds to extracting features from transformed image patches. Semi-locality ------------- To enable considering operators that output a constant for the input $f=0$, we define the concept of *semi-locality*. Semi-locality extends the concept of an operator with *compact support*. It similarly implies that the output will only be affected by the values in a bounded region of the input image. However, the output does not have to be 0 for the input $f=0$ (though translation covariance implies that it must then be constant). \[def:semi-local\] We will say that ${\Lambda}$ is **semi-local** if there exists a radius $r({\Lambda})$ such that for any point $p$ and any two functions $f_1$ and $f_2$ which agree on the ball of radius $r({\Lambda})$ around $p$ we have ${\Lambda}f_1 (p)= {\Lambda}f_2 (p)$.
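As a concrete sketch of this definition (our own discrete illustration; the helper names and the use of the grid's max-norm for "balls" are our choices): a single layer $\mathrm{relu}(f \star \lambda + b)$ with non-zero bias $b$ is semi-local with radius $r(\lambda)$, yet for the input $f = 0$ it outputs the non-zero constant $\mathrm{relu}(b)$, which is exactly the kind of operator that semi-locality admits but compact support would exclude.

```python
import numpy as np

def layer_at(f, lam, b, p):
    """relu((f * lam)(p) + b): the value at p depends only on f inside the
    max-norm ball of radius r(lam) = 2 (for a 5x5 kernel) around p."""
    kh, kw = lam.shape
    ph, pw = kh // 2, kw // 2
    s = 0.0
    for a in range(-ph, ph + 1):
        for c in range(-pw, pw + 1):
            y0, y1 = p[0] - a, p[1] - c          # the point y with x - y = (a, c)
            if 0 <= y0 < f.shape[0] and 0 <= y1 < f.shape[1]:
                s += f[y0, y1] * lam[ph + a, pw + c]
    return max(s + b, 0.0)                       # pointwise ReLU

rng = np.random.default_rng(0)
lam = rng.standard_normal((5, 5))
b, p = 0.7, (6, 6)

f1 = rng.standard_normal((13, 13))
f2 = f1.copy()
far = np.ones(f1.shape, dtype=bool)
far[p[0] - 2:p[0] + 3, p[1] - 2:p[1] + 3] = False    # freeze the radius-2 ball
f2[far] += rng.standard_normal(f1.shape)[far]        # change f only far from p

# Semi-local: agreement on the ball around p forces equal outputs at p ...
assert layer_at(f1, lam, b, p) == layer_at(f2, lam, b, p)
# ... yet the zero input does not give zero output (so not "compact support").
assert layer_at(np.zeros((13, 13)), lam, b, p) == 0.7
```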
Semi-locality interacts well with translation covariance. \[lem:semiloc-transl\] If ${\Lambda}$ is translation covariant and semi-local with radius $r({\Lambda})$ and $f_1$ and $f_2$ agree on a ball of radius $r+r({\Lambda})$ around $p$, then ${\Lambda}f_1$ and ${\Lambda}f_2$ agree on a ball of radius $r$ around $p$. For any $x$ in a ball of radius $r$ around the origin, the functions $\calD_x f_1$ and $\calD_x f_2$ agree on a ball of radius $r({\Lambda})$ around $p$; by the definition of semi-locality, this means $({\Lambda}\calD_x f_1)(p)=({\Lambda}\calD_x f_2) (p)$, or $({\Lambda}f_1)(p-x)=({\Lambda}f_2) (p-x)$, which is what we wanted. Semi-locality is unaffected by conjugation with ${{\cal{T}}_h}$. \[lemma:comp-loc\] If ${\Lambda}$ is semi-local, then so is ${{\cal{T}}_h}^{-1} {\Lambda}{{\cal{T}}_h}$. Let $k=\max_{|v|=1} |{T_h}^{-1}(v)|$ be the operator norm of ${T_h}^{-1}$. Set $r=k r({\Lambda})$. We claim ${{\cal{T}}_h}^{-1} {\Lambda}{{\cal{T}}_h}$ is semi-local with radius $r$. Indeed, if $f_1$ and $f_2$ agree on a ball of radius $r$ around $p$, then ${{\cal{T}}_h}f_1$ and ${{\cal{T}}_h}f_2$ agree on a ball of radius $r({\Lambda})$ around $T_h p$, and so the values $({\Lambda}{{\cal{T}}_h}f_1)(T_h p)$ and $({\Lambda}{{\cal{T}}_h}f_2)(T_h p)$ agree. This means $({{\cal{T}}_h}^{-1} {\Lambda}{{\cal{T}}_h}f_1)( p)=({{\cal{T}}_h}^{-1} {\Lambda}{{\cal{T}}_h}f_2)( p)$, as wanted. Convolutions with compactly-supported ${\lambda}$ are semi-local. \[lemma:conv-loc\] If ${\lambda}$ is supported on a ball of radius $r({\lambda})$ around the origin, then ${\Lambda}_{\lambda}$ is semi-local with radius $r({\lambda})$. If ${\lambda}$ is supported on a ball $B$ of radius $r({\lambda})$ then we have $${\Lambda}f (p)=\int {f}(p-y) {\lambda}(y) dy= \int_{B} {f}(p-y) {\lambda}(y) dy.$$ Thus, if $f_1$ and $f_2$ agree on the ball of radius $r({\lambda})$ around $p$, then the integrals for $f_1$ and $f_2$ agree, i.e. ${\Lambda}f_1 (p)={\Lambda}f_2 (p)$.
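Lemma \[lem:semiloc-transl\] combined with Lemma \[lemma:conv-loc\] can be illustrated discretely (again a sketch of ours, with max-norm "balls" on the pixel grid and assumed helper names): for a kernel with $r(\lambda)=2$, two images that agree on the ball of radius $r + r(\lambda)$ around $p$ yield convolutions that agree on the ball of radius $r$ around $p$.

```python
import numpy as np

def conv_same(f, lam):
    """Discrete (f * lam)(x) = sum_y f(y) lam(x - y); zero padding, odd kernel."""
    H, W = f.shape
    kh, kw = lam.shape
    ph, pw = kh // 2, kw // 2
    fp = np.pad(f, ((ph, ph), (pw, pw)))
    out = np.zeros((H, W))
    for a in range(kh):
        for b in range(kw):
            out += lam[a, b] * fp[2 * ph - a:2 * ph - a + H,
                                  2 * pw - b:2 * pw - b + W]
    return out

rng = np.random.default_rng(1)
lam = rng.standard_normal((5, 5))   # r(lam) = 2
r, p = 3, (8, 8)
R = r + 2                           # r + r(lam)

f1 = rng.standard_normal((17, 17))
f2 = f1.copy()
far = np.ones(f1.shape, dtype=bool)
far[p[0] - R:p[0] + R + 1, p[1] - R:p[1] + R + 1] = False
f2[far] += rng.standard_normal(f1.shape)[far]   # perturb only outside radius R

g1, g2 = conv_same(f1, lam), conv_same(f2, lam)
ball = np.s_[p[0] - r:p[0] + r + 1, p[1] - r:p[1] + r + 1]
assert np.allclose(g1[ball], g2[ball])   # agree on the radius-r ball around p
assert not np.allclose(g1, g2)           # but differ further out
```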
This simple Lemma \[lemma:conv-loc\] is the basis of the following proposition. \[prop:CNN-loc\] A CNN as defined in Section \[sec:CNN\] is a semi-local operator. \[Sketch\] Observe that if two functions agree on a ball of radius $R$, then after convolution with a kernel supported on a ball of radius $r$ the results agree at least on a ball of radius $R-r$. Applying a pointwise non-linearity $\sigma$ to each of the values does not affect this equality. Thus, if the radius $R$ is large enough, then after multiple convolution layers, the results are guaranteed to agree on some non-empty ball, which is what we wanted to prove. A more detailed proof (using induction and Lemmas \[lem:semiloc-transl\] and \[lemma:conv-loc\]) is given in Appendix \[app:prop-CNN-proof\]. Covariance of the operator in the non-linear case ------------------------------------------------- We now consider the conditions on $\mu$ or $f$ that are required for it to be possible to undo a precomposition with ${{\cal{T}}_h}$ after feature extraction by postcomposing with ${{\cal{T}}_h}^{-1}$. \[multi-equiv\] Recall from Lemma \[lem:conj\] that $\mu_h (f)=\mu ({{\cal{T}}_h}f)$. Then, for a general non-linear translation-covariant feature extractor ${\Lambda}_\mu$ generated by $\mu$ (\[eq:lambda-mu-def\]), if $$({{\cal{T}}_h}^{-1} {\Lambda}_\mu {{\cal{T}}_h}f) = ({\Lambda}_{\mu} f)$$ holds for all $f$, then $$\mu = \mu_h,$$ i.e. $\mu$ must be invariant to ${{\cal{T}}_h}$. This is immediate from Lemma \[lem:conj\]. Thus, for an inverse spatial transformation of the feature maps of a transformed image to render the same feature maps as for the original image, either $f$ must be invariant to ${{\cal{T}}_h}$ around every image point (which implies $f$ is constant) or the feature extractor (i.e. the generator) must be invariant to the relevant transformation group. We say that a functional ${\Lambda}$ is **non-constant** if there exists an $f$ such that ${\Lambda}(f)\neq {\Lambda}(0)$.
Observe that if the functional is semi-local, we can take $f$ to be compactly supported. A translation-covariant ${\Lambda}$ is non-constant precisely when its generator $\mu$ is non-constant, i.e. there exists $f$ such that $\mu(f)\neq \mu (0)$ (Proof: take $f$ given by the non-constancy of ${\Lambda}$; then there is some $x$ such that $({\Lambda}f) (x) \neq ({\Lambda}0) (x)$, and $\hat{f}=\calD_{-x} f$ has $\mu(\hat{f})\neq \mu (0)$ by (\[eq:lambda-nonlinear-def\])). Using ${{\cal{T}}_g}={{\cal{T}}_h}^{-1}$ is still a necessary condition to align feature maps --------------------------------------------------------------------------------------------- The following lemma is the key to seeing that, also in the non-linear case, a necessary condition for being able to align the feature maps of a transformed image with those of its original is using ${{\cal{T}}_h}^{-1}$. It is the analog of Lemma \[lemma:noRot\] in the single-layer case. \[lemma:noRot-multi\] If for two semi-local translation-covariant non-constant operators we have ${\Lambda}_{\mu_1}={{\cal{T}}_h}{\Lambda}_{\mu_2}$ then ${T_h}=\operatorname{Id}$. This is a more abstract version of the proof of Lemma \[lemma:noRot\]. First of all, applying ${\Lambda}_{\mu_1}={{\cal{T}}_h}{\Lambda}_{\mu_2}$ to the zero function we get ${\Lambda}_{\mu_1}0={{\cal{T}}_h}{\Lambda}_{\mu_2} 0={\Lambda}_{\mu_2} 0$, and evaluating at location ${\overline{0}}$ we obtain $\mu_1(0)=\mu_2(0)$. Now, take compactly supported $f$ with $\mu_1(f)\neq \mu_1(0)$. Suppose $f$ is supported in a ball of radius $r(f)$. If $T_h\neq \operatorname{Id}$, we can pick $p$ such that $|T^{-1}_h(p)-p| >r({\Lambda}_{\mu_2})+r(f)+1 $ (where $r({\Lambda}_{\mu_2})$ is as in Definition \[def:semi-local\]).
Then, by (\[eq:lambda-mu-def\]) we have $$\begin{aligned} ( \calD_{p} {\Lambda}_{\mu_1} \calD_{-p} f) ({\overline{0}})&= ({\Lambda}_{\mu_1} \calD_{p} \calD_{-p} f )({\overline{0}}) \nonumber\\ &= ({\Lambda}_{\mu_1} f )({\overline{0}})= \mu_1 (f )\neq \mu_1(0) \end{aligned}$$ but $$\begin{aligned} ( \calD_{p} {{\cal{T}}_h}{\Lambda}_{\mu_2} \calD_{-p} f )({\overline{0}})=&\nonumber\\ =&( {{\cal{T}}_h}\calD_{T_h^{-1} (p)} {\Lambda}_{\mu_2} \calD_{-p} f )( {\overline{0}}) = \nonumber\\ =&( {{\cal{T}}_h}{\Lambda}_{\mu_2} \calD_{T_h^{-1} (p)-p} f )({\overline{0}}) =\nonumber\\ =& ( {\Lambda}_{\mu_2} \calD_{T_h^{-1}(p)-p} f )(T_h^{-1}({\overline{0}}))=\nonumber\\ =&( {\Lambda}_{\mu_2} \calD_{T_h^{-1}(p)-p} f )({\overline{0}})=\nonumber\\ &\hspace{2cm}=({\Lambda}_{\mu_2}0)({\overline{0}})=\mu_2(0)=\mu_1(0), \end{aligned}$$ where the third-to-last equality (to $({\Lambda}_{\mu_2}0)({\overline{0}})$) holds for the following reason: since $f$ is supported on a ball of radius $r(f)$ around the origin, $\calD_{T_h^{-1}(p)-p} f$ is supported on a ball of radius $ r(f)$ around $T_h^{-1}(p)-p$, which is entirely outside the ball of radius $r({\Lambda}_{\mu_2})$ around the origin. This means that ${\Lambda}_{\mu_2}$ applied to $\calD_{T_h^{-1}(p)-p} f$ and evaluated at the origin is equal to ${\Lambda}_{\mu_2} 0$ evaluated at the origin, by Definition \[def:semi-local\] of semi-locality. Conclusions in the multi-layer case ----------------------------------- We can now conclude, also for the non-linear case, that the only admissible operator to align feature maps is ${{\cal{T}}_h}^{-1}$ and that, for alignment to be possible, the extracted non-linear features must themselves be invariant to the relevant transformation.
If $ {{\cal{T}}_g}{\Lambda}_\mu {{\cal{T}}_h}={\Lambda}_\mu $, this implies that ${{\cal{T}}_g}= {{\cal{T}}_h}^{-1} $ and that $\mu({{\cal{T}}_h}f ) = \mu(f).$ Writing ${{\cal{T}}_g}={\mathcal{T}_H}({{\cal{T}}_h})^{-1}$ and $\mu_h(f)=\mu({{\cal{T}}_h}f)$, we see, as in the single-layer case, that $${{\cal{T}}_g}{\Lambda}_\mu {{\cal{T}}_h}={\mathcal{T}_H}({{\cal{T}}_h})^{-1}{\Lambda}_\mu {{\cal{T}}_h}={\mathcal{T}_H}{\Lambda}_{\mu_h}.$$ Suppose we do have $${{\cal{T}}_g}{\Lambda}_\mu {{\cal{T}}_h}={\Lambda}_\mu.$$ Then, by Lemma \[lemma:noRot-multi\] (which is applicable because of Lemma \[lemma:comp-loc\]) we must have $T_H=\operatorname{Id}$ and ${{\cal{T}}_g}={{\cal{T}}_h}^{-1} $. Further, by Lemma \[multi-equiv\] we must have $$\label{eq:mu-invar} \mu({{\cal{T}}_h}f ) = \mu(f)$$ if the equality (\[eq:post-pre-compose\]) is to hold for all $f$. Thus, the combined non-linear transformation must be computed from *transformation-invariant* non-linear operators $\mu$. Since it is not possible to give explicit conditions for individual filters (e.g. symmetries imply that the same function can be implemented by more than one set of filters), we will instead investigate under which conditions invariant non-linear features $\mu_{\Lambda}$ (\[eq:mu-def\]) exist. \[pro:no-inv-has-contracting-direction\] If not all eigenvalues (real or complex) of ${T_h}$ have absolute value equal to 1, then for a continuous, semi-local, translation-covariant operator ${\Lambda}$, equation (\[eq:mu-invar\]) implies $\mu(f)=\mu(0)$, i.e. that ${\Lambda}$ is the trivial operator that outputs the same constant signal for all inputs. We consider the case in which $T_h$ has at least one eigenvalue of absolute value bigger than $1$ (i.e. $T_h^{-1}$ has at least one eigenvalue of absolute value less than $1$).
The case in which $T_h$ has at least one eigenvalue of absolute value less than $1$ follows by noting that invariance with respect to ${{\cal{T}}_h}$ is the same as invariance with respect to ${{\cal{T}}_h}^{-1}$. First, observe that for a translation-covariant operator, continuity of ${\Lambda}$ implies continuity of $\mu$. Now, let ${\Lambda}$ be semi-local with radius $r({\Lambda})$. Let $\chi$ be the characteristic function of the ball of radius $r({\Lambda})$. Then $$\label{eq:mu-loc} \mu(g)=\mu (\chi g )$$ for any $g$ in $V=L^{1}_{loc}$. We now decompose $\mathbb{R}^n$ into generalized eigenspaces of ${T_h}^{-1}$, $\mathbb{R}^n=E^{+}\oplus E^{0} \oplus E^{-}$ as in Section 3.3.3 in [@hasselblatt2003first]. The condition that at least one eigenvalue of ${T_h}^{-1}$ have absolute value less than 1 means that $E^{-}$ is non-trivial. By Corollary 3.3.7 in [@hasselblatt2003first], when restricted to a non-trivial subspace $E^{-}\subseteq \mathbb{R}^n$ the operator ${T_h}^{-1}$ is eventually contracting (see Definition 2.6.11 ibid.), so that by Corollary 2.6.13 and Lemma 3.3.6 ibid. under the iterates of ${T_h}^{-1}$ all points of $E^{-}$ converge to the origin with exponential speed. This implies that the points of $\mathbb{R}^n$ converge to points in the proper subspace $S=E^{+}\oplus E^{0}$. Now, starting with any $f$ in $L^{1}_{loc}$, and denoting by $B$ the ball of radius $r({\Lambda})$ around the origin, the functions $f_n =\chi {{\cal{T}}_h}^n (\chi f) $ will eventually have supports lying in an arbitrarily small neighbourhood of $S\cap B$, i.e. on a set of arbitrarily small measure. If $\chi f$ is bounded, this implies that $f_n$ converge to the zero function in $L^{1}_{loc}$. Then, by continuity of $\mu$, the values $\mu(f_n)$ converge to $\mu(0)$.
On the other hand, by semi-locality (\[eq:mu-loc\]) and invariance (\[eq:mu-invar\]) we get $$\label{eq:mu-const} \mu(f_n)=\mu(\chi {{\cal{T}}_h}^n (\chi f))=\mu( {{\cal{T}}_h}^n (\chi f))=\mu(\chi f)=\mu(f).$$ We conclude $\mu(f)=\mu(0)$ for any $f$ in $L^{1}_{loc}$ with bounded $\chi f$. Since any $f$ in $L^{1}_{loc}$ can be approximated arbitrarily well by functions $g_i$ with bounded $\chi g_i$, and $\mu$ is continuous, we conclude that $\mu(f)=\lim \mu(g_i)= \mu(0)$ for all $f$. In the 2D case we can strengthen this further to obtain conclusions similar to those of Proposition \[pro:single-layer\]. \[pro:multilayer-2D\] The equality (\[eq:mu-invar\]) can hold for a continuous, semi-local, translation-covariant operator ${\Lambda}$ only if ${T_h}$ is conjugate to some rotation or, if $T_h$ is orientation reversing, a reflection matrix. As in the proof of Proposition \[pro:single-layer\], studying the Jordan form of ${T_h}$ shows that the only cases not covered by Proposition \[pro:no-inv-has-contracting-direction\] are the ones when ${T_h}$ is conjugate to $\begin{pmatrix}1&1\\0&1 \end{pmatrix}$ or $\begin{pmatrix}-1&1\\0&-1 \end{pmatrix}$ (this is the case of shear transformations). In this case ${T_h}$ does not have iterates that contract $\mathbb{R}^2$ to a proper subspace, but the intersections of the images of $B$ under the iterates of ${T_h}$ with $B$ still lie arbitrarily close to a 1-D subspace. Then the same proof as in Proposition \[pro:no-inv-has-contracting-direction\] yields the result. In the higher-dimensional case, one can perform a very similar analysis based on the Jordan form of ${T_h}$ and extend the proof of Proposition \[pro:multilayer-2D\] to conclude that invariance with respect to ${T_h}$ can only be obtained if ${T_h}$ is conjugate to an orthogonal matrix. Thus, we reach a conclusion very similar to that for the single-layer case.
To enable aligning the feature maps of a transformed image with those of its original, the non-linear features $\mu_{\Lambda}$ (\[eq:mu-def\]) must be invariant to the relevant transformation. Furthermore, Propositions \[pro:no-inv-has-contracting-direction\] and \[pro:multilayer-2D\] show that there *do not exist* any such invariant non-linear features $\mu_{\Lambda}$ unless ${T_h}$ corresponds to a rotation or a reflection (or, in higher dimensions, an orthogonal) matrix in some coordinate system. In other words, there do not exist any such features invariant to affine transformations, scaling transformations or shears. Since the restricted covariance relation (\[eq:covariance2\]) cannot hold for these transformations, purely spatial transformations of feature maps *cannot enable affine-, scale- or shear-invariant recognition*. These conclusions hold for any continuous, semi-local, translation-covariant operator, which in particular includes ${\Lambda}$ given by a CNN (\[eq:CNN\]) with Lipschitz continuous non-linearities $\sigma_i$. Summary and conclusions ======================= Using elementary analysis, we have presented a proof that spatial transformations cannot, in general, align the CNN feature maps of a transformed image to match those of its original. We have shown that, in order for feature extraction and spatial transformations to commute for translation-covariant, semi-local operators (such as CNNs), the features computed by the network must themselves be *invariant* to the relevant image transformation. Since this is not generally the case, applying the inverse spatial transformation to a feature map extracted from a transformed image will typically *not render the same feature map* as for the original image. This can be contrasted with the case of pure translations, where the translation covariance of a CNN implies that a translation of the input indeed corresponds to a translation of the feature maps.
Furthermore, we have shown that features computed with convolutional filters of compact support and Lipschitz continuous non-linearities (such as would be the case for a standard CNN) can only be made invariant to transformations that *correspond to reflections or rotations* in some basis. In other words, there do not exist any such features invariant to affine transformations, scaling transformations or shear transformations. Thus, spatial transformations of feature maps *cannot enable affine-, scale-, or shear-invariant recognition* for CNNs or indeed for any continuous, semi-local, translation-covariant feature extractor. Our results imply that methods based on spatial transformations of CNN feature maps or filters (e.g. [@HeZhaXia-ECCV2014; @yuarXiv2015; @DaiQiXio-arXiv2017; @JadSimZisKav-NIPS2015]) are not a replacement for image alignment of the input. In particular, transforming feature maps *cannot enable invariant recognition for general affine transformations, scaling transformations or shear transformations*, and it will only enable rotation-invariant recognition for networks with learnt or hardcoded [rotation-invariant filters/features]{}.
Appendix ======== Proof that a single convolutional layer is translation covariant {#app:single-layer-covariance} ---------------------------------------------------------------- A single-layer continuous CNN (\[eq:single-layer-cnn\]) is translation covariant: $$\calD_{\delta}{\Lambda}_{\lambda}={\Lambda}_{\lambda}\calD_{\delta}={\Lambda}_{\calD_{\delta}{\lambda}}.$$ We compute $$\begin{aligned} (\calD_{\delta}{\Lambda}_{\lambda}f)(x)=({\Lambda}_{\lambda}f)(x-\delta)=\int_{{\mathbb{R}}^N} f(y){\lambda}(x-\delta-y) dy=({\Lambda}_{D_{\delta}{\lambda}} f)(x)\end{aligned}$$ and using the change of variables $u= y-\delta$ $$\begin{aligned} ({\Lambda}_{\lambda}\calD_{\delta} f)(x) &= \int_{{\mathbb{R}}^N} f( y-\delta){\lambda}(x-y) dy = \int_{{\mathbb{R}}^N} f(u){\lambda}(x-\delta-u) du=\nonumber\\ & \hspace{6cm}=({\Lambda}_{D_{\delta}{\lambda}} f)(x). \end{aligned}$$ Proof that CNNs are semi-local and translation covariant {#app:prop-CNN-proof} -------------------------------------------------------- Recall Propositions \[prop:CNN-covar\] and \[prop:CNN-loc\]: A multi-layer continuous CNN, as defined in Section \[sec:CNN\], is a translation-covariant semi-local operator. The proof is inductive and is based on (\[eq:CNN\]) which we copy here for convenience: $$\begin{gathered} \label{eq:CNN-App} (\Lambda^{(i)} f)_c (x) = \sigma_i \left( \sum_{m=1}^{M_{i-1}} \int_{y \in {\mathbb{R}}^N } (\Lambda^{(i-1)}f)_m (x-y)\, \lambda^{(i)}_{m,c}(y) \, dy + b_{i,c} \right) \end{gathered}$$ We will prove that $ (\Lambda^{(i)} f)_c$ in (\[eq:CNN-App\]) are translation covariant and semi-local by induction on $i$. The base case when $i=0$ and $\Lambda^{(i)} f=f$ is immediate. The induction step for translation covariance is immediate from the formula (\[eq:CNN-App\]) and the fact that a single convolution is translation covariant (Lemma \[lemma:conv-trans-covar\]). 
For semi-locality, denoting, as before, for any convolution kernel $\lambda$ by $r(\lambda)$ a radius such that $\lambda$ is supported on a ball of radius $r(\lambda)$, we pick $$r({\Lambda}^{i}_c) = \max_{m} [r({\Lambda}^{(i-1)}_m)+r(\lambda^{(i)}_{m, c})].$$ Observe that, since by the induction hypothesis ${\Lambda}^{(i-1)}_m$ is semi-local with radius $r({\Lambda}^{(i-1)}_m)$, if $f_1$ and $f_2$ agree on a ball of radius $[r({\Lambda}^{(i-1)}_m)+r(\lambda^{(i)}_{m, c})]$ around some $p$, then by Lemma \[lem:semiloc-transl\] the functions $(\Lambda^{(i-1)} f_1)_m(x)$ and $(\Lambda^{(i-1)} f_2)_m(x)$ agree over the ball $B$ of radius $r(\lambda^{(i)}_{m, c})$ around $p$, and we denote this common function on the ball by $f^{i-1}_m$. By Lemma \[lemma:conv-loc\] the convolution integrals for the specific $m$ in formula (\[eq:CNN\]) for $f_1$ and $f_2$ evaluated at $p$ are equal. Therefore, if $f_1$ and $f_2$ agree on a ball of radius $r({\Lambda}^{i}_c)$ around $p$, then the overall expressions computed by formula (\[eq:CNN\]) for $f_1$ and $f_2$ at $p$ will be equal, which is exactly what we set out to prove. Finally, the non-linearity $\sigma_i$ applies the same function to values at all locations, so it affects neither translation covariance nor semi-locality (the equality $({\Lambda}f_1)(p) = ({\Lambda}f_2)(p)$ is preserved when applying a pointwise non-linearity). [^1]: We are thus not interested in translations.
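The translation covariance proved above, and its failure for non-translations, can also be checked numerically on a discrete torus. The following sketch (ours, purely illustrative and not part of the paper) uses circular convolution so that translations act exactly:

```python
import numpy as np

def circ_conv(f, k):
    # (Lambda_lambda f)(x) = sum_y f(y) lambda(x - y), on Z/nZ via the FFT
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(k)))

rng = np.random.default_rng(0)
f = rng.standard_normal(16)      # input "image" f
k = rng.standard_normal(16)      # convolution kernel lambda
delta = 5                        # translation amount

# Translation covariance: D_delta (Lambda f) = Lambda (D_delta f)
assert np.allclose(np.roll(circ_conv(f, k), delta),
                   circ_conv(np.roll(f, delta), k))

# A pointwise (Lipschitz) non-linearity preserves the covariance
relu = lambda x: np.maximum(x, 0.0)
assert np.allclose(np.roll(relu(circ_conv(f, k)), delta),
                   relu(circ_conv(np.roll(f, delta), k)))

# In contrast, a crude discrete "scaling" does not commute with convolution
scale = lambda x: np.repeat(x[::2], 2)
assert not np.allclose(scale(circ_conv(f, k)), circ_conv(scale(f), k))
```

The last assertion mirrors the main negative result: for a generic kernel, transforming the feature map is not the same as transforming the input, except for translations.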
--- abstract: 'We provide a structure theorem for all almost complete intersection ideals of depth three in any Noetherian local ring. In particular, we find that the minimal generators are the pfaffians of suitable submatrices of an alternating matrix. From the graded version of the previous result, we characterize the graded Betti numbers of all $3$-codimensional almost complete intersection schemes of ${\mathbb P}^r.$' author: - Alfio Ragusa - Giuseppe Zappalà title: A structure theorem and the graded Betti numbers for almost complete intersections --- Introduction {#introduction .unnumbered} ============ It is well known that in any regular local ring $R$ every ideal $I$ for which the number of elements in a minimal set of generators is equal to its height is generated by a regular sequence, in particular it is a perfect ideal (in the sense that ${\operatorname{hd}}R/I= {\operatorname{depth}}I$). This simple fact has an important consequence for studying projective schemes $X$ of $ {\mathbb P}^r_k,$ since this class of ideals characterizes complete intersection schemes, i.e. schemes $X$ of codimension $c$ whose defining (saturated) ideal $I_X$ in $R=k[x_0,x_1,\ldots,x_r]$ is generated by $c$ elements, hence schemes which can be described with the minimum number of equations. Thus, complete intersection schemes are necessarily generated by a regular sequence and consequently are arithmetically Cohen-Macaulay (aCM). In particular, from this one has a very simple description of the minimal free resolution of $R/I_X$ (Koszul resolution) and consequently of the graded Betti numbers. The question becomes immediately more complicated when one admits that the ideal $I_X$ has one generator more than its depth. In part of the literature, aCM schemes $X$ of $ {\mathbb P}^r_k,$ of codimension $c$ whose defining ideal $I_X$ is minimally generated by $c+1$ elements are called [*almost complete intersection*]{} schemes. 
Similarly, in any unitary commutative local ring $A$ a perfect ideal $I$ for which $\nu(I)={\operatorname{depth}}I+1$ is said to be an [*almost complete intersection*]{} ideal. Some discussion about almost complete intersection schemes can be found in J. Migliore's book [@Mi]. Very little is known about almost complete intersection schemes (or ideals); for instance, some results for generic almost complete intersections can be found in the article of J. Migliore and R. Miró-Roig [@MM]. The very first observation which makes these ideals (or these schemes) nice to study is the fact that every almost complete intersection is directly linked in a complete intersection to a Gorenstein ideal (or aG scheme). Indeed, if $I_Q \subset R$ is the defining ideal of an almost complete intersection of codimension $c$ and $I_Z \subseteq I_Q$ is generated by $c$ minimal generators of $I_Q$ which form a regular sequence, then $I_G:=I_Z:I_Q$ is the defining ideal of an aG scheme. By liaison theory (see for instance [@PS]) we also have $I_Q=I_Z:I_G.$ Now, using this fact and the very nice structure theorem for aG schemes of codimension $3$ (Buchsbaum-Eisenbud [@BE]), in this paper we will give a structure theorem for all almost complete intersections of depth $3$ (see Theorem \[tqci\]). This result generalizes an analogous one of S. Seo which, in [@Se] Theorem 2.4, gives a similar characterization for those almost complete intersections $Q$ such that the aG scheme defined by $I_G:=I_Z:I_Q$ has the three minimal generators of smallest degree forming a regular sequence. On the other hand, by Diesel's paper (see [@Di]), one can characterize all graded Betti numbers for $3$-codimensional aG schemes. Then, using this characterization and the mentioned structure theorem, we obtain the main result of this paper, i.e. a characterization of all graded Betti numbers for $3$-codimensional almost complete intersections (Theorem \[betti\]). 
Thus, this result provides a new large class of $3$-codimensional projective schemes, besides Gorenstein schemes, for which one is able to give a complete description of the graded Betti numbers. Notation and preliminaries ========================== Let $R$ be a Noetherian local ring. A perfect ideal $I$ of $R$ is called an [*almost complete intersection*]{} ideal if $\nu(I)={\operatorname{depth}}I+1$ (here $\nu(I)$ denotes the number of minimal generators of $I$). Of course, in case $R$ is Cohen-Macaulay an ideal $I$ is an almost complete intersection when $\nu(I)=ht(I)+1,$ where $ht(I)$ is its height or codimension. Analogously, let $k$ be an algebraically closed field and $X \subset {\mathbb P}^r_k$ be a closed subscheme of codimension $c.$ Let $I_X\subset R=k[x_0,x_1,\ldots,x_r]$ be the saturated homogeneous ideal defining $X.$ Then $X \subset {\mathbb P}^r_k$ is said to be an [*almost complete intersection*]{} scheme when its defining ideal $I_X$ is perfect and minimally generated by $c+1$ elements. Our first observation is that every almost complete intersection is directly linked in a complete intersection to an arithmetically Gorenstein (aG) scheme. Indeed, if $I_Q \subset R$ is the defining ideal of an almost complete intersection of height $c$ and $I_Z \subseteq I_Q$ is generated by $c$ minimal generators of $I_Q$ which form a regular sequence, then $I_G:=I_Z:I_Q$ is the defining ideal of an aG scheme. By liaison theory (see [@PS] for a complete discussion of this argument) we also have $I_Q=I_Z:I_G.$ Therefore many properties of almost complete intersections can be deduced from properties of aG schemes. Since in case of codimension $c=3$ we have the Buchsbaum-Eisenbud structure theorem for Gorenstein algebras (see [@BE]), in this paper we will produce an analogous structure theorem for all almost complete intersections of depth $3$. We will also give a graded version of this structure theorem. 
Moreover, since again in codimension $c=3$ by Diesel's paper (see [@Di]) all graded Betti numbers for Gorenstein graded algebras are characterized, we will describe all possible Betti sequences for almost complete intersections of depth $3$. For all of this we need to fix some terminology and to recall some known facts about free resolutions, in particular, for Gorenstein algebras and about pfaffians of alternating matrices. Moreover, since the Betti sequences are simply sequences of finite multisets of positive integers, we also need to fix terminology and basic facts on multisets. We start just with basic notation and properties of multisets. Let $A$ be a set. A [*multiset*]{} on $A$ is a function $M:A \to \mathbb{Z}^+;$ for every $a \in A$ the integer $M(a)$ will be called the [*multiplicity*]{} of $a$ in $M$ and we will denote it also with $\mu_M(a).$ When $M$ is a multiset the domain of $M$ is called the [*support*]{} of $M$ and we denote it by ${\operatorname{Supp}}M.$ Whenever $b \notin {\operatorname{Supp}}M$ we will set $\mu_M(b)=0.$ Sometimes, for simplicity, we will denote the multiset $M$ with a finite support by $M= \{\{m_1,m_2, \ldots, m_t\}\},$ where in the list each element of the support of $M$ appears as many times as its multiplicity in $M,$ and sometimes we will use the notation $M= \left\{a_1^{(\mu_M(a_1))},a_2^{(\mu_M(a_2))}, \ldots, a_r^{(\mu_M(a_r))}\right\},$ where $a_i$ are all the elements in ${\operatorname{Supp}}M.$ We define $|M|=\sum_{a \in {\operatorname{Supp}}M}\mu_M(a).$ Of course, if $M$ is a multiset with $\mu_M(a)=1$ for all elements in ${\operatorname{Supp}}M$ we can identify the multiset $M$ with its support. Now we will recall the definitions concerning the main operations on multisets. 
If $M$ and $N$ are two multisets we will denote $$M \cap N: {\operatorname{Supp}}M \cap {\operatorname{Supp}}N \to \mathbb{Z}^+, \quad \textrm{where} \quad (M \cap N)(y)=\min\{\mu_M(y),\mu_N(y)\},$$ $$M \cup N: {\operatorname{Supp}}M \cup {\operatorname{Supp}}N \to \mathbb{Z}^+, \quad \textrm{where} \quad (M \cup N)(y)=\max\{\mu_M(y),\mu_N(y)\},$$ $$M \sqcup N: {\operatorname{Supp}}M \cup {\operatorname{Supp}}N \to \mathbb{Z}^+, \quad \textrm{where} \quad (M \sqcup N)(y)=\mu_M(y)+\mu_N(y),$$ if we set ${\operatorname{Supp}}(M\setminus N)=\{x \in {\operatorname{Supp}}M \ |\ \mu_M(x)> \mu_N(x)\}$ then $$M \setminus N: {\operatorname{Supp}}(M\setminus N) \to \mathbb{Z}^+, \quad \textrm{where} \quad (M \setminus N)(y)=\mu_M(y)-\mu_N(y);$$ moreover, we will say that $M$ is a submultiset of $N$ and we will write $M \subseteq N$ whenever ${\operatorname{Supp}}M \subseteq {\operatorname{Supp}}N$ and for every $y \in {\operatorname{Supp}}M$ $\mu_M(y)\le \mu_N(y).$ Of course, when $M \subseteq N$ then $N=M \sqcup (N\setminus M).$ In the sequel when $M$ is a finite multiset of integers we define $\|M\|=\sum_{y\in {\operatorname{Supp}}M}\mu_M(y)y.$ Furthermore, if $n$ is also an integer, we define the multiset $n\pm M$ by $$n\pm M: n\pm {\operatorname{Supp}}M \to \mathbb{Z}^+, \quad \textrm{where} \quad (n\pm M)(n\pm y)=\mu_M(y) .$$ \[dual\] If $M$ is a multiset of integers, $n$ is an integer and $H:=M \cap (n-M),$ then for every $x\in {\operatorname{Supp}}H$ $\mu_H(x)= \mu_H(n-x).$ Of course, for every $x \in {\operatorname{Supp}}M$ $\mu_M(x)= \mu_{n-M}(n-x).$ Then for every $x\in {\operatorname{Supp}}H$ we have $$\begin{gathered} \mu_H(x)=\min\{\mu_M(x),\mu_{n-M}(x)\}=\\ =\min\{\mu_{n-M}(n-x),\mu_{M}(n-x)\}=\mu_H(n-x).\end{gathered}$$ Let now $R=k[x_0,\ldots,x_r]$ and $I_X$ the defining (saturated) ideal of a projective scheme $X \subset {\mathbb P}^r_k.$ We recall that the standard graded $k$-algebra $R/I_X$ admits a graded minimal free resolution of the following type 
$$0\to\bigoplus_jR(-j)^{\beta_{pj}}\to\ldots\to\bigoplus_jR(-j)^{\beta_{0j}}\to R\to R/I_X\to 0$$ which, if we restrict ourselves to the $\beta_{ij}\ne 0$, can be written as $$0\to\bigoplus_{h\in {\operatorname{Supp}}M_p}R(-h)^{\mu_p(h)}\to\ldots\to\bigoplus_{h\in {\operatorname{Supp}}M_1}R(-h)^{\mu_1(h)}\to R\to R/I_X\to 0$$ where each $M_i$ is a multiset of positive integers and $\mu_i(h):=\mu_{M_i}(h).$ We will call $(M_1,\ldots,M_p)$ the [*Betti sequence*]{} of $X$ or also of $R/I_X,$ and we will denote it by $\beta_X$ or $\beta_{R/I_X}.$ For simplicity, from now on for every multiset of positive integers $M$ we will set $$\bigoplus_{h\in M}R(-h):=\bigoplus_{h\in {\operatorname{Supp}}M}R(-h)^{\mu_M(h)}$$ hence, the resolution can be written $$0\to\bigoplus_{h\in M_p}R(-h)\to\ldots\to\bigoplus_{h\in M_1}R(-h)\to R\to R/I_X\to 0$$ Now, since Gorenstein algebras of depth $3$ can be described by alternating matrices, we establish some terminology for such matrices. Let $R$ be any commutative ring. If $i\ne j$ are positive integers we let $\langle i,j \rangle$ be the following integer $$\langle i,j \rangle=\begin{cases} i+j+1 & \text{if} \ i<j \\ i+j & \text{if} \ i>j\end{cases}$$ If $M=(a_{ij})$ is an alternating matrix of size $m$ with entries in $R$ we will denote by $M_{\widehat{ij}}$ the alternating matrix obtained from $M$ by deleting both rows and columns $i$ and $j.$ With this terminology, when $m$ is even, one can easily verify that the pfaffian of $M$ (for definitions and basic facts on pfaffians see for instance [@IK] Appendix B) can be computed, for every $i=1, \ldots, m,$ by $${\operatorname{pf}}M= \sum_j (-1)^{\langle i,j \rangle}a_{ij}\ {\operatorname{pf}}M_{\widehat{ij}}. 
\quad \quad (1)$$ Moreover, if $M$ is an alternating matrix of size $m,$ with $m$ even, we will denote by $\overline{M}=(\overline{a}_{ij}),$ where $\overline{a}_{ij}=(-1)^{\langle i,j \rangle}{\operatorname{pf}}M_{\widehat{ij}},$ the pfaffian adjoint of $M,$ which is clearly an alternating matrix. \[lpl\] If $$M=\begin{pmatrix} 0 & a & | & B \\ -a & 0 & | & \\ -- & -- & -|- & --\\ -{}^tB & & | & C\\ \end{pmatrix}$$ is an even alternating matrix, by repeating the previous formula for $i=1,2,$ one can show that its pfaffian can be computed as $${\operatorname{pf}}M =a\ {\operatorname{pf}}C + {\operatorname{pf}}(B \ \overline{C}\ {}^tB)$$ where $\overline{C}$ is the pfaffian adjoint of $C$ as defined above. Let $\psi: F^{\vee} \to F$ be an alternating map with $F$ a free $R$-module of even rank $m.$ If $B$ is a basis for $F,$ $B^{\vee}$ the dual basis for $F^{\vee},$ and $M$ is the matrix associated to $\psi$ with respect to such bases, we will denote by $\overline{\psi}: F \to F^{\vee}$ the map whose associated matrix with respect to the previous bases is the pfaffian adjoint of $M,$ i.e. 
$M(\overline{\psi})=\overline{M}.$ Note that $\overline{\psi}\psi= ({\operatorname{pf}}\psi) {\operatorname{id}}_{F^{\vee}}$ and $\psi \overline{\psi}= ({\operatorname{pf}}\psi) {\operatorname{id}}_{F}.$ Moreover we will write ${\operatorname{Pf}}_{s}(\psi)$ for the ideal generated by the pfaffians of $\psi$ of order $s.$ Furthermore, with the same notation, when the rank $m$ is odd and $\{p_1, \ldots, p_m\}$ are the $m$ submaximal pfaffians of the matrix $M=M(\psi)$ (with respect to fixed bases $B$ and $B^{\vee}$), if $p \in I=(p_1, \ldots, p_m),$ then there exists an alternating map $\widetilde{\psi}: G^{\vee} \to G,$ where $G$ is a free module of rank $m+2,$ such that the alternating matrix $\widetilde{M}=M(\widetilde{\psi})$ (with respect to suitable bases $\widetilde{B}$ and $\widetilde{B}^{\vee}$) has the $m+2$ submaximal pfaffians $\{p_1, \ldots, p_m, p, 0\}.$ Indeed, if $p=a_1p_1+ \ldots + a_mp_m,$ it is enough to take $G=F\oplus R^2$ and as $\widetilde{\psi}: G^{\vee} \to G$ the alternating map which, with respect to the given bases $\widetilde{B}$ and $\widetilde{B}^{\vee},$ is defined by the alternating matrix $$\widetilde{M}=\begin{pmatrix} & & & | & 0 & a_1 \\ & M & & | & \vdots & \vdots\\ & & & | & 0 & a_m \\ -- & -- &-- & --|-- & --& -- \\ 0 & \cdots & 0 & | & 0 & -1 \\ -a_1 & \cdots & -a_m & | & 1 & 0 \\ \end{pmatrix}$$ To complete the assertion it is enough to compute the submaximal pfaffians of $\widetilde{M}$, using the above formula $(1)$ with respect to the $(m+1)$-th column for all pfaffians except for the $(m+1)$-th pfaffian, for which we use formula $(1)$ with respect to the last column. 
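Formula $(1)$ translates directly into a small recursive procedure. The following Python sketch (ours, purely illustrative) computes the pfaffian of an alternating matrix by expansion along any row $i$ and checks the classical identity $({\operatorname{pf}}M)^2=\det M$ numerically:

```python
import numpy as np

def pf(M, i=0):
    """Pfaffian of an alternating matrix via the expansion (1) along row i,
    with sign (-1)^{<i,j>}; the pfaffian of the empty matrix is 1."""
    m = M.shape[0]
    if m == 0:
        return 1.0
    total = 0.0
    for j in range(m):
        if j == i:
            continue
        # <i,j> = i+j+1 if i<j, i+j if i>j (1-based); shifted for 0-based indices
        bracket = (i + j + 3) if i < j else (i + j + 2)
        keep = [k for k in range(m) if k not in (i, j)]
        total += (-1) ** bracket * M[i, j] * pf(M[np.ix_(keep, keep)])
    return total

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
M = A - A.T                       # a random alternating (skew-symmetric) matrix

# The expansion gives the same value along every row, and pf(M)^2 = det(M)
vals = [pf(M, i) for i in range(6)]
assert np.allclose(vals, vals[0])
assert np.isclose(vals[0] ** 2, np.linalg.det(M))

# For odd size every branch eventually hits an empty sum, so the pfaffian is 0
assert pf((A - A.T)[:3, :3]) == 0.0
```

For an odd-size matrix one works instead with the $m$ submaximal pfaffians ${\operatorname{pf}}M_{\widehat{ii}}$, the objects generating the Gorenstein ideals used throughout this paper.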
Applying the previous observation we get the following \[3gen\] Let $R$ be a Noetherian local ring and $I_G$ a Gorenstein ideal of depth $3$ in $R$ and $p_1,p_2,p_3\in I_G.$ Then there exists an alternating map $\varphi: H^{\vee} \to H$ of odd rank $m$ such that ${\operatorname{Pf}}_{m-1}(\varphi)=I_G$ and $3$ of the submaximal pfaffians of $\varphi$ are exactly $p_1,p_2,p_3.$ Take any alternating map $\psi: F^{\vee} \to F$ of odd rank $n$ such that ${\operatorname{Pf}}_{n-1}(\psi)=I_G=(g_1, \ldots, g_n),$ then apply the previous observation three times to get a new alternating map $\varphi: H^{\vee} \to H$ of odd rank $m=n+6$ whose submaximal pfaffians are $g_1, \ldots, g_n,p_1,p_2,p_3, 0,0,0,$ so that ${\operatorname{Pf}}_{m-1}(\varphi)=I_G.$ We also need the following lemma. Let $M$ be an alternating matrix of odd size $m,$ with entries in a unitary commutative ring $R.$ Let $I$ be the ideal generated by the submaximal pfaffians of $M$ and let $(q_1,\ldots,q_m)$ be a set of generators of $I.$ Then there exists an alternating matrix whose submaximal pfaffians are $uq_1,\ldots,uq_m,$ where $u$ is a unit of $R.$ Let ${\bf p}=(p_1,\ldots, p_m)$ be the vector of the submaximal pfaffians of $M$ and let ${\bf q}=(q_1,\ldots, q_m).$ Then ${\bf p}={\bf q}A,$ where $A$ is an invertible square matrix of size $m.$ Let us consider the alternating matrix $AM\:{}^t\!\!A.$ Now, if $N$ is an alternating matrix we write ${\operatorname{pf}}N$ for the vector of the submaximal pfaffians of $N$ and if $A$ is a square matrix we write $\overline{A}$ for the adjoint matrix of $A.$ A straightforward computation shows that $${\operatorname{pf}}(AM\:{}^t\!\!A)=({\operatorname{pf}}M)\overline{A}={\bf p}\overline{A}=(\det A){\bf q}.$$ Since $A$ is invertible, $\det A$ is a unit, so we are done. 
Now recall that if $A_G=R/I_G$ is a Gorenstein graded algebra of depth $3$ ($R=k[x_0, \ldots, x_r]$), its graded minimal free resolution is of the type $$0\to R(-\vartheta_G) \to \bigoplus_{h \in \vartheta_G-H}R(-h) \to \bigoplus_{h \in H}R(-h) \to R \to A_G \to 0$$ where $H$ is the multiset of the degrees of the minimal generators of $I_G$ and $\vartheta_G=2\|H\|/ (|H|-1)$ (recall that $|H|$ must be odd). Hence the multiset $H$ determines all the graded Betti numbers of $A_G.$ Moreover, the integers in $H=\{\{h_1\le h_2 \le \ldots \le h_{2m+1}\}\}$ must satisfy the following Gaeta-Diesel conditions (see [@Ga] and [@Di]): $ \vartheta_G$ is an integer and $\vartheta_G > h_{i+1} + h_{2m+2-i}$ for $i=1, \ldots, m.$ In general, when $T$ is a graded $R$-module ($R$ any polynomial ring) and $$\begin{gathered} F_{\bullet}: \quad 0\to\bigoplus_{h\in M_p}R(-h)\to\ldots\to\bigoplus_{h\in M_{i+1}}R(-h)\to \bigoplus_{h\in M_{i}}R(-h)\to\ldots \\ \ldots \to \bigoplus_{h\in M_1}R(-h)\end{gathered}$$ is a graded free resolution of $T$ with each $M_j$ a multiset of positive integers, we call $(M_1, \ldots, M_p)$ the [*multiset sequence*]{} associated to the resolution $F_{\bullet}.$ If $s\in {\operatorname{Supp}}M_i\cap {\operatorname{Supp}}M_{i+1},$ we say that $s$ is a [*repetition*]{} in the resolution. Moreover, if $N \subseteq M_i\cap M_{i+1}$ we say that $N$ is [*cancellable*]{} for $T$ if there is a graded free resolution of $T$ whose associated multiset sequence is $(M_1, \ldots,M_{i}\setminus N,M_{i+1}\setminus N, \ldots, M_p).$ The next proposition collects some numerical facts about the graded free resolutions of $3$-codimensional standard graded Gorenstein algebras. Let $$0\to\bigoplus_{h\in C}R(-h)\to\bigoplus_{h\in B}R(-h)\to\bigoplus_{h\in A}R(-h)\to R\to R/I_G\to 0$$ be a graded free resolution (not necessarily minimal) of a depth $3$ Gorenstein graded algebra. 
- $|B\cap C|=|C|-1.$ - Let $\vartheta$ be the unique element in ${\operatorname{Supp}}(C\setminus B).$ If $s\in {\operatorname{Supp}}(A\cap B)$ is such that $\mu_{A\cap B}(s)> \mu_{A\cap B}(\vartheta-s)$ then the multiset $\{s^{(\mu_{A\cap B}(s)-\mu_{A\cap B}(\vartheta-s))}\}$ is cancellable for $R/I_G.$ i\. It is enough to recall that in a minimal free resolution of a Gorenstein algebra the module of last syzygies has rank one.\ ii. It is enough to recall that in a minimal free resolution of a depth $3$ Gorenstein algebra, by self-duality, $\mu_{A \cap B}(s)=\mu_{A \cap B}(\vartheta-s).$ Since we need complete intersection ideals contained in Gorenstein ideals, we state some results from the paper [@RZ2]. Let $$\beta=\big(\{\{d_1,\ldots, d_{2n+1}\}\},\{\{\vartheta-d_{2n+1},\ldots,\vartheta-d_1\}\},\{\vartheta\}\big),$$ $n\vartheta=\sum d_i,$ $d_1\le\ldots\le d_{2n+1},$ be a Betti sequence admissible for a depth $3$ standard graded Gorenstein algebra. Let $${\operatorname{CI}}^g_{\beta}=\{\mathbf{a}\in{{\mathbb N}_{\le}}^3\mid\exists\mbox{ an ideal }I\subset R \mbox{ containing a regular sequence of type }\mathbf{a}$$ $$\mbox{ with }{\beta}_{R/I}=\beta, \, R/I \mbox { Gorenstein ring}\},$$ where $${{\mathbb N}_{\le}}^3=\{(a_1,a_2,a_3)\in{\mathbb N}^3\mid a_1\le a_2\le a_3\}.$$ In [@RZ2], Theorem 3.6, it was shown that the poset ${\operatorname{CI}}^g_{\beta}$ has only one minimal element, which was also computed there. We report that statement here. 
\[c3b\] Let $\beta=\big(\{\{d_1,\ldots, d_{2n+1}\}\},\{\{\vartheta-d_{2n+1},\ldots,\vartheta-d_1\}\},\{\vartheta\}\big),$ $n\vartheta=\sum d_i,$ $d_1\le\ldots\le d_{2n+1},$ be a Betti sequence admissible for an Artinian Gorenstein quotient of $k[x_1,x_2,x_3]$ and define the sets $$B=\{3\le i\le n+1\mid\vartheta\le d_i+d_{2n+4-i}\}$$ and $$C=\{4\le i\le n+2\mid\vartheta\le d_i+d_{2n+5-i}\}.$$ Then ${\operatorname{CI}}^g_{\beta}$ has a unique minimal element which we will call ${\operatorname{mci}}\beta.$ Precisely, - if $B \ne \emptyset,$ then ${\operatorname{mci}}\beta=(d_1,d_{\max B}, d_{2n+4-\min B});$ - if $B = \emptyset$ and $C \ne \emptyset,$ then ${\operatorname{mci}}\beta=(d_1,d_2, d_{\max C});$ - if $B = \emptyset$ and $C = \emptyset,$ then ${\operatorname{mci}}\beta=(d_1,d_2, d_3).$ In particular ${\operatorname{CI}}^g_{\beta}=\{\mathbf{b}\in{{\mathbb N}_{\le}}^3\mid\mathbf{b}\ge{\operatorname{mci}}\beta\}.$ In the sequel we will also need the following proposition, which is a reformulation of Lemma 3.7 of [@RZ2]. 
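Both the Gaeta-Diesel conditions recalled above and the computation of ${\operatorname{mci}}\beta$ in Theorem \[c3b\] are purely arithmetic, so they are easy to script. The following sketch (ours, for illustration only; the function names are not from the literature) checks admissibility of a degree multiset $\{\{d_1,\ldots,d_{2n+1}\}\}$ and returns the minimal regular-sequence type:

```python
def theta(d):
    """theta = sum(d)/n for a sorted degree list of odd length 2n+1;
    it must be an integer for an admissible Gorenstein Betti sequence."""
    n = (len(d) - 1) // 2
    assert len(d) == 2 * n + 1 and sum(d) % n == 0
    return sum(d) // n

def admissible(d):
    """Gaeta-Diesel: theta > d_{i+1} + d_{2n+2-i} for i = 1,...,n (1-based)."""
    d, n, t = sorted(d), (len(d) - 1) // 2, theta(sorted(d))
    return all(t > d[i] + d[2 * n + 1 - i] for i in range(1, n + 1))

def mci(d):
    """Minimal element of CI^g_beta as in Theorem [c3b] (1-based indices)."""
    d, n, t = sorted(d), (len(d) - 1) // 2, theta(sorted(d))
    e = lambda i: d[i - 1]                       # 1-based access
    B = [i for i in range(3, n + 2) if t <= e(i) + e(2 * n + 4 - i)]
    C = [i for i in range(4, n + 3) if t <= e(i) + e(2 * n + 5 - i)]
    if B:
        return (e(1), e(max(B)), e(2 * n + 4 - min(B)))
    if C:
        return (e(1), e(2), e(max(C)))
    return (e(1), e(2), e(3))

# Five quadrics: the pfaffians of a generic 5x5 alternating matrix of linear
# forms; theta = 5, B and C empty, so mci is three quadrics.
assert admissible([2, 2, 2, 2, 2]) and mci([2, 2, 2, 2, 2]) == (2, 2, 2)
# {{2,2,2,3,3}}: theta = 6, B empty, C = {4}, so mci = (d_1, d_2, d_4)
assert admissible([2, 2, 2, 3, 3]) and mci([2, 2, 2, 3, 3]) == (2, 2, 3)
# {{1,2,3,3,3}}: theta = 6 = d_3 + d_4 violates the strict inequality
assert not admissible([1, 2, 3, 3, 3])
```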
\[BC\] Let $H$ be an admissible Hilbert function for an Artinian Gorenstein quotient of $k[x_1,x_2,x_3]$ and let $\beta=(D,\vartheta -D,\{\vartheta\})$ be a Gorenstein Betti sequence compatible with $H,$ with $D= \{\{d_1,\ldots, d_{2n+1}\}\},$ $d_1 \le \ldots \le d_{2n+1}.$ Let $C$ be as in Theorem \[c3b\] and $$\overline{B}:=\{3\le i\le 2n+1\mid\vartheta\le d_i+d_{2n+4-i}\}.$$ We have - $i\in\overline{B}$ ${\Rightarrow}$ $\mu_D(d_i)=-\Delta^2 H(d_i).$ - $i\in C,$ $i\not\in\overline{B},$ $i-1\not\in\overline{B}$ ${\Rightarrow}$ $\mu_D(d_i)=-\Delta^2 H(d_i)-1.$ - $i,j\in\overline{B},$ $d_i=d_j$ ${\Rightarrow}$ $i=j.$ - If $\mu_D(d_i)=-\Delta^2 H(d_i)$ and $k=\min\{j\,|\ d_j=d_i\}$ then $k\in\overline{B}.$ - If $\mu_D(d_i)=-\Delta^2 H(d_i)-1$ and $k=\min\{j\,|\ d_j=d_i\}\le n+2$ then $k\in C.$ We set $\nu_h:=\mu_D(h)$ and $\sigma_h:=\mu_{\vartheta -D}(h).$ - Let $i\in\overline{B};$ then $d_i\ge\vartheta-d_{2n+4-i}>d_{i-1};$ furthermore $\vartheta-d_{2n+3-i}>d_i,$ so $$-\Delta^2 H(d_i)=\sum_{h=1}^{d_i}\nu_{h}-\sum_{h=1}^{d_i}\sigma_{h}-1=(\nu_{d_i}+i-1)-(i-2)-1=\nu_{d_i}.$$ - Let $i\in C\setminus\overline{B}$ such that $i-1\not\in\overline{B};$ then $d_i\ge\vartheta-d_{2n+5-i}>d_{i-1};$ furthermore, since $i\not\in\overline{B}$, $\vartheta-d_{2n+4-i}>d_i,$ so $$-\Delta^2 H(d_i)=\sum_{h=1}^{d_i}\nu_{h}-\sum_{h=1}^{d_i}\sigma_{h}-1=(\nu_{d_i}+i-1)-(i-3)-1=\nu_{d_i}+1.$$ - If $i<j$ then $d_i=d_{i+1}$ so we should have $$\vartheta\le d_i+d_{2n+4-i}=d_{i+1}+d_{2n+4-i}<\vartheta.$$ - We have $d_i>d_1,$ therefore $k>1$ and $d_k>d_{k-1};$ consequently $$\sum_{h=1}^{d_i-1}\nu_{h}=\sum_{h=1}^{d_k-1}\nu_{h}=k-1;$$ moreover, by hypothesis, $$-\Delta^2 H(d_i)=\sum_{h=1}^{d_i}\nu_{h}-\sum_{h=1}^{d_i}\sigma_{h}-1=\nu_{d_i}{\Rightarrow}\sum_{h=1}^{d_k-1}\nu_{h}-\sum_{h=1}^{d_k}\sigma_{h}=1$$ i.e. 
$$\sum_{h=1}^{d_k}\sigma_{h}=k-2{\Rightarrow}\vartheta-d_{2n+4-k}\le d_k{\Rightarrow}k\in\overline{B}.$$ - Again $d_i>d_1,$ therefore $k>1$ and $d_k>d_{k-1};$ consequently $$\sum_{h=1}^{d_i-1}\nu_{h}=\sum_{h=1}^{d_k-1}\nu_{h}=k-1;$$ moreover, by hypothesis, $$-\Delta^2 H(d_i)=\sum_{h=1}^{d_i}\nu_{h}-\sum_{h=1}^{d_i}\sigma_{h}-1=\nu_{d_i}+1{\Rightarrow}\sum_{h=1}^{d_k-1}\nu_{h}-\sum_{h=1}^{d_k}\sigma_{h}=2$$ i.e. $$\sum_{h=1}^{d_k}\sigma_{h}=k-3{\Rightarrow}\vartheta-d_{2n+5-k}\le d_k,$$ hence $k\in C.$ \[maxg\] We recall that, by [@RZ1], Proposition 3.7, for every $d_i>d_1,$ $-\Delta^2 H(d_i)$ is equal to the largest number of minimal generators of degree $d_i$ (which we will denote by ${\operatorname{Mng}}_H(d_i)$) compatible with an Artinian Gorenstein graded algebra of codimension $3$ having Hilbert function equal to $H.$ \[Bvuoto\] Note that if $B=\emptyset$ then $\overline{B}\subseteq\{n+2\}.$ Indeed, let $i\in\overline{B};$ since $B=\emptyset$ we have that $i\ge n+2;$ moreover $\vartheta\le d_i+d_{2n+4-i},$ so $2n+4-i\ge n+2,$ hence $i\le n+2,$ i.e. $i=n+2.$ Structure theorem for almost complete intersections =================================================== In the sequel, if $f:M \to N$ and $g:M \to P$ are maps of modules we will write $(f,g): M \to N\oplus P$ for the map defined by $(f,g)(m)=(f(m),g(m)).$ Moreover, if $f:M \to P$ and $g:N \to P$ we will write $f|g: M\oplus N \to P$ for the map defined by $f|g(m,n)=f(m)+g(n).$ Let $R$ be a Noetherian local commutative ring, $H_0$ a free $R$-module of odd rank $m_0\ge 5$ and $\varphi_0:H_0^{\vee}\to H_0$ an alternating map. Let $J:={\operatorname{Pf}}_{m_0-1}(\varphi_0)$ be the ideal generated by the pfaffians of $\varphi_0$ of size $m_0-1$ (note that despite the fact that the pfaffians depend on the choice of the bases in $H_0,$ $J$ depends only on the map $\varphi_0$). 
Moreover, we denote by ${\operatorname{pf}}\varphi_0: H_0 \to R$ the map defined by the submaximal pfaffians of $\varphi_0.$ It is known by the Buchsbaum-Eisenbud Theorem (see [@BE]) that ${\operatorname{depth}}J\le 3$ and we suppose here that ${\operatorname{depth}}J=3.$ Take a regular sequence $(p_1,p_2,p_3)$ in $J.$ By Lemma \[3gen\] there exists an alternating map $\varphi: H^{\vee} \to H$ of odd rank $m$ such that ${\operatorname{Pf}}_{m-1}(\varphi)=J$ and $3$ of the submaximal pfaffians of $\varphi$ are exactly $p_1,p_2,p_3.$ So $J=(p_1,p_2,p_3, \ldots, p_m),$ where the $p_i$’s, for $1\le i\le m,$ are the submaximal pfaffians of $\varphi.$ Let $I=(p_1,p_2,p_3)$ and $\psi:G^{\vee}\to G$ an alternating map whose submaximal pfaffians are exactly $p_1,p_2,p_3.$ Then $H=G\oplus F,$ where $F$ is a free $R$-module of rank $m-3,$ and ${\operatorname{pf}}\varphi={\operatorname{pf}}\psi|\sigma,$ where $\sigma:F\to R.$ Therefore we have the following decomposition: $\varphi=\big(\alpha\,|-\lambda^{\vee},\lambda|\beta\big),$ where $\alpha:G^{\vee}\to G,$ $\beta:F^{\vee}\to F$ and $\lambda:G^{\vee}\to F.$ Moreover we denote by $\overline{\beta}:F\to F^{\vee}$ the alternating map such that $\beta\overline{\beta}=p{\operatorname{id}}$ and $\overline{\beta}\beta=p{\operatorname{id}},$ where $p$ is the pfaffian of the map $\beta.$ One can see that (with respect to suitable bases) the matrix associated to $\overline{\beta}$ is the pfaffian adjoint of the matrix of $\beta.$ \[The pfaffian of an empty matrix will be $1$; note that ${\operatorname{pf}}\psi^{\vee}=-({\operatorname{pf}}\psi)$\] \[constr\] With the above notation $$\xymatrix@R=1.5cm@C=1.3cm{ 0\ar[r]&F^{\vee}\ar[r]^(.4){(\lambda^{\vee},-\beta)}&G\oplus F\ar[rr]^{\big(p\, | \lambda^{\vee}\overline\beta,-{\operatorname{pf}}\psi\, |- \sigma\big)}&&G\oplus R\ar[r]^-{{\operatorname{pf}}\psi | p}&R}$$ is a free resolution of an almost complete intersection algebra $R/Q.$ We start by proving that the canonical surjection $R/I\to R/J$ can 
be lifted to a map between their resolutions in the following way: $$\xymatrix@R=1.5cm@C=1.3cm{ 0\ar[r]&R\ar[r]^{({\operatorname{pf}}\psi)^{\vee}}\ar[d]^{p}&{\phantom{a}G^{\vee}}\ar[rr]^{\psi}\ar[d]^{(p,-\overline\beta\lambda)}&& G\ar[r]^{{\operatorname{pf}}\psi}\ar[d]^{({\operatorname{id}},0)}&R\ar[d]^{{\operatorname{id}}} \\ 0\ar[r]&R\ar[r]^(.4){(({\operatorname{pf}}\psi)^{\vee},\sigma^{\vee})}&G^{\vee}\oplus F^{\vee}\ar[rr]^-{\big(\alpha\,|-\lambda^{\vee},\lambda|\beta\big)}&&G\oplus F\ar[r]^-{{\operatorname{pf}}\psi | \sigma}&R }$$ Indeed, since $$0=(\lambda|\beta)(({\operatorname{pf}}\psi)^{\vee},\sigma^{\vee})=\lambda({\operatorname{pf}}\psi)^{\vee}+\beta\sigma^{\vee}{\Rightarrow}$$ $${\Rightarrow}\overline{\beta}\lambda({\operatorname{pf}}\psi)^{\vee}+\overline{\beta}\beta\sigma^{\vee}=0{\Rightarrow}\overline{\beta}\lambda({\operatorname{pf}}\psi)^{\vee}+p\sigma^{\vee}=0$$ we have $$(p,-\overline\beta\lambda)({\operatorname{pf}}\psi)^{\vee}=(p({\operatorname{pf}}\psi)^{\vee},-\overline\beta\lambda({\operatorname{pf}}\psi)^{\vee}) =(p({\operatorname{pf}}\psi)^{\vee},p\sigma^{\vee})=p(({\operatorname{pf}}\psi)^{\vee},\sigma^{\vee}).$$ Now Remark \[lpl\] will imply, in our notation, that $\alpha p+\lambda^{\vee}\overline\beta\lambda=\psi.$ Therefore we have $$\begin{gathered} \big(\alpha\,|-\lambda^{\vee},\lambda|\beta\big)(p,-\overline\beta\lambda)=\\ =(\alpha p+\lambda^{\vee}\overline\beta\lambda,\lambda p-\beta\overline\beta\lambda)= (\psi,\lambda p-p\lambda)=(\psi,0)=({\operatorname{id}},0)\psi.\end{gathered}$$ Then we set ${\mathbb F}^I_{\bullet}$ and ${\mathbb F}^J_{\bullet}$ the above resolutions of $R/I$ and $R/J$ and $\tau:{\mathbb F}^I_{\bullet}\to{\mathbb F}^J_{\bullet}$ the above complex map. 
Thus, if we set $Q:=I:J,$ we see that a free resolution of $R/Q$ is given by the mapping cone of the map $\tau^{\vee}:({\mathbb F}^J_{\bullet})^{\vee}\to ({\mathbb F}^I_{\bullet})^{\vee}.$ If we make cancellations where we have identity maps, we get the required resolution of the almost complete intersection $R/Q.$ As a simple consequence of the previous result we have \[gen\] Let $R$ be a Noetherian local ring and let $M$ be an alternating matrix of odd rank $m,$ whose entries are in $R.$ Let $p_1,\ldots,p_m$ be the submaximal pfaffians of $M$ and $p_{abc}$ the pfaffian of order $m-3$ obtained from $M$ by deleting the rows and columns $a,b,c$. If $(p_a,p_b,p_c)$ is a regular sequence, then $$(p_a,p_b,p_c):(p_1,\ldots,p_m)=(p_a,p_b,p_c,p_{abc}).$$ Proposition \[constr\] gives a way to construct almost complete intersection algebras, but the remarkable fact is that every almost complete intersection can be constructed in this way. \[tqci\] Let $R$ be a Noetherian local ring and $I_Q\subset R$ be a perfect ideal of depth $3$ of an almost complete intersection. Then there exists an alternating map $\psi:H^{\vee}\to H,$ where $H$ is a free $R$-module of odd rank with ${\operatorname{im}}{\operatorname{pf}}\psi$ of depth $3$, such that, with the above notation, $$\xymatrix@R=1.5cm@C=1.3cm{ 0\ar[r]&F^{\vee}\ar[r]^(.4){(\lambda^{\vee},-\beta)}&G\oplus F\ar[rr]^{\big(p\, | \lambda^{\vee}\overline\beta,-{\operatorname{pf}}\psi\, |- \sigma\big)}&&G\oplus R\ar[r]^-{{\operatorname{pf}}\psi | p}&R}$$ is a free resolution of $R/I_Q.$ Since $I_Q$ is an almost complete intersection ideal we have that $I_Q=(p_0,p_1,p_2,p_3)$ and since ${\operatorname{depth}}I_Q=3$ we can suppose that $I_Z:=(p_1,p_2,p_3)$ is generated by a regular sequence. 
Let $I_G:=I_Z:I_Q.$ Then $I_G$ is a Gorenstein ideal of depth $3.$ By Lemma \[3gen\], there exists an alternating map $\varphi:H^{\vee}\to H,$ such that the submaximal pfaffians of $\varphi$ are exactly $p_1,p_2,p_3$ and the other generators of $I_G.$ By Proposition \[constr\] the above resolution gives a resolution of $R/I_Q.$ Now we would like to give a graded version of the previous result. Let ${\mathbb P}^r_k$ be the projective space with $r \ge 3$ and $R=k[x_0,x_1,\ldots,x_r]$ the standard graded coordinate ring of ${\mathbb P}^r_k.$ \[grqci\] Let $Q \subset {\mathbb P}^r_k$ be an almost complete intersection scheme of codimension $3$ and $I_Q\subset R$ be its defining ideal. Then $R/I_Q$ admits a graded free resolution of the following type $$0\to K^{\vee}(-d) \to G(-d_0) \oplus K \to G \oplus R(-d_0)\to R$$ where $d_0$ is a positive integer, $G=\oplus_{i=1}^3R(-d_i),$ $d=d_0+d_1+d_2+d_3,$ $K=\oplus_{i=4}^mR(-e_i),$ $m\ge 5$ an odd integer and $d_1, d_2, d_3,e_4-d_0, \ldots, e_m-d_0$ are the degrees of all submaximal pfaffians of a suitable alternating matrix of size $m.$ Let $Z\subset{\mathbb P}^r_k$ be a complete intersection generated by $3$ minimal generators of $I_Q,$ of degrees $d_1,d_2,d_3.$ Let $I_\Gamma:=I_Z:I_Q.$ Then $I_\Gamma$ is the saturated homogeneous ideal of an aG scheme $\Gamma$ directly linked to $Q$ in $Z.$ As in Proposition \[constr\] we have the following diagram $$\xymatrix@R=1.3cm@C=1.1cm{ 0\ar[r]&R(-\vartheta_Z)\ar[r]\ar[d]&{\phantom{a}G^{\vee}}(-\vartheta_Z)\ar[r]\ar[d]& G\ar[r]\ar[d]&R\ar[d] \\ 0\ar[r]&R(-\vartheta_G)\ar[r]&G^{\vee}(-\vartheta_G)\oplus F^{\vee}(-\vartheta_G)\ar[r]^{\phantom{aaaaaaa}\varphi}&G\oplus F\ar[r]&R }$$ where the first row is a minimal graded resolution of $R/I_Z$ and the second row is the graded resolution of $R/I_{\Gamma},$ obtained from the alternating map $\varphi$ as in Lemma \[3gen\].
So we have that $G=\oplus_{i=1}^3R(-d_i)$ and $F=\oplus_{i=4}^mR(-d_i).$ Dualizing this diagram and shifting by $-\vartheta_Z$ we get $$\xymatrix@R=1.1cm@C=0.5cm{ 0\ar[r]&R(-\vartheta_Z)\ar[r]\ar[d]&G^{\vee}(-\vartheta_Z)\oplus F^{\vee}(-\vartheta_Z)\ar[d]\ar[r]&G(-d_0)\oplus F(-d_0)\ar[r]\ar[d]&R(-d_0)\ar[d] \\ 0\ar[r]&R(-\vartheta_Z)\ar[r]&{\phantom{a}G^{\vee}}(-\vartheta_Z)\ar[r]& G\ar[r]&R }$$ where $d_0:=\vartheta_Z-\vartheta_G.$ Taking the mapping cone and after the trivial cancellations we obtain the following resolution of $R/I_Q$ $$0\to F^{\vee}(-\vartheta_Z)\to G(-d_0)\oplus F(-d_0)\to G\oplus R(-d_0)\to R.$$ Now if we set $K:=F(-d_0)$ and $d:=2\vartheta_Z-\vartheta_G=d_0+d_1+d_2+d_3$ we get $$0\to K^{\vee}(-d) \to G(-d_0) \oplus K \to G \oplus R(-d_0)\to R,$$ which is the required resolution. \[d0\] Let $Q \subset {\mathbb P}^r_k$ be an almost complete intersection scheme of codimension $3$ and $I_Q\subset R$ be its defining ideal. Then $R/I_Q$ admits a graded free resolution of the type $$0\to K^{\vee}(-d) \to G(-d_0) \oplus K \to G \oplus R(-d_0)\to R$$ in which $d_0=\min\{\deg p \ | \ p\in I_Q\}.$ Let $d_0\le d_1\le d_2 \le d_3$ be the degrees of a minimal set of generators of $I_Q.$ Since we can find a regular sequence of minimal generators in $I_Q$ of type $d_1, d_2,d_3,$ the conclusion follows by repeating the argument of Theorem \[grqci\] using such a regular sequence.
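The submaximal pfaffians driving these constructions are easy to experiment with in a computer algebra system. As a sketch (not part of the proofs; the function names are ours), the following Python/sympy code computes the pfaffian of an even-size skew-symmetric matrix by expansion along the first row and checks, on a random alternating matrix of odd size, the classical identity $\det N=({\operatorname{pf}}N)^2$ for each principal submatrix.

```python
import random

import sympy as sp

def pfaffian(A):
    """Pfaffian of an even-size skew-symmetric matrix,
    by Laplace-style expansion along the first row."""
    n = A.shape[0]
    if n == 0:
        return sp.Integer(1)
    total = sp.Integer(0)
    for j in range(1, n):
        # delete rows/columns 0 and j
        keep = [k for k in range(n) if k not in (0, j)]
        total += (-1) ** (j + 1) * A[0, j] * pfaffian(A[keep, keep])
    return sp.expand(total)

def submaximal_pfaffians(M):
    """For an alternating matrix of odd size m, the pfaffians
    of the m principal (m-1) x (m-1) submatrices."""
    m = M.shape[0]
    return [pfaffian(M[[k for k in range(m) if k != i],
                       [k for k in range(m) if k != i]])
            for i in range(m)]

# a random integer alternating 5x5 matrix
random.seed(0)
m = 5
U = sp.zeros(m, m)
for i in range(m):
    for j in range(i + 1, m):
        U[i, j] = random.randint(-5, 5)
M = U - U.T                      # alternating: M^T = -M

for i, p in enumerate(submaximal_pfaffians(M)):
    keep = [k for k in range(m) if k != i]
    # pf(N)^2 = det(N) for any even-size skew-symmetric N
    assert p ** 2 == M[keep, keep].det()
```

This recursive expansion is convenient for small sizes; it verifies, for instance, that deleting a row and the corresponding column of an odd-size alternating matrix yields a submatrix whose determinant is the square of the corresponding submaximal pfaffian.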
Let $Q\subset{\mathbb P}^3_k$ be the $0$-dimensional almost complete intersection linked to $5$ general points ($\Gamma$) in a complete intersection $Z$ of type $(2,2,8).$ Then we consider the following graded resolutions $$0\to R(-12)\to R(-4)\oplus R(-10)^2\to R(-2)^2\oplus R(-8)\to R\to R/I_Z\to 0$$ and $$\begin{gathered} 0\to R(-5)\to R(-8)\oplus R(-3)^5\oplus R(3)\to R(-8)\oplus R(-2)^5\oplus R(3)\to \\ \to R\to R/I_{\Gamma}\to 0,\end{gathered}$$ where the last one is the pfaffian resolution of $R/I_{\Gamma}$ obtained from the minimal one by adding the term $R(-8)\oplus R(3)$ to the second and to the third module of the complex. Such a resolution can be built as illustrated in Lemma \[3gen\] and the pfaffians of its alternating central map are the five minimal generators of $I_{\Gamma}$ of degree $2,$ the form of $I_{\Gamma}$ of degree $8$ used to perform the linkage and the null form. So we have $G=R(-2)^2\oplus R(-8),$ $K=R(-9)^3\oplus R(-4),$ $d_0=7$ and $d=19.$ Consequently we get the following resolution of $R/I_Q$ $$\begin{gathered} 0\to R(-15)\oplus R(-10)^3\to [R(-9)^2\oplus R(-15)]\oplus[R(-9)^3\oplus R(-4)]\to \\ \to[R(-8)\oplus R(-2)^2]\oplus R(-7)\to R\to R/I_{Q}\to 0.\end{gathered}$$ Note that a minimal graded resolution of $R/I_Q$ can be obtained by deleting the term $R(-15).$ The graded Betti numbers of almost complete intersections ========================================================= In this section we would like to characterize all graded Betti sequences admissible for an almost complete intersection scheme of codimension $3$ of ${\mathbb P}^r.$ Since we are interested in the graded Betti numbers for such schemes we can restrict ourselves to the Artinian reduction of such schemes. So, from now on, we let $R=k[x_1,x_2,x_3]$ and let $A_Q= R/I_Q$ be an Artinian almost complete intersection graded algebra.
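A first mechanical constraint on such twists: since the quotient has codimension $3$, the polynomial $K(t)=\sum_i(-1)^i\sum_j t^{b_{i,j}}$ built from the twists $b_{i,j}$ of any graded free resolution must be divisible by $(1-t)^3.$ As a quick Python/sympy sanity check (a necessary condition only), here is this test applied to the resolution of $R/I_Q$ from the example above:

```python
import sympy as sp

t = sp.symbols('t')

def numerator(*twist_lists):
    """K(t) = sum_i (-1)^i sum_{a in twist_lists[i]} t^a,
    where twist_lists[i] collects the twists of the i-th free module."""
    return sp.expand(sum((-1) ** i * sum(t ** a for a in tw)
                         for i, tw in enumerate(twist_lists)))

# resolution 0 -> F3 -> F2 -> F1 -> R -> R/I_Q of the example:
K = numerator([0],                      # R
              [8, 2, 2, 7],             # R(-8) + R(-2)^2 + R(-7)
              [9, 9, 15, 9, 9, 9, 4],   # [R(-9)^2 + R(-15)] + [R(-9)^3 + R(-4)]
              [15, 10, 10, 10])         # R(-15) + R(-10)^3

# codimension 3 forces (1 - t)^3 | K(t)
q, r = sp.div(K, (1 - t) ** 3, t)
assert r == 0
```

The quotient $q$ evaluated at $t=1$ recovers the degree of the scheme, so the same computation also checks that the codimension is exactly $3.$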
A graded minimal free resolution of $A_Q$ is of the following form $$0\to \bigoplus_{h \in F}R(-h) \to \bigoplus_{h \in E}R(-h) \to \bigoplus_{h \in D}R(-h) \to R \to A_Q \to 0$$ where $D,E,F$ are multisets of positive integers with $|D|=4,$ $|E|=|F|+3.$ To this end we will consider the following multisets of positive integers $D=\{\{d_0\le d_1\le d_2\le d_3\}\},$ $E=\{\{e_i\}\}_{i=1}^{p+3}$ and $F=\{\{f_i\}\}_{i=1}^{p}$ where $p\ge 2.$ In the sequel we will denote $D^*=\{\{ d_1\le d_2\le d_3\}\}.$ \[l1\] Let $(D,E,F)$ be a Betti sequence admissible for an Artinian almost complete intersection graded algebra of codimension $3.$ Then - $d-F \subset E,$ where $d=\|D\|;$ - if we set $\widehat{E}:=E\setminus (d-F),$ $S:=D^*\cap (\vartheta_Z-\widehat{E}),$ $\overline{D}:=D^* \setminus S$ then $\widehat{E}=(d_0+\overline{D})\sqcup (\vartheta_Z-S).$ 1\) Let $A=R/I_Q $ be an Artinian almost complete intersection graded algebra such that $\beta_{A}=(D,E,F).$ $I_Q$ contains a length $3$ regular sequence of type $(d_1,d_2,d_3)$ which is part of a minimal set of generators for $I_Q;$ let $I_Z$ be the ideal generated by such a regular sequence. Then $I_G:=I_Z:I_Q$ is a Gorenstein ideal of depth $3.$ By the standard mapping cone procedure we obtain the following graded free resolution of $R/I_G$ $$0\to R(-\vartheta_G)\to\bigoplus_{h\in \vartheta_Z-E}R(-h)\to \bigoplus_{h\in \vartheta_Z-F }R(-h)\oplus \bigoplus_{h\in D^*}R(-h)\to R$$ where $D^*=\{\{d_1,d_2,d_3\}\},$ $\vartheta_Z=\|D^*\|,$ $\vartheta_G=\vartheta_Z-d_0.$ Since, by mapping cone, no summand of $\bigoplus_{h\in \vartheta_Z-F }R(-h)$ is cancellable in such a resolution, by Gorenstein duality, $\vartheta_G-(\vartheta_Z-F) \subset \vartheta_Z-E$ hence $\vartheta_Z-(\vartheta_G-(\vartheta_Z-F)) \subset \vartheta_Z-(\vartheta_Z-E),$ i.e.
$d-F \subset E.$ 2\) By our definitions we have $D^*=S \sqcup \overline{D}$ and $\vartheta_Z-\widehat{E}=S \sqcup [(\vartheta_Z-\widehat{E})\setminus S];$ therefore in the previous resolution we have that $$\bigoplus_{h\in \vartheta_Z-E}R(-h)= \bigoplus_{h\in S}R(-h)\oplus\bigoplus_{h\in (\vartheta_Z-\widehat{E})\setminus S}R(-h)\oplus \bigoplus_{h\in -d_0+F }R(-h)$$ $$\bigoplus_{h\in D^*}R(-h)=\bigoplus_{h\in S}R(-h)\oplus \bigoplus_{h\in \overline{D}}R(-h)$$ where the only summands that can possibly cancel are in $\bigoplus_{h\in S}R(-h),$ therefore, by Gorenstein duality, $\vartheta_G-\overline{D}=(\vartheta_Z-\widehat{E})\setminus S,$ hence $\vartheta_Z-(\vartheta_G-\overline{D})= \vartheta_Z-[(\vartheta_Z-\widehat{E})\setminus S],$ i.e. $d_0+\overline{D}=\widehat{E}\setminus (\vartheta_Z -S)$ and we are done. Because of the previous lemma it is convenient to give the following definition. \[aci\] A sequence $(D,E,F)$ of multisets of positive integers is said to be of [[*aci*]{}]{}-type if it satisfies the following conditions - $|D|=4,$ $|E|=|F|+3,$ $|F|\ge 2;$ - $d-F\subset E,$ where $d:=\|D\|;$ - if we set $\widehat{E}:=E\setminus (d-F),$ $d_0:=\min D,$ $D^*:=D\setminus\{d_0\},$ $\vartheta_Z=\|D^*\|,$ $S:=D^*\cap (\vartheta_Z-\widehat{E}),$ $\overline{D}:=D^* \setminus S$ then $\widehat{E}=(d_0+\overline{D})\sqcup (\vartheta_Z-S).$ \[minres\] Let $A=R/I_Q $ be an Artinian almost complete intersection graded algebra such that $\beta_{A}=(D,E,F);$ let $I_Z$ be an ideal generated by a regular sequence of type $(d_1,d_2,d_3)$ which is part of a minimal set of generators for $I_Q$ and $I_G:=I_Z:I_Q$ the linked Gorenstein ideal of depth $3.$ Then the minimal graded free resolution of $A_G=R/I_G$ has the following form $$\begin{array}{c} \displaystyle{\bigoplus_{h\in \vartheta_Z-F }R(-h)\oplus \bigoplus_{h\in \overline{D}}R(-h) \oplus \bigoplus_{h\in \overline{S}}R(-h)}\\ \uparrow\\ \displaystyle{\bigoplus_{h\in -d_0+F }R(-h)\oplus \bigoplus_{h\in\vartheta_G-
\overline{D}}R(-h) \oplus \bigoplus_{h\in \overline{S}}R(-h)}\\ \uparrow\\ R(-\vartheta_G)\\ \uparrow\\ 0 \end{array}$$ where $\overline{S}\subseteq S.$ Now observe that $\vartheta_G- [(\vartheta_Z-F)\sqcup \overline{D}]=(-d_0+F) \sqcup (\vartheta_G- \overline{D})$, so we can apply Proposition \[dual\] and deduce that for $\overline{S}$ there are $4$ possibilities depending on its cardinality. - $\overline{S}=\emptyset;$ - $|\overline{S}|=1;$ in this case, by Proposition 3.7 and Remark 3.8 in [@RZ1], $\overline{S}=\{\vartheta_G/2\};$ - $|\overline{S}|=2;$ in this case, by Proposition 3.7 and Remark 3.8 in [@RZ1], $\overline{S}=\{\{\alpha, \vartheta_G-\alpha\}\}$ for some $\alpha;$ - $|\overline{S}|=3;$ in this case, by Proposition 3.7 and Remark 3.8 in [@RZ1], $\overline{S}=\{\{\vartheta_G/2,\alpha, \vartheta_G-\alpha\}\}$ for some $\alpha.$ In particular, if $\vartheta_G/2 \notin {\operatorname{Supp}}S$ then $|\overline{D}|+|F|$ is odd and consequently $|S|+|F|$ is even. It is easy to produce examples in which $\vartheta_G/2 \in {\operatorname{Supp}}S$ and $|S|+|F|$ is even and examples in which $\vartheta_G/2 \in {\operatorname{Supp}}S$ and $|S|+|F|$ is odd. Now we prove two technical lemmas which will be crucial for the characterization of the Betti sequences of almost complete intersections.
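Before doing so, we remark that the conditions of Definition \[aci\], being purely combinatorial, can be tested mechanically. A sketch in Python using `collections.Counter` for multisets (the function names are ours):

```python
from collections import Counter

def msum(ms):
    """||M||: the sum of the elements of a multiset, with multiplicity."""
    return sum(k * v for k, v in ms.items())

def shift(ms, c):
    """c + M: translate every element of the multiset by c."""
    return Counter({k + c: v for k, v in ms.items()})

def neg(ms):
    """-M: negate every element of the multiset."""
    return Counter({-k: v for k, v in ms.items()})

def is_aci_type(D, E, F):
    """Check the conditions of Definition [aci].
    D, E, F are multisets given as collections.Counter."""
    if sum(D.values()) != 4:
        return False
    if sum(E.values()) != sum(F.values()) + 3 or sum(F.values()) < 2:
        return False
    d = msum(D)
    dF = shift(neg(F), d)            # the multiset d - F
    if dF - E:                       # multiset containment: d - F in E
        return False
    Ehat = E - dF                    # E \ (d - F)
    d0 = min(D)
    Dstar = D - Counter({d0: 1})     # D \ {d0}
    thetaZ = msum(Dstar)
    S = Dstar & shift(neg(Ehat), thetaZ)
    Dbar = Dstar - S
    return Ehat == shift(Dbar, d0) + shift(neg(S), thetaZ)
```

For instance, the sequence $D=\{\{3,6,6,6\}\},$ $E=\{\{8,8,8,10,10,10,12,12,12,12\}\},$ $F=\{\{9,11,11,11,13,13,13\}\}$ considered at the end of this section is of aci-type, even though, as shown there, it fails the finer numerical conditions on ${\operatorname{mci}}\beta_G.$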
\[magg\] Let $A_G$ be an Artinian Gorenstein graded algebra of codimension $3$ whose Betti sequence is $\beta_G.$ Let $(d_1,d_2,d_3)\ge(e_1,e_2,e_3):={\operatorname{mci}}\beta_G,$ with $d_1\le d_2\le d_3$ and $e_1\le e_2\le e_3.$ - If $d_i=e_i$ for some $i$ then for every regular sequence $(g_1,g_2,g_3)$ in $I_G,$ with $\deg g_j=d_j$ for $1\le j\le 3,$ $g_i$ is a minimal generator for $I_G.$ - If $d_i>e_i$ for some $i$ and $I_G$ has a minimal generator of degree $d_i$ then there are in $I_G$ regular sequences $(g_1,g_2,g_3),$ with $\deg g_j=d_j$ for $1\le j\le 3,$ such that $g_i$ is a minimal generator for $I_G$ and regular sequences $(h_1,h_2,h_3),$ with $\deg h_j=d_j$ for $1\le j\le 3,$ such that $h_i$ is not a minimal generator for $I_G.$ 1\) Let us suppose that $d_i=e_i$ for some $i$ and let us consider a regular sequence $(g_1,g_2,g_3)$ in $I_G,$ with $\deg g_j=d_j$ for $1\le j\le 3.$ It is enough to recall that if $i=1$ then $d_1=\min\{\deg f\mid f\in I_G\};$ if $i=2$ then ${\operatorname{depth}}(I_G)_{\le d_2-1}=1;$ if $i=3$ then ${\operatorname{depth}}(I_G)_{\le d_3-1}=2.$ 2\) Let us suppose that $d_i>e_i$ for some $i$ and that $I_G$ has a minimal generator of degree $d_i.$ Let $(f_1,f_2,f_3)$ be a regular sequence in $I_G$ such that $\deg f_j=e_j$ for $1\le j\le 3.$ If we choose $h_j:=f_ja_j,$ where $a_j$ is a generic form of degree $d_j-e_j,$ for $1\le j\le 3,$ we get a regular sequence in which $h_i$ is not a minimal generator for $I_G.$ Now we set $g_j:=f_ja_j,$ where $a_j$ is a generic form of degree $d_j-e_j$ for $j\ne i$ and we take as $g_i$ a generic form in $(I_G)_{d_i}.$ Of course, $(g_1,g_2,g_3)$ is a regular sequence and, since by hypothesis $I_G$ has minimal generators in degree $d_i,$ $g_i$ will be a minimal generator for $I_G.$ \[zzz\] Let $A_{\Gamma}=R/I_{\Gamma}$ be an Artinian Gorenstein graded algebra of codimension $3,$ whose last syzygy degree is $\vartheta_{\Gamma}.$ Let $(f_1,f_2,f_3)$ be a regular sequence in $I_{\Gamma},$ with $\deg
f_i=d_i$ and $d_1\le d_2\le d_3.$ Let us suppose that $d_i+d_j=\vartheta_{\Gamma}$ and $f_i,$ $f_j$ are minimal generators for $I_{\Gamma}$ for some $1 \le i<j\le 3.$ Let $A_{G}$ be an Artinian Gorenstein graded algebra of codimension $3$ whose Betti sequence is obtained from the Betti sequence of $A_{\Gamma}$ by deleting the degrees $d_i$ and $d_j$ among the generators and the first syzygies. If $(e_1,e_2,e_3):={\operatorname{mci}}\beta_G,$ then $d_i>e_i$ and $d_j>e_j.$ Throughout this proof we let $\gamma_1\le\ldots\le\gamma_{2m+1}$ be the degrees of a minimal set of generators for $I_{\Gamma},$ $g_1\le\ldots\le g_{2m-1}$ be the degrees of a minimal set of generators for $I_{G}$ and $(\epsilon_1,\epsilon_2,\epsilon_3):={\operatorname{mci}}\beta_{\Gamma}.$ We start with the most delicate case, i.e. $i=2$ and $j=3.$ By [@RZ3] Theorem 3.9, we have ${\operatorname{mci}}\beta_G\le{\operatorname{mci}}\beta_{\Gamma}\le (d_1,d_2,d_3),$ so if $\epsilon_2<d_2$ the conclusion follows for $d_2.$ Therefore we can suppose that $\epsilon_2=d_2.$ If $A$ is an Artinian Gorenstein graded algebra of codimension $3,$ whose Hilbert function is $H,$ we denote by ${\operatorname{Mng}}_H(n)$ the largest number of minimal generators of degree $n$ compatible with an Artinian Gorenstein graded algebra of codimension $3$ having Hilbert function equal to $H$ (see Remark \[maxg\]). Now, if $H$ is the Hilbert function of $A_{\Gamma},$ according to Proposition \[maxg\] and Theorem \[c3b\], either $\epsilon_2=\gamma_2$ or $I_{\Gamma}$ has ${\operatorname{Mng}}_H(\epsilon_2)$ minimal generators of degree $\epsilon_2.$ But $\epsilon_2=\gamma_2$ implies that $\gamma_2=\epsilon_2=d_2=\vartheta_{\Gamma}-d_3$ is a degree of a first syzygy of $I_{\Gamma},$ which is clearly impossible.
So, $I_{\Gamma}$ has ${\operatorname{Mng}}_H(\epsilon_2)$ minimal generators of degree $\epsilon_2=d_2.$ On the other hand the Hilbert function of $A_G$ is still $H$ and $I_G$ has ${\operatorname{Mng}}_H(d_2)-1$ minimal generators in degree $d_2.$ This implies, again by Proposition \[maxg\] and Theorem \[c3b\], that $e_2<\epsilon_2=d_2.$ Now, if $d_3>\epsilon_3$ we are done. So we can assume that $d_3=\epsilon_3.$ Observe first that $\epsilon_3 > \gamma_3:$ indeed, otherwise, $d_3= \epsilon_3 = \gamma_3$ and consequently $d_2= \epsilon_2 = \gamma_2$ (recall that $d_2$ and $d_3$ are degrees of minimal generators for $I_{\Gamma}$) and again we would have a first syzygy of degree $\gamma_2$ for $I_{\Gamma}.$ So we have two possibilities: either $\epsilon_2=\gamma_2$ or $\epsilon_2>\gamma_2.$ If $\epsilon_2=\gamma_2,$ since $\epsilon_3 > \gamma_3$, by Theorem \[c3b\] $B_{\Gamma}=\emptyset$ and $C_{\Gamma}\ne \emptyset.$ If $\overline{B}_{\Gamma}=\emptyset$ then $I_{\Gamma}$ has ${\operatorname{Mng}}_H(\epsilon_3)-1$ minimal generators in degree $\epsilon_3.$ Then $e_2=\gamma_2$ and $I_G$ has ${\operatorname{Mng}}_H(\epsilon_3)-2$ generators in degree $\epsilon_3;$ therefore $e_3<\epsilon_3,$ i.e. $d_3>e_3.$ If $\overline{B}_{\Gamma}=\{n+2\}$ (see Remark \[Bvuoto\]) then $\gamma_{n+2}\ge \vartheta_{\Gamma}-\gamma_{n+2}>\gamma_{n+1}$ hence $e_3\le g_{n+1}=\gamma_{n+1}<\gamma_{n+2}=\epsilon_3$ (note that, in this case, $g_i=\gamma_i,$ for $i\le n+1$).
Finally, if $\epsilon_2>\gamma_2$ then $$B_{\Gamma}=\{3\le h\le m+1\mid\vartheta_{\Gamma}\le\gamma_h+\gamma_{2m+4-h}\}\ne \emptyset$$ (as defined in Theorem \[c3b\]) and consequently $I_{\Gamma}$ has ${\operatorname{Mng}}_H(d_3)$ minimal generators in degree $d_3.$ Let $$B_{G}=\{3\le h\le m\mid\vartheta_{\Gamma}\le g_h+g_{2m+2-h}\}$$ and $$C_G=\{4\le h\le m+1\mid\vartheta_{\Gamma}\le g_h+g_{2m+3-h}\}.$$ Now, if $B_G\ne\emptyset,$ since $I_G$ has ${\operatorname{Mng}}_H(d_3)-1$ minimal generators in degree $d_3,$ we see that $e_3<\epsilon_3=d_3.$ If $B_G=\emptyset,$ setting $b:=\max B_{\Gamma},$ there exist only two degrees, $\gamma_b$ and $\gamma_{2m+4-b},$ in which $I_{\Gamma}$ has the maximum number of minimal generators allowed by $H.$ Therefore $\vartheta_{\Gamma}-\gamma_b=\gamma_{2m+4-b}=\epsilon_3=d_3$ and this implies also that $d_2=\gamma_b=\epsilon_2.$ Moreover $g_h=\gamma_h$ for $1\le h\le b-1,$ $g_h=\gamma_{h+1}$ for $b\le h\le 2m+2-b$ and $g_h=\gamma_{h+2}$ for $2m+3-b\le h\le 2m-1.$ Therefore $$g_b=\gamma_{b+1}\ge\gamma_b=\vartheta_{\Gamma}-\gamma_{2m+4-b}\ge\vartheta_{\Gamma}-\gamma_{2m+5-b}= \vartheta_{\Gamma}-g_{2m+3-b}$$ i.e. $b\in C_G.$ Let $c:=\max C_G.$ Of course $b\le c\le m+1\le 2m+2-b,$ so $e_3=g_c=\gamma_{c+1}.$ Therefore we have $g_c=\gamma_{c+1}=e_3 \le \epsilon_3=d_3=\gamma_{2m+4-b}.$ Suppose that $e_3= \epsilon_3=d_3,$ i.e. $\gamma_{c+1}=\gamma_{2m+4-b}.$ Since $c+1\le m+2<2m+4-b,$ we have $\gamma_h=\gamma_{2m+4-b}$ for $c+1\le h<2m+4-b.$ Consequently, $\gamma_{h}+\gamma_{2m+4-h}\ge\gamma_{2m+4-b}+\gamma_b=\vartheta_{\Gamma},$ i.e. $h$ and $2m+4-h$ satisfy the inequality defining $B_{\Gamma};$ but $h>b$ and $2m+4-h>b$ therefore $h\not\in B_{\Gamma}$ and $2m+4-h\not\in B_{\Gamma}$ so we get that $h\ge m+2$ and $2m+4-h\ge m+2$ i.e. $h=m+2.$ So we are reduced to the case $c+1=m+2$ and $2m+4-b=m+3$ i.e.
$c=b=m+1$ and $\gamma_{m+2}=\gamma_{m+3}.$ So we have that $$\vartheta_{\Gamma}=\gamma_b+\gamma_{2m+4-b}=\gamma_{m+1}+\gamma_{m+3}= \gamma_{m+1}+\gamma_{m+2}<\vartheta_{\Gamma}$$ where the last inequality holds by the Gaeta–Diesel conditions. So we get a contradiction and consequently $e_3<\epsilon_3=d_3.$\ The remaining cases are simpler. So, if $i=1$ and $j=2,$ since $\vartheta_{\Gamma}-d_2=d_1$ is a first syzygy for $I_{\Gamma}$ we have $d_1>\epsilon_1\ge e_1.$ Moreover, $d_2=\vartheta_{\Gamma}-d_1\ge \vartheta_{\Gamma}/2\ge \gamma_{m+1}\ge \epsilon_2.$ Thus, if $d_2=\epsilon_2$ we would have $d_2=\vartheta_{\Gamma}/2=d_1=\gamma_{m+1}$ and, consequently, $\vartheta_{\Gamma}/2=\vartheta_{\Gamma}-\gamma_{m+1}> \gamma_{m+2}\ge \vartheta_{\Gamma}/2,$ a contradiction, therefore $d_2>\epsilon_2\ge e_2.$\ Finally, if $i=1$ and $j=3,$ as before we easily get that $d_1>\epsilon_1\ge e_1.$ Of course, we can suppose $d_3=\epsilon_3.$ We then show that $B_{\Gamma}\ne \emptyset:$ indeed, otherwise, $d_3=\epsilon_3\le \gamma_{m+2},$ let us say $d_3=\gamma_h$ for some $h\le m+2.$ Thus we would have $$d_1=\vartheta_{\Gamma}-d_3=\vartheta_{\Gamma}-\gamma_h>\gamma_{2m+3-h}\ge \gamma_{m+1}\ge d_2$$ a contradiction. Therefore, we have $B_{\Gamma}\ne \emptyset$ and as in the case $i=2$ and $j=3$ we are reduced to the case in which $B_{\Gamma}$ has only one element $b.$ In this case we have $\epsilon_2=\gamma_b$ and $\epsilon_3=d_3=\gamma_{2m+4-b};$ then $d_1=\vartheta_{\Gamma}-d_3=\vartheta_{\Gamma}-\gamma_{2m+4-b}>\gamma_{b-1}$ and $\gamma_{b}\ge \vartheta_{\Gamma}-d_3=d_1$ since $b \in B_{\Gamma},$ i.e. $d_1=\gamma_b$ and the conclusion $e_3<\epsilon_3=d_3$ follows as in the final part of the previous case $i=2$ and $j=3.$ \[betti\] Let $(D,E,F)$ be a sequence of multisets of positive integers.
$(D,E,F)$ is a Betti sequence admissible for an Artinian almost complete intersection algebra of codimension $3$ if and only if it satisfies the following conditions: - it is an [[*aci*]{}]{}-type sequence; - $\beta_G:=(G_0,G_1,G_2)$ is the Betti sequence of a $3$-codimensional Gorenstein Artinian graded algebra, where $$G_0:=(\vartheta_Z-F)\sqcup\overline{D}\sqcup T$$ $$G_1:=(-d_0+F)\sqcup(\vartheta_G-\overline{D})\sqcup T$$ $$G_2:=\{\vartheta_G\}$$ with $\vartheta_G:=\vartheta_Z-d_0$ and $T:=\{\vartheta_G/2\}$ when $\vartheta_G/2\in {\operatorname{Supp}}S$ and $|F|+|\overline{D}|$ is even and $T:=\emptyset$ otherwise. - if we set $(e_1,e_2,e_3):={\operatorname{mci}}\beta_G,$ $e_1\le e_2\le e_3,$ $D^*=\{\{d_1,d_2,d_3\}\},$ $d_1\le d_2\le d_3$ then $d_i\ge e_i$ for $1\le i\le 3$ and for every $s\in {\operatorname{Supp}}(S\setminus T),$ $d_i>e_i$ for $i:=\min\{j\mid d_j=s\}+\mu_{S\setminus T}(s)-1.$ Let us suppose that $(D,E,F)$ is the Betti sequence of an almost complete intersection of codimension $3.$ By Lemma \[l1\] $(D,E,F)$ is an [[*aci*]{}]{}-type sequence. Using the same terminology as in Definition \[aci\] we denote $D=\{\{d_0,d_1,d_2,d_3\}\}$ with $d_0\le d_1\le d_2\le d_3.$ Let $A_Q=R/I_Q$ be an almost complete intersection Artinian graded algebra with Betti sequence $(D,E,F)$ and $A_Z=R/I_Z$ be a complete intersection Artinian graded algebra of type $(d_1,d_2,d_3)$ with $I_Z$ generated by minimal generators $f_1,f_2,f_3$ of $I_Q.$ Note that such an $A_Z$ must exist since $d_1,d_2,d_3\ge d_0.$ Let $I_{\Gamma}:=I_Z:I_Q.$ Of course, $A_{\Gamma}=R/I_{\Gamma}$ is a Gorenstein Artinian graded algebra.
By Remark \[minres\], a minimal graded free resolution of $A_{\Gamma}$ will be of the type $$\begin{array}{c} \displaystyle{\bigoplus_{h\in \vartheta_Z-F }R(-h)\oplus \bigoplus_{h\in \overline{D}}R(-h) \oplus \bigoplus_{h\in \overline{S}}R(-h)}\\ \uparrow\\ \displaystyle{\bigoplus_{h\in -d_0+F }R(-h)\oplus \bigoplus_{h\in\vartheta_G- \overline{D}}R(-h) \oplus \bigoplus_{h\in \overline{S}}R(-h)}\\ \uparrow\\ R(-\vartheta_{\Gamma})\\ \uparrow\\ 0 \end{array}$$ Using Remark \[minres\] one sees that either $\overline{S}=T$ or $\overline{S}\setminus T=\{\{\alpha, \vartheta_G-\alpha\}\}$ for some $\alpha;$ therefore if we replace $\overline{S}$ with $T$ in the previous resolution we get a graded minimal free resolution of a Gorenstein Artinian graded algebra $A_G$ with $\vartheta_G=\vartheta_{\Gamma}$ (see [@RZ1] Remark 3.8). Hence condition $2)$ is verified. Now observe that $\beta_{G} \le \beta_{\Gamma}$ hence ${\operatorname{mci}}\beta_{G}\le {\operatorname{mci}}\beta_{\Gamma}$ (see [@RZ3] Theorem 3.9). Since $I_Z \subseteq I_{\Gamma},$ $(d_1,d_2,d_3)\ge {\operatorname{mci}}\beta_{\Gamma} \ge {\operatorname{mci}}\beta_G=(e_1,e_2,e_3).$ Let $s \in {\operatorname{Supp}}(S\setminus T)$ and $i=\min\{j\mid d_j=s\}+\mu_{S\setminus T}(s)-1,$ and set $m:=\mu_{S\setminus T}(s).$ Then, among $f_1,f_2,f_3,$ there are $m$ forms, of degree $s,$ such that each of them is either a generator not minimal for $I_{\Gamma}$ or it is a minimal generator for $I_{\Gamma},$ but there exists a minimal generator for $I_{\Gamma}$ of degree $\vartheta_{\Gamma}-s.$ So, by Lemma \[magg\] and Lemma \[zzz\], $|\{j\mid d_j=s,\,d_j>e_j\}|\ge m.$ Let $h:=\max\{j\mid d_j=s,\,d_j>e_j\}.$ Then $$h\ge\min\{j\mid d_j=s,\,d_j>e_j\}+m-1\ge i:=\min\{j\mid d_j=s\}+m-1,$$ therefore $s=d_i=d_h>e_h\ge e_i$ and we are done.
Conversely, since $(D,E,F)$ is of [[*aci*]{}]{}-type we have - $D=\{d_0\} \sqcup \overline{D} \sqcup S,$ $|\overline{D} \sqcup S|=3;$ - $E=(d-F)\sqcup (d_0+\overline{D})\sqcup (\vartheta_Z-S).$ Let $A_G=R/I_G$ be a $3$-codimensional Artinian Gorenstein algebra with Betti sequence $\beta_G=(G_0,G_1,G_2);$ by condition 3), using part 2) of Lemma \[magg\] for every $s \in {\operatorname{Supp}}(S\setminus T)$ with multiplicity $\mu_{S\setminus T}(s)=m$ we can find a regular sequence of length $3$ in $I_G$ with $m$ elements of degree $s$ which are not minimal generators for $I_G$ (and the other elements minimal generators for $I_G$). Let $I_Z$ be the complete intersection generated by this regular sequence and set $I_Q:=I_Z:I_G;$ by the mapping cone procedure we see that the minimal resolution of $I_Q$ will be of the following type: $$\begin{array}{c} \displaystyle{R(-d_0)\oplus \bigoplus_{h\in \overline{D}}R(-h) \oplus \bigoplus_{h\in S}R(-h)}\\ \uparrow\\ \displaystyle{\bigoplus_{h\in d-F }R(-h)\oplus \bigoplus_{h\in d_0+ \overline{D}}R(-h) \oplus \bigoplus_{h\in \vartheta_Z-S}R(-h)}\\ \uparrow\\ \displaystyle{\bigoplus_{h\in F }R(-h)}\\ \uparrow\\ 0 \end{array}$$ which means that $(D,E,F)$ is a Betti sequence of an almost complete intersection. In this example we produce a sequence in which all the conditions of Theorem \[betti\] hold except the last one, so it is not admissible as a Betti sequence for an almost complete intersection.
Let $D=\{\{3,6,6,6\}\},$ $E=\{\{8,8,8,10,10,10,12,12,12,12\}\},$ $F=\{\{9,11,11,11,13,13,13\}\}$ and $\beta=(D,E,F).$ Then $\vartheta_Z=18,$ $\vartheta_G=15,$ $d=21,$ $D^*=\{\{6,6,6\}\},$ $S=\{\{6,6,6\}\},$ $G_0=\{\{5,5,5,7,7,7,9\}\},$ $G_1=\{\{6,8,8,8,10,10,10\}\},$ $G_2=\{15\}.$ Note that $\beta_G:=(G_0,G_1,G_2)$ is a Gorenstein Betti sequence with ${\operatorname{mci}}\beta_G=(5,5,7),$ hence $(6,6,6)\not\ge {\operatorname{mci}}\beta_G.$ In this example we produce a sequence in which all the conditions of Theorem \[betti\] hold, except the one regarding $i=\min\{j\mid d_j=s\}+\mu_{S\setminus T}(s)-1$ for some $s\in {\operatorname{Supp}}(S\setminus T),$ so it is not admissible as a Betti sequence for an almost complete intersection. Let $D=\{\{2,5,5,7\}\},$ $E=\{\{7,7,7,9,9,9,10,11\}\},$ $F=\{\{8,10,10,10,12\}\}$ and $\beta=(D,E,F).$ Then $\vartheta_Z=17,$ $\vartheta_G=15,$ $d=19,$ $D^*=\{\{5,5,7\}\},$ $S=\{7\},$ $G_0=\{\{5,5,5,7,7,7,9\}\},$ $G_1=\{\{6,8,8,8,10,10,10\}\},$ $G_2=\{15\}.$ Note that $\beta_G:=(G_0,G_1,G_2)$ is a Gorenstein Betti sequence with ${\operatorname{mci}}\beta_G=(5,5,7),$ $(d_1,d_2,d_3)=(5,5,7)\ge{\operatorname{mci}}\beta_G,$ but $3=\min\{j\mid d_j=7\}+\mu_{S\setminus T}(7)-1$ and $d_3=7$ is not greater than $e_3=7.$ The following sequences are admissible as Betti sequences for an almost complete intersection: $$(D,E,F)=\big(\{4,5^{(2)},9\},\{9^{(3)},11^{(3)},13\},\{12^{(3)},14\}\big)$$ and $$(D,E,F)=\big(\{4,5^{(2)},9\},\{9^{(3)},10,11^{(3)},13\},\{10,12^{(3)},14\}\big).$$ Note that we can obtain the first one from the second one by deleting the ghost degree $10.$ [RZ3]{} D. A. Buchsbaum, D. Eisenbud, *Algebra structures for finite free resolutions, and some structure theorems for ideals of codimension 3*, Amer. J. Math. **99**(1) (1977), 447–485. S. Diesel, *Irreducibility and dimension theorems for families of height 3 Gorenstein algebras*, Pacific J. Math. **172**(4) (1996), 365–397. F.
Gaeta, *Quelques progrès récents dans la classification des variétés algébriques d’un espace projectif*, Deuxième Colloque de Géométrie Algébrique, Liège, (1952), pp. 145–183. A. Iarrobino, V. Kanev, *Power Sums, Gorenstein Algebras, and Determinantal Loci*, Lecture Notes in Math. **1721**, Springer-Verlag (1999). J. Migliore, *Introduction to liaison theory and deficiency modules*, Progress in Math., vol. 165, Birkhäuser, Boston (1998). J. Migliore, R. Miró-Roig, *On the minimal free resolution of $n+1$ general forms*, Trans. Amer. Math. Soc. **355** (2003), no. 1, 1–36. C. Peskine, L. Szpiro, *Liaison des variétés algébriques. I*, Invent. Math. **26** (1974), 271–302. A. Ragusa, G. Zappalà, *Properties of $3$-codimensional Gorenstein schemes*, Comm. Algebra **29**(1) (2001), 303–318. A. Ragusa, G. Zappalà, *Gorenstein schemes on general surfaces*, Nagoya Math. J. **162** (2001), 111–125. A. Ragusa, G. Zappalà, *Complete intersections containing Cohen Macaulay and Gorenstein schemes*, preprint. S. Seo, *Almost complete intersections*, J. Algebra **320** (2008), 2594–2609. [(A. Ragusa) Dip. di Matematica e Informatica, Università di Catania,\ Viale A. Doria 6, 95125 Catania, Italy]{} [*E-mail address:* ]{}[ragusa@dmi.unict.it]{} [*Fax number:* ]{}[+39095330094]{} [*E-mail address:* ]{}[zappalag@dmi.unict.it]{} [*Fax number:* ]{}[+39095330094]{}
--- abstract: 'The MSSM with right-handed neutrino supermultiplets, gauged $B-L$ symmetry and a non-vanishing sneutrino expectation value is the minimal theory that spontaneously breaks $R$-parity and is consistent with the bounds on proton stability and lepton number violation. This minimal $B-L$ MSSM can have a colored/charged LSP, of which a stop LSP is the most amenable to observation at the LHC. We study the $R$-parity violating decays of a stop LSP into a bottom quark and charged leptons–the dominant modes for a generic “admixture” stop. A numerical analysis of the relative branching ratios of these decay channels is given using a wide scan over the parameter space. The fact that $R$-parity is violated in this theory by a vacuum expectation value of a sneutrino links these branching ratios directly to the neutrino mass hierarchy. It is shown how a discovery of bottom-charged lepton events at the LHC can potentially determine whether the neutrino masses are in a normal or inverted hierarchy, as well as determining the $\theta_{23}$ neutrino mixing angle. Finally, present LHC bounds on these leptoquark signatures are used to put lower bounds on the stop mass.' author: - | [Zachary Marshall${}^{1}$, Burt A. Ovrut${}^{2}$, Austin Purves${}^{2}$ and Sogee Spinner${}^{2}$]{}\ [*${}^{1}$ Physics Division, Lawrence Berkeley National Laboratory*]{}\ [*Berkeley, CA 94704*]{}\ [*${}^{2}$ Department of Physics, University of Pennsylvania*]{}\ [*Philadelphia, PA 19104–6396*]{}\ --- =10000 Introduction {#introduction .unnumbered} ============ The extension of the standard $SU(3)_{C} \times SU(2)_{L} \times U(1)_{Y}$ model of particle physics, with or without right-handed neutrinos, to $N=1$ supersymmetry (SUSY) is immediately confronted by a fundamental problem. 
Without any further constraints, the superpotential must contain cubic superfield interactions that violate both baryon number ($B$) and lepton number ($L$)–thus leading, at tree level, to potentially rapid proton decay and unobserved lepton number violating processes. The conventional “natural” solution to this problem is to demand that the Lagrangian be invariant under a discrete $R$-parity, $R=(-1)^{3(B-L)+2s}$ where $s$ is the spin of the component particle. This symmetry indeed eliminates the dangerous $B$ and $L$ violating interactions, and is consistent with the observed constraints on these quantities. The $R$-parity invariant supersymmetric extension of the standard $SU(3)_{C} \times SU(2)_{L} \times U(1)_{Y}$ model of particle physics, with or without right-handed neutrinos, is referred to as the minimal supersymmetric standard model (MSSM), and is the usual paradigm for a low energy supersymmetric particle physics model. Be this as it may, from the low energy point of view the imposition of discrete $R$-parity is completely [*ad hoc*]{}. There have been many attempts to justify it by 1) embedding the MSSM into a supersymmetric grand unified theory (GUT), *e.g.* [@Aulakh:2000sn], or 2) deriving it as a residual topological, finite or anomalous Abelian symmetry of a superstring vacuum [@Braun:2006me; @Anderson:2010tc]. Without prejudice as to the efficacy or physical reality of these attempts, there is another way to arrive at the same results which is straightforward and natural, and does not require the introduction of any superfields beyond those of the MSSM with right-handed neutrino supermultiplets. This is as follows. It has been known for a long time that the right-handed neutrino version of the SM–and its MSSM extension–remains anomaly free if one enlarges the gauge group to $SU(3)_{C} \times SU(2)_{L} \times U(1)_{Y} \times U(1)_{B-L}$. Furthermore, note that $R$-parity is a discrete ${\mathbb{Z}}_{2}$ subgroup of $U(1)_{B-L}$.
It follows that one can “naturally” incorporate $R$-parity conservation into the MSSM with right-handed neutrinos simply by extending the gauge group to $SU(3)_{C} \times SU(2)_{L} \times U(1)_{Y} \times U(1)_{B-L}$. However, since it is unobserved at the electroweak scale, this gauged $U(1)_{B-L}$ symmetry must be broken at, say, a TeV scale or above. There have been attempts to do this, while leaving $R$-parity unbroken. This can only be accomplished, however, by introducing new chiral multiplets with even $B-L$ charge [@Font:1989ai]. That is, one must go beyond the MSSM particle content and introduce new fields into the spectrum. However, one need not preserve $R$-parity if the scale of its breaking is sufficiently low–for example, at a TeV. This can be accomplished if one, or more, of the right-handed sneutrino scalars–each carrying an odd $B-L$ charge–develop a vacuum expectation value (VEV). This does not require the introduction of any additional multiplets and is consistent with proton stability–since a sneutrino VEV breaks lepton number only–and the bounds on lepton violation. We will refer to this theory as the minimal $B-L$ MSSM. It was introduced from the “bottom up” point of view in [@FileviezPerez:2008sx; @Barger:2008wn; @Everett:2009vy][^1]. It was also found from a “top down” perspective to be the low energy theory associated with a class of vacua of $E_{8} \times E_{8}$ heterotic $M$-theory [@Lukas:1998yy; @Braun:2005ux; @Braun:2005nv; @Braun:2006ae; @Ambroso:2009jd]. Various aspects of this minimal theory were subsequently discussed, such as the radiative breakdown of the $U(1)_{B-L}$ gauge symmetry and its hierarchy with electroweak breaking [@Ovrut:2012wg; @Ambroso:2009sc; @Ambroso:2010pe], the neutrino sector [@Mohapatra:1986aw; @Ghosh:2010hy; @Barger:2010iv], possible LHC signals [@FileviezPerez:2012mj; @Perez:2013kla] and some cosmological effects [@Perez:2013kla]. 
We take the point of view that this $B-L$ MSSM is the minimal possible extension of the MSSM that is consistent with proton stability and observed lepton violation bounds. Hence, it is potentially a realistic candidate for a low energy $N=1$ supersymmetric particle physics model. With this in mind, we wish to study the dominant signatures of this model at the Large Hadron Collider (LHC) that can distinguish it from the MSSM. The initial results of this study are presented in this paper. We find that there are three distinct phenomena that can occur in the minimal $B-L$ MSSM that are potentially observable at the LHC and sharply distinguish this model from the MSSM. These are the following. - Since $R$-parity is violated in the minimal $B-L$ MSSM, it is now possible that the lightest supersymmetric particle (LSP)[^2] can carry color and/or electric charge without coming into conflict with astrophysical data. This is because the LSP can now decay sufficiently quickly via $R$-parity violating operators. Furthermore, the specific nature of this theory–which exactly specifies the $R$-parity violating vertices and their relative strengths–determines all LSP decay products and their branching ratios. - The “Higgs” field that spontaneously breaks $U(1)_{B-L}$ in this minimal model is at least one of the right-handed sneutrinos. It follows that the neutrino sector in this theory is intimately related to the $R$-parity violating operators and, hence, to the allowed decay products of the LSP and their branching ratios. Put the other way, observation at the LHC of the relative branching ratios of the LSP decays can directly inform the structure of the neutrino mass matrix–specifically, whether there is a “normal” or an “inverted” neutrino mass hierarchy–and can potentially remove the ambiguity in the measurement of the $\theta_{23}$ mixing angle, for which the data currently allow two distinct central values.
- As mentioned above, the minimal $B-L$ theory exactly specifies the allowed $R$-parity violating decays of the LSP. For a chosen LSP, these decay signatures, which are disallowed within the $R$-parity invariant MSSM, can be rather unique. Data on such decays at the LHC can then be used to put a lower bound on the LSP mass. We hasten to point out that confirmation at the LHC of $R$-parity violating LSP decays consistent with the minimal $B-L$ MSSM is not sufficient to establish its reality. Full confirmation of this theory would require at least two other specific discoveries: 1) a massive vector boson in the TeV range corresponding to $B-L$ and 2) the existence of some other explicit superpartner. Be that as it may, a careful study of the three issues discussed in the bullet points–and their implications for the LHC–would be a major step in either confirming, putting bounds on, or disproving the minimal $B-L$ MSSM. We now present the results of such a study. The technical details will be presented in a forthcoming publication [@Purves]. $R$-Parity Violation and Stop LSP Decays {#r-parity-violation-and-stop-lsp-decays .unnumbered} ======================================== First a technical point. It will be assumed in this paper that all gauge couplings of the minimal $B-L$ MSSM unify at a high scale. Under this assumption, we find it easier to work with the rotated Abelian gauge groups $U(1)_{3R} \times U(1)_{B-L}$ rather than $U(1)_{Y} \times U(1)_{B-L}$, since the former, unlike the original gauge group, has no kinetic mixing at any scale. This greatly simplifies the calculations, while changing none of the physics conclusions. 
It was shown in [@Ghosh:2010hy; @Barger:2010iv] that within the minimal $B-L$ MSSM all non-vanishing right-handed sneutrino VEV’s can, without loss of generality, be rotated into the third family, and that this VEV is given by $$v_R^2=\frac{-8m^2_{\tilde \nu_{3}^c} + g_R^2\left(v_u^2 - v_d^2 \right)}{g_R^2+g_{BL}^2} \label{B1}$$ where $m_{{\tilde \nu_{3}}^c}$ and $v_{u}$, $v_{d}$ are the third family sneutrino soft SUSY breaking mass parameter and the up-, down-Higgs VEV’s respectively. The parameters $g_{R}$ and $g_{BL}$ are the gauge couplings for $U(1)_{3R}$ and $U(1)_{B-L}$. Furthermore, $v_{R}$ induces a smaller VEV for each of the left-handed sneutrinos given by $${v_L}_i=\frac{\frac{v_R}{\sqrt 2}(\mu \, Y_{\nu_{i3}}^*v_d-a_{\nu_{i3}}^*v_u)}{m_{\tilde L_{i}}^2-\frac{g_2^2}{8}(v_u^2-v_d^2)-\frac{g_{BL}^2}{8}v_R^2} \label{B2}$$ for $i=1,2,3$. Here $Y_{\nu_{i3}}$ and $m_{\tilde L_{i}}$ are the neutrino $(i3)$-Yukawa couplings and the left-handed sneutrino soft SUSY breaking mass parameters respectively, $\mu$ is the mu-parameter, $a_{\nu_{i3}}$ are the $(i3)$-components of the sneutrino tri-linear soft SUSY breaking terms, and $g_{2}$ is the gauge coupling parameter for $SU(2)_{L}$. These expectation values spontaneously break the gauged $U(1)_{3R} \times U(1)_{B-L}$ symmetry down to $U(1)_{Y}$. When expanded around these VEV’s, explicit $R$-parity violating terms appear in the Lagrangian. It is these terms that lead to decays of the LSP. These terms are similar to explicit bilinear $R$-parity violation in the MSSM, although there are important differences stemming from the neutrino sector. For example, bilinear $R$-parity violation has only one massive neutrino at tree level, whereas our model has two. For earlier works on non-LSP stop decays see [@Diaz:1999ge; @Restrepo:2001me; @Datta:2006ak]. 
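To get a feel for the scales involved, eq. (\[B1\]) can be evaluated numerically. The sketch below is ours, with illustrative inputs of our own choosing–a tachyonic third-family sneutrino soft mass-squared of order $-(1~\text{TeV})^2$ (consistent with the radiative breaking scenario of the cited works), gauge couplings of order $0.45$, and electroweak VEV's corresponding to $\tan\beta\approx 10$. None of these numbers are taken from the paper.

```python
# Numerical sketch of the VEV formula (B1); all inputs are illustrative
# assumptions, not values fitted or quoted in the text.
import math

def v_R(m2_snu3, g_R, g_BL, v_u, v_d):
    """Right-handed sneutrino VEV from eq. (B1); m2_snu3 in GeV^2, result in GeV."""
    num = -8.0 * m2_snu3 + g_R**2 * (v_u**2 - v_d**2)
    if num <= 0:
        raise ValueError("U(1)_3R x U(1)_B-L unbroken for these inputs")
    return math.sqrt(num / (g_R**2 + g_BL**2))

# Tachyonic soft mass-squared of -(1 TeV)^2 drives the breaking:
vR = v_R(m2_snu3=-(1.0e3)**2, g_R=0.45, g_BL=0.45, v_u=244.0, v_d=24.4)
print(f"v_R ~ {vR/1e3:.1f} TeV")
```

With these inputs the breaking scale comes out in the few-TeV range, i.e. comfortably above the electroweak scale, as the hierarchy discussion in the cited works requires.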
In addition, for a study on the relationship between neutrino masses and collider phenomenology in the MSSM with explicit trilinear $R$-parity violation, see [@Barger:2001xe]. Generically, within the minimal $B-L$ MSSM any superpartner can potentially be the LSP. Be that as it may, colored particles are more readily produced at the LHC and, hence, one can put more aggressive bounds on their decays. Furthermore, if one assumes unification of the gauge coupling parameters, then it was shown in [@Ovrut:2012wg] that the gluino cannot be the LSP. Therefore, one is driven to consider squark LSP’s only. However, it is well-known from renormalization group analyses of the mass parameters [@Martin:1997ns] that the third family of squarks is generically the lightest. Hence, one should consider both the stop and the sbottom as potential LSP candidates. In this paper, we will, for simplicity, limit the discussion to a stop LSP, deferring the analysis of a sbottom LSP to a forthcoming paper. The left stop-right stop mass matrix is a function of a number of parameters in the $B-L$ MSSM Lagrangian. This can be diagonalized into a light stop, denoted ${\tilde{t}}_{1}$, which we take to be the LSP and a heavier stop, ${\tilde{t}}_{2}$, which can henceforth be ignored. This LSP can be shown to always decay via $R$-parity violating interactions into a lepton and a quark–that is, ${\tilde{t}}_{1}$ behaves as a “leptoquark”. Furthermore, if one only considers generic values of the left and right stop mixing angle, denoted by $\theta_{t}$–that is, ${\tilde{t}}_{1}$ is a generic admixture of the left and right stops and [*not*]{} purely a right stop–then the decay into a bottom quark and a charged lepton dominates over the decay into a top quark and a neutrino. This latter decay will be neglected here, but discussed in detail in [@Purves]. The dominant decay channels are therefore
$${\tilde{t}}_{1} \longrightarrow b~ \ell^+_{i}~,~~i=1,2,3 \label{B3}$$ where $b$ is the bottom quark and $\ell^+_{i}, i=1,2,3$, are the positron, anti-muon and anti-tau respectively. The partial widths of a stop LSP into bottom–charged leptons can be calculated, and are found to be $$\Gamma(\tilde t_1 \to b \, \ell^+_i)=\frac{1}{16\pi}(|G^L_{{\tilde t}_{1} b\ell_{i}}|^2+|G^{R}_{{\tilde t}_{1}b\ell_{i}}|^2)m_{\tilde t_1} \label{B4}$$ where, $G^L_{\tilde t_{1} b\ell_{i}}$ and $G^R_{\tilde t_{1} b\ell_{i}}$ are complicated functions of a large number of parameters in the $B-L$ MSSM Lagrangian and $m_{{\tilde t}_{1}}$ is the LSP mass. To illustrate this parameter dependence, we note that they can be approximated by $$\begin{aligned} G^L_{\tilde t_1 b \ell_i} & =&-Y_b c_{\theta_t} \frac{1}{\mu} \epsilon_i \label{B5} \\ G^R_{\tilde t_1 b \ell_i} & =& -g_2^2 c_{\theta_t} \frac{\tan \beta m_{\ell_i}}{\sqrt 2 M_2 \mu} {v_L}_i^* - Y_t s_{\theta_t} \frac{m_{\ell_i}}{\sqrt 2 v_d \mu} {v_L}_i^* \label{B6} \end{aligned}$$ where $\epsilon_i = \frac{1}{\sqrt 2} {Y_\nu}_{i3} v_R $, $Y_b$ and $Y_{t}$ are the bottom and top quark Yukawa couplings respectively, $M_{2}$ is the $SU(2)_{L}$ gaugino mass and $m_{\ell_{i}}, i=1,2,3$ are the physical $e,\mu,\tau $ masses. In our numerical results, however, the exact form of both $G^L_{\tilde t_{1} b\ell_{i}}$ and $G^R_{\tilde t_{1} b\ell_{i}}$ will be used. The various parameters entering the vacuum expectation values (\[B1\]),(\[B2\]) and the partial widths (\[B4\]) come in two classes, those–such as $Y_{b}$, $Y_{t}$, $m_{\ell_{i}}$ and the gauge coupling $g_{2}$–that are physically measured quantities whose values we simply insert, and the rest, which form a large parameter space over which one must scan. Of this latter type, there are a number of constraints which relate them–such as demanding unification of the $g_{3}$, $g_{2}$, $g_{R}$ and $g_{BL}$ gauge couplings with related implications for the gaugino masses. 
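For orientation, eq. (\[B4\]) translates small effective couplings into a width and hence a decay length $c\tau = \hbar c/\Gamma$ (with $\hbar c \approx 1.97\times 10^{-16}$ GeV$\cdot$m). The sketch below is ours; the coupling values are illustrative placeholders of the right rough order, not outputs of the exact $G^{L,R}$ expressions.

```python
# Back-of-the-envelope evaluation of the partial width (B4) and the
# resulting decay length; the coupling values are illustrative assumptions.
import math

HBAR_C_GEV_M = 1.973269804e-16  # hbar*c in GeV * m

def width(GL, GR, m_stop):
    """Partial width Gamma(stop -> b l+) from eq. (B4), in GeV."""
    return (abs(GL)**2 + abs(GR)**2) * m_stop / (16.0 * math.pi)

def decay_length_m(total_width):
    """c*tau in meters for a total width given in GeV."""
    return HBAR_C_GEV_M / total_width

# Illustrative effective couplings of order 1e-6 and a 500 GeV stop:
gamma = width(GL=4e-6, GR=1e-7, m_stop=500.0)
ctau = decay_length_m(gamma)
print(f"Gamma ~ {gamma:.2e} GeV, c*tau ~ {ctau:.2e} m")
```

Even couplings as small as $10^{-6}$ give a decay length far below detector dimensions, which is the sense in which the decays discussed below can be “prompt”.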
Another set of constraints is directly related to the fact that the spontaneous breaking of $R$-parity occurs via a sneutrino VEV–thus linking the LSP decays to the neutrino mass matrix. In this paper, we will impose the condition that the LSP decays be “prompt”–that is, well within the detection chamber at the LHC. It then follows that the dominant contribution to neutrino masses must be Majorana. The Majorana mass matrix can be computed in the minimal $B-L$ MSSM and is found to be $$\label{B7} {m_\nu}_{ij} = A {v_L}_i^* {v_L}_j^* + B \left({v_L}_i^* \epsilon_j + \epsilon_i {v_L}_j^* \right) + C \epsilon_i \epsilon_j \ ,$$ where $A$, $B$ and $C$ are complicated flavor-independent functions of the above parameters. As a first step, it is important to notice that the determinant of the neutrino mass matrix in (\[B7\]) is zero. This is a consequence of the flavor structure and is independent of the $A, B$ and $C$ parameters. Closer inspection reveals that only one eigenstate is massless. This constrains the neutrino masses to be either in the “normal” hierarchy (NH) $$m_1 = 0 < m_2 \sim 8.7 ~\text{meV} < m_3 \sim 50 ~\text{meV} \label{B8}$$ or in the “inverted” hierarchy (IH) $$m_1 \sim m_2 \sim 50 ~\text{meV} > m_3 = 0 \ . \label{B9}$$ In (\[B8\]) and (\[B9\]) we have inserted $m_{1}=0$ and $m_{3}=0$ respectively into the squared mass differences measured in neutrino oscillation experiments and presented, for example, in [@Tortola:2012te; @GonzalezGarcia:2012sz; @Fogli:2012ua]. The constraints on the initial parameters arise from diagonalizing (\[B7\]) and inserting these values for the neutrino masses, as well as the measured central values for the neutrino mixing angles–see, for example, [@Tortola:2012te; @GonzalezGarcia:2012sz; @Fogli:2012ua]. It is important to note that the central values for all of these mixing angles are determined with the exception of $\theta_{23}$.
The data is consistent with this taking either one of two values–$\sin^{2}(\theta_{23})=0.587$ or $\sin^{2}(\theta_{23})= 0.446$. In all cases, this class of constraints eliminates five of the six parameters $\epsilon_{i}, v_{L_{i}}$, $i=1,2,3$. We use the convention that the remaining unconstrained parameter is one of the $\epsilon_{i}$’s. Be this as it may, the precise constraining equations are different in each of the four cases: NH with $\sin^{2}(\theta_{23})=0.587$ or $\sin^{2}(\theta_{23})= 0.446$ and IH with $\sin^{2}(\theta_{23})=0.587$ or $\sin^{2}(\theta_{23})= 0.446$. All of the above constraints reduce the number of independent parameters down to seven. Furthermore, demanding that the analysis should be “generic” without excessive fine-tuning of any parameters–as well as imposing lower bounds on some particle masses set by the LHC–limits the ranges of these parameters. The seven parameters, as well as their allowed ranges, are shown in Table 1.

  Parameter                                Range
  ---------------------------------------- ----------------------
  $M_3$ (TeV)                              1.5 – 10
  $M_{Z_R}$ (TeV)                          2.5 – 10
  $\tan \beta$                             2 – 55
  $\mu$ (GeV)                              150 – 1000
  $m_{\tilde t_1}$ (GeV)                   400 – 1000
  $\theta_{t}$ $({}^\circ)$                0 – 90
  $\left|\epsilon_i\right|$ (GeV)          $10^{-4}$ – $10^{0}$
  $\arg(\epsilon_i)$ $({}^\circ)$          0 – 360

  : The independent parameters and their ranges. The neutrino sector leaves only one unspecified $R$-parity violating parameter, which is chosen to be $\epsilon_i$ where the generational index, $i$, is also scanned to avoid any biases.[]{data-label="scan"}

We now proceed to give the results of a numerical analysis of the decays in (\[B3\])–that is, of a stop LSP into a bottom quark and charged leptons.
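As an aside, the rank structure of the mass matrix (\[B7\]) noted earlier is easy to verify numerically: every column of ${m_\nu}$ lies in the span of the two vectors ${v_L}^*$ and $\epsilon$, so a $3\times 3$ matrix of this form has rank at most two and hence vanishing determinant, independently of $A$, $B$ and $C$. A minimal check with random inputs (our own sketch, not the actual model parameters):

```python
# Numerical check that a matrix of the form (B7) has zero determinant
# but generically only one zero eigenvalue; all inputs are random.
import numpy as np

rng = np.random.default_rng(0)
vL = rng.normal(size=3) + 1j * rng.normal(size=3)   # stands in for v_L_i^*
eps = rng.normal(size=3) + 1j * rng.normal(size=3)  # stands in for eps_i
A, B, C = rng.normal(size=3)

m_nu = (A * np.outer(vL, vL)
        + B * (np.outer(vL, eps) + np.outer(eps, vL))
        + C * np.outer(eps, eps))

# Rank at most 2 => determinant vanishes identically (up to rounding):
assert abs(np.linalg.det(m_nu)) < 1e-12
# Generically the rank is exactly 2, i.e. exactly one massless state:
assert np.linalg.matrix_rank(m_nu) == 2
```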
The branching ratio is defined as $$\label{B10} {\rm Br}(\tilde t_1 \to b \ell^+_i) \equiv \frac{\Gamma(\tilde t_1 \to b \ell^+_i)}{\sum \limits_{j=1}^3 \Gamma(\tilde t_1 \to b \ell^+_j)}$$ and using the relation $${\rm Br}(\tilde t_1 \to b \, e^+) + {\rm Br}(\tilde t_1 \to b \, \mu^+) + {\rm Br}(\tilde t_1 \to b \, \tau^+)=1 \ , \label{B11}$$ one needs to present a plot of only two of the branching ratios–which we choose to be ${\rm Br}(\tilde t_1 \to b \, e^+)$ and ${\rm Br}(\tilde t_1 \to b \, \tau^+)$. These quantities are numerically calculated using (\[B4\]) by scanning over the parameters and ranges shown in Table 1. Since these ranges do not, by themselves, guarantee that the stop remains the LSP, an additional check is implemented in the scan to throw out any points for which the stop cannot be the LSP. In addition, the detailed constraint equations involving the $\epsilon_{i}$, $v_{L_{i}}$ parameters are different in each of the four cases involving the NH versus the IH, as well as the two different central values for $\theta_{23}$. The results are shown in Figure 1. ![The results of the scan specified in Table \[scan\] using the central values for the measured neutrino parameters in the $\text{Br}(\tilde t_1 \to b \, \tau^+)$ - $\text{Br}(\tilde t_1 \to b \, e^+)$ plane. Due to the relationship between the branching ratios, the $(0,0)$ point on this plot corresponds to $\text{Br}(\tilde t_1 \to b \, \mu^+)=1$. The plot is divided into three quadrangles, each corresponding to an area where one of the branching ratios is larger than the other two. In the top left quadrangle, the bottom–tau branching ratio is the largest; in the bottom left quadrangle the bottom–muon branching ratio is the largest; and in the bottom right quadrangle the bottom–electron branching ratio is the largest.
The two different possible values of $\theta_{23}$ are shown in blue and green in the IH (where the difference is most notable) and in red and magenta in the NH.[]{data-label="fig:Brs.central"}](CentralValues.pdf) The conclusions to be drawn from Figure 1 are quite clear. - If LHC data indicates bottom quark-charged lepton decays which intersect the populated region predicted by our numerical analysis, then a stop LSP of the minimal $B-L$ MSSM with the associated parameters is a distinct possibility. Were the LHC data to lie within the white regions of Figure 1, however, a stop LSP in this context is unlikely. - If the LHC data point lies in the top left quadrangle of Figure 1–where the bottom-tau branching ratio is the largest–then there are two possibilities. If the branching ratio to bottom-tau is highly dominant, then the neutrino masses are likely to be in the normal hierarchy and consistent with both values for $\sin^{2}(\theta_{23})$. On the other hand, if this branching ratio is only slightly dominant, then the data is compatible with both the normal and the inverted neutrino hierarchies. Were it to be shown by another experiment to be an inverted hierarchy, then this measurement would favor $\sin^{2}(\theta_{23})=0.587$ over $\sin^{2}(\theta_{23})=0.446$. - If the LHC data point lies in the bottom left quadrangle of Figure 1–where the bottom-muon branching ratio is the largest–then there are two possibilities. If the branching ratio to bottom-muon is highly dominant, then the neutrino masses are likely to be in the normal hierarchy and compatible with either value of $\sin^{2}(\theta_{23})$. On the other hand, if this branching ratio is only slightly dominant, then the data is compatible with both the normal and the inverted neutrino hierarchies. Were it to be shown by another experiment to be an inverted hierarchy, then this measurement would favor $\sin^{2}(\theta_{23})=0.446$ over $\sin^{2}(\theta_{23})=0.587$. 
- If the data point lies in the bottom right quadrangle–where the bottom-electron branching ratio dominates–then the neutrino masses are likely to be in an inverted hierarchy. If the data is in the upper part of the populated points, then this inverted hierarchy would be consistent with $\sin^{2}(\theta_{23})=0.587$. Data in the lower part of this region would indicate an inverted hierarchy with $\sin^{2}(\theta_{23})=0.446$. Lower Bounds on the Mass of a Stop LSP {#lower-bounds-on-the-mass-of-a-stop-lsp .unnumbered} ====================================== Since a stop LSP in the minimal $B-L$ MSSM scenario decays as a leptoquark, one can set bounds on its mass using previous leptoquark searches at the LHC. Under the assumption in this paper that the stop LSP is an admixture, it decays predominantly into a bottom quark and a charged lepton. Stop LSP’s are produced at the LHC in ${\tilde{t}}_{1}$-${\bar{\tilde{t}}_{1}}$ pairs, implying that the final state will consist of two jets and a pair of oppositely charged leptons. The current ATLAS and CMS analyses search for such final states assuming the oppositely charged leptons have the same flavor [@Chatrchyan:2012st; @Chatrchyan:2012sv; @Chatrchyan:2012vza; @ATLAS:2013oea; @Aad:2011ch; @ATLAS:2012aq; @CMS:zva][^3]. This yields upper limits on the ${\tilde{t}}_{1}$-${\bar{\tilde{t}}_{1}}$ production cross section for each of the three possible flavors. The upper limit on the cross section is easily translated into a lower bound on the stop LSP mass, since the cross section depends only on the mass, and the center of mass energy, and falls off steeply as the mass increases. Although the ATLAS and CMS analyses assume branching ratios of unity to a given family, we can generalize their results to arbitrary branching ratios. This is accomplished by rescaling the cross section limit from each search by dividing it by the appropriate branching ratio squared. 
It is then compared to the calculated production cross section as a function of stop LSP mass, which yields the lower bound on the stop LSP mass from that search. For a given choice of branching ratios to $be^+$, $b\mu^+$, and $b\tau^+$, the search with the strongest expected stop mass lower bound is selected. Then the observed cross section limit from that search is rescaled in the same way and, finally, compared to the calculated production cross section as a function of stop LSP mass. This yields the lower bound on the stop LSP mass[^4]. The production cross section, as calculated by the ATLAS, CMS and LPCC SUSY working group [@Kramer:2012bx; @Kramer2] at next-to-leading order in $\alpha_S$, including resummation at next-to-leading log, is used to place these lower bounds. Even though this cross section is calculated in the context of the $R$-parity conserving MSSM, it is valid here because the production cross section is dominated by $R$-parity conserving color processes. The exclusion results can, again, be plotted on a two-dimensional plot since the sum of all three branching ratios is unity. This is done in the form of lines of constant stop mass lower bound in Figure \[fig:stop.lower.bound\] in the $\text{Br}(\tilde t_1 \to b \, \tau^+)$ - $\text{Br}(\tilde t_1 \to b \, e^+)$ plane, the same plane as in Figure \[fig:Brs.central\]. The absolute lowest bound, 424 GeV, occurs at $\text{Br}(\tilde t_1\to be^+)=0.23$, $\text{Br}(\tilde t_1\to b \mu^+)=0.15$, $\text{Br}(\tilde t_1\to b \tau^+)=0.62$. It is marked by a dot. The bounds are stronger in the three corners of the plot where one of the branching ratios is unity. The strongest of these three bounds corresponds to decays purely to bottom–muon. This reflects the fact that this is the easiest of the three channels to detect and the search has been performed with the most integrated luminosity, 20 fb$^{-1}$, and center of mass energy, 8 TeV at CMS [@CMS:zva].
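The rescaling logic just described can be summarized in a few lines. In the sketch below, both the cross-section limit and the steeply falling production cross section $\sigma(m)$ are toy placeholders of our own (the real bounds use the NLO+NLL cross sections of [@Kramer:2012bx; @Kramer2]); the point is only that dividing the limit by $\text{Br}^2$ weakens it, and hence the mass bound, when the branching ratio is small.

```python
# Toy illustration of bound-setting: rescale a per-flavor cross-section
# limit by 1/Br^2 and intersect it with a falling production cross section.
import math

def sigma_production_pb(m_stop):
    """Placeholder stop pair-production cross section, falling steeply with mass."""
    return 1.0e4 * math.exp(-m_stop / 60.0)  # illustrative parametrization only

def mass_bound(sigma_limit_pb, br):
    """Smallest mass (GeV, 1 GeV steps) at which sigma(m) drops below the rescaled limit."""
    rescaled = sigma_limit_pb / br**2
    m = 100.0
    while sigma_production_pb(m) > rescaled:
        m += 1.0
    return m

# A smaller branching ratio weakens the rescaled limit and hence the bound:
assert mass_bound(sigma_limit_pb=5e-3, br=1.0) > mass_bound(sigma_limit_pb=5e-3, br=0.5)
```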
The weakest of these bounds corresponds to decays purely to bottom–tau because this channel is the hardest to detect. The contours are each composed of several connected straight line segments. The straightness of the segments is due to the fact that the bound is always coming from a single channel (the one with the strongest expected bound) and, hence, only depends on one of the three branching ratios. ![Lines of constant stop lower bound in GeV in the $\text{Br}(\tilde t_1 \to b \, \tau^+)$ - $\text{Br}(\tilde t_1 \to b \, e^+)$ plane. The strongest bounds arise when the bottom–muon branching ratio is largest, while the weakest arise when the bottom–tau branching ratio is largest. The dot marks the absolute weakest lower bound at 424 GeV.[]{data-label="fig:stop.lower.bound"}](BoundContourPlotNoCombination.pdf) Acknowledgments {#acknowledgments .unnumbered} =============== S. Spinner is indebted to P. Fileviez Perez for extensive discussion and a long-term collaboration on related topics. S. Spinner would also like to thank the Max-Planck Institute for Nuclear Physics for hospitality during the early part of this work and T. Schwetz for useful discussion. B.A. Ovrut, A. Purves and S. Spinner are supported in part by the DOE under contract No. DE-AC02-76-ER-03071 and by the NSF under grant No. 1001296. The work of Z. Marshall is supported by the Office of High Energy Physics of the U.S. Department of Energy under contract DE-AC02-05CH11231. [000]{} C. S. Aulakh, B. Bajc, A. Melfo, A. Rasin and G. Senjanovic, “SO(10) theory of R-parity and neutrino mass,” Nucl. Phys. B [**597**]{}, 89 (2001) \[hep-ph/0004031\]. V. Braun, Y. -H. He and B. A. Ovrut, “Yukawa couplings in heterotic standard models,” JHEP [**0604**]{}, 019 (2006) \[hep-th/0601204\]. L. B. Anderson, J. Gray and B. Ovrut, “Yukawa Textures From Heterotic Stability Walls,” JHEP [**1005**]{}, 086 (2010) \[arXiv:1001.2317 \[hep-th\]\]. A. Font, L. E. Ibanez and F.
Quevedo, “Does Proton Stability Imply the Existence of an Extra Z0?,” Phys. Lett. B [**228**]{}, 79 (1989). P. Fileviez Perez and S. Spinner, “Spontaneous R-Parity Breaking and Left-Right Symmetry,” Phys. Lett. B [**673**]{}, 251 (2009) \[arXiv:0811.3424 \[hep-ph\]\]. V. Barger, P. Fileviez Perez and S. Spinner, “Minimal gauged U(1)(B-L) model with spontaneous R-parity violation,” Phys. Rev. Lett.  [**102**]{}, 181802 (2009) \[arXiv:0812.3661 \[hep-ph\]\]. L. L. Everett, P. Fileviez Perez and S. Spinner, “The Right Side of Tev Scale Spontaneous R-Parity Violation,” Phys. Rev. D [**80**]{}, 055007 (2009) \[arXiv:0906.4095 \[hep-ph\]\]. R. N. Mohapatra, “Mechanism for Understanding Small Neutrino Mass in Superstring Theories,” Phys. Rev. Lett.  [**56**]{}, 561 (1986). A. Lukas, B. A. Ovrut, K. S. Stelle and D. Waldram, “The Universe as a domain wall,” Phys. Rev. D [**59**]{}, 086001 (1999) \[hep-th/9803235\]. V. Braun, Y. -H. He, B. A. Ovrut and T. Pantev, “A Heterotic standard model,” Phys. Lett. B [**618**]{}, 252 (2005) \[hep-th/0501070\]. V. Braun, Y. -H. He, B. A. Ovrut and T. Pantev, “The Exact MSSM spectrum from string theory,” JHEP [**0605**]{}, 043 (2006) \[hep-th/0512177\]. V. Braun, Y. -H. He and B. A. Ovrut, “Stability of the minimal heterotic standard model bundle,” JHEP [**0606**]{}, 032 (2006) \[hep-th/0602073\]. M. Ambroso and B. Ovrut, “The B-L/Electroweak Hierarchy in Heterotic String and M-Theory,” JHEP [**0910**]{}, 011 (2009) \[arXiv:0904.4509 \[hep-th\]\]. B. A. Ovrut, A. Purves and S. Spinner, “Wilson Lines and a Canonical Basis of SU(4) Heterotic Standard Models,” JHEP [**1211**]{}, 026 (2012) \[arXiv:1203.1325 \[hep-th\]\]. M. Ambroso and B. A. Ovrut, “The B-L/Electroweak Hierarchy in Smooth Heterotic Compactifications,” Int. J. Mod. Phys. A [**25**]{}, 2631 (2010) \[arXiv:0910.1129 \[hep-th\]\]. M. Ambroso and B. A. Ovrut, “The Mass Spectra, Hierarchy and Cosmology of B-L MSSM Heterotic Compactifications,” Int. J. Mod. Phys. 
A [**26**]{}, 1569 (2011) \[arXiv:1005.5392 \[hep-th\]\]. D. K. Ghosh, G. Senjanovic and Y. Zhang, “Naturally Light Sterile Neutrinos from Theory of R-parity,” Phys. Lett. B [**698**]{}, 420 (2011) \[arXiv:1010.3968 \[hep-ph\]\]. V. Barger, P. Fileviez Perez and S. Spinner, “Three Layers of Neutrinos,” Phys. Lett. B [**696**]{}, 509 (2011) \[arXiv:1010.4023 \[hep-ph\]\]. P. Fileviez Perez and S. Spinner, “The Minimal Theory for R-parity Violation at the LHC,” JHEP [**1204**]{}, 118 (2012) \[arXiv:1201.5923 \[hep-ph\]\]. P. Fileviez Perez and S. Spinner, “Supersymmetry at the LHC and The Theory of R-parity,” arXiv:1308.0524 \[hep-ph\]. Z. Marshall, B. A. Ovrut, A. Purves and S. Spinner, “Spontaneous $R$-Parity Breaking, Stop LSP Decays and the Neutrino Mass Hierarchy,” Phys. Lett. B [**732**]{}, 325 (2014) \[arXiv:1401.7989 \[hep-ph\]\]. M. A. Diaz, D. A. Restrepo and J. W. F. Valle, “Two-body decays of the lightest stop in supergravity with and without R-parity,” Nucl. Phys. B [**583**]{}, 182 (2000) \[hep-ph/9908286\]. D. Restrepo, W. Porod and J. W. F. Valle, “Broken R-parity, stop decays, and neutrino physics,” Phys. Rev. D [**64**]{}, 055011 (2001) \[hep-ph/0104040\]. A. Datta and S. Poddar, “New signals of a R-parity violating model of neutrino mass at the Tevatron,” Phys. Rev. D [**75**]{}, 075013 (2007) \[hep-ph/0611074\]. V. D. Barger, T. Han, S. Hesselbach and D. Marfatia, “Testing radiative neutrino mass generation via R-parity violation at the Tevatron,” Phys. Lett. B [**538**]{}, 346 (2002) \[hep-ph/0108261\]. S. P. Martin, “A Supersymmetry primer,” In \*Kane, G.L. (ed.): Perspectives on supersymmetry II\* 1-153 \[hep-ph/9709356\]. D. V. Forero, M. Tortola and J. W. F. Valle, “Global status of neutrino oscillation parameters after Neutrino-2012,” Phys. Rev. D [**86**]{}, 073012 (2012) \[arXiv:1205.4018 \[hep-ph\]\]. M. C. Gonzalez-Garcia, M. Maltoni, J. Salvado and T. 
Schwetz, “Global fit to three neutrino mixing: critical look at present precision,” JHEP [**1212**]{}, 123 (2012) \[arXiv:1209.3023 \[hep-ph\]\]. G. L. Fogli, E. Lisi, A. Marrone, D. Montanino, A. Palazzo and A. M. Rotunno, “Global analysis of neutrino masses, mixings and phases: entering the era of leptonic CP violation searches,” Phys. Rev. D [**86**]{}, 013012 (2012) \[arXiv:1205.5254 \[hep-ph\]\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], “Search for third-generation leptoquarks and scalar bottom quarks in $pp$ collisions at $\sqrt{s}=7$ TeV,” JHEP [**1212**]{}, 055 (2012) \[arXiv:1210.5627 \[hep-ex\]\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], “Search for pair production of third-generation leptoquarks and top squarks in $pp$ collisions at $\sqrt{s}=7$ TeV,” Phys. Rev. Lett.  [**110**]{}, 081801 (2013) \[arXiv:1210.5629 \[hep-ex\]\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], “Search for pair production of first- and second-generation scalar leptoquarks in $pp$ collisions at $\sqrt{s}= 7$ TeV,” Phys. Rev. D [**86**]{}, 052013 (2012) \[arXiv:1207.5406 \[hep-ex\]\]. G. Aad [*et al.*]{} \[ATLAS Collaboration\], “Search for third generation scalar leptoquarks in pp collisions at $\sqrt{s}$ = 7 TeV with the ATLAS detector,” JHEP [**1306**]{}, 033 (2013) \[arXiv:1303.0526 \[hep-ex\]\]. G. Aad [*et al.*]{} \[ATLAS Collaboration\], “Search for first generation scalar leptoquarks in $pp$ collisions at $\sqrt{s}=7$ TeV with the ATLAS detector,” Phys. Lett. B [**709**]{}, 158 (2012) \[Erratum-ibid.  [**711**]{}, 442 (2012)\] \[arXiv:1112.4828 \[hep-ex\]\]. G. Aad [*et al.*]{} \[ATLAS Collaboration\], “Search for second generation scalar leptoquarks in $pp$ collisions at $\sqrt{s}=7$ TeV with the ATLAS detector,” Eur. Phys. J. C [**72**]{}, 2151 (2012) \[arXiv:1203.3172 \[hep-ex\]\]. \[CMS Collaboration\], “Search for Pair-production of Second generation Leptoquarks in 8 TeV proton-proton collisions.,” CMS-PAS-EXO-12-042. J. A. Evans and Y. 
Kats, “LHC Coverage of RPV MSSM with Light Stops,” JHEP [**1304**]{}, 028 (2013) \[arXiv:1209.0764 \[hep-ph\]\]. M. Kramer, A. Kulesza, R. van der Leeuw, M. Mangano, S. Padhi, T. Plehn and X. Portell, “Supersymmetry production cross sections in $pp$ collisions at $\sqrt{s}=7$ TeV,” arXiv:1206.2892 \[hep-ph\]. LHC SUSY Cross Section Working Group webpage,\ http://twiki.cern.ch/twiki/bin/view/LHCPhysics/SUSYCrossSections. [^1]: Such a minimal model was outlined as a possible low energy manifestation of $E_6$ GUT models in [@Mohapatra:1986aw]. [^2]: Throughout this paper, we use the term LSP to refer to the lightest supersymmetric particle [*relevant for collider physics*]{}. [^3]: For interpretation of these results for stop decays in explicit trilinear $R$-parity violation see [@Evans:2012bf]. [^4]: Experimental and background uncertainties place an approximate uncertainty on the stop mass lower bounds of $\pm 50$ GeV in Figure \[fig:stop.lower.bound\].
--- abstract: 'Let $\Sigma$ be a closed surface of genus $g\ge 2$ and $z\in\Sigma$ a marked point. We prove that the subgroup of the mapping class group $\operatorname{Map}(\Sigma,z)$ corresponding to the fundamental group $\pi_1(\Sigma,z)$ of the closed surface does not lift to the group of diffeomorphisms of $\Sigma$ fixing $z$. As a corollary, we show that the Atiyah-Kodaira surface bundles admit no invariant flat connection, and obtain another proof of Morita’s non-lifting theorem.' author: - 'Mladen Bestvina, Thomas Church & Juan Souto' title: | Some groups of mapping classes\ not realized by diffeomorphisms --- Introduction ============ Given a closed orientable surface $\Sigma$ and a finite, possibly empty, set ${\mathbf{z}}\subset\Sigma$ of marked points, consider the group $$\operatorname{Diff}_+(\Sigma,{\mathbf{z}})=\{f\in\operatorname{Diff}_+(\Sigma)\vert f({\mathbf{z}})={\mathbf{z}}\}$$ of orientation-preserving diffeomorphisms of $\Sigma$ which map the set of marked points to itself. (When ${\mathbf{z}}$ is empty we drop it from our notation.) We denote by $\operatorname{Diff}_0(\Sigma,{\mathbf{z}})$ the normal subgroup of $\operatorname{Diff}_+(\Sigma,{\mathbf{z}})$ consisting of those diffeomorphisms which are isotopic to the identity via an isotopy which fixes the set ${\mathbf{z}}$. The mapping class group is the quotient group $$\operatorname{Map}(\Sigma,{\mathbf{z}})=\operatorname{Diff}_+(\Sigma,{\mathbf{z}})/\operatorname{Diff}_0(\Sigma,{\mathbf{z}}).$$ In [@Morita], Morita proved that if $\Sigma$ has genus at least $18$ and the set of punctures is empty, then the exact sequence $$0\to\operatorname{Diff}_0(\Sigma) \to\operatorname{Diff}_+(\Sigma)\to\operatorname{Map}(\Sigma)\to 0$$ does not split. Recently Franks–Handel [@F-H] have extended this result so that it holds for genus at least $3$. Cantat–Cerveau [@Cantat] have proved that finite index subgroups of the mapping class group do not lift to the group of analytic diffeomorphisms. 
A much more powerful result is due to Marković [@Markovic] and Marković–Šarić [@Markovic-Saric], who have proved that for genus at least $2$, the mapping class group does not even lift to the group of homeomorphisms. The proofs of at least some of these results apply also to the case with marked points. Given a subgroup $\Gamma\hookrightarrow \operatorname{Map}(\Sigma,{\mathbf{z}})$, the *realization problem* asks whether $\Gamma$ lifts to $\operatorname{Diff}_+(\Sigma,{\mathbf{z}})$. In this paper, we exhibit rather small subgroups of $\operatorname{Map}(\Sigma,{\mathbf{z}})$ that do not lift to $\operatorname{Diff}_+(\Sigma,{\mathbf{z}})$. Specifically, in the case of a surface of genus at least $2$ with a single marked point we prove: \[weakmeat\] Let $\Sigma$ be a closed surface of genus $g\ge 2$ and $z\in\Sigma$ a marked point. No finite index subgroup of the point-pushing subgroup ${\pi_1(\Sigma,z)\subset \operatorname{Map}(\Sigma,z)}$ lifts to $\operatorname{Diff}_+(\Sigma,z)$. The point-pushing subgroup fits into the Birman exact sequence $$\label{birman} 1\to\pi_1(\Sigma,z)\overset{F}{\to}\operatorname{Map}(\Sigma,z) \to\operatorname{Map}(\Sigma)\to 1$$ as long as $g\geq 2$. Observe that if $(\Sigma,z)$ is a torus with a single marked point, then the mapping class group does lift to $\operatorname{Diff}_+(\Sigma,z)$. We sketch now the proof of Theorem \[weakmeat\]. Seeking a contradiction, assume that there is a homomorphism $\Phi$ such that the following diagram commutes, $$\xymatrix{ & & \operatorname{Diff}_+(\Sigma,z) \ar[d]\\ 1 \ar[r] & \pi_1(\Sigma,z) \ar[r]^{F}\ar@{-->}[ru]^{\Phi} & \operatorname{Map}(\Sigma,z) }$$ where $F$ is the inclusion from . The homomorphism $\Phi$ yields an action of $\pi_1(\Sigma,z)$ on $\Sigma$ by diffeomorphisms fixing $z$ and hence a representation of $\pi_1(\Sigma,z)$ in $\operatorname{GL}^+(T_z\Sigma)$. By Milnor’s inequality this representation has Euler-number bounded in absolute value by $g-1$. 
On the other hand, we compute that the Euler-number must be $2-2g$; this contradiction gives Theorem \[weakmeat\]. Combining Theorem \[weakmeat\] with some topological constructions, we construct a subgroup of $\operatorname{Map}(\Sigma)$ that does not lift to $\operatorname{Diff}_+(\Sigma)$, which can be taken to be isomorphic to ${\mathbb Z}/3{\mathbb Z}\times \pi_1(S,z)$ for some surface $S$. This relies on the existence of finite order elements and thus does not apply to finite index subgroups of $\operatorname{Map}(\Sigma)$. As a corollary to Theorem \[weakmeat\] and this construction, we derive the following version of Morita’s theorem: \[main\] Let $(\Sigma,{\mathbf{z}})$ be a surface of genus $g$ with ${\left\lvert{\mathbf{z}}\right\rvert}=k$ marked points. Assume either that $g\ge 8$ or that $g\ge 2$ and $k\ge 1$. Then the exact sequence $$\label{nosplit1} 0\to\operatorname{Diff}_0(\Sigma,{\mathbf{z}})\to\operatorname{Diff}_+(\Sigma,{\mathbf{z}})\to\operatorname{Map}(\Sigma,{\mathbf{z}})\to 0$$ does not split. In fact, if $g\ge 2$ and $k\ge 1$ then no finite index subgroup of $\operatorname{Map}(\Sigma,{\mathbf{z}})$ lifts to $\operatorname{Diff}_+(\Sigma,{\mathbf{z}})$. Morita originally proved his theorem by finding a surface bundle over a $6$–dimensional manifold that does not admit a flat connection. (All connections are taken to be smooth.) The theorem of Earle–Eells [@EarleEells] on the contractibility of $\operatorname{Diff}_0(\Sigma)$ implies that a $\Sigma$–bundle over a base $B$ admits a flat connection if and only if the topological monodromy representation $\pi_1(B)\to\operatorname{Map}(\Sigma)$ can be lifted to a map $\pi_1(B)\to\operatorname{Diff}_+(\Sigma)$. In particular, if the sequence split, then every surface bundle would admit a flat connection, so Morita’s theorem follows from his example. An open problem has been to find a surface bundle over a surface that does not admit a flat connection.
The details of the proof of Theorem \[main\] give a partial solution to this problem. In the case of a punctured surface, Theorem \[weakmeat\] gives a surface group isomorphic to $\pi_1(\Sigma,z)$ inside $\operatorname{Map}(\Sigma,z)$ that does not lift to $\operatorname{Diff}_+(\Sigma,z)$. This yields a surface bundle with a distinguished section, with base space a closed surface, which admits no flat connection such that the distinguished section is parallel. (In fact, this bundle is just the trivial bundle $\Sigma\times\Sigma$, and the distinguished section is the diagonal.) We believe that this is the first known example of such a surface bundle with section over a closed surface. In the case of a closed surface, the construction described above corresponds to a topological construction of Atiyah and Kodaira (see remarks preceding the proof for definitions), and we conclude: \[AK\] When $k\geq 3$, the Atiyah–Kodaira bundle $\Sigma\to M_k\to S'$ admits no flat connection invariant under the order–$k$ deck transformation ${\mathcal T}\colon M_k\to M_k$. Although this comes close to answering the question above, the full question remains open in the case when the surface is closed. Does there exist a closed surface bundle over a surface that admits no flat connection? [**Acknowledgements.**]{} The authors would like to thank Benson Farb and Vlad Marković for their interest in this project. The second author would like to thank Benson Farb for introducing him to the examples of Atiyah and Kodaira and to the questions surrounding flat surface bundles. A few facts about Euler-numbers =============================== Let $\Sigma$ be a closed surface of genus $g$ and let $\widetilde\Sigma\to\Sigma$ be its universal cover. Choose base points $z\in\Sigma$ and $\tilde z\in\widetilde\Sigma$ projecting to $z$. The choice of base points yields an identification of the fundamental group $\pi_1(\Sigma,z)$ with the deck-transformation group of the cover $\widetilde\Sigma\to\Sigma$.
Before going any further, let us remark that the composition $\gamma\star\eta$ of two elements $\gamma,\eta\in\pi_1(\Sigma,z)$ is obtained by first running $\gamma$ and then $\eta$. By construction, the universal cover $\widetilde\Sigma$ consists of homotopy classes of continuous paths in $\Sigma$ beginning at $z$. Here we can identify $\tilde z$ with, for instance, the homotopy class of the constant path. The fundamental group $\pi_1(\Sigma,z)$ acts on $\widetilde\Sigma$ by precomposition, meaning that we first run a path representing the element in the fundamental group and then a path representing the element in $\widetilde\Sigma$. In particular, the resulting action of $\pi_1(\Sigma,z){\curvearrowright}\widetilde\Sigma$, the so-called action by deck-transformations, is a left action. Assume now that $\rho\colon\pi_1(\Sigma,z)\to\operatorname{Homeo}^+({\mathbb S}^1)$ is an action of the fundamental group of $\Sigma$ on the circle. Let $E_\rho$ be the quotient of $\widetilde\Sigma\times{\mathbb S}^1$ under the action $$\pi_1(\Sigma,z){\curvearrowright}(\widetilde\Sigma\times{\mathbb S}^1),\quad (\gamma,(x,\theta))\mapsto(\gamma x,\rho(\gamma)\theta).$$ The projection of $\widetilde\Sigma\times{\mathbb S}^1$ onto the first factor is equivariant and has fiber ${\mathbb S}^1$; this descends to give $E_\rho$ the structure of a circle bundle over $\Sigma$. The trivial connection on $\widetilde\Sigma\times {\mathbb S}^1$ induces a flat connection on $E_\rho$. Conversely, every flat circle bundle over $\Sigma$ is obtained in this way. The *Euler-number* $e(E_\rho)$ of the bundle $E_\rho\to\Sigma$ is the obstruction for the bundle $E_\rho$ to have a section, or equivalently, for the action $\rho$ to lift to an action on the universal cover ${\mathbb R}$ of ${\mathbb S}^1$. [Milnor–Wood inequality]{} Assume that $E_\rho$ is a flat orientable circle bundle over a closed surface $\Sigma$ of genus $g$. Then ${\left\lvert e(E_\rho)\right\rvert}\le 2g-2$.
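As a quick sanity check on the Milnor–Wood inequality, consider the case $g=1$, where the right-hand side of the bound vanishes: $${\left\lvert e(E_\rho)\right\rvert}\le 2\cdot 1-2=0.$$ Thus every flat orientable circle bundle over the torus has Euler-number zero; since oriented circle bundles over a closed surface are classified by their Euler-number, every such bundle is topologically trivial.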
It should be observed that there are flat circle bundles with Euler-number $2-2g$. For instance, endowing $\Sigma$ with a hyperbolic metric, we can identify the universal cover $\widetilde\Sigma$ with the hyperbolic plane. The action of $\pi_1(\Sigma,z)$ on ${\mathbb H}^2$ extends to an action on the circle at infinity ${\partial}_\infty{\mathbb H}^2$. The associated flat circle bundle is isomorphic to the unit tangent bundle of $\Sigma$ and hence has Euler-number equal to the Euler characteristic $\chi(\Sigma)=2-2g$. We record this fact for further reference (see [@MS Appendix C]): \[unit-tangent\] Let $\Sigma$ be a closed orientable hyperbolic surface of genus $g$ and identify $\pi_1(\Sigma,z)$ with the corresponding group of deck-transformations of ${\mathbb H}^2$. The circle bundle corresponding to the induced action of $\pi_1(\Sigma,z)$ on ${\partial}_\infty{\mathbb H}^2={\mathbb S}^1$ has Euler-number $2-2g$. Other examples of circle bundles over $\Sigma$ can be constructed as follows. A linear action $\rho\colon\pi_1(\Sigma,z)\to\operatorname{GL}_2^+{\mathbb R}$ of $\pi_1(\Sigma,z)$ on ${\mathbb R}^2$ induces an action on the space of directions $({\mathbb R}^2\setminus\{0\})/{\mathbb R}_+$ of ${\mathbb R}^2$. The latter can be identified with the circle and hence the same construction as above yields a circle bundle $E_\rho$. A circle bundle $E_\rho$ arising in this way is called a *flat linear circle bundle*. The linear action $\rho$ induces a different circle bundle $\hat E_\rho$ via the induced projective action on the projective line $P{\mathbb R}^2=({\mathbb R}^2\setminus\{0\})/({\mathbb R}\setminus\{0\})$, which can also be identified with the circle. By construction there is a two-to-one fiberwise covering $E_\rho\to\hat E_\rho$. In particular, $e(\hat E_\rho)=2e(E_\rho)$. We have then: [Milnor’s inequality]{} Assume that $E_\rho$ is a flat linear orientable circle bundle over a closed surface $\Sigma$ of genus $g$. 
Then ${\left\lvert e(E_\rho)\right\rvert}\le g-1$. In [@Milnor], Milnor proved that if a $\operatorname{GL}_2^+{\mathbb R}$–bundle over a closed surface of genus $g$ admits a flat symmetric connection, then its Euler-number is bounded in absolute value by $g-1$. This is equivalent to Milnor’s inequality above. Later, Wood [@Wood] extended Milnor’s work to prove the Milnor–Wood inequality. For a general oriented circle bundle $S^1\to E \to B$, the Euler class is a characteristic class $e(E)\in H^2(B)$. When the base space is a surface, we identify this with the Euler-number by the identification $H^2(\Sigma)= {\mathbb Z}$. We will use the same symbol for the Euler-number and Euler class; it should be clear from context what is meant. Surfaces with one puncture ========================== Let $\Sigma$ be a closed surface of genus $g$, $z\in\Sigma$ a marked point and consider the group ${\mathcal G}(\Sigma,z)$ consisting of those homeomorphisms of $\Sigma$ which fix $z$ and are differentiable at $z$. In this section we prove the following generalization of Theorem \[weakmeat\]: \[meat\] Let $\Sigma$ be a closed surface of genus $g\ge 2$ and $z\in\Sigma$ a marked point. If $\Gamma\subset\pi_1(\Sigma,z)$ is a finite index subgroup, then the inclusion of $\Gamma$ into $\operatorname{Map}(\Sigma,z)$ under the homomorphism $F$ from the Birman exact sequence \eqref{birman} does not lift to ${\mathcal G}(\Sigma,z)$. Observe that since $\operatorname{Diff}_+(\Sigma,z)$ is a subgroup of ${\mathcal G}(\Sigma,z)$, Theorem \[weakmeat\] follows directly from Proposition \[meat\]. In Section \[generalcase\] we will use some more or less obvious tricks to deduce the general case of Theorem \[main\] from the proposition. Before going any further we describe the homomorphism $$F\colon\pi_1(\Sigma,z)\to\operatorname{Map}(\Sigma,z)$$ from \eqref{birman} in detail. Given $\gamma\in\pi_1(\Sigma,z)$, let $\vec\gamma\colon[0,1]\to\Sigma$ be a loop in the corresponding homotopy class.
The map $t\mapsto\vec\gamma(1-t)$ can be interpreted as an isotopy from the identity $\operatorname{Id}_z$ to itself. By the theorem on extension of isotopies we obtain an isotopy $f_t\colon\Sigma\to\Sigma$ with $f_0=\operatorname{Id}_\Sigma$ and $f_t(z)=\vec\gamma(1-t)$. The element $F_{\gamma}\in\operatorname{Map}(\Sigma,z)$ corresponding to $f_1\in\operatorname{Diff}_+(\Sigma,z)$ depends only on the element $\gamma\in\pi_1(\Sigma,z)$. Observing that $$F_{\gamma\star\eta}=F_{\gamma}\circ F_{\eta}$$ we have that $F\colon\pi_1(\Sigma,z)\to\operatorname{Map}(\Sigma,z)$ is a homomorphism. Starting with the proof of Proposition \[meat\], assume that there is a homomorphism $$\Phi\colon\pi_1(\Sigma,z)\to{\mathcal G}(\Sigma,z)$$ such that for each $\gamma\in\pi_1(\Sigma,z)$ the homeomorphism $\Phi_\gamma$ represents the mapping class $F_\gamma\in\operatorname{Map}(\Sigma,z)$. Endowing $\Sigma$ with a hyperbolic metric we identify ${\mathbb H}^2$ with its universal cover; choose a point $\tilde z$ covering $z$. We obtain then a homomorphism $$\tilde\Phi\colon\pi_1(\Sigma,z)\to{\mathcal G}({\mathbb H}^2,\tilde z)$$ mapping $\gamma$ to the unique lift of $\Phi_\gamma$ which fixes $\tilde z$. Here ${\mathcal G}({\mathbb H}^2,\tilde z)$ is the group of homeomorphisms of ${\mathbb H}^2$ which fix $\tilde z$ and are differentiable at $\tilde z$. \[extension-fix\] The homeomorphism $\tilde\Phi_\gamma\colon{\mathbb H}^2\to{\mathbb H}^2$ extends to a homeomorphism of the closed disk $\overline{{\mathbb H}}^2={\mathbb H}^2\cup{\partial}_\infty{\mathbb H}^2$. Moreover, the restriction of $\tilde\Phi_\gamma$ to ${\partial}_\infty{\mathbb H}^2$ coincides with the action of $\gamma$ as a deck-transformation. Lemma \[extension-fix\] is probably well-known to experts and non-experts alike. However, here is a proof: We start by observing that the action $\Phi$ can be lifted in a different way. 
By construction, if we forget the marked point, the homeomorphism $\Phi_\gamma$ is homotopic to the identity. Lifting this homotopy backwards, i.e. starting with the identity of ${\mathbb H}^2$, we obtain a new lift $\hat\Phi_\gamma$ of $\Phi_\gamma$. It follows directly from the construction of the homomorphism $F$ and from the fact that $\Phi_\gamma$ represents $F(\gamma)$ that $$\hat\Phi_\gamma(\tilde z)=\gamma^{-1}\tilde z$$ where we have identified $\gamma\in\pi_1(\Sigma,z)$ with the corresponding deck-transformation. In particular, the two lifts $\hat\Phi_\gamma$ and $\tilde\Phi_\gamma$ differ by the deck-transformation $\gamma$, meaning that $$\label{relation-lifts} \gamma\circ\hat\Phi_\gamma=\tilde\Phi_\gamma.$$ By construction, the lift $\hat\Phi_\gamma$ moves every point in ${\mathbb H}^2$ a uniformly bounded distance from itself. In particular $\hat\Phi_\gamma$ extends continuously to the identity map on the boundary ${\partial}_\infty{\mathbb H}^2$ of the hyperbolic plane. The claim follows from this fact and the relation \eqref{relation-lifts}. We come now to the meat of the proof of Theorem \[main\]. Recall that $\overline{{\mathbb H}}^2$ is the union of ${\mathbb H}^2$ with the circle at infinity. The half-open annulus $\overline{{\mathbb H}}^2\setminus\{\tilde z\}$ can be compactified in a canonical way by attaching to the open end the space of directions $(T_{\tilde z}{\mathbb H}^2\setminus\{0\})/{\mathbb R}_+$ of the tangent space at $\tilde z$. Let ${\mathcal A}$ be the so-obtained closed annulus. By Lemma \[extension-fix\], the action of $\pi_1(\Sigma,z)$ via $\tilde\Phi$ induces an action on $\overline{{\mathbb H}}^2\setminus\{\tilde z\}$. Moreover, the assumption that $\tilde\Phi_\gamma$ is differentiable at $\tilde z$ for all $\gamma\in\pi_1(\Sigma,z)$ implies that this action extends to an action on ${\mathcal A}$ which restricts to ${\partial}{\mathcal A}$ as follows.
- On the component ${\partial}_1{\mathcal A}$ corresponding to ${\partial}_\infty{\mathbb H}^2$, the action of $\pi_1(\Sigma,z)$ is equal to the one induced by the deck-transformation group by Lemma \[extension-fix\]. - On the component ${\partial}_2{\mathcal A}$ corresponding to the space of directions of $T_{\tilde z}{\mathbb H}^2$, the action is induced by the representation $$\pi_1(\Sigma,z)\to\operatorname{GL}(T_{\tilde z}{\mathbb H}^2),\ \ \gamma\mapsto d\tilde\Phi_\gamma\vert_{\tilde z}$$ In particular, it follows from Lemma \[unit-tangent\] that the circle bundle $E_1$ over $\Sigma$ induced by the action on ${\partial}_1{\mathcal A}$ has Euler-number $$e(E_1)=2-2g.$$ Similarly, it follows from Milnor’s inequality that the circle bundle $E_2$ over $\Sigma$ induced by the action on ${\partial}_2{\mathcal A}$ satisfies $$\left\vert e(E_2)\right\vert\le g-1.$$ But since the annulus bundle ${\mathcal A}$ admits a fiberwise deformation retraction onto $E_1$ and also onto $E_2$, these bundles have the same Euler-number $$e(E_1)=e({\mathcal A})=e(E_2).$$ This contradiction shows that the image of $\pi_1(\Sigma,z)$ under $F$ does not lift to ${\mathcal G}(\Sigma,z)$. The same argument applies to finite index subgroups; this concludes the proof of Proposition \[meat\]. As mentioned above, Theorem \[weakmeat\] follows directly from Proposition \[meat\]. An alternate perspective on Proposition \[meat\] {#an-alternate-perspective-on-propositionmeat .unnumbered} ------------------------------------------------ In the remainder of this section, we sketch an alternate perspective on the above proof in the language of surface bundles. This perspective will be used in the remarks following the proof of Theorem \[main\] and in the proof of Theorems \[AK\] and \[m-const\].
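Before turning to this perspective, it is worth spelling out the numerical form of the contradiction just obtained. Combining the three displayed facts from the proof of Proposition \[meat\] gives $$2g-2={\left\lvert e(E_1)\right\rvert}={\left\lvert e(E_2)\right\rvert}\le g-1,$$ which forces $g\le 1$ and thus contradicts the hypothesis $g\ge 2$.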
The previous section considered the flat linear circle bundle $E_{d\Phi}\to \Sigma$, which *a priori* depends on the lift $\Phi$ of $F$; however, the isomorphism type of $E_{d\Phi}$ as a topological circle bundle does not depend on $\Phi$. In fact, this circle bundle can be defined without reference to any lift, as we describe below. The theorem of Earle–Eells, extended to punctured surfaces by Earle–Schatz [@EarleSchatz], gives a one-to-one correspondence between $\Sigma$–bundles with distinguished section over a base $B$ (up to isomorphism) and their monodromy representation $\pi_1(B)\to\operatorname{Map}(\Sigma,z)$ (up to conjugacy). The “vertical Euler class” of a $\Sigma$–bundle with distinguished section is a characteristic class defined as follows. Given such a bundle $\Sigma\to E\overset{\pi}{\to} B$ with section $\sigma\colon B\to E$, the vectors tangent to the fibers span a 2–dimensional subbundle $T\pi\leq TE$. Passing to the space of directions and restricting to the section $\sigma$ induces a circle bundle $UT\pi|_\sigma\to B$. The vertical Euler class is defined to be the Euler class $e(UT\pi|_\sigma) \in H^2(B)$ of this circle bundle. This class is discussed in many references, including [@Morita]. We will need only the following property. If the monodromy $r\colon\pi_1(B)\to\operatorname{Map}(\Sigma,z)$ of a $\Sigma$–bundle with section lifts to $\rho\colon \pi_1(B)\to{\mathcal G}(\Sigma,z)$, yielding as above the flat linear circle bundle ${E_{d\rho}\to B}$, then $E_{d\rho}$ is isomorphic to $UT\pi|_\sigma$ as a circle bundle. To apply this fact to the map $F\colon \pi_1(\Sigma,z)\to \operatorname{Map}(\Sigma,z)$, we must identify the $\Sigma$–bundle with section over $\Sigma$ whose monodromy is $F$. It is easy to check that the desired bundle is the product bundle $p_1\colon \Sigma\times\Sigma\to \Sigma$, with section given by the diagonal $\Delta\colon \Sigma\to \Sigma\times \Sigma$.
Along the diagonal, we can identify the tangent space $T_{(p,p)}(\Sigma\times \Sigma)$ with $T_p\Sigma\times T_p\Sigma$. Under this identification, $Tp_1=\ker dp_1$ consists of vectors of the form $(0,v)\in T_p\Sigma\times T_p\Sigma$. Mapping $(0,v)\mapsto (v,v)$ gives an isomorphism between $Tp_1|_\Delta$ and $T\Delta$, the subbundle spanned by vectors tangent to the diagonal. It follows that $e(UTp_1|_\Delta)=e(UT\Delta)=2-2g$. By Milnor’s inequality, this bundle is not isomorphic to any flat linear circle bundle. Thus the fact above implies that no lift $\Phi\colon \pi_1(\Sigma,z)\to{\mathcal G}(\Sigma,z)$ exists. For a finite index subgroup of $\pi_1(\Sigma,z)$ corresponding to the cover ${p\colon\Sigma'\to \Sigma}$, the same argument applies to the bundle $\Sigma'\times \Sigma\to \Sigma$, with section given by the graph of $p$. Some tricks and the proof of Theorem \[main\] {#generalcase} ============================================= In this section we deduce Theorem \[main\] from Proposition \[meat\], but before doing so we need some notation. [Theorem \[main\]]{} Let $(\Sigma,{\mathbf{z}})$ be a surface of genus $g$ with $k$ marked points. Assume that either $g\ge 8$ or that $g\ge 2$ and $k\ge 1$. Then the exact sequence $$0\to\operatorname{Diff}_0(\Sigma,{\mathbf{z}})\to\operatorname{Diff}_+(\Sigma,{\mathbf{z}})\to\operatorname{Map}(\Sigma,{\mathbf{z}})\to 0$$ does not split. In fact, if $g\ge 2$ and $k\ge 1$ then no finite index subgroup of $\operatorname{Map}(\Sigma,{\mathbf{z}})$ lifts to $\operatorname{Diff}_+(\Sigma,{\mathbf{z}})$. Given a surface as in Theorem \[main\], let ${\mathcal G}(\Sigma,{\mathbf{z}})$ be the group of those orientation preserving homeomorphisms $f$ of $\Sigma$ which fix the marked points ${\mathbf{z}}$ pointwise and are differentiable at each $z\in{\mathbf{z}}$. 
If ${\mathcal G}_0(\Sigma,{\mathbf{z}})$ denotes the normal subgroup of ${\mathcal G}(\Sigma,{\mathbf{z}})$ consisting of those elements which are isotopic to the identity relative to the set ${\mathbf{z}}$ then the quotient group $$\operatorname{PMap}(\Sigma,{\mathbf{z}})={\mathcal G}(\Sigma,{\mathbf{z}})/{\mathcal G}_0(\Sigma,{\mathbf{z}})$$ is the *pure mapping class group*, a finite index subgroup of the mapping class group $\operatorname{Map}(\Sigma,{\mathbf{z}})$. We can now start with the proof of Theorem \[main\]. We will divide the proof into cases depending on the genus $g$ and number of marked points $k$ in $(\Sigma,{\mathbf{z}})$; the proof for each case will depend upon the previous one. **Case 1.** $g\ge 2$ and $k=1$. Since the group $\operatorname{Diff}_+(\Sigma,z)$ is a subgroup of ${\mathcal G}(\Sigma,z)$, the claim follows directly from Proposition \[meat\]. **Case 2.** $g\ge 2$ and $k\ge 2$. Consider the configuration space $${\mathcal C}_k(\Sigma)= \big\{(x_1,\dots,x_k)\in\Sigma^k\big\vert x_i\neq x_j\ \hbox{if}\ i\neq j\big\}$$ of ordered $k$–tuples of pairwise distinct points in the closed surface $\Sigma$. We can consider ${\mathcal C}_k(\Sigma)$ as a fiber bundle over $\Sigma$ via the following projection: $$p_1\colon{\mathcal C}_k(\Sigma)\to\Sigma,\ \ p_1\colon(x_1,\dots,x_k)\mapsto x_1$$ In particular, we obtain a homomorphism $$\pi_1(p_1)\colon\pi_1({\mathcal C}_k(\Sigma),(z_1,\dots,z_k))\to\pi_1(\Sigma,z_1).$$ We claim that $\pi_1(p_1)$ has a right inverse: \[tom\] There is a homomorphism $$\eta\colon\pi_1(\Sigma,z_1)\to\pi_1({\mathcal C}_k(\Sigma),(z_1,\dots,z_k))$$ with $\pi_1(p_1)\circ\eta=\operatorname{Id}$. It suffices to construct a section $\Sigma\to{\mathcal C}_k(\Sigma)$ of the fiber bundle $p_1\colon{\mathcal C}_k(\Sigma)\to\Sigma$. 
In order to construct such a section, it suffices to find maps $\alpha_i\colon \Sigma\to \Sigma$ for $i=2,\dotsc,k$, each without fixed points and satisfying $\alpha_i(z_1)=z_i$ and $\alpha_i(x)\neq \alpha_j(x)$ for $i\neq j$. Given such $\alpha_i$, let $\sigma\colon\Sigma\to\Sigma^k$ be the map given by $\sigma(x)=(x,\alpha_2(x),\dotsc,\alpha_k(x))$. By construction, the image of $\sigma$ is contained in ${\mathcal C}_k(\Sigma)$. On the other hand, $p_1\circ\sigma=\operatorname{Id}$; in other words, $\sigma$ is the desired section. To find such maps, let $T\subset\Sigma$ be a compact subsurface homeomorphic to a torus with one boundary component and which contains all the points $z_1,\dots,z_k$. Let $C$ be a homotopically essential simple closed curve in $T\setminus{\partial}T$ with $z_i\in C$ for $i=1,\dotsc,k$; let also ${\mathbb T}$ be the closed torus obtained by collapsing the boundary of $T$ to a point. Equivalently, ${\mathbb T}$ is obtained by collapsing $\Sigma\setminus(T\setminus{\partial}T)$ to a point; this gives a map $\Sigma\to{\mathbb T}$. We can now identify $C$ with a section of a trivial ${\mathbb S}^1$–bundle over ${\mathbb S}^1$. Collapsing the fibers and composing with the map $\Sigma\to{\mathbb T}$ above, we obtain a retraction $a\colon\Sigma\to C$ which fixes each point in $C$. Fixing a parametrization of $C$, let $\alpha_i$ be the composition of $a$ with the rotation of $C$ taking $z_1$ to $z_i$. Since the image of each $\alpha_i$ is $C$, any fixed point of $\alpha_i$ must lie in $C$; since $\alpha_i$ acts by a nontrivial rotation on $C$, $\alpha_i$ has no fixed points. Similarly, since each $\alpha_i$ is the composition of $a$ with a different rotation, we have $\alpha_i(x)\neq \alpha_j(x)$ for $i\neq j$, as desired. Order now the points $z_1,\dots,z_k$ in ${\mathbf{z}}$ and let $\vec{\mathbf{z}}$ be the so-obtained point in ${\mathcal C}_k(\Sigma)$. 
Recall that $\operatorname{PMap}(\Sigma,{\mathbf{z}})$ is the pure mapping class group of $(\Sigma,{\mathbf{z}})$, i.e. the subgroup of the mapping class group consisting of mapping classes whose representatives in $\operatorname{Diff}_+(\Sigma)$ fix each one of the marked points. Forgetting all the marked points, and forgetting all the marked points but $z_1$, we obtain the following versions of the Birman exact sequence $$\xymatrix{ 1 \ar[r] & \pi_1({\mathcal C}_k(\Sigma),\vec{\mathbf{z}})\ar[r]\ar[d]^{\pi_1(p_1)} & \operatorname{PMap}(\Sigma,{\mathbf{z}})\ar[r]\ar[d] &\operatorname{Map}(\Sigma)\ar@{=}[d]\ar[r] & 1 \\ 1 \ar[r] & \pi_1(\Sigma,z_1)\ar@/^/[u]^\eta\ar[r] & \operatorname{Map}(\Sigma,z_1)\ar[r] &\operatorname{Map}(\Sigma) \ar[r] & 1}$$ Here $\eta$ is the homomorphism provided by Lemma \[tom\]. Assume now that $G$ is a finite index subgroup in $\operatorname{Map}(\Sigma,{\mathbf{z}})$ which lifts to $\operatorname{Diff}_+(\Sigma,{\mathbf{z}})$. Without loss of generality we may assume that $G$ is contained in the pure mapping class group $\operatorname{PMap}(\Sigma,{\mathbf{z}})$. In particular, we obtain a finite index subgroup $\Gamma$ of $\pi_1(\Sigma,z_1)$ whose image $\eta(\Gamma)$ under the homomorphism $\eta$ provided by Lemma \[tom\] lifts to $\operatorname{Diff}_+(\Sigma,{\mathbf{z}})$. Since $\operatorname{Diff}_+(\Sigma,{\mathbf{z}})$ is a subgroup of $\operatorname{Diff}_+(\Sigma,z_1)$ and hence of ${\mathcal G}(\Sigma,z_1)$, this contradicts Proposition \[meat\]. This concludes the proof of Case 2. Before going further, observe that we have actually proved that, under the assumptions of Case 2, no finite index subgroup of $\operatorname{Map}(\Sigma,{\mathbf{z}})$ lifts to ${\mathcal G}(\Sigma,{\mathbf{z}})$. **Case 3.** $g\ge 8$ and $k=0$. In this case we will prove that a subgroup of the centralizer of a well-chosen element $\tau\in\operatorname{Map}(\Sigma)$ does not lift to $\operatorname{Diff}_+(\Sigma)$.
The first step is to construct $\tau$. The reader can convince herself that the following is correct. If $g=3h$, $g=3h+2$, or $g=3h+4$ respectively, then there is a diffeomorphism $\tau\colon \Sigma\to\Sigma$ of order $3$ with $2$, $4$, or $6$ fixed points respectively so that the quotient $\Sigma/\langle\tau\rangle$ has genus $h$. Observe that if $g\ge 8$ then $g$ can be written as $g=3h$, $g=3h+2$ or $g=3h+4$ with $h\ge 2$. For the sake of concreteness, we assume from now on that $g=3h$ with $h\ge 2$; the other cases proceed almost identically. Let $\tau\colon\Sigma\to\Sigma$ be the diffeomorphism provided by the fact above, ${T\in\operatorname{Map}(\Sigma)}$ the corresponding mapping class, and $$C(T)=\{f\in\operatorname{Map}(\Sigma)\vert f\circ T=T\circ f\}$$ its centralizer. We claim that $C(T)$ does not lift to $\operatorname{Diff}_+(\Sigma)$. Seeking a contradiction, assume that such a lifting $$\Psi\colon C(T)\to\operatorname{Diff}_+(\Sigma)$$ exists. By definition, the diffeomorphism $\Psi(T)$ has order $3$ and is isotopic to $\tau$. In particular, both diffeomorphisms are conjugate and we may assume without loss of generality that $\Psi(T)=\tau$. The authors did not find a reference for this fact, so we give a short argument here. Each of $\tau$ and $\tau'=\Psi(T)$ is an isometry of some hyperbolic structure $X$ and $X'$ on $\Sigma$, respectively. Identifying the universal cover of $X$ and $X'$ with the hyperbolic plane, we obtain that the groups $G$ generated by all lifts of $\tau$ and $G'$ generated by all lifts of $\tau'$ are Fuchsian groups. In fact, the assumption that $\tau$ is isotopic to $\tau'$ implies that $G$ and $G'$ are isomorphic. Satz IV.10 in Zieschang–Vogt–Coldewey [@ZVC] implies that the actions of $G$ and $G'$ are conjugate. This yields a conjugation between $\tau$ and $\tau'$. 
Before moving on, we observe that a second and slightly more sophisticated proof follows from the fact that the fixed point set of the mapping class $T$ in Teichmüller space is totally geodesic with respect to the Teichmüller metric, and thus *a fortiori* connected. By construction, the quotient surface $S=\Sigma/\langle\tau\rangle$ has genus $h\ge 2$. Let $z_1,z_2\in S$ be the projection to $S$ of the two fixed points of $\tau$ and set ${\mathbf{z}}=\{z_1,z_2\}$. Every $f\in\operatorname{Diff}_+(\Sigma)$ which commutes with $\tau$ induces a homeomorphism of $(S,{\mathbf{z}})$. In particular, we obtain a homomorphism $$\alpha\colon C(T)\to\operatorname{Homeo}(S,{\mathbf{z}})$$ whose kernel is the cyclic group generated by $T$. Composing with the projection $\operatorname{Homeo}(S,{\mathbf{z}})\to\operatorname{Map}(S,{\mathbf{z}})$ we can identify $C(T)/\langle T\rangle$ with a subgroup $\Gamma$ of $\operatorname{Map}(S,{\mathbf{z}})$ which lifts to the subgroup $\alpha(C(T))\subset\operatorname{Homeo}(S,{\mathbf{z}})$. It is not difficult to see that $\Gamma$ has finite index in $\operatorname{Map}(S,{\mathbf{z}})$. In particular, we obtain the desired contradiction to the assumption that the lifting $\Psi$ of $C(T)$ exists once we show the following fact: \[push\] The image of $\alpha$ is contained in ${\mathcal G}(S,{\mathbf{z}})$. It is well-known that there is a conformal structure on $\Sigma$ such that $\tau$ is biholomorphic. In particular, if $x$ is one of the fixed points of $\tau$ we can find coordinates $\zeta$ around $x$ such that $\tau(\zeta)=\omega\cdot\zeta$ where $\omega$ is a primitive third root of unity. Since $\omega$ has order $3$ we deduce from this that every differentiable $f\colon\Sigma\to\Sigma$ which commutes with $\tau$ fixes $x$ and that its differential $$df_x\colon T_x\Sigma\to T_x\Sigma$$ is complex differentiable. This implies that the induced map $S\to S$ is also differentiable at the projection of $x$, whether it is $z_1$ or $z_2$.
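To make the last step explicit, identify $T_x\Sigma$ with ${\mathbb C}$ via the coordinate $\zeta$ and write the real-linear map $df_x$ as $df_x(v)=av+b\bar v$ with $a,b\in{\mathbb C}$. Differentiating the relation $f\circ\tau=\tau\circ f$ at the fixed point $x$ gives $df_x(\omega v)=\omega\,df_x(v)$, that is, $$a\omega v+b\bar\omega\bar v=a\omega v+b\omega\bar v \quad\Longrightarrow\quad b(\bar\omega-\omega)=0 \quad\Longrightarrow\quad b=0,$$ since $\omega$ is not real. Hence $df_x(v)=av$ is complex linear, as claimed.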
This concludes the proof of the lemma. By Lemma \[push\], the finite index subgroup $\Gamma$ of $\operatorname{Map}(S,{\mathbf{z}})$ can be realized by $\alpha(C(T))\subset {\mathcal G}(S,{\mathbf{z}})$. This contradicts the remark following the proof of Case 2, completing the proof of Case 3 and with it the proof of Theorem \[main\]. For a minimal example of a non-lifting subgroup, consider the intersection of $\Gamma\subset \operatorname{Map}(S,{\mathbf{z}})$ with the surface group $\eta(\pi_1(S,z_1))$; this gives a surface group inside $\operatorname{Map}(S,{\mathbf{z}})$ whose preimage in $C(T)$ does not lift to $\operatorname{Diff}_+(\Sigma)$. This preimage is an extension of a surface group by the cyclic group $\langle T\rangle$; by possibly passing to an index 3 subgroup, we may assume this extension is trivial, yielding a subgroup of $\operatorname{Map}(\Sigma)$ isomorphic to ${\mathbb Z}/3{\mathbb Z}\times \pi_1(S',z)$ which does not lift to $\operatorname{Diff}_+(\Sigma)$. Observations on the proof of Theorem \[main\] {#observations-on-the-proof-of-theoremmain .unnumbered} --------------------------------------------- In this section, we give an informal discussion interpreting the above proof in terms of surface bundles. We then use this perspective to give two observations, Theorems \[AK\] and \[m-const\] below. As discussed in the introduction, Case 1 above is equivalent to the statement that not every surface bundle with section admits a flat connection such that the section is parallel. This was proved in Proposition \[meat\] by exhibiting the product bundle $\Sigma\times\Sigma$ with section given by the diagonal $\Delta$. The content of Lemma \[tom\] is then that this bundle admits $k$ disjoint sections, one of which is the diagonal. The proof given above was chosen because it requires no conditions on the genus $g$ of $\Sigma$. In the special case when $k|(g-1)$, another construction is as follows.
Let $\sigma\colon \Sigma\to \Sigma$ generate a free action of ${\mathbb Z}/k{\mathbb Z}$ on $\Sigma$; then the graphs $\Delta=\Gamma_{\text{id}}, \Gamma_\sigma, \Gamma_{\sigma^2},\ldots,\Gamma_{\sigma^{k-1}}$ give $k$ disjoint sections of $\Sigma\times\Sigma$. **Fiberwise branched covers.** In Case 3, we exploit the connection between $\operatorname{Map}(\Sigma)$ and $\operatorname{Map}(S,{\mathbf{z}})$, where $S=\Sigma/\langle\tau\rangle$ and ${\mathbf{z}}$ is the image of the fixed points of $\tau$. Topologically, this corresponds to a fiberwise branched cover, as follows. (Here we allow the order of $\tau$ to be any $k\geq 3$; as above, we assume that $\tau$ has two fixed points only for simplicity.) If $S\to E\to B$ is a surface bundle with two sections $\sigma_1,\sigma_2\colon B\to E$, the two sections together give a (disconnected) codimension 2 subspace of $E$. Depending on the bundle and sections, $E$ may admit a cyclic branched cover $\widetilde{E}\to E$ of order $k$, branched over the sections $\sigma_1$ and $\sigma_2$; in this case $\widetilde{E}$ becomes a $\Sigma$–bundle $\Sigma\to \widetilde{E}\to B$. The action of $\tau$ on $\Sigma$ then corresponds to the order–$k$ automorphism ${\mathcal T}\colon \widetilde{E}\to\widetilde{E}$ generating the deck transformations of the branched cover $\widetilde{E}\to E$. The observation above that $C(T)/\langle T\rangle$ has finite index in $\operatorname{Map}(S,{\mathbf{z}})$ becomes here the observation that even if $E$ does not, there is always some finite cover $B'\to B$ so that the pullback bundle $S\to E'\to B'$ admits such a branched cover, branched over the preimages in $E'$ of the sections. Note that the argument of Lemma \[push\] goes through as long as the order of $\tau$ is at least $3$.\ Combining this construction with the choice of sections $\Gamma_{\sigma^i}\subset \Sigma\times \Sigma$ recovers the classical example of Atiyah [@Atiyah] and Kodaira [@Kodaira]. 
Their surface bundle is constructed as follows: start with a surface $S$ admitting a free action of ${\mathbb Z}/k{\mathbb Z}$ generated by $\sigma$. The bundle $S\times S\to S$ does not admit a branched cover branched over the union of the sections $\Gamma_{\sigma^i}$. However, taking $\pi\colon S'\to S$ to be the cover corresponding to the kernel of $\pi_1(S)\to H_1(S)\to H_1(S;{\mathbb Z}/k{\mathbb Z})$, the pullback $S'\times S\to S'$ does admit a branched cover $M_k\to S'\times S$ of order $k$, branched over the union of the sections $\Gamma_{\sigma^i\circ \pi}$. Composing with the projection $S'\times S\to S'$ gives a bundle $\Sigma\to M_k\to S'$, where the fiber $\Sigma$ is a branched cover of the original fiber $S$ of order $k$, branched over $k$ points. (Note that the manifold $M_k$ fibers over a surface in two different ways; the fibering considered here is that of the original authors.) Aside from the choice of sections, these steps correspond exactly to the considerations above, and so the results of Case 3 apply identically to this case, giving the following theorem: [Theorem \[AK\]]{} When $k\geq 3$, the Atiyah–Kodaira bundle $\Sigma\to M_k\to S'$ admits no flat connection invariant under the order–$k$ deck transformation ${\mathcal T}\colon M_k\to M_k$. The surface group $\pi_1(S',z)\subset \operatorname{Map}(\Sigma)$ singled out in the previous section is the monodromy of this surface bundle. We remark that by returning to the choice of sections considered in Case 3, the same theorem is obtained for the surface bundles constructed by Gonz[á]{}lez-D[í]{}ez and Harvey in [@GD-H]. We now sketch a description of Morita’s $m$–construction; this is a generalization of the construction of Atiyah and Kodaira, used by Morita in [@Morita] to give the original proof of Morita’s theorem. 
Roughly, the $m$–construction begins with a surface bundle over a manifold of dimension $n$ satisfying certain conditions, then modifies it by pulling back along covers of the base, covers and branched covers of the fiber, and the bundle projection itself; the result is another surface bundle whose base has dimension $n+2$. More precisely, given an admissible surface bundle $s\to E\to B$, first pull back to the total space to obtain a bundle over $E$ with fiber $s$; this bundle naturally admits a “diagonal” section. Possibly passing to a finite cover of the base, we may take a fiberwise cover, obtaining a new bundle with fiber $S$, where $S\to s$ is a cover with deck transformation group ${\mathbb Z}/m{\mathbb Z}$. As discussed above, combining the “diagonal” section with this ${\mathbb Z}/m{\mathbb Z}$–action yields $m$ disjoint sections of this $S$–bundle. Again possibly passing to a finite cover of the base, we may take a fiberwise branched cover, yielding a bundle $\Sigma\to \widetilde{E}\to E'$, where $\Sigma\to S$ is a cyclic branched cover of order $m$ branched at $m$ points. Note that the deck transformation ${\mathcal T}\colon\widetilde{E}\to \widetilde{E}$ of this cyclic branched cover has order $m$. Fixing a single fiber of the original bundle $s\to E\to B$ and following through this construction, we see that the preimage of this fiber in $\widetilde{E}$ gives an Atiyah–Kodaira bundle $\Sigma\to M_m\to S'$ inside $\Sigma\to \widetilde{E}\to E'$. Thus we have the following consequence of Theorem \[AK\]. \[m-const\] When $m\geq 3$, given any admissible bundle $s\to E\to B$, the $\Sigma$–bundle $\Sigma\to \widetilde{E}\to E'$ resulting from Morita’s $m$–construction admits no flat connection invariant under the order–$m$ deck transformation ${\mathcal T}\colon \widetilde{E}\to \widetilde{E}$. For comparison, the corresponding form of Morita’s theorem is as follows. 
There exists a bundle $s\to E^6\to B^4$ so that the $\Sigma$–bundle $\Sigma\to \widetilde{E}^8\to E'^6$ resulting from Morita’s $m$–construction admits no flat connection. M. F. Atiyah, *The signature of fibre-bundles*, in Global Analysis (Papers in Honor of K. Kodaira), University of Tokyo Press (1969). S. Cantat and D. Cerveau, *Analytic actions of mapping class groups on surfaces*, J. Topol. 1 (2008). C. J. Earle and J. Eells, *The diffeomorphism group of a compact Riemann surface*, Bull. Amer. Math. Soc. 73 (1967). C. J. Earle and A. Schatz, *Teichmüller theory for surfaces with boundary*, J. Diff. Geom. 4 (1970). J. Franks and M. Handel, *Global fixed points for centralizers and Morita’s Theorem*, Geom. Topol. 13 (2009). G. González-Díez and W. J. Harvey, *Surface groups inside mapping class groups*, Topology 38 (1999). K. Kodaira, *A certain type of irregular algebraic surfaces*, J. Analyse Math. 19 (1967). V. Marković, *Realization of the mapping class group by homeomorphisms*, Invent. Math. 168 (2007). V. Marković and D. Šarić, *The mapping class group cannot be realized by homeomorphisms*, preprint (2008). J. Milnor, *On the existence of a connection with curvature zero*, Comment. Math. Helv. 32 (1958). J. Milnor and J. Stasheff, *Characteristic classes*, Annals of Mathematics Studies, No. 76, Princeton University Press (1974). S. Morita, *Characteristic classes of surface bundles*, Invent. Math. 90 (1987). J. Wood, *Bundles with totally disconnected structure group*, Comment. Math. Helv. 46 (1971). H. Zieschang, E. Vogt, and H.-D. Coldewey, *Flächen und ebene diskontinuierliche Gruppen* \[*Surfaces and planar discontinuous groups*\] (German), Lecture Notes in Mathematics, Vol. 122, Springer-Verlag, Berlin-New York (1970). Mladen Bestvina, Department of Mathematics, University of Utah `bestvina@math.utah.edu` Thomas Church, Department of Mathematics, University of Chicago `tchurch@math.uchicago.edu` Juan Souto, Department of Mathematics, University of Michigan `jsouto@umich.edu`
--- abstract: 'A scheme is described in which the light gravitino in low energy SUSY breaking models mixes with neutrinos. The mixing between the gravitino and neutrinos arises through standard model symmetry breaking and an R-parity and lepton number violating bilinear term in the superpotential. It is shown that mixings compatible with the neutrino experiments can be obtained within the cosmological bound on the bilinear term set by the baryon asymmetry of the universe.' address: | Center for Theoretical Physics, Seoul National University\ Seoul 151-742, Korea author: - Taekoon Lee title: Gravitino as a Sterile Neutrino --- The most promising explanation for the solar and atmospheric neutrino flux deficits is neutrino oscillation, which arises when there are mixings among a group of fermions that includes the three families of neutrinos and possibly sterile fermions neutral under the standard model symmetry group. Although the scheme in which mixings occur only within the three families of neutrinos is the most economical, neutrino mixings with sterile fermions are an interesting possibility [@STERILE0; @STERILE1]. Moreover, mixing with a sterile neutrino becomes [*necessary*]{} when one attempts to explain the combined data of the solar, atmospheric and LSND neutrino experiments, which suggest the existence of three different oscillation lengths. Even if not all of these neutrino experiments turn out to be correct, neutrino oscillation with sterile fermions remains worth considering. The sterile fermions that mix with neutrinos must be light, $\lesssim O(1 \,\mbox{eV})$, to be relevant for neutrino oscillation. Candidates for the sterile fermions considered in the literature include the axino and modulinos [@Chun0; @Chun; @Smirnov]. Being the superpartners of light scalars (the axion and moduli, respectively), these fermions acquire small masses.
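The oscillation phenomenology invoked above rests on the standard two-flavor vacuum formula $P(\nu_a\to\nu_b)=\sin^2 2\theta\,\sin^2\!\left(1.27\,\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]/E[\mathrm{GeV}]\right)$. A minimal numerical sketch (the parameter values below are illustrative choices, not taken from this paper):

```python
import math

def p_osc(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavor vacuum oscillation probability P(nu_a -> nu_b)."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Each distinct Delta m^2 sets a distinct oscillation length; hence three
# independent Delta m^2 values cannot be fit with only three neutrinos.
# Atmospheric-scale example with maximal mixing:
p = p_osc(sin2_2theta=1.0, dm2_eV2=2e-3, L_km=500.0, E_GeV=1.0)
print(p)  # ~0.91 for these particular values
```

The probability oscillates rapidly with $L/E$, so quoted "solutions" always refer to a mixing angle and a $\Delta m^2$ scale, not a single probability.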
In low energy supersymmetry (SUSY) breaking models such as gauge mediated SUSY breaking [@Dine] or no-scale supergravity [@Nano], the gravitino can also be light [@Yanagida; @Nanopoulos], and so could play the role of a sterile neutrino. In this letter we consider mixings of a light gravitino with neutrinos. The gravitino is the spin-$\frac{3}{2}$ superpartner of the graviton; it becomes massive when it absorbs, through the super-Higgs mechanism, the Goldstino of spontaneous global SUSY breaking. The gravitino interaction with matter is dominated by its spin-$\frac{1}{2}$ longitudinal component, which is essentially the absorbed Goldstino, and the interaction strength depends primarily on the SUSY breaking scale, whereas the transverse components' interaction with matter is suppressed by the Planck mass. Thus mixings between the gravitino and neutrinos can arise through Goldstino-neutrino mixings. The easiest way to describe gravitino-neutrino mixing in supergravity is to use the equivalence theorem [@Equiv]: replace the gravitino with the Goldstino and work with the corresponding globally supersymmetric Lagrangian. The equivalence theorem is applicable in our case since the neutrino energy is much larger than the gravitino masses under consideration. Our scheme for the Goldstino-neutrino mixing is as follows. Mixings between the Goldstino and neutrinos can occur only when the electroweak symmetry is broken, since the Goldstino is neutral under the symmetry. When the electroweak symmetry is broken, the Goldstino necessarily mixes with the Higgsinos. To convert this mixing into a Goldstino-neutrino mixing we introduce a small R-parity and lepton number violating bilinear term in the superpotential, through which a neutrino-Higgsino mixing arises, which in turn mediates the Goldstino-neutrino mixing.
In our scheme the bilinear terms in the superpotential are given by $$W= \epsilon_{i} L_{i} H_{2} + \mu H_{1} H_{2}$$ where $i=e,\mu,\tau$ and $L_{i}$, $H_{1,2}$ denote the lepton and Higgs chiral fields, respectively. The first term violates R-parity and lepton number conservation. There is a strong constraint on the magnitude of $\epsilon_{i}$ from the baryon asymmetry of the universe. When the $\epsilon_{i}$ are sizable, the lepton number violation, combined with the $B+L$ violation through sphaleron transitions, washes out any relic $B-L$ inherited before the weak symmetry breaking and results in a $B=L=0$ universe. The constraint from this consideration is given by [@Fukugita-Yanagida; @Campbell; @Ma] $$\epsilon_{i} \leq 10^{-6} \,\,\mbox{GeV}. \label{eq2}$$ We shall see that a gravitino-neutrino mixing can arise within this bound. When the electroweak symmetry is broken, the neutral Higgsinos mix with the Goldstino with a mixing angle of $O( F_{2}/F)$, where $F_{2}=\mu H_{1}$ and $F$ is the Goldstino decay constant. Thus the Higgsino $\widetilde{H}_{2}$ can be written in terms of mass eigenstates as $$\widetilde{H}_{2}\sim \frac{F_{2}}{F} \chi + \cdots$$ where $\chi$ denotes the Goldstino. The Goldstino-Higgsino mixing and the neutrino-Higgsino mixing from the R-parity violating interaction give the Goldstino-neutrino mixing $$\begin{aligned} {\cal L}_{\nu_{i}\chi} &\sim& m_{\nu_{i}\chi} \nu_{i}\chi + c.c. 
\label{e5}\end{aligned}$$ with $$m_{\nu_{i}\chi}= \frac{\epsilon_{i}\mu H_{1}}{F}= \frac{\epsilon_{i}\mu v \cos\beta}{F}\sim 2\left(\frac{\epsilon_{i}}{10^{-7} \mbox{GeV}}\right)\left(\frac{\mu}{300\,\mbox{GeV}}\right) \left(\frac{2\,\mbox{TeV}}{\sqrt{F}}\right)^{2}\cos\beta\,\, \mbox{(eV)}$$ and $$\tan\beta=\frac{H_{2}}{H_{1}}.$$ When gravity is coupled, the Goldstino becomes the longitudinal component of the gravitino and obtains a mass $$m_{\chi}= \frac{F}{\sqrt{3}M_{P}}\sim 10^{-3} \left(\frac {\sqrt{F}}{2\,\mbox{TeV}}\right)^{2} \,\,\mbox{(eV)}$$ where $M_{P}$ is the Planck mass. Let us now see that the mixing given in (\[e5\]) can be compatible with the neutrino experiments. The neutrino masses may arise through the seesaw mechanism or otherwise, but here we do not speculate on the pattern of the neutrino mass matrix. Instead we treat the neutrino masses as parameters, and show that for a certain range of neutrino masses the above Goldstino-neutrino mixing can account for some of the missing neutrinos observed in the neutrino experiments. In the following we take $\mu=300\,\mbox{GeV}$ and $\cos\beta=1/\sqrt{2}$ as reference values. The just-so solution of the solar neutrino problem, with large mixing angle and $\Delta m^{2}\sim 10^{-10} \,\mbox{eV}^{2}$, would be achieved when $m_{\chi}, m_{\nu_{e}} \leq m_{\nu_{e}\chi} \approx 10^{-5}$ eV, which can be satisfied if $\sqrt{F} \sim 200$ GeV and $ \epsilon_{e} \sim 10^{-14}$ GeV are taken. Note that the SUSY breaking scale $\sqrt{F}$ is small but still above the lower bound from primordial nucleosynthesis [@Tony], and $\epsilon_{e}$ is well below the cosmological bound (\[eq2\]). The MSW small mixing solution of the solar neutrino puzzle, which requires a mixing angle $\theta \approx 4 \times 10^{-2}$ and $ \Delta m^{2}\approx 4\times 10^{-6} \mbox{eV}^{2}$, would be realized provided $m _{\nu_{e}} < m_{\chi} \approx 2\times 10^{-3}$ eV and $m _{\nu_{e}\chi} \approx 10^{-4}$ eV. 
This can be satisfied if we take $ \sqrt{F} \sim 2$ TeV and $\epsilon_{e}\sim 10^{-11}$ GeV, which is also well below the bound (\[eq2\]). The atmospheric neutrino data from the Super-Kamiokande collaboration suggest a neutrino oscillation with almost maximal mixing and $\Delta m^{2}\approx 2 \times 10^{-3} \mbox{eV}^{2}$. This would be realized if $m_{\nu_{\mu}\chi} \gg m_{\chi}$ and $ m_{\chi} \geq m_{\nu_{\mu}}$ are satisfied. The maximal mixing then arises from the pseudo-Dirac nature of the neutrino mass matrix, and the mass splitting is given by $$\begin{aligned} \Delta m^{2}\sim 2 m_{\nu_{\mu}\chi}m_{\chi} \approx 3\times 10^{-3} \left(\frac{\epsilon_{\mu}}{10^{-7}\mbox{GeV}}\right) \,\, \mbox{(eV}^{2})\end{aligned}$$ which gives the required value if $\epsilon_{\mu}\sim 10^{-7}$ GeV is taken. Note again that $\epsilon_{\mu}$ satisfies the cosmological bound. It is remarkable that the mass splitting is independent of the SUSY breaking scale, except for the condition $\sqrt{F} \leq 20$ TeV obtained by assuming $ m_{\chi}/m_{\nu_{\mu}\chi} \leq 10^{-1}$. The combined data of the solar, atmospheric and LSND neutrino experiments suggest the existence of three different $\Delta m^{2}$, which is possible only when there are at least four neutrinos. In this case the mixing of a sterile neutrino with the three active neutrinos is strongly constrained by primordial nucleosynthesis [@Okada; @Bilenkii]. The requirement $N_{\nu} < 4$ at neutrino decoupling does not allow large mixing of a sterile neutrino with the active neutrinos, so $\nu_{e}\leftrightarrow \nu_{s}$ and $\nu_{\mu}\leftrightarrow \nu_{\tau}$ oscillations are favored for the solar and atmospheric neutrino deficits, respectively. The implication of this conclusion for our gravitino-neutrino mixing is that the SUSY breaking scale should be $\sqrt{F} \leq 2\, \mbox{TeV}$. 
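The numerical estimates above can be cross-checked directly. A minimal sketch, assuming a Higgs vev $v \approx 246$ GeV and the reduced Planck mass $M_P \approx 2.4\times 10^{18}$ GeV (neither value is spelled out in the text):

```python
import math

GEV_TO_EV = 1.0e9  # 1 GeV = 10^9 eV

def m_nu_chi_eV(eps_GeV, mu_GeV=300.0, sqrtF_GeV=2000.0,
                cos_beta=1 / math.sqrt(2), v_GeV=246.0):
    """Goldstino-neutrino mixing mass m = eps*mu*v*cos(beta)/F, in eV."""
    return eps_GeV * mu_GeV * v_GeV * cos_beta / sqrtF_GeV**2 * GEV_TO_EV

def m_chi_eV(sqrtF_GeV=2000.0, M_P_GeV=2.4e18):
    """Gravitino mass m = F / (sqrt(3) * M_P), in eV."""
    return sqrtF_GeV**2 / (math.sqrt(3) * M_P_GeV) * GEV_TO_EV

# Atmospheric case: eps_mu ~ 1e-7 GeV gives a pseudo-Dirac pair
m12 = m_nu_chi_eV(1e-7)  # ~1.3 eV, consistent with the quoted ~2 cos(beta) eV
m3 = m_chi_eV()          # ~1e-3 eV
dm2 = 2 * m12 * m3       # ~3e-3 eV^2, the atmospheric-scale splitting
print(m12, m3, dm2)
```

Note that the $F$-dependence cancels in the product $m_{\nu_\mu\chi} m_\chi$, which is why the splitting depends only on $\epsilon_\mu$, as stated above.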
In conclusion, we have pointed out that the gravitino in low energy SUSY breaking models could mix with neutrinos through its spin-$\frac{1}{2}$ longitudinal component and a lepton number violating bilinear term in the superpotential. To be compatible with the neutrino experiments the SUSY breaking scale is required to be in the range $0.2 \,\mbox{TeV}\leq \sqrt{F} \leq 20\, \mbox{TeV}$, and the lepton number violating term was found to be small enough to evade the cosmological constraint. [**Acknowledgements:**]{} It is a great pleasure to thank E.J. Chun for useful discussions and for introducing me to his paper [@Chun]. This work was in part supported by the Korea Science and Engineering Foundation (KOSEF). E. Ma and P. Roy, Phys. Rev. [ D52]{}, 4780 (1995); R. Foot and R. R. Volkas, Phys. Rev. [ D52]{}, 6595 (1995); Z. Berezhiani and R. N. Mohapatra, Phys. Rev. [ D52]{}, 6607 (1995); E. Ma, Mod. Phys. Lett. [A1]{}, 1893 (1996). N. Arkani-Hamed and Y. Grossman, hep-ph/9806223; M. Bando and K. Yoshioka, hep-ph/9806400; S. Davidson and S.F. King, hep-ph/9808296; Q. Shafi and Z. Tavartkiladze, hep-ph/9811463; C. Liu and J. Song, hep-ph/9812381; Y. Okamoto and M. Yasue, hep-ph/9812403. E. J. Chun, A. S. Joshipura and A. Yu. Smirnov, ; . E.J. Chun, hep-ph/9901220. K. Benakli and A. Yu. Smirnov, ; G. Dvali and Y. Nir, J. High Energy Phys., 9810:014 (1998). M. Dine and A.E. Nelson, ; M. Dine, A.E. Nelson, and Y. Shirman, ibid. 51 (1995) 1362; M. Dine, A.E. Nelson, and Y. Shirman, ibid. 53 (1996) 2658. J. Ellis, A.B. Lahanas, D.V. Nanopoulos and K. Tamvakis, . Izawa K.-I, Y. Nomura and T. Yanagida, hep-ph/9901345. J. Ellis, K. Enqvist, and D.V. Nanopoulos, . 
R. Casalbuoni, S. De Curtis, D. Dominici, F. Feruglio, and R. Gatto, . M. Fukugita and T. Yanagida, . B.A. Campbell, S. Davidson, J.E. Ellis and K. Olive, . E. Ma, M. Raidal and U. Sarkar, hep-ph/9901406. T. Gherghetta, . N. Okada and O. Yasuda, . S.M. Bilenkii, C. Giunti, W. Grimus and T. Schwetz, hep-ph/9804421.
--- address: | Department of Physics, University of Tokyo\ 7–3–1 Hongo, Bunkyo-ku, Tokyo 113–0033, Japan author: - 'T. TOMURA' title: | RARE HADRONIC $B$ DECAYS AND DIRECT CPV\ FROM BELLE AND BABAR --- Introduction ============ Recent measurements of the mixing-induced $CP$-violating asymmetry parameter $\sin2\phi_1$ (or $\sin2\beta$) [@sin2phi1] strongly support the Kobayashi-Maskawa (KM) mechanism. [@KM] However, a full test of the KM mechanism requires additional measurements of the other angles $\phi_2$ ($\alpha$) and $\phi_3$ ($\gamma$) of the unitarity triangle. [@UT] Charmless hadronic decays of $B$ mesons contain enough information to measure these angles, but the extraction of the unitarity angles from these decay modes is complicated by hadronic uncertainties. Measurements of enough final states, however, can sufficiently constrain the sizes of the hadronic amplitudes and the strong phases, which are necessary for the extraction of the angles. In the KM scheme, direct $CP$ violation (DCPV) is also expected and has already been observed in the $K$ meson system. [@K_dcpv] However, this phenomenon has not yet been observed in the $B$ meson system. The search for DCPV is an important issue at $B$-factory experiments. Charmless hadronic $B$ decays provide a rich sample for the DCPV search, because many of these decays are described by $b \to u$ tree and $b \to s$ penguin diagrams. 
The interference between the two diagrams can produce a partial-rate asymmetry [${A_{CP}}$]{}, $$\begin{aligned} {\ensuremath{{A_{CP}}}}&\equiv \frac{\Gamma({\ensuremath{{\overline{B}}}}\to \overline{f}) - \Gamma(B \to f)} {\Gamma({\ensuremath{{\overline{B}}}}\to \overline{f}) + \Gamma(B \to f)} \nonumber \\ &= \frac{2|P||T|\sin\Delta\phi\sin\Delta\delta} {|P|^2 + |T|^2 + 2|P||T|\cos\Delta\phi\cos\Delta\delta} ,\end{aligned}$$ where $\Gamma(B \to f)$ denotes the partial width of a [${B^0}$]{} or [${B^+}$]{} decaying into a flavor-specific final state $f$, $\Gamma({\ensuremath{{\overline{B}}}}\to \overline{f})$ denotes that of the charge-conjugate mode, $T$ and $P$ represent the tree and penguin amplitudes, respectively, and $\Delta\phi$ and $\Delta\delta$ are the differences in weak and strong phases between the two amplitudes. DCPV is also sensitive to new physics beyond the Standard Model (SM) through contributions of new particles to the penguin loop. This paper mainly describes the analyses of $B \to K\pi$, $\pi\pi$, and $KK$ decays, referred to collectively as $B \to hh$, and summarizes the results for other rare hadronic $B$ decays. Analysis ======== The analyses of Belle are based on a $78~{\ensuremath{\text{fb}^{-1}}}$ data sample collected at the [${\Upsilon(4S)}$]{} resonance, corresponding to $85 \times 10^6$ [${B{\ensuremath{{\overline{B}}}}}$]{} pairs, with the Belle detector [@Belle] at the KEKB $e^+e^-$ storage ring. [@KEKB] The analyses of BaBar are based on an $81.2~{\ensuremath{\text{fb}^{-1}}}$ data sample, corresponding to $88 \times 10^6$ [${B{\ensuremath{{\overline{B}}}}}$]{} pairs, collected with the BaBar detector [@BaBar] at the PEP-II asymmetric-energy $e^+e^-$ collider. [@PEP-II] The details of the reconstruction of $B$ mesons and of the event selection are described elsewhere. 
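The dependence of $A_{CP}$ on the amplitude ratio and the two phase differences can be made concrete with a short numerical sketch (the magnitudes and phases below are illustrative choices, not measured values):

```python
import math

def a_cp(P, T, dphi, ddelta):
    """Partial-rate asymmetry from tree-penguin interference (formula above)."""
    num = 2 * P * T * math.sin(dphi) * math.sin(ddelta)
    den = P**2 + T**2 + 2 * P * T * math.cos(dphi) * math.cos(ddelta)
    return num / den

# The asymmetry vanishes unless BOTH the weak phase difference (dphi)
# and the strong phase difference (ddelta) are nonzero:
print(a_cp(1.0, 0.3, 0.0, 1.0))                            # 0.0
print(a_cp(1.0, 0.3, math.radians(70), math.radians(30)))  # ~0.22
```

This is why measuring $A_{CP}$ alone cannot separate $\Delta\phi$ from $\Delta\delta$; constraints on the amplitude sizes and strong phases from many final states are needed, as noted above.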
[@Casey; @babar_prl] Reconstructed $B$ candidates are identified using two kinematic variables: the beam-energy constrained mass (or the beam-energy substituted mass) ${\ensuremath{{M_\text{bc}}}}( = {\ensuremath{{m_\text{ES}}}}) \equiv \sqrt{({\ensuremath{{E_\text{beam}^\text{cms}}}})^2-({\ensuremath{{p_B^\text{cms}}}})^2}$ and the energy difference ${\ensuremath{{\Delta E}}}\equiv {\ensuremath{{E_B^\text{cms}}}}- {\ensuremath{{E_\text{beam}^\text{cms}}}}$, where ${\ensuremath{{E_\text{beam}^\text{cms}}}}$ is the beam energy in the center-of-mass system (cms), and ${\ensuremath{{p_B^\text{cms}}}}$ and ${\ensuremath{{E_B^\text{cms}}}}$ are the momentum and energy of the reconstructed $B$ meson in the cms. The dominant background comes from the $e^+e^- \to q\overline{q}$ ($q=u$, $d$, $s$, $c$) continuum process. These backgrounds are suppressed using event-topology variables. Belle uses a likelihood ratio calculated from two variables: the modified Fox-Wolfram moments, [@FW; @Casey] combined into a single variable with a Fisher discriminant, and the angle of the $B$ flight direction with respect to the beam axis. BaBar uses the angle between the sphericity axis of the $B$ candidate and the sphericity axis of the remaining particles in the event, together with a Fisher discriminant calculated from the momenta of the remaining particles and the angles between their momenta and the thrust axis of the $B$ candidate in the cms. For final states that include a charged pion or kaon, high-momentum particle identification (PID) is important. The PID of Belle is based on the light yield in the aerogel Cherenkov counter (ACC) and [${dE/dx}$]{} measurements in the central drift chamber (CDC). The PID of BaBar is accomplished with the Cherenkov angle measurement from a detector of internally reflected Cherenkov light (DIRC). 
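The two kinematic variables defined above can be illustrated with a toy computation (the candidate momentum and energies below are invented for illustration; $E_\text{beam}^\text{cms} \approx 5.29$ GeV at the $\Upsilon(4S)$ is the standard value):

```python
import math

def mbc_de(E_beam, p_B, E_B):
    """Beam-energy constrained mass and energy difference (cms frame, GeV)."""
    m_bc = math.sqrt(E_beam**2 - p_B**2)
    delta_e = E_B - E_beam
    return m_bc, delta_e

# A well-reconstructed B candidate: M_bc peaks at the B mass (~5.28 GeV)
# and Delta E peaks at zero, since each B carries half the cms energy.
m_bc, delta_e = mbc_de(E_beam=5.29, p_B=0.325, E_B=5.29)
print(m_bc, delta_e)  # ~5.28 GeV, 0.0 GeV
```

Using the precisely known beam energy instead of the measured candidate energy is what gives $M_\text{bc}$ its narrow signal peak.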
Results ====== Branching Fractions ------------------ Figure \[fig:belle\_hh\] shows the [${\Delta E}$]{} distributions obtained by the Belle experiment for $B \to hh$ modes in the [${M_\text{bc}}$]{} signal region. The signal yields are extracted by a binned maximum likelihood fit to the [${\Delta E}$]{} distribution. The [${\Delta E}$]{} fits include four components: signal, crossfeed from other misidentified signals, continuum background, and backgrounds from multibody and radiative charmless $B$ decays. The results of the fits are also shown in Fig. \[fig:belle\_hh\]. Figure \[fig:babar\_hh\] shows the distributions of [${m_\text{ES}}$]{} and [${\Delta E}$]{} obtained by the BaBar experiment after a selection to enhance the signal purity. [@babar_prl; @babar_ichep02] BaBar uses an unbinned extended maximum likelihood fit to extract the signal yields. The input variables to the fit are [${m_\text{ES}}$]{}, [${\Delta E}$]{}, the Fisher discriminant of the event-shape parameters, and the Cherenkov angles of the charged tracks. The projections of the fit results are also shown in Fig. \[fig:babar\_hh\]. Using the signal yields obtained from the fits and the reconstruction efficiencies, the branching fractions are derived and listed in Table \[tab:bf\_kpi\]. 
  ------------------------------------ ------------------------------------------------ ------------------------------------------------
  Mode                                 Belle                                            BaBar
  $B^0 \to K^+\pi^-$                   $17.9 \pm 0.9 \pm 0.7$                           $18.5 \pm 1.0 \pm 0.7$
  $B^+ \to K^+\pi^0$                   $12.8\,^{+1.2}_{-1.1} \pm 1.0$                   $12.8 \pm 1.4\,^{+1.4}_{-1.0}$
  $B^+ \to K^0\pi^+$                   $17.5\,^{+1.8}_{-1.7} \pm 1.3$                   $22.0 \pm 1.9 \pm 1.1$
  $B^0 \to K^0\pi^0$                   $10.4 \pm 1.5 \pm 0.8$                           $12.6 \pm 2.4 \pm 1.4$
  $B^0 \to \pi^+\pi^-$                 $4.7 \pm 0.6 \pm 0.2$                            $4.4 \pm 0.6 \pm 0.3$
  $B^+ \to \pi^+\pi^0$                 $5.5\,^{+1.0}_{-0.9} \pm 0.6$                    $5.3 \pm 1.3 \pm 0.5$
  $B^0 \to \pi^0\pi^0$                 $1.6\,^{+0.7}_{-0.6}\,^{+0.6}_{-0.3}$ ($<3.6$)   $1.8\,^{+1.4}_{-1.3}\,^{+0.5}_{-0.7}$ ($<4.4$)
  $B^0 \to K^+ K^-$
  $B^+ \to K^+\overline{K}{}^0$        $-0.6\,^{+0.6}_{-0.7} \pm 0.3$ ($<1.3$)          $1.7 \pm 1.2 \pm 0.1$ ($<3.4$)
  $B^0 \to K^0\overline{K}{}^0$        $0.8 \pm 0.8 \pm 0.1$ ($<3.2$)
  ------------------------------------ ------------------------------------------------ ------------------------------------------------

  : Branching fractions [${\mathcal{B}}$]{} for $B \to hh$ modes obtained from Belle and BaBar. BaBar uses a $54~{\ensuremath{\text{fb}^{-1}}}$ data sample for the $B^+\to K^0\pi^+$ and $K^+\overline{K}{}^0$ modes. \[tab:bf\_kpi\]

The branching fractions for other rare hadronic decay modes reported by Belle and BaBar are listed in Table \[tab:bf\_other\]. 
[@rare_hfag]

  ----------------------------------- --------------------------------------- ------------------------------------------------
  Mode                                Belle                                   BaBar
  $B^0 \to \eta' K^0$                 $55.4 \pm 5.2 \pm 4.0$                  $68 \pm 10\,^{+9}_{-8}$
  $\phantom{B^0 \to{}} \eta K^{*0}$   $19.8\,^{+6.5}_{-5.6} \pm 1.7$          $21.2\,^{+5.4}_{-4.7} \pm 2.0$
  $\phantom{B^0 \to{}} \eta K^0$
  $\phantom{B^0 \to{}} K^0\pi^+\pi^-$ $47 \pm 5 \pm 6$                        $50\,^{+10}_{-9} \pm 7$
  $\phantom{B^0 \to{}} K^0 K^+ K^-$   $29.3 \pm 3.4 \pm 4.1$
  $\phantom{B^0 \to{}} K^0\phi$       $8.7\,^{+1.7}_{-1.5} \pm 0.9$           $13.0\,^{+6.1}_{-5.2} \pm 2.6$
  $\phantom{B^0 \to{}} K^0_S K^0_S K^0_S$ $4.3\,^{+1.6}_{-1.4} \pm 0.8$
  $\phantom{B^0 \to{}} K^{*0}\phi$    $11.1\,^{+1.3}_{-1.2} \pm 1.1$
  $\phantom{B^0 \to{}} \rho^+\pi^-$   $28.9 \pm 5.4 \pm 4.3$                  $20.8\,^{+6.0}_{-6.3}\,^{+2.8}_{-3.1}$
  $B^+ \to \eta' K^+$                 $76.9 \pm 3.5 \pm 4.4$                  $78 \pm 6 \pm 9$
  $\phantom{B^+ \to{}} \eta K^+$      $3.8\,^{+1.8}_{-1.5} \pm 0.2$           $5.3\,^{+1.8}_{-1.5} \pm 0.6$
  $\phantom{B^+ \to{}} \eta K^{*+}$   $22.1\,^{+11.1}_{-9.2} \pm 3.3$
  $\phantom{B^+ \to{}} \omega K^+$    $1.4\,^{+1.3}_{-1.0} \pm 0.3$           $9.2\,^{+2.6}_{-2.3} \pm 1.0$
  $\phantom{B^+ \to{}} K^{*0}\pi^+$   $15.5 \pm 3.4 \pm 1.8$
  $\phantom{B^+ \to{}} K^+\pi^+\pi^-$ $59.2 \pm 4.7 \pm 4.9$                  $53.9 \pm 3.1 \pm 5.7$
  $\phantom{B^+ \to{}} K^{*+}\rho^0$  $7.7\,^{+2.1}_{-2.0} \pm 1.4$
  $\phantom{B^+ \to{}} K^+ K^- K^+$   $34.7 \pm 2.0 \pm 1.8$                  $33.0 \pm 1.8 \pm 3.2$
  $\phantom{B^+ \to{}} K^+\phi$       $9.2 \pm 1.0 \pm 0.8$                   $14.6 \pm 3.0\,^{+2.8}_{-2.0}$
  $\phantom{B^+ \to{}} K^+ K^0_S K^0_S$ $13.4 \pm 1.9 \pm 1.5$
  $\phantom{B^+ \to{}} K^{*+}\phi$    $12.1\,^{+2.1}_{-1.9} \pm 1.5$          $11.2\,^{+3.3}_{-2.9}\,^{+1.3}_{-1.7}$
  $\phantom{B^+ \to{}} \rho^0\pi^+$   $24 \pm 8 \pm 3$                        $8.0\,^{+2.3}_{-2.0} \pm 0.7$
  $\phantom{B^+ \to{}} \rho^+\rho^0$  $9.9\,^{+2.6}_{-2.5} \pm 2.5$           $39 \pm 11\,^{+6}_{-5}\,^{+3}_{-8}$
  $\phantom{B^+ \to{}} \omega\pi^+$   $6.6\,^{+2.1}_{-1.8} \pm 0.7$           $4.2\,^{+2.0}_{-1.8} \pm 0.5$
  $\phantom{B^+ \to{}} \eta\pi^+$     $2.2\,^{+1.8}_{-1.6} \pm 0.1$           $5.2\,^{+2.0}_{-1.7} \pm 0.6$
  ----------------------------------- --------------------------------------- ------------------------------------------------

  : Branching fractions [${\mathcal{B}}$]{} for rare hadronic $B$ decays other than $B \to hh$ modes. \[tab:bf\_other\]

For modes with significance below three standard deviations, 90% confidence level (C.L.) upper limits are reported. The ratios of partial widths for $B \to hh$ decays, calculated from the Belle branching-fraction measurements, are listed in Table \[tab:belle\_bf\_ratio\]. 
  ------------------------------------------------- --------------------------------
  $\Gamma(\pi^+\pi^-) / \Gamma(K^+\pi^-)$           $0.24 \pm 0.04 \pm 0.02$
  $2\Gamma(K^+\pi^0) / \Gamma(K^0\pi^+)$            $1.16 \pm 0.16^{+0.14}_{-0.11}$
  $\Gamma(K^+\pi^-) / \Gamma(K^0\pi^+)$             $0.91 \pm 0.09 \pm 0.06$
  $\Gamma(K^+\pi^-) / 2\Gamma(K^0\pi^0)$            $0.74 \pm 0.15 \pm 0.09$
  $\Gamma(\pi^+\pi^-) / 2\Gamma(\pi^+\pi^0)$        $0.45 \pm 0.13 \pm 0.05$
  $\Gamma(\pi^0\pi^0) / \Gamma(\pi^+\pi^0)$         
  ------------------------------------------------- --------------------------------

  : Ratios of partial widths calculated from results of Belle.
\[tab:belle\_bf\_ratio\] For the calculation of the ratio between [${B^0}$]{} and [${B^+}$]{} decays, $\tau_{\ensuremath{{B^+}}}/\tau_{\ensuremath{{B^0}}}= 1.083 \pm 0.017$ [@PDG] and $f_+/f_0 = 1$ are applied, where $\tau_{\ensuremath{{B^+}}}$ ($\tau_{\ensuremath{{B^0}}}$) is the lifetime of [${B^+}$]{} ([${B^0}$]{}) and $f_+$ ($f_0$) is the branching fraction of ${\ensuremath{{\Upsilon(4S)}}}\to {\ensuremath{{B^+}}}{\ensuremath{{B^-}}}$ (${\ensuremath{{B^0}}}{\ensuremath{{\overline{B}{}^0}}}$). These ratios of branching fractions can be used to constrain the weak phases. [@theory_hh] For example, QCD factorization gives a model-dependent constraint on $\phi_3$ ($\gamma$). Figure \[fig:phi3\] shows the dependence of these ratios on $\phi_3$ ($\gamma$) obtained from the BBNS QCD factorization approach. [@bbns] The results for the ratios of branching fractions from Belle are also displayed in Fig. \[fig:phi3\]. The branching fractions for ${\ensuremath{{B^0}}}\to {\ensuremath{\pi^+}}{\ensuremath{\pi^-}}$, ${\ensuremath{{B^+}}}\to {\ensuremath{\pi^+}}{\ensuremath{\pi^0}}$, and ${\ensuremath{{B^0}}}\to {\ensuremath{\pi^0}}{\ensuremath{\pi^0}}$ can be used to constrain the size of the penguin “pollution” in the $\phi_2$ ($\alpha$) measurement using the time-dependent asymmetry in ${\ensuremath{{B^0}}}\to {\ensuremath{\pi^+}}{\ensuremath{\pi^-}}$ decay. [@theta] The upper bound on $|2\theta| \equiv |2(\phi_2^\text{eff} - \phi_2)|$, where $\phi_2^\text{eff}$ is the parameter measured from the $CP$ asymmetry in the time evolution of ${\ensuremath{{B^0}}}\to {\ensuremath{\pi^+}}{\ensuremath{\pi^-}}$ decay, is calculated using the results from Belle: $R \equiv {\ensuremath{{\mathcal{B}}}}({\ensuremath{\pi^0}}{\ensuremath{\pi^0}})/{\ensuremath{{\mathcal{B}}}}({\ensuremath{\pi^+}}{\ensuremath{\pi^-}}) = 0.41$ ($R < 1.00$ at 90% C.L.) and $A_{\pi\pi} = 0.57$, obtained by constraining the fit to the physically allowed region.
[@belle_pipi] The obtained allowed region for $|2\theta|$ and $R$ is shown in Fig. \[fig:phi2\], and the typical values of the upper limit on $|2\theta|$ are $$\begin{aligned} |2\theta| &< 61.4{\ensuremath{{}^\circ}} \quad \text{(for $R = 0.41$)} \\ &< 135.4{\ensuremath{{}^\circ}} \quad \text{(for $R = 1.00$)} .\end{aligned}$$ The lower bound on ${\ensuremath{{\mathcal{B}}}}({\ensuremath{\pi^0}}{\ensuremath{\pi^0}})$ can be estimated to be ${\ensuremath{{\mathcal{B}}}}({\ensuremath{\pi^0}}{\ensuremath{\pi^0}}) \ge 1.0 \times 10^{-6}$ from Fig. \[fig:phi2\]. Direct $CP$ Violation --------------------- Belle measures the partial-rate asymmetry [${A_{CP}}$]{} by fitting the [${\Delta E}$]{} distributions and extracting signal yields separately for [${B^0}$]{} ([${B^+}$]{}) and [${\overline{B}{}^0}$]{} ([${B^-}$]{}). Figure \[fig:belle\_acp\] shows the [${\Delta E}$]{} distributions separately for [${B^0}$]{} ([${B^+}$]{}) and [${\overline{B}{}^0}$]{} ([${B^-}$]{}) modes for $B \to hh$ decays from Belle. The fitting results and partial-rate asymmetries from Belle are listed in Table \[tab:acp\_belle\].
  Mode           $N({\ensuremath{{\overline{B}}}})$   $N(B)$                    ${\ensuremath{{A_{CP}}}}$                           ($90\%$ C.I.)
  -------------- ------------------------------------ ------------------------- --------------------------------------------------- -------------------------------------------
  $K^+\pi^-$     $235.4^{+19.8}_{-19.1}$              $270.2^{+19.7}_{-18.9}$   $-0.07\pm 0.06\pm 0.01$                             ($-0.18 < {\ensuremath{{A_{CP}}}}< 0.04$)
  $K^+\pi^0$     $122.0\pm 15.8$                      $76.5\pm 14.5$            $0.23\pm 0.11^{+0.01}_{-0.04}$                      ($-0.01 < {\ensuremath{{A_{CP}}}}< 0.42$)
  $K^0\pi^+$     $119.1^{+13.8}_{-13.1}$              $104.4^{+13.2}_{-12.5}$   $0.07^{+0.09}_{-0.08}{}^{+0.01}_{-0.03}$            ($-0.10 < {\ensuremath{{A_{CP}}}}< 0.22$)
  $\pi^+\pi^0$   $31.2\pm 11.9$                       $41.3\pm 12.7$            $-0.14\pm 0.24^{+0.05}_{-0.04}$                     ($-0.57 < {\ensuremath{{A_{CP}}}}< 0.30$)
  -------------- ------------------------------------ ------------------------- --------------------------------------------------- -------------------------------------------

  : Number of signal events for ${\ensuremath{{\overline{B}}}}$ ([${\overline{B}{}^0}$]{} or [${B^-}$]{}) and $B$ ([${B^0}$]{} or [${B^+}$]{}), and partial-rate asymmetry [${A_{CP}}$]{}
for $B \to hh$ modes from Belle. 90% confidence interval (C.I.) for [${A_{CP}}$]{} is also shown. \[tab:acp\_belle\] BaBar uses an unbinned maximum likelihood fit to determine the partial-rate asymmetry [${A_{CP}}$]{}. The input parameters are the same as those used in the measurement of branching fractions. Table \[tab:acp\_babar\] lists [${A_{CP}}$]{} for $B \to hh$ modes from BaBar. [@babar_prl; @babar_ichep02]

  ------------------------ ------------------------------------------
  $K^+\pi^-$               $-0.102 \pm 0.050 \pm 0.016$
  $K^+\pi^0$               $-0.09 \pm 0.09 \pm 0.01$
  $K^0\pi^+$               $-0.17 \pm 0.10 \pm 0.02$
  $K^0\pi^0$               $0.03 \pm 0.36 \pm 0.09$
  $\pi^+\pi^0$             $-0.03^{+0.18}_{-0.17} \pm 0.02$
  ------------------------ ------------------------------------------

  : Summary of [${A_{CP}}$]{} for $B \to hh$ modes from BaBar. \[tab:acp\_babar\]

Figure \[fig:acp\_sum\] shows the summary plots of [${A_{CP}}$]{} for rare hadronic $B$ decays from Belle and BaBar, including the modes other than $B \to hh$. [@rare_hfag] Summary ======= The $B$-factory experiments provide the most precise measurements of branching fractions and partial-rate asymmetries for rare hadronic $B$ decays. The measured partial-rate asymmetries are consistent with zero with the current statistics. The statistical precisions of these measurements have reached below the 10% level in several decay modes.
References {#references .unnumbered} ========== [99]{} Belle Collaboration, K. Abe [*et al.*]{}, ;\ BaBar Collaboration, B. Aubert [*et al.*]{}, . M. Kobayashi and T. Maskawa, . H. R. Quinn and A. I. Sanda, . NA48 Collaboration, J. R. Batley [*et al.*]{}, ;\ KTeV Collaboration, A. Alavi-Harati [*et al.*]{}, . Belle Collaboration, K. Abe [*et al.*]{}, . E. Kikutani ed., . BaBar Collaboration, B. Aubert [*et al.*]{}, . PEP-II Conceptual Design Report, SLAC-R-418 (1993). Belle Collaboration, B. C. K. Casey [*et al.*]{}, . BaBar Collaboration, B. Aubert [*et al.*]{}, . G. C. Fox and S. Wolfram, . BaBar Collaboration, B. Aubert [*et al.*]{}, hep-ex/0206053 (2002); hep-ex/0207063 (2002); hep-ex/0207065 (2002). Averages prepared by the Heavy Flavor Averaging Group for rare decays for Winter 2003 Conferences, [http://www.slac.stanford.edu/xorg/hfag/rare/]{}. Particle Data Group, K. Hagiwara [*et al.*]{}, . M. Gronau and J. L. Rosner, . M. Beneke, G. Buchalla, M. Neubert, and C. T. Sachrajda, . M. Gronau, D. London, N. Sinha, and R. Sinha, . Belle Collaboration, K. Abe [*et al.*]{}, hep-ex/0301032 (2003) \[submitted to \].
--- author: - Steven Carlip title: | Black Hole Thermodynamics\ and Statistical Mechanics --- Introduction ============ Black holes are black bodies. Since the seminal work of Hawking [@carHawking] and Bekenstein [@carBekenstein], we have understood that black holes behave as thermodynamic objects, with characteristic temperatures and entropies. Hawking radiation has not yet been directly observed, of course; a typical stellar mass black hole has a Hawking temperature of well under a microkelvin, far lower than that of the cosmic microwave background. But the thermodynamic properties of black holes are well understood, having been confirmed by a great many independent methods that all yield the same quantitative results: a temperature $$kT_{\scriptscriptstyle\mathit Hawking} = \frac{\hbar\kappa}{2\pi} \label{carBHradiate3}$$ and an entropy $$S_{\scriptscriptstyle\mathit BH} = \frac{A_{\mathit\scriptstyle horizon}}{4\hbar G} , \label{carintro1}$$ where $A_{\mathit\scriptstyle horizon}$ is the horizon area and $\kappa$ is the surface gravity. In a typical thermodynamic system, thermal properties are the macroscopic echoes of microscopic physics. Temperature is a measure of the average energy of microscopic constituents; entropy counts the number of microstates. It is natural to ask whether the same is true for the black hole. This is an important question: the Bekenstein-Hawking entropy depends on both Planck’s and Newton’s constants, and a statistical mechanical description of black hole thermodynamics might tell us something profound about quantum gravity. Until about ten years ago, virtually nothing was known about black hole statistical mechanics. Today, in contrast, we suffer an embarrassment of riches: we have many competing microscopic pictures, describing different states and different dynamics but all predicting the same thermodynamic behavior.
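To get a feel for the size of this entropy, one can restore the factors of $c$ and $k_B$ in (\[carintro1\]) and evaluate it for a solar-mass black hole. The sketch below is illustrative only; the numerical constants are standard SI values, not taken from the text.

```python
import math

# Standard SI values (assumed, not from the text)
hbar = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m/s
M_sun = 1.98892e30       # kg

def bh_entropy_over_kB(M):
    """Bekenstein-Hawking entropy S/k_B = A c^3 / (4 hbar G)
    for a Schwarzschild black hole of mass M."""
    r_s = 2 * G * M / c**2        # horizon radius
    A = 4 * math.pi * r_s**2      # horizon area
    return A * c**3 / (4 * hbar * G)

S = bh_entropy_over_kB(M_sun)     # ~1e77: a dimensionless entropy far larger
                                  # than that of ordinary stellar matter
```

The result, roughly $10^{77}$ for one solar mass, is a first hint of how many microstates any statistical mechanical explanation must account for.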
In these lectures, I will review what is currently known—and not known—about black hole thermodynamics and statistical mechanics. This is a large subject, and I will have to skip many interesting aspects. In particular, I will not discuss stability analysis, the peculiarities of negative heat capacity, or the complicated question of black hole phase transitions, and I will only lightly touch upon the profound issues of information loss and holography. Even so, my approach will necessarily be sketchy and idiosyncratic, though I will also try to suggest further references with different emphases and different degrees of detail. I will aim for a broad overview, rather than focusing on the fine points of any one particular approach. Some books and review articles that I have found helpful include [@carWald; @carJacobson; @carWaldbk; @carFrolov; @carLesHouches]. In an appendix, I discuss basic black hole properties and explain my notation. Black Hole Thermodynamics ========================= I will begin with two somewhat intuitive routes to black hole thermodynamics. One of these is based on the second law of thermodynamics, the other on the four laws of black hole mechanics. Neither route is completely convincing, but together they provide a good foundation for some of the harder quantitative approaches that I shall discuss later. Entropy and the second law \[carsecEntropy\] -------------------------------------------- Imagine dropping a small box of hot gas into a black hole. The initial state includes the gas and the black hole; the final state consists solely of a slightly larger black hole. The initial state certainly has nonzero entropy, in the form of the entropy of the gas. If the second law of thermodynamics is to hold, the final state must have nonzero entropy as well: the larger black hole must gain enough entropy to compensate for the entropy lost when the gas disappears behind the horizon. We can make this argument somewhat more quantitative [@carKieferb].
Suppose the box of gas has linear size $L$, mass $m$, and temperature $T$, and that the black hole has mass $M$ and horizon radius $R=2GM$ (and thus horizon area $A = 16\pi G^2M^2$). The box of gas will merge with the black hole when its proper distance $\rho$ from the horizon is of order $L$, at which point the disappearance of the gas will lead to a loss of entropy $$\Delta S \sim -m/T .$$ For a Schwarzschild black hole, the proper distance from the horizon is $$\rho = \int_{2GM}^{2GM + \delta r} \frac{dr}{\sqrt{1- 2GM/r}} \sim \sqrt{GM\delta r} ,$$ so $\rho \sim L$ when $\delta r \sim L^2/GM$. The gas initially has mass $m$, but its energy as seen from infinity is red shifted as the box falls toward the black hole; when the box reaches $r = 2GM + \delta r$, the black hole will gain a mass $$\Delta M \sim m \sqrt{1 - \frac{2GM}{2GM + \delta r}} \sim \frac{mL}{GM}.$$ If we now suppose that the box must be as large as the thermal wavelength of the gas, $L\sim \hbar/T$, we see that $$\Delta S \sim -\frac{mL}{\hbar} \sim -\frac{GM\Delta M}{\hbar} \sim -\frac{\Delta A}{\hbar G} .$$ To preserve the second law of thermodynamics, the black hole must gain an entropy of at least order $\Delta A/\hbar G$. One can perform a similar analysis for a single particle falling into a Kerr black hole (assuming the particle contains at least one bit of entropy) [@carBekenstein], a box containing a simple harmonic oscillator [@carBekenstein], and, using a more sophisticated analysis, a much more general system falling through a horizon [@carWald; @carBekensteinb; @carZurek; @carPage]. In each case, a “generalized second law” holds, provided one includes a change of entropy of order $\Delta A/\hbar G$ for the black hole. Such reasoning led Bekenstein to suggest in 1972 that a black hole should itself be attributed an entropy of order $A/\hbar G$ [@carBekenstein]. At the time, there seemed to be a compelling argument against such a hypothesis. 
Classical black holes are, after all, black: when placed in contact with a heat bath they will absorb energy while emitting none, thus behaving as if they have a temperature of zero [@carBardeen]. Two years later, Hawking showed that this problem was cured by quantum theory. I shall return to this result below, but let us first consider another classical argument for black hole thermodynamics. The four laws of black hole mechanics \[carFourLaws\] ----------------------------------------------------- In four spacetime dimensions, a stationary asymptotically flat black hole is uniquely characterized by its mass $M$, angular momentum $J$, and charge $Q$. (In the presence of nonabelian gauge fields or certain exotic scalar fields, other kinds of black hole “hair” can occur [@carWinstanley], but this does not change the basic argument.) In the early 1970s, a set of relations among neighboring solutions was found, culminating in Bardeen, Carter, and Hawking’s “four laws of black hole mechanics”. These take a form strikingly similar to the four laws of thermodynamics: 1. The surface gravity $\kappa$ is constant over the event horizon. 2. For any two stationary black holes differing only by small variations in the parameters $M$, $J$, and $Q$, $$\delta M = \frac{\kappa}{8\pi G}\delta A + \Omega_H\delta J + \Phi_H\delta Q , \label{carFourLaws1}$$ where $\Omega_H$ is the angular velocity and $\Phi_H$ is the electric potential at the horizon. 3. The area of the event horizon of a black hole never decreases, $$\delta A \ge 0 .$$ 4. It is impossible by any procedure to reduce the surface gravity $\kappa$ to zero in a finite number of steps. As in ordinary thermodynamics, there are a number of formulations of the third law, which are not strictly equivalent; for a proof of the version given here, which is analogous to the Nernst form of the third law of thermodynamics, see [@carIsrael].
These laws can be generalized beyond the particular four-dimensional “electrovac” setting in which they were first formulated; the first law, in particular, holds for arbitrary isolated horizons [@carAshtekar], and for much more general gravitational actions, for which the entropy can be understood as a Noether charge [@carWaldc]. Bardeen, Carter, and Hawking noted that these laws closely parallel the ordinary laws of thermodynamics, with the horizon area playing the role of entropy and the surface gravity playing the role of temperature. But they added, “It should however be emphasized that $\kappa/8\pi$ and $A$ are distinct from the temperature and entropy of the black hole. In fact the effective temperature of a black hole is absolute zero.…In this sense a black hole can be said to transcend the second law of thermodynamics.”[^1] Black holes radiate \[carBHradiate\] ------------------------------------ The first suggestion that black holes might emit radiation was made by Zel’dovich [@carZeldovich], but his argument was qualitative, and applied only to superradiant modes of rotating black holes. In 1974, though, Hawking demonstrated that all black holes emit blackbody radiation [@carHawking; @carHawkingb]. The result was startling, and according to Page [@carPageb], Hawking himself did not initially believe it. In hindsight, though, one can give a somewhat intuitive description of the effect [@carSchutz]. Such a description has two main ingredients. The first is that the quantum mechanical vacuum is filled with virtual particle-antiparticle pairs that fluctuate briefly into and out of existence. Energy is conserved, so one member of each pair must have negative energy. (To avoid a common confusion, note that either the particle or the antiparticle can be the negative-energy partner.) 
Normally, negative energy is forbidden—in a stable quantum field theory, the vacuum must be the lowest energy state—but energy has a quantum mechanical uncertainty of order $\hbar/t$, so a virtual pair of energy $\pm E$ can exist for a time of order $\hbar/E$. The existence of such virtual pairs is experimentally well-tested: for example, virtual pairs of charged particles make the vacuum a polarizable medium, and vacuum polarization is observed in such phenomena as the Lamb shift and in energy levels of muonic atoms [@carWeinberg]. The second ingredient is the observation that in general relativity, energy—and, in particular, the sign of energy—can be frame-dependent. The easiest way to see this is to note that the Hamiltonian is the generator of time translations, and thus depends on one’s choice of a time coordinate. One must therefore be careful about what one means by positive and negative energy for a virtual pair. In particular, consider the Schwarzschild metric, $$ds^2 = \left(1 - \frac{2GM}{r}\right)dt^2 - \left(1 - \frac{2GM}{r}\right)^{-1}dr^2 - r^2d\Omega^2 . \label{carBHradiate1}$$ Outside the event horizon, $t$ is the usual time coordinate, measuring the proper time of an observer at infinity. Inside the horizon, though, components of the metric change sign, and $r$ becomes a time coordinate, while $t$ becomes a spatial coordinate: an observer moving forward in time is one moving in the direction of decreasing $r$, and not necessarily increasing $t$.[^2] Hence an ingoing virtual particle that has negative energy relative to an external observer may have positive energy relative to an observer inside the horizon. The uncertainty principle can thus be circumvented: if the negative-energy member of a virtual pair crosses the horizon, it need no longer vanish in a time $\hbar/E$, and its positive-energy partner may escape to infinity. We can again make this argument a bit more quantitative. 
Consider a virtual pair momentarily at rest at a coordinate distance $\delta r$ from the horizon. As in section \[carsecEntropy\], the proper time for one member of the pair to reach the horizon will be $$\tau \sim \sqrt{GM\delta r} .$$ Setting this equal to the lifetime $\hbar/E$ of the pair, we find that $$|E| \sim \frac{\hbar}{\sqrt{GM\delta r}},$$ which should also be the energy of the escaping positive-energy partner. This is the energy at $2GM + \delta r$, though; the energy at infinity will be red shifted to $$E_\infty \sim \frac{\hbar}{\sqrt{GM\delta r}}\sqrt{1 - \frac{2GM}{2GM+\delta r}} \sim \frac{\hbar}{GM} , \label{carBHradiate2}$$ independent of the initial position $\delta r$. We might thus expect a black hole to radiate with a characteristic temperature $kT\sim\hbar/GM$. In fact, the precise computations I shall describe below yield a temperature $kT_{\scriptscriptstyle\mathit Hawking} = \hbar\kappa/2\pi$, which for a Schwarzschild black hole is $\hbar/8\pi GM$. Inserting the Hawking temperature (\[carBHradiate3\]) into the first law of black hole mechanics (\[carFourLaws1\]), we see that black holes can indeed be viewed as thermal objects, with an entropy (\[carintro1\]). This result is fundamentally quantum mechanical—the Hawking temperature depends explicitly on $\hbar$—and in some sense quantum gravitational, since the Bekenstein-Hawking entropy depends on $G$ as well. Can Hawking radiation be observed? ---------------------------------- I will return to the more precise and detailed derivations of Hawking radiation below. But let us first address the question of whether this effect can be observed. For a black hole of mass $M$, the Hawking temperature (\[carBHradiate2\]) is $$T_{\scriptscriptstyle\mathit Hawking} \sim 6\times10^{-8}\left(\frac{M_\odot}{M}\right)\, K,$$ some eight orders of magnitude smaller than the cosmic microwave background temperature for a stellar mass black hole and far smaller for a supermassive black hole. 
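This estimate is easy to check numerically. The sketch below evaluates $T = \hbar c^3/8\pi G M k_B$, the SI form of $kT_{\mathit Hawking} = \hbar\kappa/2\pi$ for a Schwarzschild hole; the constants are standard SI values, assumed here rather than taken from the text.

```python
import math

# Standard SI values (assumed, not from the text)
hbar, G, c, k_B = 1.054571817e-34, 6.67430e-11, 2.99792458e8, 1.380649e-23
M_sun = 1.98892e30  # kg

def hawking_temperature(M):
    """Hawking temperature of a Schwarzschild black hole of mass M:
    T = hbar c^3 / (8 pi G M k_B), i.e. kT = hbar kappa / (2 pi c)
    with surface gravity kappa = c^4 / (4 G M)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

T = hawking_temperature(M_sun)   # ~6e-8 K, matching the estimate above
```

For one solar mass this gives about $6\times10^{-8}$ K, and the temperature falls inversely with mass, so larger holes are even colder.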
While there is a chance that we could see Hawking radiation from the final stages of evaporation of primordial black holes [@carMacGibbon; @carCline], such events are expected to be rare and difficult to identify. Another highly speculative possibility for the detection of Hawking radiation comes from models of TeV-scale gravity. In such models—which typically arise from “brane world” scenarios in which our four-dimensional universe is a submanifold of a higher-dimensional spacetime—gravity may become strong at energies far below the Planck scale. If this is the case, black holes might be produced copiously at accelerators such as the LHC, and their quantum properties could be studied in detail [@carGiddings; @carKanti]. A third, less direct, route is to look for analogs of Hawking radiation in condensed matter systems. As Unruh first pointed out [@carUnruh], one can create a sonic event horizon in a fluid flow by allowing the flow to become supersonic beyond some boundary. The same analysis that predicts Hawking radiation from a black hole leads to a prediction of phonon radiation from the sonic horizon of such a “dumb hole.” Similar phenomena can occur in a variety of condensed matter systems, from Bose-Einstein condensates to “slow light” to superfluid quasiparticles, and a number of experimental efforts are underway; for reviews, see [@carBarcelo; @carNovello]. It is worth emphasizing that while such experiments could provide strong evidence for Hawking radiation, which is essentially a kinematical property, they would not test the Bekenstein-Hawking entropy, which depends critically on the dynamics of general relativity [@carVisser]. The many derivations of Hawking radiation \[carManyT\] ------------------------------------------------------ In the absence of direct experimental evidence, how confident should we be about Hawking radiation and black hole thermodynamics? 
Although Hawking’s derivation involves only standard quantum field theory, we can see from the arguments of section \[carBHradiate\] that the radiation involves modes with arbitrarily high energies: while the asymptotic energies (\[carBHradiate2\]) may be small, they come from red-shifted quanta with much higher energies near the horizon. This has led some to suggest that the derivations might involve an extrapolation of quantum field theory beyond the range in which it can be trusted [@carUnruh; @carJacobsonb; @carHelfer]. I will return to this issue below, but for now let me suggest a partial answer. If only one derivation of Hawking radiation existed, we would clearly need to look very carefully for hidden assumptions and unjustified extrapolations. In fact, though, we have a rather large number of different derivations, which involve very different assumptions and extrapolations and nevertheless all agree. Some of these derivations look at eternal black holes, others at black holes formed from collapse; some involve explicit, detailed computations in particular field theories, others use general properties of axiomatic quantum field theory; some involve Planck-scale fluctuations, others cut off energies well below the Planck scale; some predict only the Hawking temperature, others also allow a computation of the Bekenstein-Hawking entropy. While it is still possible that these derivations all share a common flawed assumption, it seems unlikely that so many methods would converge on the same answer if that answer were wrong. None of this vitiates the need for observational tests—after all, the entire general relativistic description of black holes could be wrong—but it suggests that a failure of black hole thermodynamics would have to be either very subtle or very radical. I will describe some of these derivations below.
Given the nature of these lectures, I will not attempt a full description of any one method; my aim is to give a broad overview, with references that will allow the reader to delve into individual approaches in more detail. ### Bogoliubov transformations and inequivalent vacua \[carBogol\] As noted above, a crucial ingredient in understanding Hawking radiation is the fact that energy—and, in particular, “positive” and “negative” energy—is frame-dependent. Consider, for simplicity, a free real scalar field $\varphi$. Recall that in ordinary quantum field theory in flat spacetime, we quantize $\varphi$ by first decomposing the field into Fourier modes, $$\varphi = \sum_{\bf k} \left(a_{\bf k}u_{\bf k}(t,{\bf x}) + a_{\bf k}^\dagger u_{\bf k}^*(t,{\bf x})\right) \ \ \ \mathrm{with} \ \ u_{\bf k} = e^{i{\bf k}\cdot{\bf x} - i\omega_{\bf k}t},\ \ \omega_{\bf k} = \left(|{\bf k}|^2 + m^2\right)^{1/2} , \label{carBog1}$$ and then interpret the $a_{\bf k}$ as annihilation operators and the $a_{\bf k}^\dagger$ as creation operators. The Fourier modes $u_{\bf k}$ can be understood as a set of orthonormal functions satisfying $$(\Box + m^2) u_{\bf k}(t,{\bf x}) = 0, \qquad \partial_t u_{\bf k}(t,{\bf x}) = -i\omega_{\bf k}u_{\bf k}(t,{\bf x}), \label{carBog2}$$ where the second condition determines what we mean by positive and negative frequency, and thus allows us to distinguish creation and annihilation operators. The vacuum is then defined as the state annihilated by all of the $a_{\bf k}$, $$a_{\bf k} |0\rangle = 0 .$$ In a curved spacetime, or a noninertial coordinate system in flat spacetime, standard Fourier modes are no longer available. With a choice of time coordinate $t$, though, one can still find modes of the form (\[carBog2\]) and perform a decomposition (\[carBog1\]) to obtain creation and annihilation operators. 
Given two different reference frames with time coordinates $t$ and ${\bar t}$, two such decompositions exist: $$\varphi = \sum_i \left(a_iu_i + a_i^\dagger u_i^*\right) = \sum_i\left({\bar a}_i{\bar u}_i + {\bar a}_i^\dagger{\bar u}_i^*\right) , \label{carBog3}$$ and since the $(u_i,u_i^*)$ are a complete set of functions, we can write $${\bar u}_j = \sum_i \left(\alpha_{ji}u_i + \beta_{ji}u_i^*\right) . \label{carBog3a}$$ This relation is known as a Bogoliubov transformation, and the coefficients $\alpha_{ji}$ and $\beta_{ji}$ are Bogoliubov coefficients [@carBogoliubov]. We now have two vacuum states, one annihilated by the $a_i$ and one by the ${\bar a}_i$, and two number operators $N_i=a_i^\dagger a_i$ and ${\bar N}_i={\bar a}_i^\dagger{\bar a}_i$. Using the orthonormality of the mode functions, it is straightforward to show that $$\langle {\bar 0}| N_i |{\bar 0}\rangle = \sum_j |\beta_{ji}|^2 . \label{carBog4}$$ Thus if the coefficients $\beta_{ji}$ are not all zero, the “barred” vacuum will have a nonvanishing “unbarred” particle content. In [@carHawking] and [@carHawkingb], Hawking considered a mass collapsing to form a black hole, and computed the Bogoliubov coefficients connecting an initial vacuum far outside the collapsing matter to a final vacuum after the black hole formed. He found that the “barred” observer at future null infinity will observe a thermal distribution of particles, with a temperature (\[carBHradiate3\]).[^3] I will not go into details here; three very nice reviews can be found in [@carJacobson; @carTraschen; @carVisser]. 
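The thermal character of the result can be seen from (\[carBog4\]) with almost no machinery. For a single mode, Hawking’s calculation gives $|\beta/\alpha|^2 = e^{-2\pi\omega/\kappa}$ (in units $\hbar = c = 1$), and the bosonic normalization $|\alpha|^2 - |\beta|^2 = 1$ then forces the occupation number $\langle N\rangle = |\beta|^2$ into a Planck distribution at $T = \kappa/2\pi$. A minimal numerical check of this identity:

```python
import math

def bogoliubov_occupation(omega, kappa):
    """Mean particle number <N> = |beta|^2 for a single mode, given
    Hawking's ratio |beta/alpha|^2 = exp(-2 pi omega / kappa)
    and the bosonic normalization |alpha|^2 - |beta|^2 = 1."""
    ratio = math.exp(-2 * math.pi * omega / kappa)
    alpha2 = 1.0 / (1.0 - ratio)    # from |alpha|^2 (1 - ratio) = 1
    return alpha2 * ratio           # |beta|^2

def planck_occupation(omega, T):
    """Bose-Einstein distribution 1/(exp(omega/T) - 1)."""
    return 1.0 / math.expm1(omega / T)

kappa = 1.0
T_H = kappa / (2 * math.pi)         # Hawking temperature in these units
for omega in (0.1, 0.5, 1.0, 3.0):
    assert math.isclose(bogoliubov_occupation(omega, kappa),
                        planck_occupation(omega, T_H), rel_tol=1e-12)
```

The agreement is exact: the exponential ratio of Bogoliubov coefficients is algebraically equivalent to a thermal spectrum.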
The essential physical feature is that ingoing vacuum modes “pile up” at the horizon, giving an exponential relationship between ingoing and outgoing surfaces of constant phase; the integrals that determine the Bogoliubov coefficients $\beta_{ji}$ take the form $$\int dv\,e^{i\omega v} e^{-i\frac{\omega}{\kappa}\ln v} ,$$ yielding gamma functions of complex arguments whose absolute squares give the exponential behavior of a thermal distribution. Hawking’s derivation was based on a particular choice of vacuum state, but generalizations are possible. For example, one may compare the vacuum of a freely falling observer near the horizon to the vacuum of an observer at future null infinity [@carUnruhb]. One can also look beyond the expectation value of the number operator, and express the full final state in terms of initial modes; one finds that it is exactly thermal [@carWaldb; @carParker]. Generalizations to spinor and gauge fields are straightforward, and yield the correct fermionic and bosonic distribution functions. It is also possible to simplify the problem, by looking at the easier model of an accelerated observer in flat spacetime. Such an observer is naturally described in Rindler coordinates [@carRindler] $$ds^2 = e^{2a\xi}\left(d\eta^2 - d\xi^2\right) ,$$ in which the exponential relationship between the unaccelerated and accelerated modes is easy to verify. A straightforward calculation of Bogoliubov coefficients shows that the accelerated observer will see a thermal bath of “Unruh radiation” with a temperature $kT = \hbar a/2\pi$, where $a$ is proper acceleration [@carUnruhb]. By the principle of equivalence, an observer at rest near the horizon of a black hole should experience the same effect, with the acceleration $a$ replaced by the appropriately blue shifted surface gravity $\kappa$, the acceleration necessary to hold the observer at rest. 
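In SI units the Unruh temperature reads $kT = \hbar a/2\pi c$, which makes clear why the effect is so hard to observe directly. A short sketch (the constants are standard SI values, an assumption rather than something stated in the text):

```python
import math

# Standard SI values (assumed, not from the text)
hbar, c, k_B = 1.054571817e-34, 2.99792458e8, 1.380649e-23

def unruh_temperature(a):
    """Unruh temperature T = hbar a / (2 pi c k_B)
    for proper acceleration a in m/s^2."""
    return hbar * a / (2 * math.pi * c * k_B)

# Even at Earth-gravity acceleration the thermal bath is absurdly cold:
T_g = unruh_temperature(9.81)          # ~4e-20 K

# Acceleration needed for a 1 K Unruh bath:
a_1K = 2 * math.pi * c * k_B / hbar    # ~2.5e20 m/s^2
```

An acceleration of order $10^{20}\ \mathrm{m/s^2}$ is needed for even a one-kelvin bath, which is why proposed tests involve extreme systems such as ultraintense lasers or the condensed matter analogs discussed above.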
As I noted in the preceding section, the exponential relationship between “barred” and “unbarred” modes may be a cause of concern. The modes observed as Hawking radiation by an observer far from the black hole are red shifted from Planck-scale modes near the horizon, and it seems that one has extrapolated quantum field theory far beyond the range in which it is known to be valid. To address this question, a number of authors have looked at the effect of modifying the dispersion relations in a way that removes very high energy modes (see, for example, [@carUnruh; @carBrout; @carCorley; @carJacobsonc]). For example [@carCorleyb], one can replace the standard expression for the energy of a massless field, $\omega_{\bf k} = |{\bf k}|$, with $$\omega_{\bf k}^2 = |{\bf k}|^2 - \frac{|{\bf k}|^4}{k_0{}^2},$$ eliminating modes with trans-Planckian energies. Both numerical and analytical computations show that despite these drastic changes in the high-frequency behavior, thermal Hawking radiation persists. We now have strong evidence that a few simple assumptions—a vacuum near the horizon as seen in a freely falling frame, fluctuations that start in the ground state, and adiabatic evolution of the modes—are sufficient to guarantee thermal radiation [@carUnruhc]. ### Particle detectors in a black hole background The definitions of vacuum and particle number in the preceding section were taken from ordinary quantum field theory. But finding observables in quantum gravity is notoriously difficult, and one might worry about the applicability of these definitions in a highly curved spacetime. To address this issue, Unruh [@carUnruhb] and DeWitt [@carDeWitt] considered the response of a particle detector in a black hole background, and showed that such a detector sees thermal radiation at the Hawking temperature. Similarly, a static atom outside a black hole will be excited as one would expect in a thermal bath [@carYu]. 
### The stress-energy tensor \[carStress\] One can obtain further invariant information about black hole radiation by evaluating the expectation value of the stress-energy tensor of a quantum field in a black hole background. This is a large subject; good introductions can be found in the books [@carFrolov] and [@carBirrell]. For these lectures, the most relevant result is that an ingoing negative energy flux at the horizon balances the outgoing flux of Hawking radiation observed at infinity, leading to a back-reaction in which the black hole’s mass decreases (as expected from the intuitive argument of section \[carBHradiate\]) and ensuring energy conservation. The computation of $\langle T_{\mu\nu}\rangle$ in a black hole background is generally very difficult (see, for example, [@carPagec] or chapter 11 of [@carFrolov]). In the special case of a massless scalar field—or more generally, a conformally invariant field—in two dimensions, the calculation drastically simplifies [@carChristensen]. The key difference is that in two dimensions, conservation of the stress-energy tensor is sufficient to determine the full expectation value in terms of the trace anomaly $\langle T^\mu{}_\mu\rangle$, which, in turn, depends only on characteristics of the field in a flat background. The resulting expectation values are thermal, and the total flux can be used to determine the temperature, which matches the Hawking temperature (\[carBHradiate3\]). Quite recently, Robinson and Wilczek have shown how to extend this result to more than two dimensions, by dimensionally reducing an arbitrary field to two dimensions (or equivalently looking at a partial wave expansion) and trading the trace anomaly for a chiral anomaly [@carRobinson]. Their method, with some variations (for example, [@carBanerjee]), has been quickly extended to a wide variety of black holes. 
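To make the two-dimensional statement concrete (in units $\hbar=c=1$, for a single massless scalar, and quoting standard results rather than rederiving them): the two inputs are $$\nabla_\mu\langle T^{\mu\nu}\rangle = 0 , \qquad \langle T^\mu{}_\mu\rangle = \frac{R}{24\pi} ,$$ and integrating the conservation equation fixes $\langle T_{\mu\nu}\rangle$ up to constants determined by boundary conditions. The resulting flux at infinity, $F = \pi T^2/12$, is precisely that of a one-dimensional blackbody, and matching it fixes $T = T_{\scriptscriptstyle\mathit Hawking}$.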
In a beautiful piece of work, Iso, Morita, and Umetsu have further shown that by looking at higher order correlators, one can use similar techniques to obtain not just the total flux, but the full blackbody spectrum of Hawking radiation [@carIso; @carIsob]. ### Tunneling through the horizon \[carTunnel\] For many physical systems, we know that classically forbidden processes can occur through quantum tunneling. This is the case for Hawking radiation. The idea of a tunneling description dates back to at least 1975 [@carDamour], but the nicest form is more recent, coming from Parikh and Wilczek’s insight that one can think of the horizon tunneling past the emitted radiation rather than vice versa [@carParikh; @carParikhb]. Consider a spherically symmetric system of mass $M$ consisting of a Schwarzschild black hole of mass $M-\omega$ emitting a shell of radiation of mass $\omega \ll M$. In Painlevé-Gullstrand coordinates, chosen because they are stationary and nonsingular at the horizon, the shell moves in a spacetime with metric $$ds^2 = \left(1-\frac{2G(M-\omega)}{r}\right)dt^2 - 2\sqrt{\frac{2G(M-\omega)}{r}}dt\,dr - dr^2 - r^2d\Omega^2 ,$$ and outgoing radial null geodesics satisfy $${\dot r} = 1 - \sqrt{\frac{2G(M-\omega)}{r}} .$$ Now consider the imaginary part of the action for an outgoing positive energy shell—to be interpreted as an s-wave particle—crossing the horizon from $r_{\scriptscriptstyle\mathit{in}}$ to $r_{\scriptscriptstyle\mathit{out}}$: $$\mathop{Im} I = \mathop{Im} \int_{r_{\scriptscriptstyle\mathit{in}}}^{r_{\scriptscriptstyle\mathit{out}}} p_r dr = \mathop{Im} \int_{r_{\scriptscriptstyle\mathit{in}}}^{r_{\scriptscriptstyle\mathit{out}}} \int_0^{p_r}dp_r'\,dr = \mathop{Im}\int_M^{M-\omega} \int_{r_{\scriptscriptstyle\mathit{in}}}^{r_{\scriptscriptstyle\mathit{out}}} \frac{dr}{{\dot r}}dH , \label{carTun1}$$ where I have used Hamilton’s equations of motion to write $dp_r = dH/{\dot r}$ and noted that the horizon moves inward from $2GM$ to $2G(M-\omega)$ as the particle is emitted. Setting $H = M-\omega$ and inserting the value of $\dot r$ obtained from the null geodesic equation, one can perform the integral easily through a contour deformation (the integrand $1/{\dot r}$ has a simple pole at the horizon $r = 2GH$ with residue $4GH$, and deforming the contour around it contributes $-i\pi$ times this residue), obtaining $$\mathop{Im} I = 4\pi\omega G\left(M - \frac{\omega}{2}\right) \label{carTun2}$$ with $r_{\scriptscriptstyle\mathit{in}}>r_{\scriptscriptstyle\mathit{out}}$. Again, the physical picture is that the horizon tunnels inward as the black hole’s mass decreases. By standard quantum mechanics, the tunneling rate in the WKB approximation is then $$\Gamma = e^{-2\mathop{Im} I/\hbar} = e^{-8\pi\omega G\left(M - \frac{\omega}{2}\right)/\hbar} = e^{\Delta S_{BH}} \label{carTun3}$$ where $\Delta S_{BH}$ is the change in the Bekenstein-Hawking entropy (\[carintro1\]). By the first law of black hole mechanics, $\Delta S_{BH} = -\omega/T_H$ to lowest order in $\omega$ (in units with $\hbar = k = 1$), and we recover thermal Hawking radiation, $\Gamma \sim e^{-\omega/T_H}$. The tunneling derivation may be easily extended to other classes of black holes, and consistently reproduces the standard results. Its relationship to Hawking’s original derivation is not obvious, but Parikh and Wilczek note that the same analysis can describe a negative-energy particle tunneling into the black hole, thus offering a similar physical picture. ### Periodic Greens functions Consider the two-point function of a scalar field $\varphi$ in a thermal ensemble of inverse temperature $\beta$: $$\begin{aligned} G_\beta(x,0;x',t) &= \mathop{Tr}\left(e^{-\beta H}\varphi(x,0)\varphi(x',t)\right) =\mathop{Tr}\left(\varphi(x,0)e^{-\beta H}e^{\beta H}\varphi(x',t)e^{-\beta H}\right)\nonumber\\ &=\mathop{Tr}\left(\varphi(x,0)e^{-\beta H}\varphi(x',t+i\beta)\right) = G_\beta(x',t+i\beta;x,0) , \label{carPerGreens1}\end{aligned}$$ where I have used cyclicity of the trace and the fact that the Hamiltonian generates time translations, so $e^{\beta H}\varphi(x',t)e^{-\beta H} = \varphi(x',t+i\beta)$.
In particular, (\[carPerGreens1\]) implies that if a thermal Greens function is symmetric in its arguments, it must be periodic in time with period $i\beta$. This argument may be run backwards, and such periodicity in imaginary time may be taken as the *definition* of a thermal Greens function; in axiomatic quantum field theory, this is formalized as the KMS condition [@carKubo; @carMartin; @carHaag]. As early as 1976, Bisognano and Wichmann showed that the Greens function for a uniformly accelerated observer obeys the KMS condition [@carBis]. By the equivalence principle, the same should hold for an observer at rest near the horizon of a black hole. This is indeed the case, as shown by Gibbons and Perry, who further demonstrated that the periodicity corresponds exactly to the expected Hawking temperature (\[carBHradiate3\]). ### Gravitational instantons \[carInstantons\] The periodicity of Greens functions described above suggests that it might be worthwhile to consider the analytic continuation of black hole spacetimes to “imaginary time.” Near the horizon $r=r_+$, a stationary black hole metric takes the approximate form $$ds^2 = 2\kappa(r-r_+)dt^2 - \frac{1}{2\kappa(r-r_+)}dr^2 - r_+^2d\Omega^2 .$$ Continuing to imaginary time $t=i\tau$ and replacing $r$ by the proper distance $$\rho = \frac{1}{\kappa}\sqrt{2\kappa(r-r_+)}$$ to the horizon, we obtain the “Euclidean black hole” metric $$ds^2 = d\rho^2 + \kappa^2\rho^2d\tau^2 + r_+^2d\Omega^2 . \label{carGravinst1}$$ The $\rho$–$\tau$ portion of this metric may be recognized as that of a flat two-plane in polar coordinates, with imaginary time $\tau$ serving as the angular coordinate. The horizon $\rho=0$ has shrunk to a point. To avoid a conical singularity at the origin, we must require that $\kappa\tau$ have period $2\pi$, i.e., that $\tau$ have period $2\pi/\kappa = 1/kT_{\scriptscriptstyle\mathit Hawking}$.
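Restoring units, the period $2\pi/\kappa$ translates into $kT_{\scriptscriptstyle\mathit Hawking} = \hbar\kappa/2\pi c$, and for a Schwarzschild black hole ($\kappa = c^4/4GM$) this gives the familiar $T = \hbar c^3/8\pi GMk$. A quick numerical check of the scale involved (my own, in SI units; the solar-mass value is the standard one):

```python
import math

# Hawking temperature of a Schwarzschild black hole,
# T = hbar c^3 / (8 pi G M k_B), from kappa = c^4/(4 G M)
# and kT = hbar * kappa / (2 pi c).
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
G = 6.67430e-11          # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J / K

def hawking_temperature(M):
    """Hawking temperature (K) of a Schwarzschild black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

T_sun = hawking_temperature(1.989e30)   # ~ 6e-8 K for a solar mass
```

A solar-mass black hole is thus vastly colder than the cosmic microwave background, and at present absorbs far more radiation than it emits.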
This result provides a simple way to understand the periodicity of the Lorentzian Greens functions in imaginary time. But it does more: it allows a steepest descent (“instanton”) approximation to the gravitational path integral and a semiclassical derivation of the Bekenstein-Hawking entropy [@carGibbonsd]. The key ingredient is the observation that on a manifold with boundary, the ordinary Einstein-Hilbert action must be supplemented by a boundary term, without which it may have no extrema [@carGibbonsd; @carRegge]. At an extremum, the “bulk” contribution to the action, $$\frac{1}{16\pi G}\int d^4x\,\sqrt{|g|}R ,$$ vanishes, but the boundary term can give a nonzero contribution. In the original work in this field, the boundary term was taken at infinity [@carGibbonsd; @carHawkingc], but it may more intuitively be placed at the origin of the Euclidean black hole, that is, at the horizon [@carBTZa; @carTeitelboim; @carHawkingHorowitz]. This boundary term may be evaluated in a number of ways—a particularly elegant approach involves dimensional reduction to a disk in the $\rho$–$\tau$ plane [@carBTZa]—and yields an extremal action $${\bar I}_{\scriptscriptstyle\mathit Euc} = \frac{\ A_{\mathit\scriptstyle horizon}}{4\hbar G} - \beta(M + \Omega J + \Phi Q) . \label{carGravinst2}$$ This Euclidean saddle point contributes $e^{{\bar I}_{\scriptscriptstyle\mathit Euc}}$ to the partition function, and from (\[carGravinst2\]), we can recognize the result as the grand canonical partition function for a system with entropy $S_{\scriptscriptstyle\mathit BH} = A_{\mathit\scriptstyle horizon}/4\hbar G$. These results can be extended to much more general stationary configurations containing horizons [@carHawkingHunter]. The essential ingredient is a Killing vector with zeros, which become boundaries upon continuation to Euclidean signature. 
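It is worth pausing on the size of the entropy appearing in (\[carGravinst2\]). A back-of-the-envelope evaluation (mine, in SI units) of $S_{\scriptscriptstyle\mathit BH} = A_{\mathit\scriptstyle horizon}/4\hbar G$ for a solar-mass Schwarzschild black hole:

```python
import math

# Dimensionless Bekenstein-Hawking entropy S/k_B = A/(4 l_p^2) for a
# Schwarzschild black hole: A = 16 pi (G M / c^2)^2 and l_p^2 = hbar G / c^3
# combine to give S/k_B = 4 pi G M^2 / (hbar c).
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
G = 6.67430e-11          # m^3 kg^-1 s^-2

def bh_entropy(M):
    """S/k_B for a Schwarzschild black hole of mass M (kg)."""
    return 4 * math.pi * G * M**2 / (hbar * c)

S_sun = bh_entropy(1.989e30)   # ~ 1e77
```

For comparison, the ordinary thermal entropy of the Sun itself is of order $10^{58}$; the black hole wins by some nineteen orders of magnitude.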
One can also obtain an equivalent result by canonically quantizing the system while including the boundary terms; the boundary term at the horizon gives rise to a new term in the Wheeler-DeWitt equation, from which one can again recover the Bekenstein-Hawking entropy [@carCarTeit]. ### Black hole pair creation A further path integral derivation of black hole entropy comes from studying the spontaneous pair creation rate for black holes in a background magnetic field [@carGarfinkle], electric field [@carBrown], de Sitter space [@carMannRoss], or more complicated combinations of external fields [@carBooth]. One consistently finds that the production rate is enhanced by a factor of $e^{S_{BH}}$, exactly the phase space factor one would expect for a system in which the Bekenstein-Hawking entropy gives the logarithm of the number of states. ### Quantum field theory and the eternal black hole \[carEternal\] Yet another derivation of Hawking radiation comes from considering quantum field theory on an eternal black hole background. Recall that in Kruskal coordinates, a black hole spacetime splits into four regions, as shown in figure \[carFig1\]. *(Figure \[carFig1\]: the Kruskal diagram of an eternal black hole, showing the exterior region $I$, the second asymptotic region $II$, and a Cauchy surface $\Sigma$ passing through the bifurcation sphere.)* Consider a state defined on a Cauchy surface $\Sigma$ that passes through the bifurcation sphere. Region $II$ is invisible to an observer living in region $I$, so such an observer should trace over the degrees of freedom in that region. Even if the initial state is pure, such a trace will lead to a density matrix describing the physics in region $I$.
This makes it plausible that the region $I$ observer will see thermal behavior, and detailed calculations show that this is indeed the case. In particular, for a free quantum field there is at most one quantum state, the Hartle-Hawking vacuum state, that is regular everywhere on the horizon [@carWaldbk; @carKay]. For a scalar field, a direct computation shows that the density matrix obtained by tracing over region $II$ is thermal, with a temperature $T_{\scriptscriptstyle\mathit Hawking}$ [@carIsraelb]. For more general fields, the same can be shown by means of fairly sophisticated quantum field theory [@carWaldbk; @carKay], or by general path integral arguments [@carJacobsond]. ### Quantum gravity in 2+1 dimensions \[car3D\] Most standard derivations of black hole thermodynamics hold in an arbitrary number of dimensions, with changes only in the greybody factors for Hawking radiation. In three spacetime dimensions, though, many approaches become much simpler. The BTZ solution [@carBTZ; @carCarlipa] is a vacuum solution of the Einstein field equations in 2+1 dimensions with a negative cosmological constant. It has all of the standard features of a rotating black hole—an event horizon, an inner Cauchy horizon, the same causal structure as that of a (3+1)-dimensional asymptotically anti-de Sitter black hole—but is, at the same time, a space of constant negative curvature. This latter feature greatly simplifies many derivations: for example, Greens functions can be computed exactly and their periodicity in imaginary time exhibited explicitly (see [@carCarlipa] for a review). As was first suggested in [@carCarlipb], it might even be possible to use the relationship between three-dimensional general relativity and two-dimensional conformal field theory [@carWitten] to find an exact description of the quantum states of the BTZ black hole; the present status of this conjecture is discussed in [@carCarlipc]. 
The simplicity of the (2+1)-dimensional setting also permits an approach that is not readily available in higher dimensions. The methods I have described so far are based on properties of quantum fields in a classical, or at best semiclassical, black hole background. In three dimensions, one can work in the opposite direction, starting with a *quantum* black hole coupled to a *classical* source. As I shall discuss further in section \[carStates\], three-dimensional gravity with a negative cosmological constant is closely related to a two-dimensional field theory living at the “boundary” of asymptotically anti-de Sitter space. Emparan and Sachs have shown how to couple this two-dimensional field theory to a classical scalar field, allowing the computation of transition rates among black hole states due to emission and absorption of the classical field [@carEmparan]. By using detailed balance arguments, they recover standard Hawking radiation, including the correct greybody factors, from this fundamentally quantum gravitational picture. ### Other microscopic approaches The derivations I have described so far are essentially “thermodynamic,” based on macroscopic properties of black holes. As I shall discuss in the following sections, we now also have a large number of “statistical mechanical” derivations, based on analyses of the microscopic states of the black hole. These microscopic approaches are not complete—string theory derivations, for example, are most reliable for extremal and near-extremal black holes, while loop quantum gravity derivations contain an order one parameter that, so far, must be adjusted by hand—but they seem to work well within their ranges of validity. When combined with the macroscopic approaches above, they provide strong evidence for the reality of black hole thermodynamics. 
Black Hole Statistical Mechanics
================================

In ordinary thermodynamic systems, thermal properties are macroscopic reflections of the underlying microscopic physics. Temperature is a measure of the average energy of the constituents of a system, for instance, while entropy is essentially the logarithm of the number of states with specified macroscopic properties. The connection between the microscopic and macroscopic properties, given by statistical mechanics, has been remarkably successful across physics. Given the thermodynamic properties of black holes, it is natural to ask whether these, too, have a statistical mechanical interpretation. Such an explanation would almost certainly involve quantum gravity—the Bekenstein-Hawking entropy (\[carintro1\]) involves both Planck’s constant $\hbar$ and Newton’s constant $G$—and we might hope to learn something about the deep mysteries of quantum gravity. To find such a statistical mechanical description, one should, in principle, carry out a number of steps:

1. find a candidate quantum theory of gravity (not an easy task);

2. identify black holes in the theory (also not easy);

3. identify observables such as horizon area (surprisingly hard — finding physical observables in a quantum theory of gravity is notoriously difficult [@carCarliprev]);

4. count the microstates for a black hole configuration (perhaps easier, but still not trivial);

5. compare to the Bekenstein-Hawking entropy (perhaps relatively easy);

6. compute interactions with external fields, evaluate Hawking radiation, etc. (not at all easy);

7. try to identify new quantum gravitational effects (the horizon area spectrum? evaporation remnants? higher order corrections to the Bekenstein-Hawking entropy? correlations across the horizon?).

Until recently, these steps seemed far beyond reach.
In 1996, though, Strominger and Vafa published a remarkable paper in which they explicitly computed the entropy of a class of extremal black holes in string theory from the microscopic quantum theory [@carStrominger]. Since then, a flood of new microscopic derivations of black hole thermodynamics has appeared. The new puzzle—the “problem of universality”—is that although these derivations seem to be using very different methods to count very different states, they all obtain the same thermodynamic properties. The many faces of black hole statistical mechanics -------------------------------------------------- In this section, I will briefly review some of the statistical mechanical approaches to black hole thermodynamics, and in particular the Bekenstein-Hawking entropy. As in section \[carManyT\], I will not go into detail, but will instead try to provide an overall flavor of the work, along with references for further study. ### String theory: weakly coupled strings and branes \[carWeak\] The first breakthrough in the counting of black hole microstates came with the work of Strominger and Vafa on extremal black holes in string theory [@carStrominger]. Their approach can be summarized as follows. The effective low-energy field theory coming from string theory contains a number of gauge fields, each of which can give a charge to a black hole. An extremal supersymmetric (BPS) black hole is uniquely characterized by its charges; in particular, its horizon area can be expressed in terms of these charges. Given such a black hole, one can imagine tuning down the couplings, weakening gravity until the black hole “dissolves” into a gas of weakly coupled strings and branes. In this weakly coupled system, the charges can be expressed in terms of the number of strings and branes and the quantized momentum carried by strings. Furthermore, the states—the excitations of the string-brane system—can be explicitly counted [@carMathur0].
We can therefore write the number of states in terms of the numbers of strings and branes, and thus the charges. Comparing this number to the horizon area, we recover the standard Bekenstein-Hawking entropy as the logarithm of the number of states. One might worry that the number of states might not be the same in the weakly coupled system as in the strongly coupled black hole. For the supersymmetric case, though, this number is protected by nonrenormalization theorems. For black holes far from extremality, on the other hand, the computations are much more difficult; there are qualitative arguments that give an entropy proportional to the horizon area, but the exact proportionality factor of $1/4$ is difficult to obtain [@carSusskind; @carHorowitzPol]. It was quickly realized that the Strominger-Vafa results could be extended to a wide variety of extremal and near-extremal black holes, and through duality relations to a number of nonextremal black holes as well. Nice reviews can be found in [@carPeet] and [@carDas]; for recent progress on the four-dimensional Kerr black hole, see [@carHorowitz]. This string theory approach has been remarkably successful, determining not only the Bekenstein-Hawking entropy for extremal and near-extremal black holes, but also describing their interactions with other fields and their emission of Hawking radiation. The method has one peculiarity, though, to which I will return below. Suppose you ask me for the entropy of a three-charge black hole in five dimensions. I will compute the horizon area in the strongly coupled theory in terms of the charges, compute the number of states in the weakly coupled theory in terms of the charges, compare the two, and reply that the entropy is one-fourth of the horizon area. 
If you now ask me for the entropy of a four-charge black hole, or a black hole in six dimensions, I cannot simply tell you that it is one-fourth of the horizon area; I must recompute the horizon area and the number of states in terms of the new parameters and compare the answers again. Each new black hole requires a new calculation: the theory tells us that the number of microstates of a black hole matches the Bekenstein-Hawking entropy (\[carintro1\]), but it tells us so one black hole at a time. ### String theory: “fuzzballs” One can run the argument in the preceding section backwards: given a particular excitation of the weakly coupled string and brane system, one can ask exactly what geometry results at strong coupling. The result is a “fuzzball” picture, in which particular black hole states correspond to complicated geometries that have *no* horizon or singularity, but that look very much like black hole geometries outside the would-be horizon. In special cases, one can count the number of such “fuzzball” geometries and reproduce the Bekenstein-Hawking entropy, and it seems likely that this result can be extended to more general black holes, although it is an open question whether simple geometric descriptions will always suffice [@carSkenderis]. Samir Mathur has discussed this approach extensively in his lectures, to which I refer the reader [@carMathur0]. ### String theory: the AdS/CFT correspondence \[carAdSCFT\] Yet another string theory approach to black hole statistical mechanics is based on Maldacena’s celebrated AdS/CFT correspondence [@carMalda; @carAGMOO]. This very well-supported conjecture asserts a duality between string theory in $d$-dimensional asymptotically anti-de Sitter spacetime and a conformal field theory in a flat $(d-1)$-dimensional space that can, in a sense, be viewed as the boundary of the AdS spacetime.
This correspondence is naturally “holographic” (see section \[carHolography\]), describing the black hole in terms of a lower-dimensional theory and thus offering a framework for understanding the dependence of entropy on area rather than volume. For asymptotically anti-de Sitter black holes, this correspondence makes it possible to compute entropy by counting states in a (nongravitational) dual conformal field theory. The simplest case is the (2+1)-dimensional BTZ black hole discussed in section \[car3D\], whose dual is a two-dimensional conformal field theory. As I shall discuss in section \[carCardyformula\], the density of states in such a theory has an asymptotic behavior controlled by a single parameter, the central charge $c$. For asymptotically anti-de Sitter gravity in 2+1 dimensions, this central charge is dominated by a classical contribution, which was discovered some time ago by Brown and Henneaux [@carBrownHen]. Strominger [@carStromingerb] and Birmingham, Sachs, and Sen [@carBSS] independently realized that this result could be used to compute the BTZ black hole entropy, reproducing the Bekenstein-Hawking expression. While this result applies directly only to the special case of three-dimensional spacetime, it has an important generalization. Many of the higher dimensional near-extremal black holes of string theory—including black holes that are not themselves asymptotically anti-de Sitter—have a near-horizon geometry of the general form $\mathit{BTZ}\times\mathit{trivial}$, where the “trivial” part merely renormalizes constants in the calculation of entropy. As a consequence, the BTZ results can be used to find the entropy of a large class of stringy black holes, including most of the black holes whose states can be counted in the weak coupling approach of section \[carWeak\] [@carSkenderisb]. ### Loop quantum gravity \[carLoop\] In the quest for quantum gravity, the leading alternative to string theory is loop quantum gravity [@carRovelli]. 
The fundamental “position” variable in this theory is a three-dimensional $\mathrm{SU}(2)$ connection; a state is a complex-valued function of (generalized) connections. A useful basis of states consists of spin networks, graphs with edges labeled by $\mathrm{SU}(2)$ representations (“spins”) and vertices labeled by intertwiners. A spin network state can be evaluated on a given connection to give a complex number by computing the holonomies along the edges in the specified representations and combining them with the intertwiners at the vertices. Given a surface $\Sigma$, one can define an area operator ${\hat A}_\Sigma$ that acts on loop quantum gravity states. It may be shown that spin networks are eigenfunctions of these operators, with eigenvalues of the form $$A_\Sigma = 8\pi\gamma\hbar G \sum_j \sqrt{j(j+1)} ,$$ where the sum is over the spins $j$ of edges of the spin network that cross $\Sigma$. The parameter $\gamma$, the Barbero-Immirzi parameter, represents a quantization ambiguity, and its physical significance is poorly understood; theories with different values of $\gamma$ may be inequivalent, but it has been suggested that $\gamma$ may not appear in properly renormalized observables [@carJacobsone] or in a slightly different approach to quantization [@carAlexandrov]. Given this structure, a natural first attempt to count black hole states is to enumerate inequivalent spin networks crossing the horizon that yield a specified area [@carKrasnov; @carRovellib]. A more careful version of this idea [@carABCK; @carABK] takes into account the fact that when one restricts to a black hole spacetime, one must place “boundary conditions” on the horizon to ensure that it is, in fact, a horizon. These conditions, in turn, require the addition of boundary terms to the Einstein-Hilbert action, which induce a three-dimensional Chern-Simons action on the horizon.
The number of states of this Chern-Simons theory is closely related to the number of spin networks that induce the correct horizon area, but with slightly more subtle combinatorics. The ultimate result is that the black hole entropy takes the form [@carDomagala; @carMeissner] $$S = \frac{\gamma_M}{\gamma} \frac{\ A_{\mathit\scriptstyle horizon}}{4\hbar G} , \label{carLoopQG1}$$ where $\gamma$ is the Barbero-Immirzi parameter and $$\gamma_M \approx 0.23753$$ is a numerical constant determined as the solution of a particular combinatoric problem. If one chooses $\gamma=\gamma_M$, one thus recovers the standard Bekenstein-Hawking entropy. The physical significance of this rather peculiar value of the Barbero-Immirzi parameter is not understood, and it may reflect an inadequacy in the quantization procedure or the definition of the area operator [@carAlexandrov]. Note, though, that $\gamma$ only appears in the combination $G\gamma$, so this choice may be viewed as a finite renormalization of Newton’s constant. If the same shift occurs in the Newtonian attraction between two masses, this interpretation becomes straightforward. Unfortunately, the Newtonian limit of loop quantum gravity is not yet well enough understood to see whether this is the case. In any case, though, once $\gamma$ is fixed for one type of black hole—the static Schwarzschild solution, say—the loop quantum gravity computations give the correct entropy for a wide variety of others, including charged black holes, rotating black holes, black holes with dilaton couplings, black holes with higher genus horizons, and black holes with arbitrarily distorted horizons [@carAshtekarLew; @carEngle]. In particular, there is no need to restrict oneself to near-extremal black holes. Hawking radiation, on the other hand, is not yet very well understood in this approach, although there has been some progress [@carBarreira; @carKrasnovb].
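For the curious, the constant $\gamma_M$ can be reproduced numerically. In the counting of [@carDomagala; @carMeissner], $\gamma_M$ solves $\sum_{k=1}^\infty 2\,e^{-\pi\gamma_M\sqrt{k(k+2)}} = 1$; I take this form of the condition from the cited papers, and the sketch below simply solves it by bisection:

```python
import math

# Numerical check of gamma_M. Condition (from Domagala-Lewandowski and
# Meissner): sum over k >= 1 of 2*exp(-pi*gamma*sqrt(k*(k+2))) equals 1.

def level_sum(gamma, kmax=200):
    """Left-hand side of the condition; terms decay geometrically in k."""
    return sum(2.0 * math.exp(-math.pi * gamma * math.sqrt(k * (k + 2)))
               for k in range(1, kmax + 1))

def solve_gamma(lo=0.1, hi=0.5, tol=1e-12):
    """Bisect for level_sum(gamma) = 1; level_sum decreases with gamma."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if level_sum(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

gamma_M = solve_gamma()   # ~ 0.2375

# Smallest nonzero area eigenvalue (j = 1/2), in units hbar*G = 1, from the
# spectrum A = 8*pi*gamma*sqrt(j*(j+1)) quoted earlier: about 5.2 l_p^2.
a_min = 8 * math.pi * gamma_M * math.sqrt(0.5 * 1.5)
```

The order-one area quantum is one reason the horizon area spectrum is sometimes suggested as a place where loop quantum gravity could make distinctive predictions.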
An alternate approach to black hole entropy also exists within the framework of loop quantum gravity [@carLivine]. Here, one again looks at a horizon area determined by edges of a spin network, but instead of counting states in an induced boundary theory, one merely counts the number of ways the spins can be joined to a single interior vertex. This amounts, in essence, to completely coarse-graining the interior state of the black hole, and is comparable in spirit to the thermodynamic derivation of section \[carEternal\]. One again obtains an entropy proportional to the horizon area, although with a different value of the Barbero-Immirzi parameter. ### Induced gravity In 1967, Sakharov suggested that the Einstein-Hilbert action for gravity might not be fundamental [@carSakharov]. If one starts with a theory of fields propagating in a curved spacetime, counterterms from renormalization will automatically induce a gravitational action, which will almost always include an Einstein-Hilbert term at lowest order [@carAdler]. Gravitational dynamics would then be, in Sakharov’s terms, a sort of “metric elasticity” induced by quantum fluctuations. One can write down an explicit set of “heavy” fields that can be integrated out in the path integral to induce the Einstein-Hilbert action. By including nonminimally coupled scalar fields, one can obtain finite values of Newton’s constant and the cosmological constant. It is then possible to go back and count states of the heavy fields in a black hole background [@carFrolovb]. The nonminimal couplings lead to some subtleties in the definition of entropy, but in the end the computation reproduces the standard Bekenstein-Hawking value. Furthermore, the reduction to a two-dimensional conformally invariant system near the horizon, in the spirit of the thermodynamic approach of section \[carStress\], allows a counting of states by standard methods of conformal field theory [@carFrolovc].
We thus obtain a new, and apparently quite different, view of the microstates of a black hole as those of the ordinary quantum fields responsible for inducing the gravitational action. ### Entanglement entropy As discussed in section \[carEternal\], one way to obtain the thermodynamic properties of a black hole is to trace out the degrees of freedom behind the horizon, generating a density matrix for the external observer from a globally pure state. This process also produces a quantum mechanical “entanglement entropy,” which can be thought of as a measure of the loss of information about correlations across the horizon. The suggestion that this entanglement entropy might account for the Bekenstein-Hawking entropy is an old one [@carBombelli; @carSrednicki], and it is not hard to show that for many (although not all [@carRequardt]) states, the entanglement entropy is proportional to the horizon area: the main contribution comes from correlations among degrees of freedom very close to the horizon, and does not involve “bulk” degrees of freedom. The *coefficient* of this entropy, on the other hand, is infinite, and must be cut off [@cartHooft], leading to an expression that depends strongly on both the nongravitational content of the theory (the number and species of “entangled” fields contributing to the entropy) and the value of the cutoff. The same modes that cause the entanglement entropy to diverge also give divergent contributions to the renormalization of Newton’s constant, and it has been suggested that the two divergences may compensate [@carSusskindb]. This notion has recently gained new life with a proposal by Ryu and Takayanagi for a “holographic” description of entanglement entropy [@carRyu; @carHubeny], in which the $d$-dimensional spacetime containing a black hole is embedded at the asymptotic boundary of $(d+1)$-dimensional anti-de Sitter space. 
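The trace-out construction behind entanglement entropy can be illustrated in miniature with a single pair of "interior/exterior" qubits; the sketch below is purely illustrative and carries no gravitational content:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) of an "interior" and an "exterior" qubit
psi = np.zeros(4)
psi[0] = psi[3] = 1/np.sqrt(2)
rho = np.outer(psi, psi)                  # globally pure density matrix

# Trace out the interior qubit: index order (out, in, out', in')
rho_out = np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))

# von Neumann entropy S = -Tr(rho ln rho) of the exterior state
p = np.linalg.eigvalsh(rho_out)
S = -sum(x*np.log(x) for x in p if x > 1e-12)
print(S)   # ln 2: a globally pure state looks mixed from outside
```

The global state is pure ($S=0$), but the reduced exterior state is maximally mixed, with entropy $\ln 2$ per traced-out correlated pair, which is why the horizon-area scaling of the entanglement entropy comes from the many short-distance correlated modes straddling the horizon.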
The idea is inspired by the string theory AdS/CFT correspondence, and can be largely proven to work in situations in which such a correspondence exists [@carFursaev]; the bulk anti-de Sitter metric provides a natural cutoff, yielding finite contributions to both $S$ and $G$. When applied to a black hole, the proposal correctly reproduces the standard Bekenstein-Hawking entropy [@carEmparanb], providing yet another physical picture of the relevant microstates. ### Other approaches A variety of other microscopic descriptions of black hole thermodynamics have also been proposed. In the causal set formulation of quantum gravity—in which a continuous spacetime is replaced by a discrete set of points with prescribed causal relations—there is evidence that the Bekenstein-Hawking entropy is given by the number of points in the future domain of dependence of a spatial cross-section of the horizon [@carRideout]. York has estimated the entropy obtained by quantizing the quasinormal modes [@carSiopsis] of the Schwarzschild black hole, finding a result that lies within a few percent of the Bekenstein-Hawking value [@carYork]. Black hole entropy can be related to the Kolmogorov-Sinai entropy of a string spreading out on the black hole horizon [@carRopotenko]. A number of mini- and midisuperspace models—models in which most of the degrees of freedom of the gravitational field are frozen out—have also been proposed to explain black hole statistical mechanics [@carVaz; @carMakela; @carKiefer], though none is yet very convincing. One can also build “phenomenological” models of black hole microstates, in which the horizon area is simply assumed to be quantized [@carBekensteind; @carKastrup; @carBarvinsky; @carBekensteine]. Such models do not, of course, tell us *why* area is quantized, and thus do not address the fundamental physical questions of black hole statistical mechanics, but they can suggest useful directions for further research. 
Suppose, for example, that the black hole area spectrum is discrete and equally spaced, and that the exponential of the entropy (\[carintro1\]) gives an exact count of the number of states at a given horizon area. Then the ratio of the number of states at two adjacent area eigenvalues must be an integer; that is, $$\Delta A = 4\hbar G \ln k \label{carOther1}$$ for some integer $k$. Hod has pointed out [@carHod] that for the Schwarzschild black hole, the most highly damped quasinormal modes [@carSiopsis]—the damped “ringing modes” of an excited black hole—have frequencies whose real part approaches $$\mathop{Re}\omega = \frac{\ln 3}{8\pi GM}$$ (a numerical result later verified analytically [@carMotl]). If one applies the Bohr correspondence principle and argues that area eigenstates of the black hole should change by emission of quanta of energy $\hbar\omega$, one obtains $$\Delta A = 32\pi G^2M\Delta M = 4\hbar G\ln 3 ,$$ matching (\[carOther1\]) with $k=3$. It is not yet clear whether this result has deep significance. It seems to extend to general single-horizon black holes [@carDaghigh] and in a more complicated way to many “stringy” black holes [@carBirminghamCar], but results for charged and rotating black holes are unclear (for an optimistic view, see [@carHodb]). One can also describe the Bekenstein-Hawking entropy as a count of the number of distinct ways that a black hole with specified macroscopic properties can be made from collapsing matter [@carZurek]. Like the phenomenological models of area quantization, this result does not really describe the microscopic degrees of freedom of the black hole itself (except perhaps in the “membrane paradigm” [@carThorne]), but it strongly suggests that if the formation of a black hole is a unitary process, such degrees of freedom must exist.

The problem of universality
===========================

One of the main lessons of the preceding section is that a great many different models of black hole microphysics yield the same thermodynamic properties.
Some of these models are clearly ad hoc, but others are carefully worked out consequences of serious approaches to quantum gravity. So the new question is why everyone is getting the same answer. To some extent, this “problem of universality” is a selection issue: there are undoubtedly computations that gave the “wrong” answer for black hole entropy and were discarded without being published. But as noted in section \[carWeak\], even within a particular well-motivated and successful string theory model we do not yet understand the universality of the entropy-area relationship. And regardless of what one may think about any one particular approach, one must still explain why *any* microscopic model reproduces the results of Hawking’s original thermodynamic computation, a computation that seems to require no information about quantum gravity at all. There are other situations, of course, in which thermodynamic properties do not depend too delicately on an underlying quantum theory. For example, for a large range of parameters the entropy of a box of gas depends only very weakly on whether the molecules are fermions or bosons. But in cases like this, we have a *classical* microscopic description, and the correspondence principle guarantees that the quantum theory will give a good approximation for the classical results. For a black hole, things are different: the only classical description we have is one in which black holes have no hair—no phase space volume—and thus no entropy. We need something new, some new principle that determines the quantum mechanical density of states in terms of the classical characteristics of a black hole. I do not know the ultimate explanation for this universal behavior, but in the remainder of this section, I will make a tentative suggestion and offer some evidence that it may be correct. 
The Cardy formula \[carCardyformula\] ------------------------------------- I only know of one well-understood case in which universality of the sort we see in black hole statistical mechanics appears elsewhere in physics. Consider a two-dimensional conformal field theory, that is, a theory in two spacetime dimensions that is invariant under diffeomorphisms (“generally covariant”) and Weyl transformations (“locally scale invariant”). If we choose complex coordinates $z$ and ${\bar z}$, the basic symmetries of such a theory are the holomorphic and antiholomorphic diffeomorphisms $z\rightarrow f(z)$, ${\bar z}\rightarrow{\bar f}({\bar z})$. These are canonically generated by “Virasoro generators” $L[\xi]$ and ${\bar L}[{\bar \xi}]$ [@carCFT]. Such a theory has two conserved charges, $L_0 = L[\xi_0]$ and ${\bar L}_0 = {\bar L}[{\bar\xi}_0]$, which can be thought of as “energies” with respect to constant holomorphic and antiholomorphic transformations, or alternatively as linear combinations of energy and angular momentum. As generators of diffeomorphisms, the Virasoro generators have an algebra that is almost unique [@carTeitelboimb]: $$\begin{aligned} &\left\{L[\xi],L[\eta]\right\} = L[\eta\xi' - \xi\eta'] + \frac{c}{48\pi}\int dz\left( \eta'\xi'' - \xi'\eta''\right) \nonumber \\ &\left\{L[\xi],{\bar L}[{\bar\eta}]\right\} = 0 \label{carCardyform1} \\ &\left\{{\bar L}[{\bar\xi}],{\bar L}[{\bar\eta}]\right\} = {\bar L}[{\bar\eta}{\bar\xi'} - {\bar\xi}{\bar\eta'}] + \frac{{\bar c}}{48\pi}\int d{\bar z}\left({\bar\eta}'{\bar\xi}'' - {\bar\xi}'{\bar\eta}''\right)\nonumber .\end{aligned}$$ The central charges $c$ and $\bar c$ determine the unique central extension of the ordinary algebra of diffeomorphisms. These constants can occur classically, coming, for instance, from boundary terms in the generators [@carBrownHen], or can appear upon quantization. 
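For Fourier modes $\xi = e^{imz}$, $\eta = e^{inz}$ on a circle, the integral in the central term of (\[carCardyform1\]) can be evaluated directly: only $n=-m$ survives, and one recovers the familiar $m^3$ growth of the Virasoro central extension (the $m$-linear piece of the standard $m(m^2-1)$ form corresponds to a shift of $L_0$). A quick symbolic check:

```python
import sympy as sp

z = sp.symbols('z', real=True)
m = sp.symbols('m', integer=True)

xi = sp.exp(sp.I*m*z)     # holomorphic mode xi_m
eta = sp.exp(-sp.I*m*z)   # only eta_{-m} survives integration on the circle

# integrand of the central term in the bracket {L[xi], L[eta]}
integrand = sp.simplify(sp.diff(eta, z)*sp.diff(xi, z, 2)
                        - sp.diff(xi, z)*sp.diff(eta, z, 2))   # = 2*I*m**3
central = sp.integrate(integrand, (z, 0, 2*sp.pi))             # = 4*pi*I*m**3
# Multiplying by c/48pi gives (i c/12) m^3, the usual central term
# (the factor of i separates Poisson brackets from commutators).
assert sp.simplify(central - 4*sp.pi*sp.I*m**3) == 0
```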
Now consider a conformal field theory for which the lowest eigenvalues of $L_0$ and ${\bar L}_0$ are nonnegative numbers $\Delta_0$ and ${\bar\Delta}_0$. In 1986, Cardy discovered a remarkable result [@carCardy; @carCardyb]: the density of states $\rho(\Delta,\bar\Delta)$ at eigenvalues $(\Delta,{\bar\Delta})$ of $L_0$ and ${\bar L}_0$ has the simple asymptotic behavior $$\ln\rho(\Delta,\bar\Delta) \sim 2\pi\left\{ \sqrt{\frac{c_{\hbox{\scriptsize\it eff}}\Delta}{6}} + \sqrt{\frac{{\bar c}_{\hbox{\scriptsize\it eff}}{\bar\Delta}}{6}}\, \right\}, \ \ \ \hbox{with}\ \ c_{\hbox{\scriptsize\it eff}} = c-24\Delta_0, \ {\bar c}_{\hbox{\scriptsize\it eff}} = {\bar c}-24{\bar\Delta}_0 . \label{carCardyform2}$$ The entropy is thus determined by the symmetry, independent of any other details—exactly the sort of universality we are looking for. A typical black hole is neither two-dimensional nor conformally invariant, of course, so this result may at first seem irrelevant. But there is a sense in which black holes become *approximately* two-dimensional and conformal near the horizon. For fields in a black hole background, for instance, excitations in the $r$–$t$ plane become so blue shifted relative to transverse excitations and dimensionful quantities that an effective two-dimensional conformal description becomes possible [@carBirm; @carGupta; @carCamblong]. Indeed, as noted in section \[carStress\], the full Hawking radiation spectrum can be derived from such an effective description [@carIso; @carIsob]. Martin, Medved, and Visser have further shown that a generic near-horizon region has a conformal symmetry, in the form of an approximate conformal Killing vector [@carMartinMed; @carMartinMedb]. Horizons and constraints ------------------------ For the special case of the (2+1)-dimensional BTZ black hole, the Cardy formula can be used directly to count states. 
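For the non-rotating BTZ black hole, this counting can be checked arithmetically. A small sketch, assuming the standard relations $M = r_+^2/8G\ell^2$, the Brown-Henneaux central charge $c = {\bar c} = 3\ell/2G$, and $\Delta = {\bar\Delta} = M\ell/2$ for $J=0$ (units $\hbar = c = 1$):

```python
import math

G, ell = 1.0, 1.0        # Newton's constant and AdS_3 radius
r_plus = 3.7             # arbitrary horizon radius

M = r_plus**2/(8*G*ell**2)       # BTZ mass, non-rotating (J = 0)
c = cbar = 3*ell/(2*G)           # Brown-Henneaux central charges
Delta = Deltabar = M*ell/2       # L_0, Lbar_0 eigenvalues for J = 0

# Cardy formula vs. Bekenstein-Hawking entropy
S_cardy = 2*math.pi*(math.sqrt(c*Delta/6) + math.sqrt(cbar*Deltabar/6))
S_BH = (2*math.pi*r_plus)/(4*G)  # horizon "area" in 3d = circumference
assert abs(S_cardy - S_BH) < 1e-9
```

With these relations the agreement is an algebraic identity, holding for any $r_+$, not merely an asymptotic match.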
For this solution, the boundary at infinity is geometrically a two-dimensional flat cylinder, and the asymptotic diffeomorphisms that respect boundary conditions satisfy a Virasoro algebra with a classical central charge [@carBrownHen], which can be used in the Cardy formula [@carStromingerb; @carBSS]. As described in section \[carAdSCFT\], this calculation can be extended to a number of near-extremal black holes whose near-horizon geometry contains a $\mathit{BTZ}$ factor. For more general black holes, though, something new is needed. One key question, I believe, is how to specify that one is talking about a black hole in quantum gravity. One cannot simply require a fixed metric: the components of the metric do not all commute, and cannot be simultaneously specified in a quantum theory. For the BTZ case, the key element is a set of boundary conditions at infinity, but in general it seems more natural to consider conditions at the horizon. Two approaches to this question are currently under investigation, each leading to an effective two-dimensional conformal description in which the Cardy formula might be applicable. ### The horizon as a boundary The first approach [@carCarlipd; @carCarlipe] is to introduce “boundary conditions” at the horizon. The horizon is not, of course, a genuine boundary, but it is a place at which we must restrict the value of the metric, precisely to ensure that it is a horizon. As in the BTZ case, such a restriction forces us to add new boundary terms to the canonical generators of diffeomorphisms, changing their algebra. One finds a conformal symmetry in the $r$–$t$ plane with a classical central charge. For a large variety of black holes, it has been shown that the Cardy formula then yields the correct entropy.[^4] On the other hand, the diffeomorphisms whose algebra yields that central charge, essentially those that leave the lapse function invariant, are generated by vector fields that blow up at the horizon. 
This is not necessarily a bad thing—from the perspective of an external observer, many physical quantities diverge at the horizon—but the status of these transformations is not clear. In addition, the “horizon as boundary” method has trouble with the two-dimensional black hole, and some normalization issues are not completely sorted out. A related approach is to look for approximate conformal symmetry near the horizon [@carSolo; @carCarlipg]; one again finds a Virasoro algebra with a central charge that seems to lead to the correct entropy, but there are again some normalization ambiguities.

### Horizon constraints \[carHorizoncon\]

A more recent approach [@carCarliph; @carCarlipi] is to impose the presence of a horizon by adding “horizon constraints” in the canonical formulation of gravity, that is, introducing new constraints that restrict data on a specified surface to be that of a black hole horizon. In outline, the procedure is this:

1. dimensionally reduce to the two-dimensional $r$–$t$ plane near the horizon;

2. continue to Euclidean signature, shrinking the horizon to a point as in section \[carInstantons\], and evolve radially;

3. impose constraints on a small circle around the horizon that force the initial data to be that of a “stretched horizon”;

4. adjust the diffeomorphism constraints on the stretched horizon a la Bergmann and Komar [@carBergmann; @carDirac; @carDiracb] to make them commute with the new horizon constraints;

5. find the resulting algebra and central charge.

The Cardy formula again reproduces the correct Bekenstein-Hawking entropy.

### Universality again

If either of these approaches is to be an answer to the “problem of universality,” it must be that the horizon conformal symmetries are secretly present in the various other computations of black hole entropy. I do not know whether this is the case; it is a subject of continuing research.
One fairly simple test is to compare the near-horizon Virasoro algebra of section \[carHorizoncon\] with the asymptotic Virasoro algebra of the BTZ black hole, which is the key element in the AdS/CFT computations of section \[carAdSCFT\]. It is shown in [@carCarlipi] that after a suitable matching of coordinate choices, the central charges and conformal weights exactly coincide, providing one piece of evidence for the proposed explanation of universality. There is also an intriguing link to the loop quantum gravity approach of section \[carLoop\]: the induced horizon Chern-Simons theory in loop quantum gravity is naturally associated with a two-dimensional conformal field theory [@carWittenb], whose central charge matches the horizon central charge of section \[carHorizoncon\]. Searches for hidden conformal symmetry in loop quantum gravity, the fuzzball approach, and induced gravity are currently underway. What are the states? \[carStates\] ---------------------------------- In light of the problem of universality, is there anything general we can say about the states responsible for black hole thermodynamics? At first sight, the answer must be “no”: if a universal underlying structure controls the density of states, there should be many different models with different degrees of freedom but with the same thermodynamic properties. Nevertheless, it may still be possible to find an *effective* description that is valid across models. To see this, let us first return to the BTZ black hole. In three spacetime dimensions, general relativity has a peculiar feature: it is a topological theory, with no propagating degrees of freedom [@carCarlipj]. Where, then, do the black hole degrees of freedom come from? The answer to this paradox is at least partially understood [@carCarlipc]. For the (2+1)-dimensional Einstein-Hilbert action to have any black hole extrema, one must impose anti-de Sitter boundary conditions at infinity. 
Diffeomorphisms that do not respect these boundary conditions are no longer true invariances of the theory, and states one might naively take to be physically equivalent—states that differ only by a diffeomorphism—must be considered distinct if the diffeomorphism connecting them is incompatible with the boundary conditions. New physical degrees of freedom thus appear, which can be labeled by diffeomorphisms that fail to respect the anti-de Sitter boundary conditions. The action for these new degrees of freedom can be extracted explicitly from the Einstein-Hilbert action [@carCarlipk], and the resulting dynamics is that of a Liouville theory, a two-dimensional conformal field theory whose central charge matches the classical value obtained by Brown and Henneaux [@carBrownHen]. Whether one can actually count the states in this theory to reproduce the Bekenstein-Hawking entropy remains an open question [@carCarlipc; @carChen]. For higher dimensional black holes, the problem is quite a bit more difficult. One possible approach is to start with the Virasoro algebra (\[carCardyform1\]) for the near-horizon conformal algebra of section \[carHorizoncon\]. In Dirac quantization, the existence of a constraint ordinarily restricts the physical states: we should require that $$L[\xi]|\mathit{phys}\rangle = {\bar L}[{\bar\xi}]|\mathit{phys}\rangle = 0 . \label{carWhatstates1}$$ But if the central charge $c$ is nonzero, these conditions are incompatible with the algebra (\[carCardyform1\]). The solution is known in conformal field theory—one can, for instance, require only that the positive frequency parts of the Virasoro generators annihilate physical states [@carCFT]—but the result is much the same as for the BTZ black hole: certain states that were originally counted as nonphysical have now become physical. 
While it is not exactly the same, this phenomenon is reminiscent of the Goldstone mechanism [@carWeinbergb], in which a spontaneously broken symmetry leads to massless excitations in the “broken” directions. And like the Goldstone mechanism, it can provide an effective description of degrees of freedom that is independent of their fundamental physical makeup. One way to see whether this picture makes sense is to examine the path integral measure. The effect of adding a central charge to the Virasoro algebra is to make certain constraints second class [@carDirac; @carDiracb]. The presence of such second class constraints leads to a new term in the measure, similar to the Faddeev-Popov determinant in quantum field theory [@carHenneauxTeit]. Such a determinant can be interpreted as a contribution to the phase space volume, or the density of states, and might explain the counting of black hole states. For the present case, the relevant determinant is of the form $$\det \left|-\frac{c}{12}\frac{d^3\ }{dx^3} + \frac{d\ }{dx}L + L\frac{d\ }{dx}\right|^{1/2} \qquad \hbox{with}\ \ L = L_0 + L_1e^{2ix} + L_{-1}e^{-2ix} .$$ Work on evaluating and understanding this expression is in progress. Perhaps the most important test of this idea would be to couple the effective horizon degrees of freedom to external matter and see if one could reproduce Hawking radiation. In 2+1 dimensions, this can be done [@carEmparan]. In higher dimensions, it may be possible to take advantage of the conformal description of Hawking radiation discussed in section \[carStress\], but this remains to be seen. Open Questions ============== Some thirty-five years after the seminal papers of Hawking and Bekenstein, black hole equilibrium thermodynamics is a mature subject. 
The role of trans-Planckian excitations near the horizon, discussed in section \[carBogol\], is not yet fully understood, and questions of possible observational tests remain of great interest, but I will risk the claim that the macroscopic thermodynamic properties of black holes are largely under control. The microscopic, statistical mechanical, picture of the black hole, in contrast, is poorly understood, and is the subject of a great deal of research. This is hardly surprising—black hole microstates are almost certainly quantum gravitational, and we are still far from a complete, compelling theory of quantum gravity. Much of the current research focuses on particular microscopic models of black holes, from string theory, loop quantum gravity, and a number of other perspectives. But there are also some broader open questions. In these lectures, I have emphasized one of these, the problem of universality, mainly because it is a focus of my own research. But I will close by briefly mentioning two other deep questions.

The information loss paradox
----------------------------

Consider a configuration of matter in a pure state—a spherically symmetric state of a scalar field, for instance—that collapses to form a black hole, which then evaporates by Hawking radiation. If Hawking radiation is exactly thermal, and if the black hole evaporates completely, the ultimate result will be a transition from an initial pure state to a final mixed (thermal) state [@carHawkingd]. Such an evolution is not unitary, and seems to violate the basic principles of quantum mechanics. Similarly, we can imagine a black hole held at equilibrium by the continual ingestion of mass to balance its Hawking radiation; this would seem to allow us to convert an arbitrarily large amount of matter from a pure to a mixed state. The solution to this paradox is heavily debated.
If the black hole horizon is fundamental (as it is not in, for instance, the “fuzzball” proposal discussed in Mathur’s lectures [@carMathur0]), there is wide agreement that any answer must involve a breakdown of locality; see, for example, [@carGiddingsb; @carBalasubramanian; @carAshtekarTav]. But there is certainly no consensus as to how such a breakdown might occur. The answer is likely to involve deep problems of quantum gravity, a setting in which nonlocality is both inevitable and very poorly understood [@carCarliprev]. Holography \[carHolography\] ---------------------------- As a count of microscopic degrees of freedom, the Bekenstein-Hawking entropy (\[carintro1\]) has a peculiar feature: the number of degrees of freedom is determined by the area of a surface rather than the volume it encloses. This is very different from conventional thermodynamics, in which entropy is an extensive quantity, and it implies that the number of degrees of freedom grows much more slowly with size than one would expect in an ordinary thermodynamic system. This “holographic” behavior [@cartHooftc; @carSusskindc] seems fundamental to black hole statistical mechanics, and it has been conjectured that it is a general property of quantum gravity. It may be that the generalized second law of thermodynamics requires a similar bound for any matter that can be dropped into a black hole; a nice review of such entropy bounds can be found in [@carBousso]. The AdS/CFT correspondence discussed in section \[carAdSCFT\] is perhaps the cleanest realization of holography in quantum gravity, but it requires specific boundary conditions. A more general formulation proposed by Bousso [@carBoussob] is supported by classical computations [@carFlanagan], and is currently a very active subject of research, extending far beyond its birthplace in black hole physics to cosmology, string theory, and quantum gravity. 
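The scale of this area-versus-volume mismatch is easy to quantify. A rough numerical illustration (entropy in units of $k_B$; for comparison, the Sun's ordinary thermodynamic entropy is roughly $10^{57}$–$10^{58}\,k_B$):

```python
import math

# Bekenstein-Hawking entropy of a solar-mass black hole, S = A / (4 l_p^2)
G, c = 6.674e-11, 2.998e8   # SI units
hbar = 1.055e-34
M_sun = 1.989e30            # kg

l_p2 = hbar*G/c**3          # Planck length squared
r_s = 2*G*M_sun/c**2        # Schwarzschild radius, ~3 km
A = 4*math.pi*r_s**2
S_BH = A/(4*l_p2)
print(f"{S_BH:.2e}")        # ~1e77
```

A solar-mass black hole thus carries some nineteen orders of magnitude more entropy than the star it formed from, even though the count grows only with the area of its horizon.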
These lectures were given during an appointment to the Kramers Chair at Utrecht University, for whose hospitality I am very grateful. This work was supported in part by U.S. Department of Energy grant DE-FG02-91ER40674. Appendix: Black Hole Basics {#appendix-black-hole-basics .unnumbered} =========================== Intuitively, a black hole is a “region of no return,” an area of spacetime from which not even light can escape. For a spacetime that looks asymptotically close enough to Minkowski space, this intuitive picture is formalized by the notion of an event horizon, the boundary of the past of future null infinity, that is, the boundary beyond which no light ray can reach infinity [@carHawkingEllis]. The event horizon has been extensively studied, and has many interesting global properties: for example, it cannot bifurcate and cannot decrease in area. Unfortunately, while the event horizon has nice properties, it does not seem to be quite the right object to capture local physics. The problem is that the event horizon is teleological: that is, its definition requires knowledge of the indefinite future. To illustrate this with a thought experiment, imagine that we are at the center of a highly energetic ingoing spherical shell of light, currently two light years from Earth. Suppose this shell is so energetic that it has a Schwarzschild radius of one light year.[^5] If I now shine a flashlight into the sky, one year from now the light will have traveled one light year, where it will meet the incoming shell just as the shell reaches its own Schwarzschild radius. At that point, the pulse of light from the flashlight will be trapped at the horizon of an ordinary Schwarzschild black hole, and will be unable to travel any farther outward. In other words, in this scenario we are *now* at the event horizon of a black hole, even though we will detect no change in our local observations until we are abruptly crushed out of existence two years from now. 
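For a sense of scale in this thought experiment: a Schwarzschild radius of one light year requires, via $M = c^2 r_s/2G$, a few trillion solar masses. A quick check:

```python
# Mass of a shell whose Schwarzschild radius is one light year: M = c^2 r_s / (2G)
c = 2.998e8          # m/s
G = 6.674e-11        # m^3 kg^-1 s^-2
r_s = 9.461e15       # one light year in meters
M_sun = 1.989e30     # kg

M = c**2 * r_s / (2*G)
print(M / M_sun)     # ~3e12 solar masses
```

So the shell must carry roughly the mass of a few thousand galaxies, but nothing about the scenario is inconsistent: locally, spacetime where we sit remains essentially flat until the shell arrives.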
Since it seems implausible that Hawking radiation “now” can depend on such future events, the event horizon is probably not quite the right object for the study of black hole thermodynamics. Over the past few years, a number of attempts have been made to suitably “localize” the horizon; a nice review can be found in [@carBoothb]. In these lectures, I will mainly use the concept of an “isolated horizon” [@carAshtekarc], a locally defined surface that seems appropriate for equilibrium black hole thermodynamics. An isolated horizon is essentially a null surface whose area remains constant in time, as the horizon of a stationary black hole does. A thought experiment may again be helpful. Imagine a spherical lattice studded with equally spaced flashbulbs, set to all go off at the same time (as measured in the lattice rest frame). When the bulbs flash, they will emit two spherical shells of light, one ingoing and one outgoing. In ordinary nearly flat spacetime, the area of the outgoing sphere increases with time. At the horizon of a Schwarzschild black hole, on the other hand, it is not hard to check that the area of the outgoing sphere remains constant, while inside the horizon, both spheres decrease in area.[^6] To generalize this example, we first define a nonexpanding horizon $\cal H$ in a $d$-dimensional spacetime to be a $(d-1)$-dimensional submanifold such that [@carAshtekar; @carAshtekarc]

1. $\cal H$ is null, with null normal $\ell_a$;

2. the expansion of $\cal H$ vanishes: $\vartheta_{(\ell)} = q^{ab}\nabla_a\ell_b = 0$, where $q_{ab}$ is the induced metric on $\cal H$;

3. $-T^a{}_b\ell^b$ is future-directed and causal.

These conditions imply the existence of a one-form $\omega_a$ such that $$\nabla_a\ell^b = \omega_a\ell^b \quad\hbox{on $\cal H$} .$$ The surface gravity for the normal $\ell^a$ is then defined as $$\kappa_{(\ell)} = \ell^a\omega_a .
\label{carAppendix1}$$ Note, though, that the normal $\ell^a$ is not unique: a null vector has no canonical normalization, so if $\ell^a$ is a null normal to $\cal H$ and $\varphi$ is an arbitrary function, $e^\varphi\ell^a$ is also a null normal to $\cal H$. We can partially fix this scaling ambiguity by demanding further time independence: we define a weakly isolated horizon by adding the requirement 1. ${\cal L}_\ell\omega = 0$ on $\cal H$ , where $\cal L$ denotes the Lie derivative. This constraint implies the zeroth law of black hole mechanics, that the surface gravity is constant on the horizon. Even with this last condition, the null normal $\ell^a$ may be rescaled by an arbitrary constant. Such a rescaling also scales the surface gravity, so the numerical value of $\kappa_{(\ell)}$ remains undetermined. This reflects a genuine physical ambiguity in the choice of time at the horizon. Note that the first law of black hole mechanics (\[carFourLaws1\]) requires such an ambiguity: mass is only defined relative to a choice of time, so for consistency, rescaling time must also rescale the surface gravity. For a stationary black hole, $\ell^a$ can be chosen to coincide with the Killing vector that generates the horizon, whose normalization is fixed at infinity—that is, we can use the global properties of the solution to adjust clocks at the horizon by comparing them to clocks at infinity. In this case, the isolated horizon coincides with the Killing horizon discussed in Gernot Neugebauer’s lectures [@carNeugebauer]. If, on the other hand, we wish to focus on physics only at or very near the horizon, the normalization becomes more problematic. One can use the known properties of exact solutions to write an expression for the surface gravity in terms of other quantities at the horizon, thereby fixing $\ell^a$ [@carAshtekar], but so far the procedure seems somewhat artificial. 
As noted in section \[carFourLaws\], weakly isolated horizons obey the four laws of black hole mechanics, the second law in the strong form that the area, by definition, remains constant. Generalization to dynamical, evolving horizons are also possible, and could provide a setting for nonequilibrium black hole thermodynamics; for a recent review, see [@carKrishnan]. [999]{} S. W. Hawking, Nature 248, 30 (1974). J. D. Bekenstein, Phys. Rev. D7, 2333 (1973). R. M. Wald, Living Rev. Relativity 4, 6 (2001), URL: http://www.livingreviews.org/lrr-2001-6, eprint gr-qc/9912119. T. Jacobson, in [*Valdivia 2002, Lectures on quantum gravity*]{}, edited by A. Gomberoff and D. Marolf (Springer, 2005), eprint gr-qc/0308048. R. M. Wald, [*Quantum field theory in curved spacetime and black hole thermodynamics*]{} (University of Chicago Press, 1994). V. P. Frolov and I. D. Novikov, [*Black Hole Physics*]{} (Springer, 1998). , Proceedings of the 1972 Les Houches summer school, edited by C. DeWitt and B. S. DeWitt (Gordon and Breach, 1973). C. Kiefer, in [*Classical and Quantum Black Holes*]{}, edited by P. Fr[é]{}, V. Gorini, G. Magli, and U. Moschella (IOP Publishing, 1999). J. D. Bekenstein, Phys. Rev. D9, 3292 (1974). W. H. Zurek and K. S. Thorne, Phys. Rev. Lett. 54, 2171 (1985). V. P. Frolov and D. N. Page, Phys. Rev. Lett. 71, 3902 (1993), eprint gr-qc/9302017. J. M. Bardeen, B. Carter, and S. W. Hawking, Commun. Math. Phys. 31, 161 (1973). E. Winstanley, this volume, eprint arXiv:0801.0527. W. Israel, Phys. Rev. Lett. 57, 397 (1986). A. Ashtekar, S. Fairhurst, and B. Krishnan, Phys. Rev. D62, 104025 (2000), eprint gr-qc/0005083. R. M. Wald, Phys. Rev. D48, 3427 (1993), eprint gr-qc/9307038. Ya. B. Zel’dovich, Sov. Phys. JETP Lett. 14, 180 (1970). S. W. Hawking, Commun. Math. Phys. 43, 199 (1975). D. N. Page, New J. Phys. 7, 203 (2005), eprint hep-th/0409024. B. F. Schutz, [*A first course in general relativity*]{} (Cambridge University Press, 1990), section 11.4. S. 
Weinberg, [*The quantum theory of fields*]{} (Cambridge University Press, 1995), chap. 11.2. J. H. MacGibbon and B. J. Carr, Astrophys. J. 371, 447 (1991). D. B. Cline, Phys. Rept. 307, 173 (1998). S. B. Giddings, AIP Conf. Proc. 957, 69 (2007), eprint arXiv:0709.1107. P. Kanti, this volume. W. G. Unruh, Phys. Rev. Lett. 46, 1351 (1981). C. Barcel[ó]{}, S. Liberati, and M. Visser, Living Rev. Relativity 8, 12 (2005), URL: http://www.livingreviews.org/lrr-2005-12, eprint gr-qc/0505065. M. Novello, M. Visser, and G. E. Volovik, [*Artificial black holes*]{} (World Scientific, 2002). M. Visser, Int. J. Mod. Phys. D12, 649 (2003), eprint hep-th/0106111. T. Jacobson, Phys. Rev. D44, 1731 (1991). A. D. Helfer, Rept. Prog. Phys. 66, 943 (2003), eprint gr-qc/0304042. N. N. Bogoliubov, Sov. Phys. JETP 7, 51 (1958). J. H. Traschen, in [*Mathematical methods in physics*]{}, Proceedings of the 1999 Londrona Winter School, edited by A. A. Bytsenko and F. L. Williams (World Scientific, 2000), eprint gr-qc/0010055. W. G. Unruh, Phys. Rev. D14, 870 (1976). R. M. Wald, Commun. Math. Phys. 45, 9 (1975). L. Parker, Phys. Rev. D12, 1519 (1975). W. Rindler, Am. J. Phys. 34, 1174 (1966). R. Brout, S. Massar, R. Parentani, and Ph. Spindel, Phys. Rev. D52, 4559 (1995), eprint hep-th/9506121. S. Corley, Phys. Rev. D57, 6280 (1998), eprint hep-th/9710075. T. Jacobson, Phys. Rev. D53, 7082 (1996), eprint hep-th/9601064. S. Corley and T. Jacobson, Phys. Rev. D54, 1568 (1996), eprint hep-th/9601073. W. G. Unruh and R. Schutzhold, Phys. Rev. D71, 024028 (2005), eprint gr-qc/0408009. B. S. DeWitt, in [*General relativity: an Einstein centenary survey*]{}, edited by S. W. Hawking and W. Israel (Cambridge University Press, 1979). H. Yu and W. Zhou, Phys. Rev. D76, 044023 (2007), eprint arXiv:0707.2613. N. D. Birrell and P. C. W. Davies, [*Quantum fields in curved space*]{} (Cambridge University Press, 1982). D. N. Page, Phys. Rev. D25, 1499 (1982). S. M. Christensen and S. A. Fulling, Phys. Rev. 
D15, 2088 (1977). S. P. Robinson and F. Wilczek, Phys. Rev. Lett. 95, 011303 (2005), eprint gr-qc/0502074. R. Banerjee and S. Kulkarni, Phys. Rev. D77, 024018 (2008), eprint arXiv:0707.2449. S. Iso, T. Morita, and H. Umetsu, Phys. Rev. D76, 064015 (2007), eprint arXiv:0705.3494. S. Iso, T. Morita, and H. Umetsu, eprint arXiv:0710.0456. T. Damour and R. Ruffini, Phys. Rev. D14, 332 (1976). M. K. Parikh and F. Wilczek, Phys. Rev. Lett. 85, 5042 (2000), eprint hep-th/9907001. M. K. Parikh, Int. J. Mod. Phys. D13, 2351 (2004) and Gen. Rel. Grav. 36, 2419 (2004), eprint hep-th/0405160. R. Kubo, J. Phys. Soc. Japan 12, 570 (1957). P. C. Martin and J. Schwinger, Phys. Rev. 115, 1342 (1959). R. Haag, [*Local quantum physics*]{} (Springer, 1993). J. J. Bisognano and E. H. Wichmann, J. Math. Phys. 17, 303 (1976). G. W. Gibbons and M. J. Perry, Phys. Rev. Lett. 36, 985 (1976). G. W. Gibbons and M. J. Perry, Proc. Roy. Soc. Lond. A358, 467 (1978). G. W. Gibbons and S. W. Hawking, Phys. Rev. D15, 2752 (1977). T. Regge and C. Teitelboim, Annals Phys. 88, 286 (1974). S. W. Hawking, in [*General relativity: an Einstein centenary survey*]{}, edited by S. W. Hawking and W. Israel (Cambridge University Press, 1979). M. Banados, C. Teitelboim, and J. Zanelli, Phys. Rev. Lett. 72, 957 (1994), eprint gr-qc/9309026. C. Teitelboim, Phys. Rev. D51, 4315 (1995), eprint hep-th/9410103. S. W. Hawking and G. T. Horowitz, Class. Quant. Grav. 13, 1487 (1996), eprint gr-qc/9501014. S. W. Hawking and C. J. Hunter, Phys. Rev. D59, 044025 (1999), eprint hep-th/9808085. S. Carlip and C. Teitelboim, Class. Quant. Grav. 12, 1699 (1995), eprint gr-qc/9312002. D. Garfinkle, S. B. Giddings, and A. Strominger, Phys. Rev. D49 (1994) 958, eprint gr-qc/9306023. J. D. Brown, Phys. Rev. D51, 5725 (1995), eprint gr-qc/9412018. R. B. Mann and S. F. Ross, Phys. Rev. D52, 2254 (1995), eprint gr-qc/9504015. I. S. Booth and R. B. Mann, Phys. Rev. Lett. 81, 5052 (1998), eprint gr-qc/9806015 B. S. Kay and R. M. 
Wald, Phys. Rept. 207, 49 (1991). W. Israel, Phys. Lett. A 57, 107 (1976). T. Jacobson, Phys. Rev. D50, 6031 (1994), eprint gr-qc/9407022. M. Banados, C. Teitelboim, and J. Zanelli, Phys.  Rev. Lett. 69, 1849 (1992), eprint hep-th/9204099. S. Carlip, Class. Quant. Grav. 12, 2853 (1995), eprint gr-qc/9506079. S. Carlip, Phys. Rev. D51, 632 (1995), eprint gr-qc/9409052. E. Witten, Nucl. Phys. B311, 46 (1988). S. Carlip, Class. Quant. Grav. 22, R85 (2005), eprint gr-qc/0503022. R. Emparan and I. Sachs, Phys. Rev. Lett. 81, 2408 (1998), eprint hep-th/9806122. S. Carlip, Rept. Prog. Phys. 64, 885 (2001), eprint gr-qc/0108040. A. Strominger and C. Vafa, Phys. Lett. B379, 99 (1996), eprint hep-th/9601029. S. Mathur, this volume. L. Susskind, in [*The black hole: 25 years after*]{}, edited by C. Teitelboim and J. Zanelli (World Scientific, 1988), eprint hep-th/9309145. G. T. Horowitz and J. Polchinski, Phys.  Rev. D55, 6189 (1997), eprint hep-th/9612146. A. W. Peet, in [*TASI 99: Strings, branes, and gravity*]{}, edited by J. Harvey, S. Kachru, and E. Silverstein (World Scientific, 2001), eprint hep-th/0008241. S. R. Das and S. D. Mathur, Ann. Rev. Nucl. Part. Sci. 50, 153 (2000), eprint gr-qc/0105063. G. T. Horowitz and M. M. Roberts, Phys. Rev. Lett. 99, 221601 (2007), eprint arXiv:0708.1346. S. D. Mathur, Fortsch. Phys. 53, 793 (2005), eprint hep-th/0502050. S. D. Mathur, Class. Quant. Grav. 23, R115 (2006), eprint hep-th/0510180. I. Kanitscheider, K. Skenderis, and M. Taylor, JHEP 0706, 056 (2007), eprint arXiv:0704.0690. J. M. Maldacena, Adv. Theor. Math. Phys. 2, 231 (1998) and Int. J. Theor. Phys. 38, 1113 (1999), eprint hep-th/9711200. O. Aharony, S. S. Gubser, J. M. Maldacena, H. Ooguri, and Y. Oz, Phys. Rept. 323, 183 (2000), eprint hep-th/9905111. J. D. Brown and M. Henneaux, Commun. Math.  Phys. 104, 207 (1986). A. Strominger, JHEP 9802, 009 (1998), eprint hep-th/9712251. D. Birmingham, I. Sachs, and S. Sen, Phys. Lett. B424, 275 (1998), eprint hep-th/9801019. 
K. Skenderis, Lect. Notes Phys. 541, 325 (2000), eprint hep-th/9901050. C. Rovelli, Living Rev. Relativity 1, 1 (1998), URL: http://www.livingreviews.org/lrr-1998-1, eprint gr-qc/9710008. T. Jacobson, Class. Quant. Grav. 24, 4875 (2007), eprint arXiv:0707.4026. S. Alexandrov and E. R. Livine, Phys. Rev. D67, 044009 (2003), eprint gr-qc/0209105. K. V. Krasnov, Phys. Rev. D55, 3505 (1997), eprint gr-qc/9603025. C. Rovelli, Phys. Rev. Lett. 77, 3288 (1996), eprint gr-qc/9603063. A. Ashtekar, J. Baez, A. Corichi, and K. Krasnov, Phys. Rev. Lett. 80, 904 (1998), eprint gr-qc/9710007. A. Ashtekar, J. C. Baez, and K. Krasnov, Adv. Theor. Math. Phys. 4, 1 (2000), eprint gr-qc/0005126. M. Domagala and J. Lewandowski, Class. Quant. Grav. 21, 5233 (2004), eprint gr-qc/0407051. K. A. Meissner, Class. Quant. Grav. 21, 5245 (2004), eprint gr-qc/0407052. A. Ashtekar and J. Lewandowski, Class. Quant. Grav. 21, R53 (2004), eprint gr-qc/0404018. A. Ashtekar, J. Engle, and C. Van Den Broeck, Class. Quant. Grav. 22, L27 (2005), eprint gr-qc/0412003. M. Barreira, M. Carfora, and C. Rovelli, Gen. Rel. Grav. 28, 1293 (1996), eprint gr-qc/9603064. K. V. Krasnov, Class. Quant. Grav. 16, 563 (1999), eprint gr-qc/9710006. E. R. Livine and D. R. Terno, Nucl. Phys. B741, 131 (2006), eprint gr-qc/0508085. A. D. Sakharov, Sov. Phys. Dokl. 12, 1040 (1968), reprinted in Gen. Rel. Grav. 32, 365 (2000). S. L. Adler, Rev. Mod. Phys. 54, 729 (1982); Erratum ibid. 55, 837 (1983). V. P. Frolov and D. V. Fursaev, Phys. Rev. D56, 2212 (1997), eprint hep-th/9703178. V. P. Frolov, D. Fursaev, and A. Zelnikov, JHEP 0303, 038 (2003), eprint hep-th/0302207. L. Bombelli, R. K. Koul, J. Lee, and R. D.  Sorkin, Phys. Rev. D [34]{}, 373 (1986). M. Srednicki, Phys. Rev. Lett. 71, 666 (1993), eprint hep-th/9303048. M. Requardt, eprint arXiv:0708.0901. G. ’t Hooft, Nucl. Phys. B256, 727 (1985). L. Susskind and J. Uglum, Phys. Rev. D50, 2700 (1994), eprint hep-th/9401070. S. Ryu and T. Takayanagi, Phys. Rev. Lett. 
96, 181602 (2006), eprint arXiv:hep-th/0603001. V. E. Hubeny, M. Rangamani, and T. Takayanagi, JHEP 0707, 062 (2007), eprint arXiv:0705.0016. D. V. Fursaev, JHEP 0609, 018 (2006), eprint hep-th/0606184. R. Emparan, JHEP 012 0606 (2006), eprint hep-th/0603081. D. Rideout and S. Zohren, Class. Quant. Grav. 23, 6195 (2006), eprint gr-qc/0606065. G. Siopsis, this volume. J. W. York, Phys. Rev. D28, 2929 (1983). K. Ropotenko, eprint arXiv:0711.3131. C. Vaz, Phys. Rev. D61, 064017 (2000), eprint gr-qc/9903051. J. Makela and A. Peltola, Phys. Rev. D69, 124008 (2004), eprint gr-qc/0307025. C. Kiefer, J. Mueller-Hill, T. P. Singh, and C. Vaz, Phys. Rev. D75, 124010 (2007), eprint gr-qc/0703008. J. D. Bekenstein, Lett. Nuovo Cim. 11, 467 (1974). H. A. Kastrup, Phys. Lett. B413, 267 (1997), eprint gr-qc/9707009. A. Barvinsky, S. Das, and G. Kunstatter, Phys. Lett. B517, 415 (2001), eprint hep-th/0102061. J. D. Bekenstein and G. Gour, Phys. Rev. D66, 024005 (2002), eprint gr-qc/0202034. S. Hod, Phys. Rev. Lett. 81, 4293 (1998), eprint gr-qc/9812002. L. Motl and A. Neitzke, Adv. Theor. Math. Phys. 7, 307 (2003), eprint hep-th/0301173. R. G. Daghigh and G. Kunstatter, Class. Quant.  Grav. 22, 4113 (2005), eprint gr-qc/0505044. D. Birmingham and S. Carlip, Phys. Rev. Lett. 92, 111302 (2004), eprint hep-th/0311090. S. Hod, Class. Quant. Grav. 24, 4871 (2007), eprint arXiv:0709.2041. K. S. Thorne, R. H. Price, and D. A. Macdonald, [*Black holes: the membrane paradigm*]{} (Yale University Press, 1986). P. Di Francesco, P. Mathieu, and D. S[é]{}n[é]{}chal, [*Conformal field theory*]{} (Springer, 1997). C. Teitelboim, in [*Quantum theory of gravity*]{}, edited by S. M. Christensen (Adam Hilger, 1984). J. A. Cardy, Nucl. Phys. B270, 186 (1986). H. W. J. Bl[ö]{}te, J. A. Cardy, and M. P.  Nightingale, Phys. Rev. Lett. 56, 72 (1986). D. Birmingham, K. S. Gupta, and S. Sen, Phys.  Lett. B505, 191 (2001), eprint hep-th/0102051. K. S. Gupta and S. Sen, Phys. Lett. 
B526, 121 (2002), eprint hep-th/0112041. H. E. Camblong and C. R. Ord[ó]{}[ñ]{}ez, Phys. Rev. D71, 104029 (2005), eprint hep-th/0411008. A. J. M. Medved, D. Martin, and M. Visser, Class. Quant. Grav. 21, 3111 (2004), eprint gr-qc/0402069. A. J. M. Medved, D. Martin, and M. Visser, Phys. Rev. D70, 024009 (2004), eprint gr-qc/0403026. S. Carlip, Phys. Rev. Lett. 82, 2828 (1999), eprint hep-th/9812013. S. Carlip, Class. Quant. Grav. 16, 3327 (1999), eprint gr-qc/9906126. S. Carlip, Int. J. Theor. Phys. 46, 2192 (2007), eprint gr-qc/0601041. S. N. Solodukhin, Phys. Lett. B454, 213 (1999), eprint hep-th/9812056. S. Carlip, Phys. Rev. Lett. 88, 241301 (2002), eprint gr-qc/0203001. S. Carlip, Class. Quant. Grav. 22, 1303 (2005), eprint hep-th/0408123. S. Carlip, Phys. Rev. Lett. 99, 021301 (2007), eprint gr-qc/0702107. P. G. Bergmann and A. B. Komar, Phys. Rev.  Lett. 4, 432 (1960). P. A. M. Dirac, Can. J. Math. 2, 129 (1950). P. A. M. Dirac, Can. J. Math. 3, 1 (1951). E. Witten, Commun. Math. Phys. 121, 351 (1989). S. Carlip, Living Rev. Relativity 8, 1 (2005), URL: http://www.livingreviews.org/lrr-2005-1, eprint gr-qc/0409039. S. Carlip, Class. Quant. Grav. 22, 3055 (2005), eprint gr-qc/0501033. Y.-J. Chen, Class. Quant. Grav. 21, 1153 (2004), eprint hep-th/0310234. S. Weinberg, [*The quantum theory of fields*]{} (Cambridge University Press, 1995), chap. 19.2. M. Henneaux and C. Teitelboim, [*Quantization of gauge systems*]{} (Princeton University Press, 1992). S. W. Hawking, Phys. Rev. D14, 2460 (1976). J. Preskill, in [*Black holes, membranes, wormholes and superstrings*]{}, edited by S. Kalara and D. V. Nanopoulos (World Scientific, 1993), eprint hep-th/9209058. T. Banks, Nucl. Phys. Proc. Suppl. 41, 21 (1995), eprint hep-th/9412131. C. R. Stephens, G. ’t Hooft, and B. F. Whiting, Class. Quant. Grav. 11, 621 (1994.), eprint gr-qc/9310006. S. B. Giddings, Phys. Rev. D74, 106005 (2006), eprint hep-th/0605196. V. Balasubramanian, D. Marolf, and M. Rozali, Gen. Rel. 
Grav. 38, 1529 (2006) and Int. J. Mod. Phys. D15, 2285 (2006), eprint hep-th/0604045. A. Ashtekar, V. Taveras, and M. Varadarajan, eprint arXiv:0801.1811. G. ’t Hooft, in [*Salamfestschrift: a collection of talks*]{}, edited by A. Ali, J. Ellis, and S. Randjbar-Daemi (World Scientific, 1993), eprint gr-qc/9310026. L. Susskind, J. Math. Phys. 36, 6377 (1995), eprint hep-th/9409089. R. Bousso, Rev. Mod. Phys. 74, 825 (2002), eprint hep-th/0203101. R. Bousso, JHEP 9907, 004 (1999), eprint hep-th/9905177. E. E. Flanagan, D. Marolf, and R. M. Wald, Phys. Rev. D62, 084035 (2000), eprint hep-th/9908070. S. W. Hawking and G. F. R. Ellis, [*The large scale structure of space-time*]{} (Cambridge University Press, 1973). I. Booth, Can. J. Phys. 83, 1073 (2005), eprint gr-qc/0508107. A. Ashtekar, C. Beetle, and S. Fairhurst, Class. Quant. Grav. 16, L1 (1999), eprint gr-qc/9812065. G. Neugebauer, this volume. A. Ashtekar and B. Krishnan, Living Rev. Relativity 7, 10 (2004), URL: http://www.livingreviews.org/lrr-2004-10, eprint gr-qc/0407042. [^1]: See [@carBardeen], p. 168. [^2]: Strictly speaking, the coordinates labeled $r$ and $t$ for $r>2GM$ are different from those with the same labels for $r<2GM$, since the Schwarzschild coordinate system is only defined in nonoverlapping patches inside and outside the horizon. But one can rephrase the argument in terms of proper time of infalling observers in a way that dodges this mathematical subtlety [@carSchutz]. [^3]: The final distribution is actually not quite thermal, but contains a “greybody factor” that reflects the backscattering of some of the emitted radiation into the black hole. [^4]: For this section, see [@carCarlipf] for further references. [^5]: This is admittedly not very likely, but note that it cannot be ruled out observationally: no signal could propagate faster than such a shell, so we would not know of its existence until it reached us. 
[^6]: The outgoing sphere remains outgoing with respect to the lattice, of course; as the lattice collapses, its area decreases even faster than that of the outgoing light sphere.
---
abstract: |
  Active Learning is concerned with the question of how to identify the most useful samples for a Machine Learning algorithm to be trained with. When applied correctly, it can be a very powerful tool to counteract the immense data requirements of Artificial Neural Networks. However, we find that it is often applied without sufficient care and domain knowledge. As a consequence, unrealistic hopes are raised and transfer of the experimental results from one dataset to another becomes unnecessarily hard. In this work we analyse the robustness of different Active Learning methods with respect to classifier capacity, exchangeability and type, as well as hyperparameters and falsely labelled data. Experiments reveal possible biases towards the architecture used for sample selection, resulting in suboptimal performance for other classifiers. We further propose the new “Sum of Squared Logits” method based on the Simpson diversity index and investigate the effect of using the confusion matrix for balancing in sample selection.
author:
- 'Lukas Hahn'
- 'Lutz Roese-Koerner'
- Peet Cremer
- Urs Zimmermann
- Ori Maoz
- Anton Kummert
bibliography:
- 'literature.bib'
title: On the Robustness of Active Learning
---

Introduction {#sec:intro}
============

The term Active Learning describes the field of selecting samples from a given pool of data in order to subsequently train a Machine Learning algorithm with. This can be done for two major reasons: Firstly, deciding which subset of collected data will be annotated in order to create training and validation sets for a supervised Machine Learning task. While it can be comparatively easy and inexpensive to record and gather sensor data, and ever decreasing cost makes it affordable to possibly neglect storage expenses, reliable ground truth annotation still requires manual labour and is therefore the crucial factor.
In industrial application of Machine Learning to various tasks, budget and time constraints play a significant role and performance can depend on choosing the best $n$ samples to train on. Secondly, one could think of using Active Learning methods as a form of regularization. While increasing the number of available training samples is in general regarded as helpful, certain factors can lead to an impaired performance when doing so. The more objects in a recognition task are standardized, the more redundant information is potentially added to the dataset with each new sample, which can result in worsened generalization. Active Learning methods can also be applied to sanitize a dataset of falsely labelled samples, as a suitable strategy will not pick samples with a conspicuous difference between label and prediction. However, despite the great potential of Active Learning it also bears significant risks. If applied in an incorrect way it could lead to a sub-optimal sample selection and, in the worst case, render the complete Machine Learning task unsuccessful. In order to point out how to avoid these pitfalls, we examine a set of known Active Learning query strategies, as well as some extensions of our own, and their performance on various different image classification datasets. We then view their performance under different aspects including changing hyperparameters, influence of falsely labelled data and the replaceability of varying CNN architectures. Finally, we examine the performance of the same strategies when applied to a problem of hierarchical classifiers. Our main contributions are:

1.) A robustness investigation of state-of-the-art Active Learning strategies with respect to falsely labelled data, hyperparameters, and changing the classifier model during the selection phase.

2.) An extension of the Active Learning method based on Entropy computation using the Simpson Diversity.

3.)
Theoretical insights and experimental results for Active Learning on Hierarchical Neural Networks.

Related Work {#sec:relWork}
============

An overview of methods from the pre-Deep-Learning era can be found in the very comprehensive review of [@settles_10]. Many approaches originating from that time (e.g. Uncertainty Sampling, Margin-based Sampling, Entropy Sampling, ...) have later been adapted to neural networks. Additional examples include the approach of [@roy-mccallum_01], who applied a Monte Carlo method to compute an estimated error reduction that can be used for sample selection, as well as clustering approaches like those described in [@nguyen-smeulders_04] and [@dasgupta-hsu_08]. [@wang-etal_17] and [@rottmann-etal_18] propose a semi-supervised approach. They use Active Learning to query samples which the network has not yet understood and use label propagation to also utilize well understood samples with “pseudo-labels”. In the field of supervised learning, [@korattikara-etal_15] used a Bayesian approach to distil a Monte Carlo approximation of the posterior predictive density for sample selection. In the theoretical work of [@kabkab-etal_16], Active Learning was rephrased as a convex optimisation problem, and the balancing between selecting samples with high diversity and samples that are very representative for a subset is discussed. Unlike many other methods, the core-set approach of [@sener-savarese_18] does not use the output layer of a network for Active Learning. Instead they solve a relaxed $k$-centres problem to minimize the maximal distance to the closest cluster centre for each sample in a space that is spanned by the neurons of a hidden layer of a network. As discussed later, this approach is largely independent of the actual classes of a network, which can be helpful when dealing with hierarchical networks [@weyers-etal_18], for example. [@gal-etal_17] introduced the concept of live-dropout to Active Learning.
The idea is to approximate the behaviour of an ensemble of Bayesian estimators by activating dropout during inference and performing multiple forward passes. They furthermore developed an Active Learning framework which is able to use this and other deep Bayesian methods. In the same line of thought, [@ducoffe-precioso_17] investigated live dropout and Query-by-committee methods. However, [@beluch-etal_18] used ensembles of CNNs with identical architectures but different weight initializations to show that ensembles work better than “ensemble approximation methods” like the above-mentioned MC dropout of [@gal-etal_17] or approaches based on geometric distributions like [@sener-savarese_18]. Some recent approaches also utilize “meta” knowledge for Active Learning. [@fang-etal_17] introduced “Policy based Active Learning”. There, reinforcement learning is used for stream-based Active Learning in a language processing setting. This is very similar to the approach of [@bachman-etal_17], who proposed “Learning Algorithms for Active Learning”. They also used Reinforcement Learning to jointly learn a data representation, an item selection heuristic and a method for constructing prediction functions from labelled training sets. [@heilbron-etal_18] reuse knowledge from previously annotated datasets to improve the Active Learning performance.

Methods
=======

In the following we review existing methods from the field of pool-based Active Learning and propose a method of our own. Given a classification model $\theta$ and a dataset $\mathcal{D}$, consisting of feature and label pairs $\langle x\in X, y\in Y \rangle$, such an algorithm has the following structure: Considering a large dataset, one can query numerous samples at once. This set of the chosen samples is denoted by $\mathcal{B} \subset U$, where $U$ denotes the unlabelled pool [@kabkab-etal_16]. We take a closer look at uncertainty sampling, a strategy that selects samples the classifier is uncertain about.
In this context uncertainty means a low confidence for the predicted class, which is given by $\hat{y} = \operatorname*{argmax}_y \, P_\theta(y|x)$. We consider three commonly used uncertainty measures:

(a) *Least Confident:* $x^\star_{LC} = \operatorname*{argmax}_x \, (1 - P_\theta(\hat{y} |x))$

(b) *Margin:* $x^\star_M = \operatorname*{argmin}_x \, (P_\theta(\hat{y}_1|x) - P_\theta(\hat{y}_2|x))$

(c) *Entropy:* $x^\star_H = \operatorname*{argmax}_x \left( - \sum_{i}\, P_\theta(y_i | x) \log P_\theta(y_i | x)\right)$

(a): Considering only one class label, the sample $x^\star_{LC}$ with the least confident label prediction is selected. (b): Margin sampling includes information about the second most certain prediction. The algorithm queries the sample $x^\star_M$ with the smallest difference between the two most probable class labels. (c): For multi-class tasks, it is relevant to consider all label confidences. For each sample every class probability is weighted with its information content and summed up. The algorithm queries the sample with the highest entropy $x^\star_H$ [@settles_10].

For the following experiments we implement eight query strategies.

Based on Least Confident (a):

Naive Certainty (NC) Low:
:   Select $n$ samples with the minimal maximal activation in the classifier logits. Since basing the decision only on the single highest activated neuron is a very straightforward approach, we call this family of strategies the “Naive” methods.

NC Range:
:   Select $n$ samples within a certain range of the classifier logits’ activation (e.g. $[0.1, 0.9]$).

NC Diversity:
:   Select $n$ samples with the minimal maximal activation in the classifier logits and additionally prevent similar samples from being chosen by calculating the diversity of the samples below the threshold compared to those already included in the training set.
NC Balanced:
:   Select $n$ samples with the minimal maximal activation in the classifier logits and balance the class distribution using the reciprocal values of the classification confusion matrix obtained with the previous training set. Terminates if one class contains no more samples to be drawn.

Based on Margin (b):

Margin:
:   Select $n$ samples with the smallest difference between the two highest firing logits.

Based on Entropy (c):

Entropy High:
:   Select $n$ samples with the highest entropy.

Sum of Squared Logits (SOSL):
:   Select $n$ samples with the highest Simpson diversity index $D = 1 - \sum_i (l_i)^2$ [@simpson] (cf. \[subsec:SOSL\]).

**Core Set Greedy:** A similarity measure in the embedding space. Creates a core set by approximating the problem of distributing $k$ centres among $n$ points such that the maximal distance of any point to its nearest centre is minimized. Select $n$ samples for which the minimum distance to all samples which are already part of the training set is maximized (cf. [@sener-savarese_18]).

Sum of Squared Logits (SOSL) Method {#subsec:SOSL}
-----------------------------------

In Active Learning, we require a measure of how sure the classifier is that its class decision during inference is accurate. One possibility for such an accuracy-of-inference measure is to analyze the distribution of logits. Within the trained model of the classifier, the logits can be interpreted as probabilities that the inferred sample belongs to the class associated with the respective logit. If the logits are strongly biased in favour of a certain class, it is very likely that the given sample belongs to the class corresponding to the strongest logit. On the contrary, if the logits do not show a clear preference for a certain class, there is a high risk that taking the class of the strongest logit results in a false prediction.
In other words, the degree to which the distribution of logits tends towards peaks rather than an equipartition indicates how accurate the inference is going to be. In previous literature, the Shannon entropy [@shannon] has frequently been used as a measure of how peaked or equipartitioned a distribution is. A valid strategy for Active Learning could then be to sort out those samples for which the Shannon entropy $H = - \sum_i l_i \log(l_i)$, with $l_i$ being the values of the logits, is particularly high. However, a shortcoming of this approach is that it does not adequately account for the situation when the distribution of logits is admittedly strongly peaked, but with peaks on more than one class logit. Such a situation can easily arise when samples belong to classes showing similarities and the classifier’s model does not yet feature a clear decision boundary between them. In such a case, the distribution of logits is still far away from an equipartition, resulting in a relatively low value for the Shannon entropy $H$. Thus, although labelling these samples would be particularly valuable for fleshing out the decision boundary and allowing the classifier to better separate between classes, they would not be added to the Active Learning training set. To overcome these shortcomings of the Shannon entropy $H$ as a measure for characterizing the distribution of logits $l_i$, we propose to use the Simpson diversity index $D = 1 - \sum_i (l_i)^2$ [@simpson] instead. The closer the distribution $l_i$ is to an equipartition, the larger $D$ becomes. If the $l_i$ show a strong peak at a certain $i$, $D$ is close to zero. Finally, if the $l_i$ are strongly peaked among several classes, $D$ will have a small-to-moderate value between zero and one. The latter property of $D$ in particular makes it possible to select those samples for labelling for which the classifier can narrow the class decision down to a few classes, among which it is still unsure.
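The scores above can be sketched in a few lines. This is a minimal illustration under the assumption that the "logits" $l_i$ are softmax-normalized class probabilities (as the definitions of $H$ and $D$ require); the function names are ours, not from any library:

```python
import numpy as np

def least_confident(p):
    """(a) Least Confident: one minus the top class probability."""
    return 1.0 - p.max(axis=1)

def margin(p):
    """(b) Margin: gap between the two most probable classes (small = uncertain)."""
    s = np.sort(p, axis=1)
    return s[:, -1] - s[:, -2]

def shannon_entropy(p, eps=1e-12):
    """(c) Entropy: H = -sum_i p_i log p_i."""
    return -(p * np.log(p + eps)).sum(axis=1)

def simpson_diversity(p):
    """SOSL score: Simpson diversity index D = 1 - sum_i p_i^2."""
    return 1.0 - (p ** 2).sum(axis=1)

# Three posteriors over four classes: confident, two-way tie, equipartition.
p = np.array([[0.97, 0.01, 0.01, 0.01],
              [0.49, 0.49, 0.01, 0.01],
              [0.25, 0.25, 0.25, 0.25]])

# Entropy High and SOSL both select the n samples with the largest score:
n = 1
picked = np.argsort(-simpson_diversity(p))[:n]  # -> index 2, the equipartition
```

For the two-way tie, $D \approx 0.52$ sits well between the confident case ($\approx 0.06$) and the equipartition ($0.75$), illustrating the small-to-moderate value discussed above.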
The Active Learning strategy is then to select in each iteration the $n$ samples with highest $D$.

Experiments and Results
=======================

We conduct a series of experiments with the query strategies presented in section \[methods\] on six different datasets for image classification (cf. Table \[tab:datasets\]). These consist of the well-known digit classification set MNIST [@mnist] and the thereof inspired dataset of the Latin alphabet CoMNIST [@comnist] and clothing classification Fashion-MNIST [@fmnist], as well as general object classification CIFAR-10 [@cifar] and the house number collection SVHN [@svhn]. We furthermore evaluate strategies on a private dataset of $33$ different classes of traffic signs (TSR) represented through small grey scale images.

                       CIFAR-10       CoMNIST        Fashion-MNIST   MNIST          SVHN           TSR
  -------------------- -------------- -------------- --------------- -------------- -------------- --------------
  Classes              $10$           $26$           $10$            $10$           $10$           $33$
  Image Size           $32\times32$   $32\times32$   $28\times28$    $28\times28$   $32\times32$   $34\times34$
  Channels             $3$            $1$            $1$             $1$            $3$            $1$
  Training Samples     $50\,000$      $9\,918$       $60\,000$       $60\,000$      $73\,257$      $265\,774$
  Validation Samples   $10\,000$      $1\,300$       $10\,000$       $10\,000$      $26\,032$      $66\,443$

  : Characteristics of the datasets used for the experiments.[]{data-label="tab:datasets"}

General Performance {#subsec:general}
-------------------

Before we analyse the robustness of the presented query strategies, we compare their general performance on the datasets presented above. For each dataset we use a distinct plain feed-forward CNN. Only for CIFAR-10 do we use an implementation of ResNet50 [@resnet]. As we are not aiming to find the best architecture for a certain problem but to identify the most promising samples, we choose the number of layers and channels according to the approximate complexity of the task and select learning rates and batch sizes in commonly used ranges.
For all of these experiments, we start with a training set of $100$ samples per class of the particular dataset. We train the CNN for up to $1000$ epochs with an early-stopping patience of $200$ epochs. For this purpose we split off $10\%$ of the training set into an additional “development set”. It is not used for training but to validate classification over the course of the training. This is done to obviate an overfitting-like bias with the use of early stopping. Of course the validation accuracy is then determined on the original test set of the respective dataset, using the best network weights acquired during training according to the development set accuracy. This network is also the one used to then select new samples to be added to the training set utilizing the query strategies. With each iteration we increase the number of samples in the training set by $20\%$. In all cases we conduct five repetitions per strategy and dataset for statistical significance. To reduce the computational burden, we iteratively draw new samples until we have reached approximately a third of the full size of the respective training set. Figure \[fig:generalperformance\] illustrates the results of the evaluation of all query strategies. Nearly all findings show a benefit of Active Learning methods, and at least some of the query strategies are either hitting the baseline, or are close to it, around the $30\%$ mark. For CIFAR-10, however, this is not true. None of the methods show any profit for this dataset; they are in line with the random sample selection, resulting in a nearly perfectly linear increase in accuracy. This does not come as a surprise, as CIFAR-10 has very diverse representations of its classes and seems to contain no redundant information.
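The iterative protocol just described can be condensed into a short sketch. `train_fn` and `query_fn` are placeholders (not from the paper) for model training and one of the query strategies; the $20\%$ growth and one-third budget follow the text:

```python
import numpy as np

def active_learning_loop(x_pool, y_pool, init_idx, query_fn, train_fn,
                         growth=0.2, budget_fraction=1 / 3):
    """Pool-based loop: start from a labelled seed set, retrain, then grow the
    training set by `growth` (20%) per iteration until roughly
    `budget_fraction` of the pool is labelled."""
    labelled = list(init_idx)
    unlabelled = [i for i in range(len(x_pool)) if i not in set(init_idx)]
    while len(labelled) < budget_fraction * len(x_pool) and unlabelled:
        # Retrain from the current labelled set, then score the remaining pool.
        model = train_fn(x_pool[labelled], y_pool[labelled])
        n_new = max(1, int(growth * len(labelled)))
        scores = query_fn(model, x_pool[unlabelled])  # higher = pick first
        order = np.argsort(-scores)[:n_new]
        picked = [unlabelled[j] for j in order]
        labelled.extend(picked)
        unlabelled = [i for i in unlabelled if i not in set(picked)]
    return labelled
```

With a random `query_fn` this reproduces the random-selection baseline; plugging in one of the scores from section \[subsec:SOSL\] gives the corresponding strategy.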
Changing Hyperparameters and Falsely Labelled Data {#subsec:hyperparams}
--------------------------------------------------

As hyperparameter optimisation is very important in fine-tuning the performance of Machine Learning algorithms, we analyse how much changes in these parameters influence the usability of the Active Learning methods shown. Figure \[fig:hyperparams\] shows the effect of altering the learning rate over two magnitudes and the batch size up to a factor of $16$, for experiments on MNIST. All methods behave very robustly and are not noticeably influenced by these alterations. Since it can be expected that human annotation, especially in large scale labelling of sensor data, is never perfectly accurate, it is interesting to investigate how this might interfere with the applicability of Active Learning. In Figure \[fig:errors\] (left) we show results for an experiment where we purposely introduced false labels into the Fashion-MNIST training set. It can clearly be seen that methods relying on a diversity criterion (NC Diversity, Core Set) suffer the most, since their selection process prevents similar samples from being chosen and it can therefore be harder to correct the negative impact that the selection of a wrongly labelled sample would have. Please note that these strategies also show the highest sensitivity to changes in dropout (cf. Figure \[fig:errors\] right).

Replaceability of Classifiers {#subsec:classifiers}
-----------------------------

In the application of Machine Learning, especially in a product context, successive refinement of the algorithm is very common. A CNN architecture might be adjusted several times over the course of development or a production process, to optimise the performance or to adapt to changes in the dataset or external restrictions like computational resources.
We investigate how the usability of Active Learning might be influenced if data selection is done by a different network than the one eventually targeted for classification performance. For this purpose we implemented three CNNs of different capacity, referred to as $Min$, $Med$ and $Max$ in the following, to iteratively select samples from Fashion-MNIST with the query strategies described above. We then perform a cross-training, where every network is trained with the selections of the others as well as its own. To ensure comparability, we use the same initial dataset of $100$ samples per class for all classifiers and repeat calculations five times. Figure \[fig:cross\] shows the results for selected strategies. Apart from information about the replaceability of classifiers, these results show how the classifier capacity itself influences the applicability of Active Learning strategies. For the example of NC Balanced we note a bias towards each network's own selection performing best for the $Max$ and $Min$ classifiers, while the medium-sized one is indifferent. The “weaker” the network gets, the better the performance of the random selection becomes. For the SOSL, this becomes even clearer. While the selection of the $Max$ classifier is still clearly the best for itself, the smaller networks show the best performance with the randomized set. The results with Entropy High are very similar, but the gaps become even more pronounced. $Max$ now shows a very clear preference for its own selection compared to any other, and the performance of the Active Learning strategy selection on the $Min$ network is now more than three percentage points behind random. Hierarchical Classifiers {#subsec:hierarchical} ------------------------ To complete our Active Learning robustness study, we examine a neural network structure different from the straightforward CNNs in the preceding sections.
Hierarchical or cascaded classifiers do not use a single label per sample but a whole label tree (cf. [@weyers-etal_18]). Consequently, label vectors consist of one of the three following options per class: “1”, “0” or “not applicable”, and each sample belongs to exactly one class per hierarchy level. Furthermore, during the learning phase each class is treated independently of all others: for an $n$-class classification problem, $n$ “1-vs-all” classifiers are trained. This renders all Active Learning strategies which rely on quantifying the uncertainty of the logits useless. All of them (e.g. Naive Certainty, Margin) implicitly rely on the assumption that labels with two possible states are used. As the neurons that belong to classes marked as “not applicable” are not considered during backpropagation (cf. [@weyers-etal_18]), they can take arbitrarily high values and thus confuse the mentioned Active Learning methods. As can be seen in Figure \[fig:cascade\], this can even result in worse performance than random sampling. However, we can show that methods which work in the embedding space (like the Core Set method) are not affected and thus remain employable for hierarchical neural networks. ### Used Dataset {#subsubsec:dataset} In all experiments with the hierarchical classifier we use a private dataset that consists of 12 classes depicting different poses of a human hand (e.g. “One finger”, “Two fingers”, “Fist Thumb Left”, etc.). We use a training set of $670\,000$, a development set of $75\,000$ and a test set containing $8\,000$ grey-scale images of size $22\times46$. As depicted in Figure \[fig:gesture\_structure\], we use three levels of hierarchy: 1.) “Hand”/“No hand”, 2.) Class, 3.) Subclass. A sample of “Fist Thumb Left”, for example, would have the labels “Hand” + “Fist Thumb” + “Fist Thumb Left”. Especially the neurons of the subclasses often carry the label “not applicable”, as each subclass belongs to only one class.
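The effect of “not applicable” neurons can be illustrated with a minimal sketch of our own (not the implementation of [@weyers-etal_18]): neurons whose label is marked not-applicable are simply excluded from the error term, so their outputs are free to drift to arbitrary values.

```python
NOT_APPLICABLE = None  # marker for "not applicable" in a label vector

def masked_squared_error(outputs, label_vec):
    """Mean squared error over the applicable classes only; neurons
    labelled NOT_APPLICABLE contribute nothing, mirroring their
    exclusion from backpropagation."""
    terms = [(o - y) ** 2
             for o, y in zip(outputs, label_vec)
             if y is not NOT_APPLICABLE]
    return sum(terms) / len(terms)
```

Whether the not-applicable neuron outputs 5 or 100 makes no difference to this loss, which is precisely why logit-based uncertainty measures become unreliable here.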
![Hierarchical labels for hand gesture recognition. Blue boxes denote the $12$ classes.[]{data-label="fig:gesture_structure"}](images/gestures_wider){width="90.00000%"} Conclusion ========== We have presented a study on the robustness of Active Learning. While we show that even plain methods can bring a notable profit in different image classification applications, we emphasise that prior knowledge about the data and the Machine Learning algorithm in use is essential for successful application. As seen in \[subsec:general\], methods that work well on a number of datasets might suddenly fail on a different one, and certain data collections might be inherently unsuitable for this kind of active data selection. While many changes in hyperparameters and erroneous labels might not influence the performance of particular strategies (cf. \[subsec:hyperparams\]), changes of the classifier certainly can (cf. \[subsec:classifiers\]). Critical alterations in the way a Machine Learning task is tackled, like switching from a straightforward to a hierarchical classifier (cf. \[subsec:hierarchical\]), can turn all previous findings upside down.\ These findings underline that Active Learning can be a helpful tool in data science, but it has to be used with knowledge about the targeted application. We aim to continue our endeavours in this field and to expand our considerations to segmentation problems and to ways of automatically assessing promising combinations of data, Machine Learning algorithms and Active Learning strategies, in order to avoid pitfalls like the ones presented in this work. \[sect:bib\]
[**TANGO ARRAY**]{} 0.5 cm [**An Air Shower Experiment in Buenos Aires$^{\dagger}$**]{} 0.8 cm [P. Bauleo, C. Bonifazi, A. Filevich$^{1}$ and A. Reguera$^{2}$]{} [*Departamento de Física, Comisión Nacional de Energía Atómica,\ Avenida del Libertador 8250, (1429) Buenos Aires, Argentina*]{} 1.5cm [**Abstract**]{} 1.5cm [A new Air Shower Observatory was constructed in Buenos Aires during 1999, and commissioned and set in operation in 2000. The observatory consists of an array of four water Čerenkov detectors, enclosing a geometrical area of $\sim$ 30,000 m$^{2}$, and is optimized for the observation of cosmic rays in the “knee” energy region. The array detects $\sim$ 250 to $\sim$ 1500 showers/day, depending on the selected triggering condition. In this paper, the design and construction of the array, and the automatic system for data acquisition, daily calibration, and monitoring, are described. The Monte Carlo simulations performed to develop a shower database, as well as the studies performed using the database to estimate the response and the angular and energy resolutions of the array, are also presented in detail.]{} Introduction ============ The Earth’s atmosphere is continuously bombarded by a flux of particles (cosmic rays) coming from all directions. Their energies range from a few MeV to more than 10$^{20}$ eV. Their spectrum follows a power law with a negative exponent which is almost constant over thirteen orders of magnitude in energy. The origin of the cosmic rays is still an open question. Those with energy below $\sim$ 1 GeV are likely to have a solar origin, but for higher energies the acceleration mechanism remains a mystery. It is believed that up to $\sim$ 4$\times$10$^{15}$ eV they can be accelerated by diffusive shock processes produced in supernova explosions.
In this energy region (usually called “the knee”), the exponent of the power law describing the cosmic ray flux per unit area, time, solid angle, and energy suddenly steepens from $\sim$ -2.7 to $\sim$ -3.2, and this change is believed to be related to the maximum energy that can be transferred by a supernova shock to a single particle. If the kinetic energy of the cosmic ray is high enough, secondary particles are produced as a consequence of hadronic or electromagnetic interactions with the atomic nuclei of the upper atmosphere. Those secondary particles will, in turn, produce more particles, yielding a cascade known as an Extensive Air Shower (EAS). Depending on the primary energy and zenithal angle, this cascade can die out in the atmosphere or reach ground level. One method which has been used for the observation of EAS is the detection of the light emitted via the Čerenkov effect in air as fast charged particles (mainly electrons) cross the atmosphere (WHIPPLE, CANGAROO). Alternatively, it is possible to observe the UV light emitted by de-excitation processes in atmospheric molecules after excitation by the EAS’s secondary particles (Fly’s Eye, HiRes, Pierre Auger Project, Telescope Array, OWL Project). The amount of UV and Čerenkov light emitted by an EAS is extremely faint, and because of this it is possible to observe these processes only during moonless dark nights, using relatively large telescope mirrors as light concentrators and sensitive photomultiplier tubes. Another (and perhaps more common) approach (Haverah Park, AGASA, Volcano Ranch, SUGAR) is the direct detection of the shower secondary particles reaching ground level. The size of the footprint at ground level is several thousand square meters for showers produced by primary cosmic rays with energies near the “knee” or higher.
Because of this, these experiments are designed to observe only samples of the particle showers, using an array of ground-based detector stations in which gas-filled chambers, plastic scintillators or Čerenkov-effect detectors are typical components. The detector stations of these ground arrays are usually capable of measuring particle densities. In the case of the array described in the present work, where water Čerenkov detectors (WCD) are used, this measurement is performed through a sample of the amount of light emitted when the shower particles traverse the water radiator. In addition, the precise relative times of the signals produced by each station are recorded, together with the Čerenkov light intensity information. By using the relative hit times at each station and the known geometry of the array it is possible to determine the direction of arrival of the primary cosmic ray, assuming that the shower develops a rather flat front profile. Rigorously, the shower front is a curved surface whose radius of curvature could in principle be determined if the number (and quality) of the sampling detectors is high enough. The determination of the primary energy from EAS measurements using ground-based detectors is closely tied to shower reconstructions based on Monte Carlo simulations. These simulations correlate the primary energy with the particle densities at a fixed distance from the shower “core” position, that is, the center of gravity of the air shower at ground. In a simplified model, the primary energy is simply estimated as a magnitude proportional to the total number of particles in the shower. Hence, the particle density measurements performed by each station are used to estimate the total number of particles of the shower. In the following sections the design and construction of this new air shower experiment, which has been optimized for the “knee” region of the energy spectrum, are described.
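The plane-front direction determination described above can be sketched in a few lines: with direction cosines $(u, v)$, a plane front satisfies $c\,t_i = c\,t_0 + u\,x_i + v\,y_i$ at each station. The sketch below solves this exactly for three stations; the coordinates and times are invented for illustration, not the real TANGO layout.

```python
import math

C = 0.299792458  # speed of light in m/ns

def plane_front_direction(stations, times_ns):
    """Arrival direction from relative hit times, assuming a plane
    shower front. Uses the first station as time reference and solves
    the 2x2 system exactly for three stations."""
    (x0, y0), (x1, y1), (x2, y2) = stations
    dt1 = C * (times_ns[1] - times_ns[0])
    dt2 = C * (times_ns[2] - times_ns[0])
    a11, a12 = x1 - x0, y1 - y0
    a21, a22 = x2 - x0, y2 - y0
    det = a11 * a22 - a12 * a21
    u = (dt1 * a22 - dt2 * a12) / det
    v = (a11 * dt2 - a21 * dt1) / det
    theta = math.degrees(math.asin(math.hypot(u, v)))  # zenith angle
    phi = math.degrees(math.atan2(v, u))               # azimuth
    return theta, phi
```

With four stations, as in the array, the same system is overdetermined and would be solved by least squares, which also yields an error estimate.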
The necessary simulations, which were required to set the numerous design parameters of the array, are presented in detail. The array ========= The TANGO ([**TAN**]{}dar [**G**]{}round [**O**]{}bservatory) Array has been constructed in Buenos Aires, Argentina ($\sim$ 15 m a.s.l.), at 34$^{\circ}$ 34’ 21” S and 58$^{\circ}$ 30’ 50” W, on the Campus of the Constituyentes Atomic Center, belonging to the Argentinean Atomic Energy Commission (CNEA). The data acquisition (DAQ) room was set up inside the TANDAR Accelerator Building. Three detectors are placed on the vertices of an almost isosceles triangle, and a fourth detector was installed on top of the building, in a convenient position close to the center of the triangle, as shown in Figure \[fig:array\]. The final positions of the detector stations were constrained by the free space available between the existing buildings, and an effort was made to arrive at an overall shape as close as possible to an equilateral triangle, which maximizes the effective collection area. The distances between surface stations were measured using a GPS and their error has been estimated at $\pm$ 1 m (the measurement was performed after the release of the high-precision GPS service). The final configuration encloses a geometrical area of 31286 m$^{2}$. The array has a yearly average overburden of $\sim$ 1000 g/cm$^{2}$. During an EAS event the DAQ system measures both the intensities of the Čerenkov photons emitted by the water when crossed by the secondary particles of a high-energy cosmic ray’s EAS, and the arrival times of these particles at each station. The threshold energy of the array, resulting from the geometry and from the particular detector conditions (present noise, trigger levels, etc.), is close to 10$^{14}$ eV for vertical showers. The detector stations are connected by low-attenuation (RG-213) coaxial cables to the DAQ room, where the signals are recorded using a 4-channel digital oscilloscope connected to a computer.
Depending on the selected trigger conditions, which are generated by standard NIM electronic modules, the number of accepted events ranges from $\sim$ 250 to $\sim$ 1500 per day. The detector stations --------------------- This array is a project which grew out of the first 1:1 scale prototype of a WCD [@nimtank] (see Figure \[fig:detector\]) built in 1995 by members of the local Pierre Auger Project Collaboration [@PAP]. This first detector (labelled A in Figure \[fig:array\]) was constructed as a cylindrical tank made of 0.68 mm stainless steel plate, with a footprint area of 10 m$^{2}$. The effective water depth is 120 cm. Three 8-inch photomultipliers (Hamamatsu R1408), symmetrically placed at 120 cm from the tank axis, were installed looking down from the top of the detector, with only the photocathodes immersed in the water working as Čerenkov radiator. Thus, a sample of the Čerenkov photons emitted when a charged particle crosses the tank is collected by the three PMTs. This detector, being a prototype, was designed as a flexible system, allowing modifications of the photomultiplier positions, the effective water height, or the inner lining material. During the measurements as a component of the TANGO array the configuration of this detector was that of the Pierre Auger Project baseline design [@PAP], with the dimensions mentioned above. In order to improve the optical properties of the inner surfaces all detectors were fully lined with Tyvek, which is a highly UV-diffusive and reflective material [@tyvek]. The two other detectors sitting at the vertices of the triangle (B and C in Figure \[fig:array\]) have the same general dimensions quoted previously. They are made of 1 mm thick stainless steel, and the external walls are shaped as a dodecagon (see Figure \[fig:station\_1\]). In these detectors we used Hamamatsu R5912, 8-inch diameter PMTs, arranged with the same geometry used in the first detector tank.
The fourth detector (D) is smaller; it was made using a fiberglass-reinforced polystyrene tank, with a footprint of 0.5 m$^{2}$ and an effective water depth of 80 cm. The tank was also internally lined with Tyvek and only one 3-inch PMT was installed, centered on the top of the tank. The larger outer detectors are better suited to measuring the lower particle densities generated by showers falling relatively far away from them, either close to the center of the array or well outside the array. The relatively large ratio between the volume of the external detectors and that of the smaller central detector helps to improve the accuracy in the determination of the particle densities in those cases where the shower core falls close to the center of the array. This is so because of the larger dynamic range of the central detector, which admits higher particle densities without going into saturation (see Section \[lab:elect\]), and the higher sensitivity of the larger detectors placed on the vertices of the triangle. All PMTs used in the WCDs were mounted in water-tight enclosures that protect their voltage dividers from moisture, and only the photocathode areas of the glass bulbs are immersed in the water radiator (see Figure \[fig:detector\]). The glass bulbs of the PMTs were glued to the PVC housings using an elastic silicone compound to reduce mechanical stresses that could break the glass, as happened in the Milagrito experiment [@Milagrito]. Local high-voltage power supplies, fed from the AC mains, were installed near each station. A grounded-cathode bias configuration was adopted for all PMTs, to prevent eventual noise produced by electrical leaks or discharges through the glass. The water used to fill the tanks was treated in a reverse-osmosis plant, producing an average final water resistivity of about 1 M$\Omega$-cm.
Before filling, the tanks were carefully degreased, brushed with water and mild detergent, and rinsed abundantly with the same water used as the detector material. These precautions, together with the darkness and the fact that the water used as detector material has a very low level of bacteria nutrients, virtually blocked any extensive biological activity [@gap_96_036]. More than one year after the filling of the detectors, no significant decrease in the signal strength has been observed. Characterization of the photomultiplier tubes --------------------------------------------- The gains of the three (R1408) PMTs used in the first prototype had been measured previously [@gap_ganancia], and we built a dark box adequate for measuring the gain and dark current of the new Hamamatsu R5912 tubes purchased for the new detector stations. We also used this dark box to characterize the photocathode sensitivity profiles and the influence of the Earth’s magnetic field direction on the PMTs. The box measures 50 x 50 x 100 cm and accepts one PMT, which is mounted in an axially rotatable holder. Using electrons thermally emitted from the photocathodes at room temperature, we measured the gains of the PMTs by means of the single-electron technique. In these measurements we tested different voltage divider configurations in a high-voltage range from 1100 to 1800 V. In all cases the tubes were kept in total darkness for at least 2 hours before collecting the single-electron spectra, to reduce the rate of multi-electron emissions due to fast-decaying fluorescence in the photocathodes. The dark current pulse rate (threshold = 1/3 p.e.) after one hour of storage in total darkness was about 900 Hz at 1500 Volts. A typical spectrum and a plot of the gain values are shown in Figure \[fig:se\]. A study of the influence of the Earth’s magnetic field direction relative to the dynode geometry on the PMT gain was also performed.
The gains were carefully measured at a fixed HV setting (1.5 kV) for 8 different azimuthal orientations of the PMTs, covering 360$^{\circ}$. The tubes were measured always keeping their axes vertical. No significant shifts (less than $\pm$ 1%) were observed in the peak positions of the single-electron spectra obtained in this way. From these results we concluded that the local influence of the Earth’s magnetic field direction on the gain can be neglected, and thus no special care was taken to install the tubes in the detector stations at any particular dynode orientation. Gain settings and calibration of the detector electronics {#lab:elect} --------------------------------------------------------- After testing several possible configurations we adopted a passive, grounded-cathode design for the voltage dividers, as simple and reliable as possible, assuring both wide dynamic range and linearity. The final configuration chosen is similar to that recommended by Hamamatsu for the R5912 tubes. Metal-film resistors were used, and decoupling capacitors stabilize the last 3 dynodes. The finished printed cards were protected with water-resistant varnish and installed, together with bags of desiccant material, in the water-tight enclosures mentioned above. It is important that the gains of the 3 PMTs installed in each detector station are matched to each other to avoid imbalances in charge collection. A mismatch could impair the homogeneity of the response of the detector, and even reduce the dynamic range. Because of the availability of equipment and space, we used a common high-voltage supply in each detector station. Because of this, and given that the observed differences in gain were small (less than 15 %), we compensated the gain differences by using passive, constant-impedance variable attenuators to reduce as necessary the output pulse amplitude of the two tubes having larger gains in each station.
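The attenuator settings needed for this gain matching follow directly from the measured relative gains: a tube with gain $g$ must be attenuated by $20\log_{10}(g/g_{\min})$ dB of amplitude to match the lowest-gain tube. A small sketch of our own, with invented example gains:

```python
import math

def matching_attenuations_db(gains):
    """Amplitude attenuation (dB) to apply to each PMT output so that
    all three match the lowest-gain tube of the station (sketch;
    the gain values used below are illustrative)."""
    g_min = min(gains)
    return [20 * math.log10(g / g_min) for g in gains]

# Example: gains within the <15% spread quoted in the text.
settings = matching_attenuations_db([1.00, 1.10, 1.15])
```

The lowest-gain tube gets 0 dB; a 15% higher gain corresponds to roughly 1.2 dB of attenuation.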
In order to determine the relative gains we adopted a routine procedure based on the measurement of the signal from each PMT produced by background muons. The trigger for this measurement is taken from the signals of another PMT belonging to the same station, as described in [@GAP_PRYKE_CALIB]. Once the average relative gains are obtained for the three tubes, the attenuators are set on the two PMTs with higher gains, matching the peak position produced by the PMT with the lowest gain. This gain matching and calibration procedure has been performed on a monthly basis during the complete period of measurements. The system proved to be very stable and very few adjustments were required over this time. In addition to this periodic gain matching procedure, a daily routine for monitoring the overall gain of the 4 detector stations has been performed. It also uses the natural background muons falling on the detectors as the source of calibration signals. It has been found that the spectra of the summed signals of the three PMTs within each peripheral detector station, and also the response of the 3" PMT installed in the small central detector, clearly show a peak when they are triggered by themselves. Although it is somewhat broad, the position of this “background” muon peak is very nearly the same as the position of the similar peak obtained when a pair of external plastic scintillators is used to select vertical and central muons for triggering. This holds for both voltage and charge spectra. This experimental result, which might be due to the remarkable uniformity in the light distribution produced by the Tyvek liners, provides a simple and reliable procedure for remote monitoring and calibration of the station gain [@gap_00_027].
This peak value has been called the VEM ([**V**]{}ertical [**E**]{}quivalent [**M**]{}uon), and is defined as the charge (or voltage) peak produced by singly charged, energetic particles crossing the detector vertically along its axis. The VEM value is a characteristic parameter of each detector, and depends on its components, geometry, construction, and also on its operating conditions (transparency of the water radiator, bias voltage, etc.). The VEM value provides a practical way to normalize the signals from different detectors and, moreover, to express the total signal produced in each station by an EAS ([*i.e.*]{} muons, electrons, gamma rays, etc., hitting the station) in terms of an “equivalent reference particle”. Muons have been selected in this case as they are present everywhere and proved to be very convenient for calibration. In previous studies [@gap_97_032] performed with the prototype detector, very good homogeneity in charge collection was obtained by using the sum of the signals of the three PMTs. This behavior, which might again be attributed to the excellent light spread produced by the Tyvek liners, holds almost independently of the entrance points and directions of the muons. Following these results, our design included fast active adder circuits installed in the peripheral detectors. The operational amplifier employed (CLC 452) also works as the driver for the relatively long RG-213 cable carrying the signals from each detector to the DAQ room. Although their speed response is excellent (we require a 130 MHz bandwidth), these circuits introduce a limitation in the dynamic range, as the maximum voltage span is less than 2 V. In order to reduce the reflection of the picked-up noise signal, the impedance of the cable was matched at the sending end, but this further reduces the available amplitude to only 1.4 V.
On the other hand, an acceptable signal-to-noise ratio in the DAQ room requires minimum signal amplitudes for single muons of $\sim$ 100 mV. These figures, together with the measured RF pick-up and the signal attenuation produced along the cables, limit the final available dynamic range to 1 to 15 muons. Even though this dynamic range is limited, it has been found to be acceptable, because only $\sim$ 30% of the events had to be rejected in the off-line data analysis due to electronic saturation in at least one station. Because there is only one PMT in the central detector, and the cable length to the DAQ room is relatively short ($\sim$ 45 m), its anode signal was sent directly, without a summing circuit or attenuator, and hence without the limitation in dynamic range present in the outer detectors. Trigger System and Data Acquisition ----------------------------------- The signals from the four stations arriving at the electronics front panel are split using linear fan-in/fan-out (FIFO) modules (see Figure \[fig:trigger\]). Then, they are fed directly to the input connectors of a four-channel digital oscilloscope (Tektronix TDS 3034, set at 500 MS/s). The oscilloscope is the core of our DAQ system and works as the digitizing stage for detector signals under control of a STROBE pulse. The detector signals arrive unsynchronized, after travelling along different lengths of cable (206, 196, 310 and 44 m for detectors A, B, C, D, respectively). Thus, in order to generate valid trigger conditions, it is essential to compensate for these different transit times. For this purpose, we use the second set of signals from the FIFOs to generate logical pulses in analog discriminators (discrimination level $\sim$ 1 VEM); these pulses are then delayed accurately to compensate for the differences in time and fed to a majority logic coincidence unit to select the desired trigger condition.
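The compensating delays in front of the majority coincidence unit follow directly from the cable lengths quoted above. A sketch, assuming a typical RG-213 velocity factor of 0.66 (our assumption, not quoted in the text):

```python
C_VAC = 0.299792458            # speed of light in vacuum, m/ns
V_CABLE = 0.66 * C_VAC         # assumed signal speed in RG-213 coax

def equalizing_delays_ns(cable_lengths_m):
    """Extra delay each discriminator output needs so that all four
    station signals line up at the coincidence unit: the longest cable
    sets the reference, shorter cables are delayed by the difference."""
    transit = [length / V_CABLE for length in cable_lengths_m]
    t_max = max(transit)
    return [t_max - t for t in transit]

# Cable lengths to stations A, B, C, D from the text:
delays = equalizing_delays_ns([206, 196, 310, 44])
```

Station C, on the longest cable (310 m), needs no extra delay; the central detector D, on the 44 m cable, needs the largest one, of order 1.3 $\mu$s.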
The time window in this module is set to $\sim$ 1.1 $\mu$s, safely covering the maximum time used by the EAS front to cross the array, even for almost horizontal directions. The available digital oscilloscope does not feature an external trigger input. For this reason one of the analog channels had to be used for triggering purposes, in addition to its signal-digitizing function. To this end the STROBE signal generated by the coincidence unit, indicating the occurrence of an event of interest (in practice 3- or 4-fold coincidences), is delayed about 8 $\mu$s after arrival of the last detector signal and then summed onto one of the detector channels (channel 4 in Figures \[fig:trigger\] and \[fig:scope\]). Because of the relatively low singles counting rates and the introduced delay of 8 $\mu$s, no overlaps occur in practice. Since the STROBE pulse is summed with opposite polarity with respect to the detector signals, the Advanced Trigger feature of the oscilloscope could safely be used for triggering. When the STROBE pulse is detected by the oscilloscope, the SAVE procedure is initiated, [*i.e.*]{}, the traces stored in the four channel memories corresponding to the last 16384 ns are frozen and transferred to the PC disk. This time slice allows us to obtain a good measurement of both the desired detector signals and the unavoidable radio noise picked up in the long cables carrying the signals. The system dead time (digitization, data transfer and PC storage) is 22 seconds per event. This relatively long dead time is primarily produced by the transfers through the RS-232 serial port, working at 19200 bps. This dead time is considered acceptable in comparison with the average time between events, which is of the order of 6 minutes. The first time region of 8192 ns (up to the first cursor in Figure \[fig:scope\]) is used to compute the bias level present at the arrival of the detector signals.
The typical pick-up noise appears as a dominant oscillation with a period of the order of 1 $\mu$s, corresponding mainly to the AM broadcasting stations. The following 8192 ns region, between the cursors, is the time region where the detector signals are stored. The last region, which contains the STROBE signal, is not saved to disk. The internal 150 MHz bandwidth low-pass filter built into the oscilloscope is kept active in order to reduce the amplitude of higher frequency signals. A fast Fourier analysis of the detector signals indicated that their main harmonic components extend up to $\sim$ 100 MHz, thus little distortion of the detector signals is introduced by the filter. A special program was written to drive the data acquisition system in a completely automatic way. Normal collection of shower events is performed when the program runs in “Survey Mode”. The detection of a STROBE pulse causes the oscilloscope traces recorded in the 4 channels to be saved to disk, together with information on the year, day, and local civil time, which allows reconstruction of the equatorial or galactic coordinates of the shower arrival direction. In addition, every day the program switches at a predetermined time to the “Calibration Mode”. In this mode, the collection of data from each detector station is self-triggered, in order to record background events to calibrate the stations, [*i.e.*]{} to determine the daily VEM value for each station. The four detectors are measured sequentially in this mode, under program control. The data are stored to disk and analyzed off-line. Roughly one and a half hours are required to acquire 3000 background events (found to be adequate to obtain the VEM values with an error of $\sim$ 5%) for each station and to save the calibration data from the four detectors. The starting time for the calibration procedure, and the total number of background events for each detector, are set in an ASCII file.
Once the calibration is completed, the program automatically switches back to the “Survey Mode” described above. This mode of operation is kept until the “Calibration Mode” is called up again, at the programmed time the next day. The singles counting rates of the four detector stations are permanently recorded using a CAMAC scaler with a refresh time of 1 s, and are also saved to disk. This information is valuable for monitoring the status of each station. It helped in discarding particular data when the operating condition of a station became unstable due, for instance, to a high level of pick-up noise, and gave an alert signal when a station needed maintenance, in occasional cases of light leaks. The recording of the counting rates is also program-controlled and does not require operator action once it is launched. Simulated performance of the array ================================== In order to characterize the behavior of the array, detailed simulations were performed to estimate its efficiency for shower detection and its angular and energy resolutions. A special routine simulating the detector response to the different shower particles has also been written, to provide an input for the reconstruction routines. Shower database --------------- The AIRES program [@AIRES] with the SIBYLL hadronic package was used in the first step of the simulation pipeline: the construction of an adequate shower database containing detailed information about the secondary particles at ground level produced by primary cosmic rays of the energies of interest. The shower simulation starts with the injection of a primary particle in the high atmosphere ($\sim$ 100 km above sea level) and tracks the different generations of secondary particles down through the subsequent cascade. The technique known as [*thinning*]{} [@AIRES] was used to reduce the CPU time and the disk storage requirement.
This procedure consists of explicitly tracking all particles above a certain energy threshold (the [*thinning*]{} energy), while particles below the threshold are represented by a sample carrying statistical weights. In our simulations we set a relative thinning energy level of 5.10$^{-5}$ with respect to the primary particle energy. To construct the shower database, twenty primary energies ranging from 10$^{14}$ eV to 10$^{18}$ eV were selected, and two nuclear species (protons and iron nuclei) were considered as primary particles. They were injected at zenithal angles from 0$^{\circ}$ to 60$^{\circ}$, in 15$^{\circ}$ steps. To reduce the artificial fluctuations due in part to the thinning method [@gap_96_020], and also to obtain representative values of the relevant parameters, batches of 100 showers were simulated under the same initial conditions (as described above), and their average and RMS values were used. All these simulations were performed considering a ground level of 15 m.a.s.l.

The AIRES program produces a set of tables written in ASCII code, in which the information on the secondary particles reaching ground level (after deconvolution of the thinning algorithm) can be expressed simply as particle densities as a function of the core distance. This function is called the [*lateral distribution function*]{} (LDF). In addition, for those particles “reaching” the ground level, these tables provide the landing time as a function of the distance to the shower core and also their energy distribution. These tables include the mean, RMS, and extreme values for each computed variable bin. Figure \[fig:aires\] shows a typical example of AIRES results for the LDF and for the energy distributions produced by a proton primary of 2.10$^{15}$ eV impinging at a zenithal angle of 30$^{\circ}$. Thus, by running protons and iron nuclei as primary particles, the AIRES database contains 20000 simulated showers covering the energy and zenithal ranges of interest for the TANGO Array.
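The statistical-weight idea behind thinning can be illustrated with a minimal sketch (a simplified, Hillas-style version; AIRES implements a more elaborate algorithm): a particle below the thinning energy is kept with probability $E/E_{th}$ and carries weight $E_{th}/E$, so the weighted energy is conserved on average.

```python
import random

def thin(particles, e_th, rng=random.Random(0)):
    """Simplified statistical thinning: particles above the thinning
    energy e_th are kept with weight 1; below it, each particle is
    kept with probability E/e_th and carries weight e_th/E, so the
    expected weighted energy is conserved."""
    kept = []
    for e in particles:
        if e >= e_th:
            kept.append((e, 1.0))
        elif rng.random() < e / e_th:
            kept.append((e, e_th / e))
    return kept
```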
The shower database tables contain only particle densities, energies, and arrival times (relative to the arrival time at the core position) for muons, electrons, and gamma-rays.

Array simulation procedure
--------------------------

In order to predict the response of the array, the information on showers contained in the AIRES database tables was used to simulate “events”, [*i.e.*]{} the effect of individual showers falling relatively close to the array. A simulated shower is a set of information describing in detail the calculated number and properties of secondary particles reaching the ground level. A simulated event is the set of information about the calculated effect of the shower on the array, taking into account the simulation of the detector, DAQ hardware, electronics, etc. With this purpose the showers are read from the AIRES database tables, establishing the number, energy, and type of particles hitting each detector station, and their relative arrival times. The result of an event is a set of electronic signals from the detectors which are stored in computer memory. The simulation of each particular event makes it possible to determine whether or not it triggers the DAQ system. A total of 360000 “events”, including all primary species and energies, have been simulated from the information contained in the AIRES tables, and they constitute the [*simulated events database*]{}. The procedure to simulate one event is described as follows:

- For each primary energy, 9000 shower core positions were selected, landing at random in an area larger than the geometrical area of the array. In this way we can estimate the effect of showers falling outside the boundaries and obtain an estimate of their triggering efficiency. The size of this landing area was scaled logarithmically with the primary energy in order to take into account the increase of the shower size at ground level with energy.
- For each core landing position the zenithal and azimuthal angles of the event were chosen as follows: the azimuthal angle was uniformly distributed, and the zenithal angle distribution follows a cos$^{3}$($\theta$) function, with a cut-off at 45$^{\circ}$. This cut-off was selected in accordance with the atmospheric depth at Buenos Aires, where most EASs arrive within a cone of $\sim$ 40$^{\circ}$. The exponent of the distribution was chosen so as to produce a distribution flatter than the flattest one reported to date [@Milagrito]. This was done with the purpose of including in the database a statistically significant number of simulated events at higher zenithal angles.

- Once the core position and the angles were established for each particular event, the distances from each detector station to the core were calculated. Then, from the AIRES tables the appropriate mean values and dispersions were extracted and interpolated to reproduce the simulated event.

- To include the shower-to-shower fluctuations, uniform random number generators were profiled (using the accept-reject technique) [@ac-rej] to reproduce the mean value and dispersion of the AIRES particle density tables contained in the database, according to the particular secondary particle considered. With the AIRES tables interpolated to the particular conditions of each simulated event, and the modified random number generators, the densities (particles/m$^{2}$) of muons (both charges), electrons (both charges), and gamma-rays hitting each detector neighbourhood were obtained. Finally, these density values were scaled according to the geometrical area of each detector to obtain the number of particles falling over each detector in each shower.

- The energy and arrival time of every individual particle hitting each detector station were obtained using the same procedure (the accept-reject technique). This was done taking into account the particle species and its distance to the core.
- Once the number of particles, energies, and arrival times of all particle species falling on each station were obtained for the event, the detector signal was computed as described in detail in \[lab:simdet\].

- The next step in this calculation was the simulation of the response of the data acquisition electronics, by performing a check to determine whether or not each particular simulated shower produces a valid trigger. With this purpose, the simulated traces for each detector station were scanned, searching for the threshold crossing times in each channel (there could exist multiple crossings in a single event). Then, the threshold crossing times determined in each channel were compared to those of the other channels to establish the presence of temporal coincidences between the traces (an EAS). If multiple crossing times were present in one or more channels, each one of them was searched for time coincidence with the other channels. The threshold levels in all channels were set as equivalent to the signal amplitudes produced by 1 VEM in each particular detector, and the time window for the coincidences was set to 1.1 $\mu$s, in correspondence with the real situation during measurements. If a coincidence condition was found, the event was classified according to the number of stations involved in the coincidence.

- Finally, the behavior of the A/D converter stage was also simulated, featuring an FADC working at 500 MS/s (like that used in the data acquisition system). An appropriate noise generator has been included. From noise spectrum measurements we concluded that the local AM radio stations are the main noise sources, contributing $\sim$ 15 to 30 mV to the signal (the typical signal amplitude corresponding to one single particle is $\sim$ 100 mV).
The noise spectrum can be described as a continuous distribution with superposed, strongly varying peaks corresponding to the well-known local AM broadcasting frequencies, ranging from $\sim$ 550 to $\sim$ 1650 kHz. The FM band is also seen in the noise spectrum; however, its amplitude is much lower and can be safely ignored. On this basis, in order to obtain a realistic simulation, we added to the simulated signals a noise spectrum which follows the description given above. All computer programs required for the simulation pipeline (except AIRES) were especially developed in the present work.

### Detector simulation {#lab:simdet}

A simple and very fast simulation program was written to emulate the detector response. In this program, instead of simulating in detail the production and transmission of the Čerenkov photons emitted during the passage of charged particles through the water, we used the detailed knowledge of the detector behavior acquired during the previous years of operation of the first prototype. On this basis, the detector response was reproduced according to a large set of measured parameters. In the following, prior to describing the detector simulation program, we present a summary of the experimental data obtained previously.

In previous experiments [@nimtank; @gap_Carla] the response of the WCD to vertical and tilted muons was studied in detail. In these experiments, the entrance and exit points of the muons on the detector surface were carefully selected to cover all possible situations as completely as possible. A total of 38 different particle track lengths were measured, corresponding to 162 different situations derived from the symmetry properties of the detector configuration.
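Since the collected charge turns out to be proportional to the track length (see below), the geometric track length is the key quantity in the detector simulation. A minimal sketch of a chord-length computation in a cylindrical water volume; the radius and height below are placeholders, not the exact tank dimensions, and the muon is taken to move in the x-z plane.

```python
import math

def track_length(x0, y0, theta, radius=1.8, height=1.2):
    """Path length of a downward muon inside a cylindrical water
    volume. The muon enters the lid at (x0, y0) with zenith angle
    theta, moving in the x-z plane (placeholder geometry)."""
    dx, dz = math.sin(theta), -math.cos(theta)
    # Length to reach the bottom (z = -height)
    t_bottom = height / math.cos(theta) if dz < 0 else math.inf
    # Length to reach the side wall: |(x0 + t*dx, y0)| = radius
    a, b, c = dx * dx, 2 * x0 * dx, x0 * x0 + y0 * y0 - radius * radius
    t_wall = math.inf
    if a > 0:
        disc = b * b - 4 * a * c
        if disc >= 0:
            t = (-b + math.sqrt(disc)) / (2 * a)
            if t > 0:
                t_wall = t
    return min(t_bottom, t_wall)
```

A vertical muon through the lid gives the full detector height; a strongly inclined one is clipped by the side wall.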
A particle track is considered to be “fully contained” when the entrance point of the muon is anywhere on the lid and the exit point is on the bottom, or when the entrance and exit points are nearly diametrically opposite on the lateral cylindrical wall of the detector. A track with one endpoint on the side wall and the other on the bottom or on the lid is considered a “clipping corner” track. As a result of these measurements (which are summarized in Figure \[fig:carla\_thesis\]) we have found that the sum of the charges collected in the three PMTs of our WCD is, to a good approximation, directly proportional to the track length of the particle in the water radiator, regardless of the position of the entrance point or the zenithal angle of the track. For all measured tracks the digitized pulse shapes were recorded. The rise and fall times remain almost constant over the whole range of track lengths, which might be understood from the fact that these parameters are primarily determined by the highly diffusive properties of the Tyvek liner [@black_top]. These measurements have also shown that the fluctuations of the measured parameters (rise and fall times, voltage amplitude, and charge) are not larger than about $\pm$ 10 % of their mean values. These results were supported by GEANT [@geant] simulations performed previously [@gap_96_011; @gap_96_029].

\[ht\]

In addition to the response to fast muons, the response of the WCD to fast electrons and gamma-rays was obtained. Both electrons and gamma-rays also produce an amount of light proportional to their track lengths. It should be taken into account that gamma-rays are detected through their interaction with the water, which proceeds essentially through pair-creation processes. This is the most probable channel, given the relative cross sections at the typical gamma-ray energies present in an EAS.
Therefore, the signals produced by gamma-rays are roughly the same as those produced by fast electrons, provided their energy distributions are similar (see Figure \[fig:aires\]). It should be taken into account that the signal produced by a muon with energy higher than $\sim$ 400 MeV becomes indistinguishable from the signal produced by an electron with energy higher than $\sim$ 250 MeV [@pryke_phd]. Hence, these values were used to normalize the signals from electrons to those corresponding to muons. In order to include in the simulations the effect of the signal distortions in the cables, we recorded in a previous work the average pulse shape for vertical muons transmitted through 200 m of RG-213 cable. Taking into account all this information, the simulation of the surface detector signal was carried out as described below:

- [**Muons:**]{} For each muon hitting a detector station, a zenithal angle is selected using a gaussian-shaped random number generator, with its mean value centered on the zenithal angle of the primary particle of the EAS, and a sigma value of 4$^{\circ}$. Hence the particles are restricted to an angular range of about $\pm$ 25$^{\circ}$ [@pryke_phd]. Once the zenithal angle is established, the range of the particle in water is obtained according to its energy, and a peak amplitude is found as a function of its range. If the range of the muon exceeds the track length inside the surface detector, then the amplitude is made proportional to the track length. Finally, rise and fall times are selected with a gaussian-shaped random number generator and the pulse shape is written to memory, considering the respective time delay from the AIRES results (again conveniently spread using a gaussian random number generator). The signal peak amplitudes, as well as the rise and fall times, are also established from gaussian-shaped random number generators, with their relative sigma values obtained from measurements.
- [**Electrons:**]{} The general procedure is similar to that described for muons. The main difference occurs in the calculation of the range which, in the case of the electrons, is assumed to be completely contained within the WCD, [*i.e.*]{} no backscattered electrons are simulated. The values of the peak amplitudes are obtained from electron simulations performed previously using the program GEANT.

- [**Gamma Rays:**]{} The energies of the $\gamma$-rays originating in an EAS range from $\sim$ 10 MeV to $\sim$ 100 MeV, and the main interaction channel in water is the pair-creation process. In this energy regime, the mean interaction length of gamma-rays in water is about 80 cm. The track length for a specific gamma-ray (which depends on the zenithal angle, selected as described above) determines the probability of creation of an electron-positron pair. In this case the electron simulation routine is called with two electrons whose total energy balances the gamma-ray energy. The energy of the recoiling nucleus is neglected.

The resulting program is very fast; once the AIRES tables are locally available on a 233 MHz PC running under Linux, it simulates an average of 100 events/minute and produces realistic pulse shapes, as shown in Figure \[fig:sim\_vs\_data\], where a real shower is compared with a simulated one. In summary, we have simulated the output of the digitizing electronic stage, reproducing the detector signal on the basis of previously known parameters relating the underlying physics to the detector response. The effect of the cables on the signal shape and the influence of the pick-up noise (only for the AM band) have been considered.

Shower Reconstruction
=====================

The reconstruction of the showers aims to find the direction of the shower axis and to estimate the energy of the primary cosmic ray.
This reconstruction could in principle be performed through a careful evaluation of a number of parameters measured from the oscilloscope traces, and from a comparison with the results of the simulations. The reconstruction procedure starts by obtaining the direction of the shower axis from a fit to the arrival times at each detector, assuming a flat shower front. Once the direction is determined, the core position is found through minimization of the lateral distribution function using the particle density falling over each station. Then, using Monte Carlo simulations, it is possible to correlate the shower primary energy with the particle densities measured by the detector stations.

Reconstruction of the shower direction
--------------------------------------

The reconstruction of the direction is based on the arrival times of the shower front particles at each detector station in the array. In order to determine the “trigger time” in each station, the voltage signal is time-integrated, and the crossing times of charge amplitude values equal to 10%, 50%, and 90% of the maximum collected charge are determined, with the condition that no dynamic range saturation occurs. These times are called $t_{10}$, $t_{50}$, and $t_{90}$, respectively. These parameters behave like “constant fraction discriminator” crossing times, and they are valuable for comparing the overall time structure of the station signals when different particle densities are measured. The $t_{10}$ values are good indicators of the arrival time of the shower front at the detectors, and they are used to obtain the shower axis direction, which coincides with the primary cosmic ray arrival direction. On the other hand, the $t_{50}$ and $t_{90}$ values are more closely related to the time structure and temporal width of the shower than to the shower direction, and can be used to estimate the primary mass composition and also the core distance [@linsley; @watson].
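The $t_{10}$ extraction from the integrated trace and the plane-front direction fit described above can be sketched as follows; the arrival-time sign convention in the fit is our assumption, and the code is a simplified sketch rather than the actual reconstruction routine.

```python
import numpy as np

C = 299.792458  # speed of light [m/us]

def crossing_times(trace, dt):
    """t10, t50, t90: times at which the running integral (collected
    charge) of a voltage trace crosses 10%, 50% and 90% of its final
    value; dt is the sampling step."""
    q = np.cumsum(np.asarray(trace, dtype=float))
    q /= q[-1]
    return tuple(dt * int(np.searchsorted(q, f)) for f in (0.1, 0.5, 0.9))

def fit_plane_front(x, y, t):
    """Least-squares plane shower front: c*t_i = c*t0 + u*x_i + v*y_i,
    with (u, v) the direction cosines of the shower axis. Returns
    zenith and azimuth angles [rad]."""
    A = np.column_stack([np.ones_like(x), x, y])
    coeff, *_ = np.linalg.lstsq(A, C * np.asarray(t), rcond=None)
    u, v = coeff[1], coeff[2]
    theta = float(np.arcsin(min(1.0, float(np.hypot(u, v)))))
    phi = float(np.arctan2(v, u))
    return theta, phi
```

With four stations the plane fit is overdetermined and the least-squares solution is used, as in the text; with three stations it reduces to exact triangulation.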
If three ground stations detect a shower, its axis can be determined by triangulation from the arrival times and the positions of the stations. This is done by searching for a unique, downward-going shower front, which we assume to be a plane moving at the speed of light. When all four detectors are hit, a least-squares method is used to find the best fit to this plane shower front. More elaborate and detailed algorithms can be used to obtain the shower direction including, for instance, the radius of curvature of the shower front. However, with the data available from only our four stations, more complicated algorithms very often fail to converge.

### Angular resolution

As described above, the $t_{10}$ values were obtained from the simulated events database and used to obtain the arrival direction of each event. From this reconstruction the zenithal and azimuthal angles $\theta$ and $\phi$ were obtained. These angles are the spherical angular components of a vector normal to the (assumed) plane shower front. The accuracy of the reconstruction is determined by comparing these angles with the “true” angular direction of the particular simulated event, which is read from the events database. As can be seen in Figure \[fig:ang\_res\], the angular resolution ($\sigma$) of the array improves progressively with energy in the decade of 10$^{14}$ eV, then remains almost constant in the decade of 10$^{15}$ eV, and slowly degrades beyond $\sim$ 10$^{16}$ eV. The reconstructed plane is, actually, parallel to the plane tangent to the curved shower front surface crossing the array at its center point. Beyond $\sim$ 10$^{16}$ eV the shower front disk is much larger than the geometrical size of the array, and the probability of having the shower core fall away from the array while still producing a trigger is higher than the probability for the core to fall closer.
Because of the finite radius of curvature of the shower front, the vector normal to the tangent plane is more tilted with respect to the shower axis at points lying far away from the core than at points closer to the core. Typical shower front curvature radii are of the order of 10 km; hence, at a distance of 300 meters from the geometrical center of the array, the normal to the tangent plane is tilted $\sim$ 2$^{\circ}$ with respect to the shower axis. This angular difference between the shower axis and the reconstructed direction has to be added to the intrinsic angular resolution of the array, which is of the same order of magnitude. This effect might explain the degradation of the angular resolution at higher energies. It may also be interpreted as an energy limit for the validity of the flat shower front assumption, given the size of our array.

Reconstruction of shower energy
-------------------------------

The axial symmetry assumption for an EAS is relevant for the energy analysis. It means that in a plane perpendicular to the shower axis, the particle density depends only on the radial distance from the axis. On the ground plane this symmetry is lost (unless the shower is vertical). However, for moderate zenithal angles ($\le$ 40$^{\circ}$) the assumption of a symmetric distribution is a valid approximation. For instance, if the EAS has a zenithal angle of 40$^{\circ}$ and the shower front has a diameter of 300 meters (typical size of the TANGO Array), the “forward” component of the shower front travels about 250 meters more than the “backward” component to reach the ground. Considering the attenuation length of (780 $\pm$ 35) gr/cm$^{2}$ measured in the Haverah Park experiment [@edge], the forward component of the shower front would be attenuated only $\sim$ 5% with respect to the backward component. This calculation shows the validity of the approximation. On this basis, we assume in the following that axial symmetry is a valid assumption.
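The $\sim$ 5% figure quoted above can be checked with a one-line estimate, assuming a constant near-ground air density over the extra 250 m of path (the density value is our assumption):

```python
import math

RHO_AIR = 1.2e-3      # air density near ground [g/cm^3] (assumed)
ATT_LENGTH = 780.0    # attenuation length from Haverah Park [g/cm^2]

def forward_backward_attenuation(extra_path_m):
    """Relative attenuation of the forward component of the shower
    front, which traverses `extra_path_m` more atmosphere than the
    backward component (constant near-ground density assumed)."""
    extra_depth = extra_path_m * 100.0 * RHO_AIR  # [g/cm^2]
    return 1.0 - math.exp(-extra_depth / ATT_LENGTH)
```

For 250 m of extra path this gives roughly 4%, consistent with the $\sim$ 5% estimate in the text.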
A key parameter required to estimate the energy of an EAS is the LDF, [*i.e.*]{} the particle density as a function of the distance to the core position. From the results of previous experiments [@HP; @AGASA] it is possible to propose the functional dependence: $$\label{eq:ldf} {\rho}={A\over{r^{\eta + r/r_0}}}$$ where $\rho$ is the particle density (\[VEM/m$^{2}$\]), $r$ is the distance to the core (\[m\]), $A$ is a normalization constant (proportional to the primary particle energy), and $\eta$ and $r_0$ control the shape of the LDF. The last two parameters were obtained by fitting the previous expression to simulated particle density distributions, which include the detector response to the different shower particle species ($\mu^{\pm}$, $e^{\pm}$ and $\gamma$-rays) as described in \[lab:simdet\]. Figure \[fig:sim\_ldf\] shows the fits of Equation \[eq:ldf\] to the simulated particle densities “measured” with the simulated WCD for several primary energies, where all zenithal angles included in the simulation were averaged within each core-distance bin. The small “plateau” observed in the leftmost part of the 2.10$^{16}$ eV curve is produced by the (simulated) saturation of the electronic dynamic range.

            $\eta$            $r_0$
  --------- ----------------- ----------------
  Proton    1.99 $\pm$ 0.02   3400 $\pm$ 150
  Iron      1.94 $\pm$ 0.02   3400 $\pm$ 100
  Average   1.965             3400

  : Lateral distribution function parameters obtained from the simulated events database for both primary species. The average values used in the reconstruction algorithm are also shown.[]{data-label="tab:parameters"}

It should be noted that the $\eta$ parameter is slightly sensitive to the primary particle mass [@PAP86], as was found by fitting the previous expression to the simulated events. The reproduction of this dependence from simulations is an encouraging result.
Although different approaches were attempted to obtain at least a primary mass indicator from the reconstructed events in the simulated database, none of them was satisfactory, probably because the simulated shower-to-shower fluctuations mask the small differences in the $\eta$ parameter for different primary species. Because of this, in the following we use for the $\eta$ parameter an average of the values corresponding to proton and iron primaries (see Table \[tab:parameters\]).

It is known from extensive Monte Carlo simulations [@HP] that there exists a certain distance from the shower core at which the particle density of an EAS correlates with its primary energy and its fluctuations are minimized. In the present experiment, performed with only 4 detectors, we have used a simplified model in which the normalization constant $A$ of the LDF was correlated with the primary energy, instead of the particle density at a fixed distance from the core position. The LDF was obtained from particle density measurements in each detector station, far away from the core. The normalization constant of the LDF is found through minimization of the following equation: $$\label{eq:minimization} {\chi^{2}}={\sum_{i=1}^n \left(\rho_i-{A\over{r_i^{\eta + r_i/r_0}}}\right) }^{2}$$ where $\rho_i$ and $r_i$ are the particle density and the distance between the core impact position and the [*i*]{}-th station, respectively, and $\eta$ and $r_0$ were obtained from simulations as mentioned before. Finally, the particle density measured by each station is obtained as the ratio of the time-integrated oscilloscope trace (bias subtracted) to the VEM value corresponding to that particular detector station. This ratio yields the number of equivalent particles falling on the station. The equivalent particle densities (VEM/m$^{2}$) are then obtained by simple normalization to the respective detector areas.
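A sketch of minimizing Equation \[eq:minimization\] by a grid search over trial core positions, using the average LDF parameters of Table \[tab:parameters\]; solving for the optimal $A$ in closed form at each trial core is our simplification of the procedure.

```python
import numpy as np

ETA, R0 = 1.965, 3400.0  # average LDF parameters from the simulations

def ldf(r, a):
    """Lateral distribution function, Eq. (eq:ldf)."""
    return a / r ** (ETA + r / R0)

def fit_core(xs, ys, rho, grid):
    """Grid search for the core position; at each trial core the
    optimal normalization A of the LDF follows from d(chi2)/dA = 0.
    Returns (x_core, y_core, A)."""
    best = None
    for xc in grid:
        for yc in grid:
            r = np.hypot(xs - xc, ys - yc)
            if np.any(r < 1.0):       # avoid the singularity at r -> 0
                continue
            f = 1.0 / r ** (ETA + r / R0)
            a = np.dot(rho, f) / np.dot(f, f)   # least-squares A
            chi2 = np.sum((rho - a * f) ** 2)
            if best is None or chi2 < best[0]:
                best = (chi2, xc, yc, a)
    return best[1], best[2], best[3]
```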
The minimization of Equation \[eq:minimization\] was performed through a grid search on the simulated data of the events database, yielding the $x$ and $y$ coordinates of the core position, as well as the normalization constant $A$. Table \[tab:core\] shows the accuracy of the core position reconstruction obtained by this method for some energies. The accuracy is degraded at higher energies, probably because the shower front size becomes comparable with the array size.

  Primary Energy   Core position accuracy
  ---------------- ------------------------
  5.10$^{14}$ eV   40 m
  2.10$^{15}$ eV   30 m
  5.10$^{15}$ eV   55 m
  2.10$^{16}$ eV   110 m

  : Accuracy (RMS) of the reconstructed core position.[]{data-label="tab:core"}

### Correction for atmospheric attenuation

The axial symmetry assumption proved to be a valid approximation regarding the forward and backward components of the shower front for non-vertical showers. However, the effect of atmospheric attenuation on the development of a tilted EAS cannot be ignored. In order to evaluate the magnitude of this effect, we used the simulated events database to estimate the atmospheric attenuation of the shower as it propagates through the atmosphere. By simple geometrical considerations it is possible to propose a functional dependence of the form $$\label{eq:zen_corr} {N} = {A} e^{\left[\beta (sec(\theta)-1)\right]}$$ where $N$ is a normalization factor, proportional to the primary particle energy, that includes the atmospheric attenuation correction. For each primary energy, the simulated showers were divided into zenithal angle bins of 5$^{\circ}$ each, and a fit was performed to the data using the functional dependence shown in Equation \[eq:zen\_corr\], [*i.e.*]{} assuming only a dependence of the $A$ parameter on the zenithal angle. The average value obtained for $\beta$ by fitting the simulated data to Equation \[eq:zen\_corr\] is $\beta = 4.1 \pm 0.1$.
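With the fitted $\beta$, applying Equation \[eq:zen\_corr\] is a one-line scaling of the LDF normalization; a minimal sketch:

```python
import math

BETA = 4.1  # attenuation parameter fitted from the simulations

def attenuation_corrected(a, theta):
    """Attenuation-corrected normalization for a shower arriving at
    zenith angle theta [rad], Eq. (eq:zen_corr):
    N = A * exp(beta * (sec(theta) - 1))."""
    return a * math.exp(BETA * (1.0 / math.cos(theta) - 1.0))
```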
For the zenithal range of interest ($\theta \le$ 30$^{\circ}$), this atmospheric attenuation correction increases the estimated EAS energy by up to $\sim$ 50%.

### Primary energy assignment

Finally, after minimizing Equation \[eq:minimization\] and performing the atmospheric attenuation correction (for which the directional reconstruction is required), it is possible to show the relationship between $N$ -a parameter obtained from the shower reconstruction routine- and the primary energy (obtained from the simulated events database). It should be noted that in this survey over the simulated events database we found that, beyond $\sim$ 2.10$^{16}$ eV, $N$ fails to converge, and the linearity (in logarithmic scale) as a function of the primary particle energy is lost. Therefore, only data at lower energies are shown in Figure \[fig:prim\_en\]. From these fits we obtain the following expressions, useful to correlate the parameter $N$ \[VEM/m$^{2}$\] with the primary energy \[eV\]: $$\label{eq:energy_proton} {E_0} = {(4 \pm 1) 10^{9} N ^{1.17 \pm 0.03}}$$ and $$\label{eq:energy_iron} {E_0} = {(5 \pm 2) 10^{9} N ^{1.20 \pm 0.03}}$$ where Equations \[eq:energy\_proton\] and \[eq:energy\_iron\] correspond to proton and iron primaries, respectively. From these equations it is possible to estimate the relative error in the energy reconstruction. Even though the relative error depends on the $N$ value, this dependence is only logarithmic; the $\Delta N/N$ value was found to be $\sim$ 0.4 from the simulated database. This yields a relative error of 57% and 66% for protons and iron nuclei, respectively, in the energy range from $\sim$ 10$^{14}$ eV to $\sim$ 10$^{16}$ eV. According to these results, knowledge of the primary particle mass would be required to correlate the $N$ parameter correctly with the primary particle energy by choosing the proper expression. Strictly, this fact prevents us from making an unambiguous assignment of the primary energy.
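The energy assignment from Equations \[eq:energy\_proton\] and \[eq:energy\_iron\] can be sketched with the central values only (the quoted uncertainties are ignored):

```python
def primary_energy(n, primary="proton"):
    """Primary energy [eV] from the attenuation-corrected LDF
    normalization N [VEM/m^2], using the central values of the
    power laws fitted to the simulated events database."""
    if primary == "proton":
        return 4e9 * n ** 1.17
    if primary == "iron":
        return 5e9 * n ** 1.20
    raise ValueError("primary must be 'proton' or 'iron'")
```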
Furthermore, it should be recalled that both Equations \[eq:energy\_proton\] and \[eq:energy\_iron\] were obtained from surveys performed on the Monte Carlo simulations, which depend on the particular hadronic package utilized. On the other hand, the results obtained from both expressions are consistent within errors.

Summary
=======

A new Extensive Air Shower array was constructed in Buenos Aires during 1999 and commissioned in 2000. It consists of 4 Water Čerenkov Detectors, three of them arranged in a triangular shape with the fourth near the center of the triangle. The enclosed area is $\sim$ 30000 m$^{2}$. The detectors placed at the vertices of the triangle have a footprint area of 10 m$^{2}$, while the central detector has 0.5 m$^{2}$.

Detailed Monte Carlo simulations of the showers were performed using the AIRES code with the SIBYLL hadronic package. Various computer programs and routines were developed to simulate the array response, including the surface detector, front-end electronics, pick-up noise, and triggering. It should be noted that an effort was made to use experimental data whenever possible. The simulated events database contains a total of 360000 events. A reconstruction routine has been developed from the simulated shower database. According to the simulations, the angular reconstruction resolution is better than 5$^{\circ}$ in the range 5.10$^{14}$ eV to 10$^{17}$ eV. The expected energy resolution is roughly 60% in the range $\sim$ 10$^{14}$ eV to $\sim$ 10$^{16}$ eV. With respect to the primary mass determination, it is concluded from the present simulations that no unambiguous assignment can be made, at present, from the showers measured with our array. A fully automatic system for calibration, monitoring, and data acquisition has been built using standard NIM and CAMAC modules and a 4-channel digital oscilloscope connected to standard PCs.
Data have been continuously collected since September 2000, and the shower reconstruction analysis will be published in a forthcoming paper.

Acknowledgements
================

We are especially indebted to the late J. Vidallé for his invaluable dedication and help in the early stages of the TANGO Array. We are also deeply indebted to D. Simoncelli and E. Fisher for their outstanding work at the Mechanical Workshop of the TANDAR Laboratory. We would also like to express our gratitude to P. Stoliar, H. Di Paolo, C. Bolaños, J. Fernández Vásquez, and O. Romanelli for their help with different aspects of the electronic system. Thanks are given to M. Figueroa and M. Wagner for their work on the characterization of the PMTs. We would like to mention H. Grahmann, O. Ruiz, E. Altmann, A. Ferrero, and A. Etchegoyen, who helped us in many different ways. We also thank Prof. Ma Yu Quian, from Beijing University, for the donation of the 3-inch PMT used in the central detector, and [*Plásticos Industriales S.A.*]{}, especially E. Carricondo and P. Martelli, for the donation of the fiberglass-reinforced tank. Finally, we would like to thank Fermilab for the loan of some electronic modules used in the experiment (Fermilab Loan C96082). The work of P. Bauleo, C. Bonifazi, and A. Reguera was supported by different CNEA fellowships. This work was partially supported by a CONICET grant (PIP 4446/96).

[99]{}
P. Bauleo [*et al.*]{}, Nucl. Inst. and Meth. A406, 69 (1998)
The Auger Collaboration, [*Pierre Auger Project Design Report*]{}, Revised Edition, March 1997
R. Atkins [*et al.*]{} (The Milagro Collaboration), Nucl. Inst. and Meth. A449, 478 (2000)
A. Filevich [*et al.*]{}, Nucl. Inst. and Meth. A423, 108 (1999)
O. Bernaola [*et al.*]{}, GAP-1996-036
D. Ravignani [*et al.*]{}, GAP-1997-024
T. Kutter [*et al.*]{}, GAP-1997-025
P. Bauleo [*et al.*]{}, accepted for publication in NIM, and GAP-2000-027
J. Rodr[í]{}guez Martino [*et al.*]{}, GAP-1997-032
S.
Sciutto, The AIRES program is available at\ [**http://www.fisica.unlp.edu.ar/auger/aires**]{} D. Ravignani [*et al.*]{}, GAP-1996-020 C. Walck, Private Communication. C. Bonifazi [*et al.*]{}, Lic. Thesis (Buenos Aires University), and GAP note to be submitted F. Hasenbalg [*et al.*]{}, GAP-1997-027 GEANT, CERN Program Library FCEN & TANDAR Groups, GAP-1996-011 P. Bauleo [*et al.*]{}, GAP-1996-029 C. Pryke, Ph.D. Thesis, Leeds University (1996) D.M. Edge [*et al.*]{}, J. Phys. A 6, 1612 (1973) M. Lawrence [*et al.*]{}, J. Phys. G 17, 733 (1991) H. Dai [*et al.*]{}, J. Phys. G 14, 793 (1988) J. Linsley, J. Phys. G 12, 51 (1986) A.A. Watson [*et al.*]{}, J. Phys. G 7, 1199 (1974) The Auger Collaboration, [*Pierre Auger Project Design Report*]{}, Revised Edition, p. 86, March 1997 [**Note:**]{} The [*Pierre Auger Project Design Report*]{} and the Pierre Auger Project Internal Notes (GAPs) can be found at: [**http://www.auger.org/admin/index.html**]{}
--- abstract: 'Quadrupole scans were used to characterize the [leda]{} [rfq]{} beam. Experimental data were fit to computer simulation models for the rms beam size. The codes were found to be inadequate in accurately reproducing details of the wire scanner data. When this discrepancy is resolved, we plan to fit using all the data in wire scanner profiles, not just the rms values, using a 3-D nonlinear code.' author: - | W.P. Lysenko, J.D. Gilpatrick, L.J. Rybarcyk, J.D. Schneider, H.V. Smith, Jr., and L.M. Young,\ LANL, Los Alamos, NM 87545, USA\ M.E. Schulze, General Atomics, Los Alamos, NM 87544, USA title: 'Determining Phase-Space Properties of the LEDA RFQ Output Beam[^1]' --- INTRODUCTION ============ During commissioning of the [leda rfq]{}[@r1; @r2], we found that the beam behaved in the high energy beam transport ([hebt]{}) much as predicted. Thus the actual [rfq]{} beam must have been close to that computed by the [parmteqm]{} code. The [hebt]{} included only limited diagnostics[@r3], but we were able to get additional information on the [rfq]{} beam distribution using quadrupole scans[@r4]. A good understanding of the [rfq]{} beam and beam behavior in the [hebt]{} will be helpful for the upcoming beam halo experiment. The problems with the quad scan measurements were the strong space-charge effects and the almost complete lack of knowledge of the longitudinal phase space. Also, our simulation codes, which served as the models for the data fitting, did not accurately reproduce the measured beam profiles at the wire scanner. HEBT DESIGN =========== The [hebt]{}[@r5] transports the [rfq]{} beam to the beamstop and provides space for beam diagnostics. Here, we discuss [hebt]{} properties relevant to beam characterization. [*Design has Weak Focusing.*]{} Ideally, the [hebt]{} would have closely-spaced quadrupoles at the upstream end until the beam is significantly debunched, i.e., for about one meter.
After this point, we could use any kind of matching scheme with no fear of spoiling the beam distribution with space-charge nonlinearities. Our [hebt]{} design uses four quadrupoles, which is the minimum that provides adequate focusing for the given length. Any fewer than four quadrupoles results in the generation of long Gaussian-like tails in the beam, which would be scraped off in the [hebt]{}. [*Good Tune is Important.*]{} If a tune has a small waist in the upstream part of the [hebt]{}, the beam will also acquire Gaussian-like tails. Simulations showed that good tunes existed for our four-quadrupole beamline and were stable (slight changes in magnet settings or input beam did not lead to beam degradation). [*Beam Size Control.*]{} In our design, increasing the strength of the last quadrupole (Q4) increases the beam size in both $x$ and $y$ by about the same amount. This is because there is a crossover in $x$ just downstream of Q4 and a (virtual) crossover just upstream of Q4 in $y$. If the beam turns out not to be circular, this can be adjusted by Q3, which moves the upstream crossover point. [*Emittance Growth in HEBT.*]{} Simulations showed that the transverse emittances grew by about 30% in the [hebt]{}. However, this did not affect final beam size. At the downstream end of the [hebt]{} and in the beamstop, the beam is in the zero-emittance regime (very narrow phase-space ellipses). Simulations with [trace 3-d]{}, which has no nonlinear effects, and a 3-D particle code that included nonlinear space charge predicted almost identical final beam sizes. OBSERVED HEBT PERFORMANCE ========================= Near the beamstop entrance, there is a collimator with a size less than 3 times the rms beam size. Initial runs showed beam hitting the top and bottom of the collimator, indicating the beam was too large in $y$. This was fixed by readjusting Q3 and slightly reducing Q4 to reduce the beam size. After these adjustments, beam losses were negligible.
This indicated the [hebt]{} was operating as predicted and the [rfq]{} beam was about as predicted. There were no long tails generated in the [hebt]{} that were being scraped off. Thus our somewhat risky design, having only four quadrupoles, worked as designed. QUADRUPOLE SCANS ================ Procedure --------- Only the first two quadrupoles were used. For characterizing the beam in $y$, Q1, which focuses in $y$, was varied and the beam was observed at the wire scanner, which was about 2.5 m downstream. The value of the Q2 gradient was chosen so that the beam was contained in the $x$ direction for all values of Q1. For characterizing $x$, Q2 was varied. As the quadrupole strength is increased, the beam size at the wire scanner goes through a minimum. At the minimum, there is a waist at approximately the wire-scanner position. For larger quadrupole strengths, the waist moves upstream in the beamline. Measurements ------------ Quadrupole scans were done a number of times for a variety of beam currents for both the $x$ and $y$ directions. The minimum beam size at the wire scanner was near 2 mm, which was almost equal to the size of the steering jitter. Approximately ten quadrupole settings were used for each scan. Data were recorded and analyzed off line. Fitting to Data --------------- To determine the phase-space properties of the beam at the exit of the [rfq]{}, we needed a model that could predict the beam profile at the wire scanner, given the beam at the [rfq]{} exit. We parameterized the [rfq]{} beam with the Courant-Snyder parameters $\alpha$, $\beta$, and $\epsilon$ in the three directions. We used the simulation codes [trace 3-d]{} and [linac]{} as models for computing rms beam sizes in our fitting. The [trace 3-d]{} code is a sigma-matrix (second moments) code that includes only linear effects but is 3-D. The [linac]{} code is a particle in cell ([pic]{}) code that has a nonlinear $r$-$z$ space charge algorithm. 
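The rms quantities used in this fitting can be illustrated with a minimal sigma-matrix model of a quad scan: a thin-lens quadrupole followed by a drift to the wire scanner, with no space charge. This is only a sketch of the method (space charge is in fact strong in the actual measurement, and the function name, Twiss values, and drift length below are illustrative assumptions, not the actual [leda]{} parameters):

```python
import numpy as np

def quad_scan_rms(alpha, beta, eps, kL_values, drift):
    """rms beam size at a wire scanner located `drift` metres downstream
    of a thin-lens quadrupole with integrated strength kL (1/m).
    (alpha, beta, eps) are the Courant-Snyder parameters at the quad."""
    # 2x2 sigma (second-moment) matrix at the quadrupole entrance
    sigma0 = eps * np.array([[beta, -alpha],
                             [-alpha, (1.0 + alpha**2) / beta]])
    sizes = []
    for kL in kL_values:
        # transfer matrix: thin-lens quad, then a field-free drift
        M = np.array([[1.0, drift], [0.0, 1.0]]) @ np.array([[1.0, 0.0],
                                                             [-kL, 1.0]])
        sigma = M @ sigma0 @ M.T
        sizes.append(np.sqrt(sigma[0, 0]))   # rms size = sqrt(<x^2>)
    return np.array(sizes)

# scan the quad strength; the rms size at the scanner passes through a minimum
kL = np.linspace(0.0, 3.0, 61)
sz = quad_scan_rms(alpha=-1.0, beta=2.0, eps=1e-6, kL_values=kL, drift=2.5)
```

Scanning `kL` reproduces the qualitative behavior described above: the beam size at the wire scanner goes through a minimum where the waist falls near the scanner, and moves upstream for stronger focusing.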
Figure \[t1\] shows the rms beam size in the $y$ direction as a function of Q1 gradient. The experimental numbers are averages from a set of quad scan runs[@r4]. The other curves are simulations using the [trace 3-d]{}, [linac]{}, and [impact]{} codes. The [impact]{} code is a 3-D [pic]{} code with nonlinear space charge. The initial beam (at the [rfq]{} exit) for all simulations is the beam determined by the fit to the [linac]{} model[@r4]. (This is why there is little difference between the experimental points and the [linac]{} simulation.) There are significant differences among the codes in the predictions of the rms beam size. Table 1 shows the emittances we obtained when fitting to the [trace 3-d]{} and [linac]{} models. [|l|c|c|]{}\ & $\epsilon_x$ & $\epsilon_y$\ Prediction ([parmteqm]{}) & 0.245 & 0.244\ Measured ([trace 3-d]{} fit) & 0.400 & 0.401\ Measured ([linac]{} fit) & 0.253 & 0.314\ QUAD SCAN SIMULATIONS ===================== Profiles at Wire Scanner ------------------------ Since only the [impact]{} code has nonlinear 3-D space charge, we would expect that this code would be the most accurate and should be used to fit to the data. Both nonlinear and 3-D effects are large in the quad scans. However, we found that the [impact]{} code (as well as [linac]{}) could not predict well the beam profile at the wire scanner. Figure \[f3\] shows the projections onto the $y$ axis for two points of the $y$ quad scan, corresponding to Q1 gradients of 7.52 and 11.0 T/m. The agreement for 11 T/m, which is to the right of the minimum of the quad scan curve, is especially poor. We see that the experimental curve (solid) has a narrower peak, with more beam in the tail than the [impact]{} simulation predicts. Figure \[f1\] shows the $y$ phase space just after Q2 for two points in the $y$ quad scan. After Q2, space charge has little effect and the beam mostly just drifts to the end (there is little change in the maximum value of $|y'|$).
The graph on the left is for a Q1 value to the left of the quad scan minimum (9.5 T/m). The graph at the right shows the situation to the right of the minimum (10.9 T/m). The distribution in the left graph is diverging, while the one on the right is converging. It is this convergence that apparently leads to the strange tails we see in the experimental profiles at the wire scanner. Figure \[f2\] shows similar graphs a little before the wire scanner, 2.35 m downstream of the [rfq]{}. We see how the tails in the $y$ projection form for the case of the quad scan points to the right of the minimum, which correspond to larger quad gradients. While this appears to explain the narrow-peak-with-enhanced-tails seen in the wire scans, the effect is much smaller than in the experiment. We studied various effects looking to better reproduce the profiles seen at the wire scanner, all with negative results. Code Physics ------------ We studied the effects of mesh sizes, boundary conditions, particle number, and time step sizes with no significant change in results. We investigated the possibility that there were errors associated with using normalized variables ($p_x$) in a $z$ code, which [impact]{} is. For high-eccentricity ellipses, this could be a problem. However, transforming distributions to unnormalized coordinates, which are appropriate to a $z$ code, did not noticeably change the results. Effects of Input Beam --------------------- We used as input the beam generated by the [rfq]{} simulation code [parmteqm]{}. We also used generated beams, which were specified by the Courant-Snyder parameters. Using the Courant-Snyder parameters of the [parmteqm]{} beam yielded similar results. Varying these parameters in various ways did not make the beam look any closer to the experimentally observed one.
We tried various distortions of the input beam such as enhancing the core or tail and distorting the phase space by giving each particle a kick in the $y'$ direction proportional to $y^2$ or $y^3$. These changes had little effect, even for very severe distortions. Kicks proportional to $y^{1/3}$ were more effective. These are more like space-charge effects in that the distortion is larger near the origin and smaller near the tails. In general, we found that any structure we put into the input beam tended to disappear because of the strong nonlinear space-charge forces at the [hebt]{} front end. Effects of Quad Errors ---------------------- Multipole errors were investigated using a version of [marylie]{} with 3-D space charge. We could generate tails that looked like the experimentally observed ones, but this took multipoles that were about 500 times as large as were measured when the quadrupoles were mapped. Quadrupole rotation studies also yielded negative results. Space Charge ------------ We investigated various currents and variations in space charge effects along the beamline, as could be generated by neutralization or unknown effects. Longitudinal Motion ------------------- We had practically no knowledge of the beam in the longitudinal direction except that almost all of the beam is very near the 6.7 MeV design energy. Since the transverse beam seems to be reasonably predicted by the [rfq]{} simulation code, we do not expect the longitudinal phase space to be much different from the prediction. We tried various longitudinal phase-space variations and none led to profiles at the wire scanner that looked similar to the experimental ones. DISCUSSION ========== In the upstream part of the [hebt]{} the beam size profiles ($x_\mathrm{rms}$ and $y_\mathrm{rms}$ as functions of $z$) for the quad scan tune are not much different from those of the normal [hebt]{} tune. The differences occur quite a way downstream.
But here, space charge effects are small and are unlikely to explain the differences we see in the beam profiles at the wire scanner. This is a mystery that is still unresolved. If we succeed in simulating profiles at the wire scanners that look more like the ones seen in the measurement, then it will be reasonable to fit the data to the 3-D [impact]{} simulations. In that case, we will use all the wire-scanner data, taking into account the detailed shape of the profile and not just the rms value of the beam width, as we did for the [trace 3-d]{} and [linac]{} fits. While we were able to use a personal computer to run the [hpf]{} version of [impact]{} for most of the work described here, the fitting to the [impact]{} model will have to be done on a supercomputer. ACKNOWLEDGEMENTS ================ We thank Robert Ryne and Ji Qiang for providing the [impact]{} code and for help associated with its use. [9]{} H.V. Smith, Jr. and J.D. Schneider, “Status Update on the Low-Energy Demonstration Accelerator ([leda]{}),” this conference. L.M. Young, et al., “High Power Operations of [leda]{},” this conference. J.D. Gilpatrick, et al., “[leda]{} Beam Diagnostics Instrumentation: Measurement Comparisons and Operational Experience," submitted to the Beam Instrumentation Workshop 2000, Cambridge, MA, May 8-11, 2000. M.E. Schulze, et al., “Beam Emittance Measurements of the [leda]{} [rfq]{},” this conference. W.P. Lysenko, J.D. Gilpatrick, and M.E. Schulze, “High Energy Beam Transport Beamline for LEDA,” 1998 Linear Accelerator Conference. [^1]: Work supported by US Department of Energy
--- abstract: 'Let $\mathcal{B}$ be the class of functions $w(z)$ of the form $w(z)=\sum\limits_{k=1}^{\infty}b_k z^k$ which are analytic and satisfy the condition $|w(z)|<1$ in the open unit disk $\mathbb{U}=\left\{z\in \mathbb{C}:|z|<1\right\}$. Then we call $w(z)\in \mathcal{B}$ the Schwarz function. In this paper, we discuss new coefficient estimates for Schwarz functions by applying the lemma due to Livingston (Proc. Amer. Math. Soc. [**21**]{}(1969), 545–552).' address: - 'Hitoshi Shiraishi Department of Mathematics Kinki University Higashi-Osaka, Osaka 577-8502, Japan' - 'Toshio Hayami Department of Mathematics Kinki University Higashi-Osaka, Osaka 577-8502, Japan' author: - Hitoshi Shiraishi - Toshio Hayami title: Coefficient estimates for Schwarz functions --- Introduction ============ Let $\mathcal{B}$ be the class of functions $w(z)$ of the form $$w(z)=\sum\limits_{k=1}^{\infty}b_k z^k$$ which are analytic and satisfy the condition $|w(z)|<1$ in the open unit disk $\mathbb{U}=\left\{z\in \mathbb{C}:|z|<1\right\}$. Also, let $\mathcal{P}$ denote the class of functions $p(z)$ of the form $$p(z)=1+\sum\limits_{k=1}^{\infty}c_k z^k$$ which are analytic and satisfy the condition ${\mathrm{Re}}(p(z))>0$ in $\mathbb{U}$. Then we call $w(z)\in \mathcal{B}$ and $p(z)\in \mathcal{P}$ the Schwarz function and the Carathéodory function, respectively. The following results are well-known for the class $\mathcal{B}$. \[p03lem01\] If $w(z)\in \mathcal{B}$, then $$|w(z)|\leqq |z| \quad (z\in \mathbb{U}) \quad \text{and} \quad |b_1|\leqq 1$$ are obtained. In particular, $|w(z_0)|=|z_0|$ for some $z_0\in \mathbb{U}\setminus \{0\}$ or $|b_1|=1$ if and only if $w(z)=e^{i\theta}z$ for some $\theta$ $(0\leqq\theta<2\pi)$. By the subordination principle, we establish the following coefficient bounds. \[p03lem02\] If $w(z)\in \mathcal{B}$, then $$|b_k|\leqq 1\qquad (k=1,2,3,\ldots).$$ Furthermore, $|b_k|=1$ for some $k$ $(k=1,2,3,\ldots)$ if and only if $w(z)=e^{i\theta}z^k$.
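The coefficient bound of Lemma \[p03lem02\] is easy to check numerically for a concrete Schwarz function. The sketch below is our own illustration, not part of the paper: the sample function $w(z)=z(z+a)/(1+az)$ with $|a|<1$ is a Schwarz function with $b_1=a$ and $|b_k|=(1-a^2)|a|^{k-2}$ for $k\geqq 2$, and its Taylor coefficients are extracted via the Cauchy integral approximated by the FFT.

```python
import numpy as np

def taylor_coeffs(f, n, r=0.5, m=4096):
    """First n Taylor coefficients of f (analytic at 0) via the Cauchy
    integral, approximated by the FFT on a circle of radius r < 1."""
    z = r * np.exp(2j * np.pi * np.arange(m) / m)
    c = np.fft.fft(f(z)) / m          # c[k] ~ (k-th coefficient) * r^k
    return c[:n] / r ** np.arange(n)

# sample Schwarz function: analytic on U, fixes 0, |w(z)| < 1
a = 0.7
w = lambda z: z * (z + a) / (1 + a * z)
b = taylor_coeffs(w, 12)

# Lemma: |b_k| <= 1 for every k; here b_1 = a, b_2 = 1 - a^2,
# and |b_k| = (1 - a^2)|a|^(k-2) for k >= 2
assert np.all(np.abs(b[1:]) <= 1 + 1e-9)
```

The same routine can be pointed at any analytic self-map of the disk fixing the origin; the bound $|b_k|\leqq 1$ holds in every case.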
Applying the Schwarz–Pick lemma (see, for example, [@N]), we derive the next coefficient estimate. \[p03thm01\] If $w(z)\in \mathcal{B}$, then $$|b_2|\leqq 1-|b_1|^2$$ with equality for $$w(z)=\left\{ \begin{array}{ccll} e^{i\theta}z & & & (|b_1|=1) \\ \\ \dfrac{b_1 z+e^{i\theta}z^2}{1+e^{i\theta}\overline{b_1}z} &=& b_1 z+(1-|b_1|^2)e^{i\theta}z^2+\ldots & (|b_1|<1) \end{array} \right.$$ for each $\theta$ $(0\leqq \theta<2\pi)$. In this paper, we discuss new coefficient estimates for Schwarz functions by using the following lemma due to Livingston [@L]. \[p03lem03\] If $p(z)\in \mathcal{P}$, then $$\left|c_s-c_t c_{s-t}\right|\leqq 2$$ for any positive integers $s$ and $t$ $(1\leqq t<s)$. For all $s$ and $t$, the equality is attained by the function $$p(z)=\dfrac{1+z}{1-z}=1+\sum\limits_{k=1}^{\infty}2z^k.$$ Main results ============ Our first result is contained in the following theorem by use of Lemma \[p03lem03\]. \[p03thm02\] If $p(z)\in \mathcal{P}$ with $c_k=2e^{i\theta}$ $(0\leqq\theta<2\pi)$ for some $k$ $(k=1,2,3,\ldots)$, then $$c_{nk}=2e^{in\theta}$$ for each $n$ $(n=1,2,3,\ldots)$. Taking $s=2k$ and $t=k$ in Lemma \[p03lem03\], we see that $$\left|c_{2k}-c_{k}^{2}\right|=\left|c_{2k}-4e^{i2\theta}\right|\leqq 2.$$ On the other hand, we know that $|c_{2k}|\leqq 2$, and therefore it follows that $c_{2k}=2e^{i2\theta}$. Similarly, since $$\left|c_{3k}-c_k c_{2k}\right|=\left|c_{3k}-4e^{i3\theta}\right|\leqq 2\quad {\rm and}\quad |c_{3k}|\leqq 2,$$ we have that $c_{3k}=2e^{i3\theta}$. In the same manner, for all $n$ $(n=1,2,3,\ldots)$, we conclude that $c_{nk}=2e^{in\theta}$. By virtue of the above theorem, we obtain \[p03cor01\] If $p(z)\in \mathcal{P}$ with $c_1=2e^{i\theta}$ $(0\leqq\theta<2\pi)$, then we can declare that $$p(z)=\dfrac{1+e^{i\theta}z}{1-e^{i\theta}z}=1+\sum\limits_{k=1}^{\infty}2e^{ik\theta}z^k.$$ Moreover, applying Lemma \[p03lem03\], we have a new coefficient bound for Schwarz functions. 
\[p03thm03\] If $w(z)\in \mathcal{B}$, then $$|b_3|\leqq 1-|b_1|^3.$$ We first note that if a function $w(z)\in\mathcal{B}$ then $$e^{i\theta}w(z)\in\mathcal{B}$$ for all $\theta$ $(0\leqq\theta<2\pi)$. Then, we know that the function $p(z)$ defined by $$\begin{aligned} \label{p03thm03eq01} p(z) &=& \frac{1+e^{i\theta}w(z)}{1-e^{i\theta}w(z)} \nonumber \\ &=& 1 + 2 e^{i\theta} b_1 z + 2(e^{i2\theta} b_1^2 + e^{i\theta} b_2) z^2 + 2(e^{i3\theta} b_1^3 + 2e^{i2\theta} b_1 b_2 + e^{i\theta} b_3) z^3 \nonumber \\ & & + 2(e^{i4\theta} b_1^4 + 3e^{i3\theta} b_1^2 b_2 + 2e^{i2\theta} b_1 b_3 + e^{i2\theta} b_2^2 + e^{i\theta} b_4) z^4 + \ldots \nonumber \\ &=& 1+c_1z+c_2z^2+c_3z^3+c_4z^4+\ldots \qquad(z\in\mathbb{U}),\end{aligned}$$ belongs to the class $\mathcal{P}$. In view of Lemma \[p03lem03\], we obtain that $$\begin{aligned} |c_3-c_1c_2| &= |2(e^{i3\theta} b_1^3 + 2e^{i2\theta} b_1 b_2 + e^{i\theta} b_3) - 2 e^{i\theta} b_1 \cdot 2(e^{i2\theta} b_1^2 + e^{i\theta} b_2)| \\ &= |2 e^{i\theta} (b_3 - e^{i2\theta} b_1^3)| \\ &\leqq 2\end{aligned}$$ which gives us that $$|b_3 - e^{i2\theta} b_1^3| \leqq 1.$$ Thus, $b_3$ is in the region $$\bigcap_\theta \{ b_3 : |b_3 - e^{i2\theta} b_1^3| \leqq 1 \} = \{ b_3 : |b_3| \leqq 1 - |b_1|^3 \}.$$ This completes the proof of the theorem. The same process as in the proof of Theorem \[p03thm03\] leads us to another proof of Theorem \[p03thm01\]. Applying Lemma \[p03lem03\] to the function (\[p03thm03eq01\]) with $s=2$ and $t=1$, we deduce that $$\begin{aligned} |c_2-c_1^2| &= |2(e^{i2\theta} b_1^2 + e^{i\theta} b_2) - (2 e^{i\theta} b_1)^2| \\ &= |2 e^{i\theta} (b_2 - e^{i\theta} b_1^2)| \\ &\leqq 2.\end{aligned}$$ This implies that for all $\theta$ $(0\leqq\theta<2\pi)$ $$|b_2 - e^{i\theta} b_1^2| \leqq 1$$ which means that $$|b_2| \leqq 1 - |b_1|^2.$$ However, the same process yields no comparably sharp result for the coefficients $b_4$, $b_5$, and so on.
For example, applying Lemma \[p03lem03\] to the function (\[p03thm03eq01\]) with $s=4$ and $t=1$ to estimate $b_4$, we obtain the following inequality. $$\begin{aligned} |c_4-c_1c_3| &=& |2(e^{i4\theta} b_1^4 + 3e^{i3\theta} b_1^2 b_2 + 2e^{i2\theta} b_1 b_3 + e^{i2\theta} b_2^2 + e^{i\theta} b_4) \\ & & - 4 e^{i\theta} b_1 (e^{i3\theta} b_1^3 + 2e^{i2\theta} b_1 b_2 + e^{i\theta} b_3)| \\ &=& |2 e^{i\theta} (b_4 + e^{i\theta} b_2^2 - e^{i2\theta} b_1^2 b_2 - e^{i3\theta} b_1^4)| \\ &\leqq& 2.\end{aligned}$$ Calculating this inequality, we have $$\label{p03eq01} |b_4 + e^{i\theta} b_2^2 - e^{i2\theta} b_1^2 b_2 - e^{i3\theta} b_1^4| \leqq 1.$$ Also, if we apply Lemma \[p03lem03\] to the function (\[p03thm03eq01\]) with $s=4$ and $t=2$, then we obtain another inequality as follows: $$\begin{aligned} |c_4-c_2^2| &=& |2(e^{i4\theta} b_1^4 + 3e^{i3\theta} b_1^2 b_2 + 2e^{i2\theta} b_1 b_3 + e^{i2\theta} b_2^2 + e^{i\theta} b_4) \\ & & - ( 2(e^{i2\theta} b_1^2 + e^{i\theta} b_2) )^2| \\ &=& |2 e^{i\theta} (b_4 + 2e^{i\theta} b_1 b_3 - e^{i\theta} b_2^2 - e^{i2\theta} b_1^2 b_2-e^{i3\theta} b_1^4)| \\ &\leqq& 2.\end{aligned}$$ Calculating this inequality, we have $$\label{p03eq02} |b_4 + 2e^{i\theta} b_1 b_3 - e^{i\theta} b_2^2 - e^{i2\theta} b_1^2 b_2-e^{i3\theta} b_1^4| \leqq 1.$$ We do not know the region for $b_4$ determined by the inequalities (\[p03eq01\]) and (\[p03eq02\]) together. C. Carathéodory, [*Über den Variabilitätsbereich der Koeffizienten von Potenzreihen, die gegebene Werte nicht annehmen*]{}, Math. Ann. [**64**]{}(1907), 95–115. P. L. Duren, [*Univalent Functions*]{}, Springer-Verlag, New York, Berlin, Heidelberg, Tokyo, 1983. A. W. Goodman, [*Univalent Functions, Vol. I and Vol. II*]{}, Mariner Publishing Company, Tampa, Florida (1983). T. Hayami and S. Owa, [*The Fekete-Szegö problem for $p$-valently Janowski starlike and convex functions*]{}, Int. J. Math. Math. Sci. [**2011**]{}, Article ID 583972, 1–11. A. E.
Livingston, [*The coefficients of multivalent close-to-convex functions*]{}, Proc. Amer. Math. Soc. [**21**]{}(1969), 545–552. Z. Nehari, [*Conformal Mapping*]{}, McGraw-Hill, New York (1952). W. Rogosinski, [*On subordinate functions*]{}, Proc. Cambridge Philos. Soc. [**35**]{}(1939), 1–26. W. Rogosinski, [*On the coefficients of subordinate functions*]{}, Proc. London Math. Soc. [**48**]{}(1943), 48–82. J. Sokół, [*Coefficient estimates in a class of strongly starlike functions*]{}, Kyungpook Math. J. [**49**]{}(2009), 349–353.
--- author: - Asmita Bhandare - Andreas Breslau - Susanne Pfalzner bibliography: - 'Bibliography.bib' title: 'Effects of inclined star-disk encounter on protoplanetary disk size' --- Introduction {#sec:intro} ============ Stars are formed by gravitational collapse of dense cores in molecular clouds. During the initial stages of star formation, they are surrounded by circumstellar disks as a consequence of conservation of angular momentum. However, most of these stars are not formed in isolation, but as part of a stellar cluster.\ Depending on the local stellar density, the cluster environment might have a significant impact on the evolution of the disks surrounding young stars (for an overview see and references therein). The two most investigated external processes which potentially influence the evolution of protoplanetary disks are external photoevaporation due to nearby massive stars [@Johnstone1998; @Adams2004; @Font2004; @Clarke2007; @Dullemond2007; @Gorti2009; @Owen2010; @Owen2012; @Rosotti2015] and gravitational interactions during fly-bys. Here we concentrate on the effect of stellar fly-bys because this effect is present throughout cluster formation and early evolution, whereas external photoevaporation becomes efficient only when most of the cluster gas has already been removed. Disk properties that may be affected by such an encounter are the mass, angular momentum, and size. In the past, there have been various numerical and analytical studies of the consequences of stellar encounters on properties like the mass, angular momentum, and accretion of the disk [@Clarke1993; @Ostriker1994; @Heller1995; @Hall1996; @Hall1997; @Kobayashi2001; @PfalznerVogel2005; @Olczak2006; @Steinhausen2012]. By contrast, our work in this paper focuses mainly on the effects of star-disk encounters on the disk size. This is important because the disk size determines the maximum extent of the potentially forming planetary systems.
So far the effects of stellar encounters on protoplanetary disk size have been investigated less extensively [@Ovelar2012; @Breslau2014; @Rosotti2014].\ Determining disk sizes after an encounter also poses additional problems in observations and simulations. During an encounter, part of the disk material is moved onto highly eccentric and/or highly inclined orbits. This makes it difficult to apply a straightforward definition of a disk size because observational limitations often hinder putting strong constraints on disk sizes. However, ALMA now allows disks to be resolved with high precision and gives much better constraints on disk sizes [@Moor2013; @Mann2014; @Bally2015].\ It has been numerically and analytically estimated that for a prograde, coplanar, parabolic encounter the disk is tidally stripped down to 1/2 - 1/3 of the periastron distance [@Clarke1993; @Hall1997; @Kobayashi2001]. Unfortunately, this result for an encounter between equal-mass stars has been applied in a number of studies [@Adams2006; @Adams2010; @Malmberg2011; @Torres2011; @Pfalzner2013; @Rosotti2014] to non-equal-mass encounters where it is not valid [@Breslau2014].\ investigated the dependence of the disk size on the mass ratio for the case of a parabolic, coplanar, prograde encounter at different periastron distances. They define the disk size as the radius within which of the disk mass is enclosed. In their study, estimated a theoretical upper limit for the disk radius as a function of the periastron distance and mass ratio by transforming the disk-mass loss obtained from numerical simulations by to a truncation radius under the assumption that the disk is always truncated to the equipotential (Lagrangian) point between the two stars.\ However, during an encounter the disk material can lose angular momentum and move inwards by recircularizing at smaller radii, thus suggesting that the disk sizes can be reduced even without a significant mass loss.
@Rosotti2014 have also concluded from their work on star-disk interactions in young stellar clusters that the disk size is affected to a higher degree than the disk mass.\ Using the steepest gradient in the surface density distribution, @Breslau2014 found a simple fitting formula for the disk size after parabolic, coplanar, prograde encounters $$\begin{aligned} r_{\mathrm{final}} = \begin{cases} 0.28 \cdot {r_{\mathrm{peri}} \cdot {{m_{\mathrm{12}}}^{-0.32}}}, \hspace{2em} & \text{for} ~r_{\mathrm{final}} \leq r_{\mathrm{init}} \\ r_{\mathrm{init}}, & \text{otherwise}, \end{cases} \label{eq:discsize_Breslau} \end{aligned}$$ which gives the dependence of the final disk size ($r_{\mathrm{final}}$) on the periastron distance ($r_{\mathrm{peri}}$) and the mass ratio $m_{\mathrm{12}}$ between the perturber mass ($M_{\mathrm{2}}$) and the mass of the central star ($M_{\mathrm{1}}$). The final disk size is always limited to the initial disk size ($r_{\mathrm{init}}$).\ The outcome of an encounter not only depends on the periastron distance and the mass ratio between the two stars, but also on the orbital eccentricity and relative inclination of the perturber orbit. This spans an extensive parameter space and therefore most studies were not only restricted to parabolic, equal-mass encounters but also to prograde, coplanar encounters. Only a handful of studies take into account retrograde or inclined encounters. The effects of retrograde encounters on the disk-mass loss were investigated by , who conclude that the disk mass is largely unaffected within the periastron distance by a retrograde encounter. @Heller1995 and @Hall1996 have pointed out the importance of inclined encounters in their work. study mass and angular momentum loss for inclined encounters. For a limited number of cases, @Kobayashi2001 have analytically and numerically investigated the dependence of particle inclinations and eccentricities on the inclination of the perturber orbit.
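The fitting formula of @Breslau2014 quoted above reduces to a few lines of code; a minimal sketch (the function name and the AU units are our own choices):

```python
def final_disk_size(r_peri, m12, r_init):
    """Disk size after a parabolic, coplanar, prograde encounter,
    following the fitting formula of Breslau et al. (2014):
    r_final = 0.28 * r_peri * m12**(-0.32), capped at r_init.
    r_peri, r_init in AU; m12 = M_2 / M_1 (perturber over host)."""
    r_final = 0.28 * r_peri * m12 ** -0.32
    return min(r_final, r_init)

# an equal-mass encounter at 100 AU periastron truncates
# a 100 AU disk to about 28 AU
size = final_disk_size(r_peri=100.0, m12=1.0, r_init=100.0)
```

Note that for an equal-mass encounter the formula gives 0.28 of the periastron distance, slightly below the 1/3 estimate quoted earlier for equal-mass encounters.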
Similar numerical studies used to investigate the influence of inclined stellar encounters on the inclinations and eccentricities of the Edgeworth-Kuiper belt objects were performed by @Kobayashi2005. For the case of an equal-mass parabolic encounter they found the truncation radius to be 1/3 of the periastron distance, beyond which the particle inclinations and eccentricities are pumped up by an encounter and many particles can become unbound. Their study was motivated by finding the region left unperturbed by an encounter, in order to explain the Kuiper belt. Here we use a different definition aimed at reproducing the observationally determined disk size. The obtained disk size can differ by about 10$\%$.\ In this paper, we therefore expand the parameter space studied by @Breslau2014 for coplanar encounters to investigate the effects of inclined and retrograde parabolic encounters. We study the dependence of the final disk size on the inclination of the perturber orbit, the mass of the perturbing star, and the periastron distance. We briefly describe our numerical method and disk size definition in . In we discuss and compare the results of coplanar to inclined encounters followed by a discussion of the dependence of the obtained results on the assumptions in . In addition, we show how the results for disks can be applied to our solar system. We conclude by summarizing our work in . Method {#sec:Method} ====== Numerical method {#sec:method} ---------------- We consider a star surrounded by a disk which is perturbed by a passing star. Here we assume that the disk mass $m_{\mathrm{disk}}$ is much smaller than the stellar mass $\mathrm{M_{*}}$, $m_{\mathrm{disk}} \ll$ $\mathrm{M_{*}}$, as has been found for most observed disks [see @Andrews2013]. Hence, in our studies we neglect self-gravity between the disk particles.
In addition, we assume that viscous forces can be neglected because the encounter time is short compared to the viscous timescale and disk-size changes mainly affect the outer disk areas where viscosity effects are negligible. In the case of a low-mass, non-viscous disk, it is enough to study only three-body interactions by considering the gravitational forces between the two stars and each disk particle [@Hall1996; @Kobayashi2001; @Pfalzner2003; @PfalznerVogel2005; @Breslau2014; @Musielak2014].\ In our simulations the disk is modeled by test particles and effects due to self-gravity and viscous forces are neglected. This means that the application is limited to low-mass disks and to situations where viscous forces can be neglected (see \[sec:discussion\]). We perform numerical simulations of thin disks using 10000 massless tracer particles. It has been shown in a number of studies that this resolution is sufficient for investigations of the global properties of disks.\ For measuring the effects on the disk size it is nevertheless advantageous to have a relatively high resolution in the outer regions of the disk. Therefore, we use an initial constant particle surface density and assign different masses to the particles to model various mass surface density distributions in the initial disk [@PfalznerVogel2005; @Olczak2006; @Ovelar2012; @Steinhausen2012].\ These tracer particles initially orbit the host star on circular Keplerian orbits. The trajectories of the particles during and after the stellar encounter were integrated with the Runge-Kutta Cash-Karp scheme; the maximum allowed error between the 4th and 5th integration step was $10^{-7}$. We consider an inner hole of 1 AU to avoid small time steps and to account for matter accreted onto the host star.\ Usually our disks have an initial radius ($r_{\mathrm{init}}$) of 100 AU, but we also perform similar simulations with . Simulations were performed for different ratios of perturber mass to host mass, .
We fix the host mass ($M_{\mathrm{1}}$) to 1 $\mathrm{M_{\odot}}$ and vary the perturber mass ($M_{\mathrm{2}}$) in the range . These values are typical for a young dense cluster like the Orion nebula cluster (ONC) . The lower limit is chosen to be 0.3 $\mathrm{M_{\odot}}$ because, for masses below 0.3 $\mathrm{M_{\odot}}$, even the most destructive prograde coplanar encounters affect the disk size only for very close encounters (periastron distance $r_{\mathrm{peri}} \leq r_{\mathrm{init}}$) [@Breslau2014]. Similarly, periastron distances in the range are studied to cover the parameter space from encounters that completely destroy the disk to those having a negligible effect on the disk size. Here completely destroying the disk means the case where less than 5$\%$ of the original disk mass remains bound.\ Here we investigate the case where only one of the stars is surrounded by a disk. In many cases the results from star-disk encounters can be generalized to disk-disk encounters, as captured mass is usually deposited in the inner disk areas and as such does not influence the final disk size . Exceptions are discussed in .\ Previous studies have so far considered the effects of perturber orbits inclined relative to the disk plane mainly for a restricted parameter space. In addition to the inclination, the orbit can also be rotated in the disk plane, resulting in different angles between the periastron and the ascending node (here on the x-axis, because the longitude of the ascending node is zero). Hence, we consider the effects of a change in the argument of periapsis ($\omega$) as well as in the orbital inclination, as illustrated in Fig. \[fig:aop\].\ Considering the disk to be in the xy plane, in principle the perturber orbit can be inclined in two ways, either along the x-axis wherein the periastron always lies in the disk plane (Fig.
\[fig:aop\]a) or with respect to the xz plane wherein the periastron lies outside the disk plane . We vary the inclination of the perturber orbit in the range in steps of $10^{\circ}$ for each of the three cases of $\omega = 0^{\circ}$, $\omega = 45^{\circ}$, and $\omega = 90^{\circ}$ that we investigate. By doing so we cover the entire parameter space to study coplanar prograde and retrograde as well as non-coplanar prograde and retrograde encounters . In addition, we also study the effects due to an encounter with a perturber on an orthogonal orbit. This is an interesting case, since for encounters with the perturber passes right through the disk without having interacted much with the disk material before and after it crosses the disk. We thus cover a wider range of orbital inclinations than previous work [@Kobayashi2001; @Kobayashi2005; @Breslau2014].\ The simulation starts and ends when, for all particles bound to the host, the force exerted by the perturber on the particles is smaller than that exerted by the host star. As an example, the total simulation time for an equal-mass case then corresponds to around 40 orbits for the outermost particles and more than 50 orbits for the inner particles. Disk size determination {#sec:discsize_determination} ----------------------- As mentioned in the introduction, encounters leave some matter bound on highly eccentric [[^1]]{} and/or inclined orbits, which makes it difficult to define a disk size after such an encounter. Several disk size definitions have been applied in the past [@Clarke1993; @Hall1997; @Kobayashi2001; @PfalznerVogel2005]. Here we use a theoretical disk size definition that is representative of the observed values.
This differs from disk sizes defined by radii containing a certain percentage of mass [@PfalznerVogel2005] or disk size definitions based on the eccentricity of the orbits .\ Observationally the most common method for determining disk sizes is to fit the observed spectral energy distribution (SED) in the millimeter and submillimeter range with truncated power laws or exponential radial density and temperature profiles [@Andrews2005; @Andrews2007; @Moor2015]. The disk size is then taken to be the truncation radius. In the case of resolved images, the disk size is taken to be the radius beyond which there is an observed luminosity drop [@McCaughrean1996; @Odell1998; @Vicente2005; @Bally2015]. Since the disk does not have a sharp edge, the disk size is specified in terms of an intensity threshold, which corresponds to the characteristic radius where the surface density profile begins to steepen .\ Therefore we follow the approach used by , and determine the disk size as the radius of the steepest gradient in the surface density in the outer disk areas . Owing to the particles on highly eccentric orbits, the disk structure changes on timescales of decades. The motivation for using the steepest-gradient definition is that it is the closest to the observational method and allows a direct comparison to the disk sizes found by recent ALMA observations. For a detailed discussion of the disk size definition used here see @Breslau2014.\ We use a temporally averaged surface density, which is determined by first obtaining the orbital elements of all particles finally bound to the host star from the relative positions and velocities at the last time step. The eccentricities and semi-major axes are used to obtain the radial probability distributions of all individual particle orbits. The sum of these radial probability distributions, each averaged over the period of the respective particle orbit, then gives the temporally averaged surface density distribution.
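The construction of the time-averaged surface density described above can be sketched as follows (an illustrative reimplementation under our own naming, not the paper's code): each bound particle's semi-major axis and eccentricity define a Kepler orbit, and sampling that orbit uniformly in mean anomaly, i.e., uniformly in time, yields its radial probability distribution.

```python
import numpy as np

def kepler_E(M, e, n_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration."""
    E = M.copy()
    for _ in range(n_iter):
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    return E

def averaged_surface_density(a, e, mass, r_edges, n_samples=500):
    """Time-averaged surface density from semi-major axes and eccentricities.

    a, e, mass : arrays of orbital elements and particle masses
    r_edges    : radial bin edges of the surface density profile
    """
    # Uniform sampling in mean anomaly == uniform sampling in time.
    M = 2.0 * np.pi * (np.arange(n_samples) + 0.5) / n_samples
    sigma = np.zeros(len(r_edges) - 1)
    for ai, ei, mi in zip(a, e, mass):
        E = kepler_E(M, ei)
        r = ai * (1.0 - ei * np.cos(E))        # radius along the orbit
        hist, _ = np.histogram(r, bins=r_edges)
        sigma += mi * hist / n_samples         # time fraction spent per bin
    area = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    return sigma / area
```

For a circular orbit ($e = 0$) all the mass lands at $r = a$, while an eccentric orbit spreads its mass between $a(1-e)$ and $a(1+e)$, with most time spent near apastron.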
It is important to note that, as a result of using the temporally averaged surface density, particles on eccentric and/or inclined orbits do contribute to the disk size, but not as strongly as those on coplanar, circular orbits. Owing to the statistical deviations in our data, the surface density distributions have to be smoothed before estimating the disk sizes.\ We expand the previous work by to inclined encounters and adopt the same method to estimate the final disk size in our studies. We find that for a certain range of inclinations all disk size definitions are problematic.\ Because parabolic encounters have the most significant influence on disks owing to the longest interaction time, we restrict our study to parabolic encounters. Since the main aim of this work is to study the dependence of the disk size on the orbital inclination, a parabolic orbit is a reasonable approximation to begin with.\ To obtain a statistically sound sample we performed 20 simulations for each encounter scenario with different random seeds for the initial particle distribution. We estimated the mean global error over all inclinations at a fixed periastron distance to be less than 2 AU for grazing and distant encounters and on the order of for penetrating encounters . Increasing the number of simulation runs did not change these errors to a great extent, and hence 20 runs proved to be sufficient for our studies. Results {#sec:results} ======= For simplicity we restrict our investigation throughout this study to parabolic encounters. In order to study the dependence of the disk size on the inclination of the perturber, we investigate the effects of a star-disk encounter due to a perturber on prograde , orthogonal , and retrograde orbits. There are basically two ways in which a perturber orbit can be inclined: - Along the x-axis wherein the periastron always lies in the disk plane with . - With respect to the xz plane wherein the periastron lies outside the disk plane with .
Prograde vs retrograde encounters {#sec:encounters} --------------------------------- Many studies have shown that prograde, coplanar encounters have the strongest influence on the disk in terms of mass loss and angular momentum loss [@Clarke1993; @Heller1995; @Hall1996; @PfalznerVogel2005; @Olczak2006; @Pfalzner2007]. A strong effect of prograde coplanar encounters on the disk size has already been shown in numerical studies . We confirm these results for the effect of prograde inclined encounters on the disk size. However, we find that for retrograde coplanar and inclined encounters, although the effect on the disk size is smaller than in the prograde case, it is still considerable for a wide range of encounter parameters.\ First, the effect on the disk size due to prograde coplanar and inclined encounters is illustrated in , which shows the final disk size for an initial 100 AU disk around a star of mass perturbed by a star of mass , on different prograde orbits with inclinations in the range at different periastron distances. Since, for the equal-mass case shown here, encounters with have a negligible effect[^2] on the disk size, only the cases in the range are compared. The lower periastron limit of 30 AU has been chosen because for closer encounters the material remaining bound is less than 5 - 10 % of the initially bound particles, which makes it difficult to determine a disk size.\ The penetrating and grazing encounters ($r_{\mathrm{peri}} \leq r_{\mathrm{init}}$) destroy most of the disk, whereas the distant encounters ($r_{\mathrm{peri}} > r_{\mathrm{init}}$) have an effect only in the outer regions of the disk. The latter are the type of encounters that dominate in most star cluster environments [@Scally2001; @Olczak2006].\ As seen in for the prograde encounters, the disk size has an almost linear dependence on the inclination angle (*i*).
For example, for (red line), the equal-mass coplanar () encounter reduces an initial 100 AU disk to 24 AU, an encounter due to a perturber on an orbit with an inclination of $30^{\circ}$ reduces the disk to 26 AU, whereas a perturber on a highly inclined orbit of $60^{\circ}$ reduces the disk to 27 AU.\ In the case of penetrating and grazing encounters ($r_{\mathrm{peri}} \leq 100$ AU), for a fixed periastron distance, the difference in the final disk size due to encounters at different orbital inclinations is less than . In the case of distant encounters ($r_{\mathrm{peri}} > 100$ AU), where mostly only the outer disk particles are affected, this difference is seen to be , which is still small compared to the initial disk size of 100 AU. Hence these results can be approximated as having a nearly linear dependence.\ shows a similar plot for the retrograde coplanar and inclined encounters. In the case of retrograde encounters, the dependence on the inclination angle is more complex. For the equal-mass case, there is a peak at an inclination of ${140^\circ}$. We discuss the reason for this peak in \[sec:inclination\]. However, if only the coplanar retrograde () case is compared to the prograde cases, the nearly linear dependence seen in the case of prograde encounters can be extrapolated up to the coplanar retrograde case. For example, for (red line), the difference between the final disk size of 41 AU due to a perturber on a coplanar retrograde () orbit and the mean value obtained from the linear extrapolation is less than about 10 AU. Dependence on orbital inclination {#sec:inclination} --------------------------------- In order to compare the disk sizes for all the different orbital inclinations in the range , including the prograde, retrograde, and orthogonal cases, shows a similar plot of the final disk size as a function of orbital inclination for the equal-mass case after encounters at different periastron distances.
We note that here the argument of periapsis is fixed to $0^{\circ}$ and the inclination of the perturber orbit is defined with respect to the x-axis. The dependence of the disk size on the argument of periapsis of the perturber orbit is discussed later in \[sec:orientation\]. Here, it is important to note that even for distant ($r_{\mathrm{peri}} > r_{\mathrm{init}}$) orthogonal encounters, where the perturber has the least interaction time with the particles in the disk, there is a significant change in the disk size.\ We would like to emphasize that the effects of inclined encounters are nearly as significant as the coplanar ones. It has been argued before that inclined encounters can have a considerable effect on the disk mass and angular momentum , but only for penetrating and grazing encounters ($r_{\mathrm{peri}} \leq r_{\mathrm{init}}$). In our studies we show that disk sizes are significantly affected by inclined encounters not only for close but also for distant encounters, at least up to an encounter distance of $r_{\mathrm{peri}} \approx 5 \cdot r_{\mathrm{init}}$, depending on the perturber mass. Hence it is important to understand that there can be a disk-size change without disk-mass or angular momentum loss. The reason for the different degree of influence of inclined encounters on the disk mass and size is that the disk-size change is an effect of the inward movement of the outer disk particles caused by gravitational interactions during stellar fly-bys.\ It is also important to note that disk sizes are least susceptible to fly-bys on inclined retrograde orbits ($\sim 140 ^{\circ} - 160^{\circ}$) and not to coplanar retrograde encounters, as one might expect. For example, for the equal-mass case an encounter at (yellow dots in Fig.
\[fig:discsize\_inclination\]) on an orbit with inclination reduces an initial 100 AU disk to , whereas an encounter due to a perturber on a coplanar retrograde orbit reduces the disk to a comparatively smaller size of .\ The left column of shows the face-on views of the disks at the final time step after an equal-mass encounter with a perturber on orbital inclinations of (a), $130^{\circ}$ (b), $140^{\circ}$ (c), and , whereas the right column shows the corresponding edge-on views. The perturber orbit is shown with an arrow indicating the direction in which the perturber moves along the orbit. We note the differences between the prograde and retrograde cases.\ In Fig. \[fig:faceon\_edgeon\_discs\], particle inclinations are indicated by different colors, whose values can be found in the legend. For example, particles having inclinations are shown in purple and those with inclinations in the range are shown in dark blue, and so on. Similar plots showing particle eccentricities can be seen in Fig. \[fig:faceon\_edgeon\_discs\_eccentricity\]. The particle inclinations and eccentricities result from a combined effect of the resultant angular momentum due to the torque acting on the disk and the force of both stars acting on the particles.\ The vertical solid black line indicates the final disk size from the steepest gradient in the long-term averaged surface density profile (discussed in ). In these cases, the disk size determined using the steepest gradient in the surface density profile is smaller than expected from the face-on or edge-on plots, since the final disk sizes are estimated considering particles on nearly coplanar orbits, using a disk size definition that depends on the final particle eccentricities and semi-major axes.\ Understanding why fly-bys with $i = 140^{\circ}$ have the smallest effect on the disk size is not straightforward. It may be due to the disk size definition used here or to a real physical effect.
To obtain additional information we next determine the disk size using projected surface densities in the xy plane (face-on) and the xz plane (edge-on). In the face-on case we use the x-y distance of the particles to the origin (i.e., $r = \sqrt{x^{2} + y^{2}}$) and in the edge-on case we use the x-z distance (i.e., $r = \sqrt{x^{2} + z^{2}}$). Here again we define the disk size using the same idea of the steepest gradient in the surface density profiles. In this case, however, the steepest gradient is taken beyond the limit within which at least of the finally bound particles lie. This also includes particles on inclined orbits. These disk sizes can then be considered an upper limit and are shown by the vertical dashed lines in .\ Using the projected surface densities still leads to a gradual increase in the final disk size up to an inclination in the range , depending on the mass ratio and periastron distance, and then a decrease for perturber orbital planes closer to the disk plane.\ In all the retrograde cases, the disk is not sharply truncated; instead, the impact of the encounter results in an increase in the outer disk particle inclinations and eccentricities. The disk appears to be scattered owing to the particles on inclined (see Fig. \[fig:faceon\_edgeon\_discs\]) and eccentric orbits (see Fig. \[fig:faceon\_edgeon\_discs\_eccentricity\]). The amount of scatter depends on the inclination of the perturber orbit, the mass ratio, and the periastron distance. We note that not all the particles are influenced by an encounter, as discussed above. The particles in the inner disk regions are usually unperturbed and remain on coplanar, nearly circular orbits. Hence the disk as a whole is not inclined. However, the particles in the outer disk regions end up on highly eccentric and/or inclined orbits.
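The projected disk-size estimate described above, i.e., the steepest gradient in the face-on surface density profile beyond the radius enclosing a given fraction of the bound particles, can be sketched as follows. This is our own minimal implementation: the bin count, smoothing width, and bound-particle fraction are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def projected_disc_size(x, y, r_max=200.0, n_bins=100, frac=0.5):
    """Disk size from the steepest gradient of the projected surface density.

    x, y  : particle coordinates (use x, z instead for the edge-on case)
    frac  : fraction of bound particles that must lie inside the search radius
    """
    r = np.hypot(x, y)                       # face-on projected radius
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist, _ = np.histogram(r, bins=edges)
    area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    sigma = hist / area                      # surface density per annulus
    sigma = np.convolve(sigma, np.ones(5) / 5.0, mode="same")  # smooth
    centers = 0.5 * (edges[:-1] + edges[1:])
    r_inner = np.quantile(r, frac)           # search only beyond this radius
    grad = np.gradient(sigma, centers)
    grad[centers < r_inner] = 0.0            # mask the inner disk
    return centers[np.argmin(grad)]          # steepest (most negative) drop
```

For a disk with a sharp edge the steepest gradient recovers the edge radius; for the scattered retrograde cases discussed above, the shallow outer density decline is exactly what makes this estimate ambiguous.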
These outer disk particles lead to a shallow decrease in the surface density. As we use the steepest slope, this might potentially contribute to the relatively large disk size, but it is probably not the main reason.\ Thus, in cases where the particles are on inclined and/or eccentric orbits (as seen more clearly in the edge-on plots in ), it is very difficult to define a disk size, since it is not possible to observe a sharp truncation in the disk. Observations face a similar problem in determining the appropriate disk size owing to the dependence on the viewing angle. Depending on how the disk is observed – face-on, edge-on, or at inclinations in between – not all the matter is taken into account when estimating the surface density profiles, especially the matter on highly inclined and/or eccentric orbits.\ The disk sizes obtained here depend on the contribution of the inclined and/or eccentric outer disk particles to the surface density profile. In conclusion, the large number of particles on inclined and eccentric orbits is a problem not only for the definition used here, but for any definition of the disk size.\ These effects contribute to the peak seen at for the equal-mass case. This peak shifts in the range for different mass ratios (see ). The shift in the peak is a combined effect of the resultant angular momentum and the amount of force acting on the disk particles due to the perturber. A more massive perturber on an orbit closer to the disk plane (i.e., at smaller inclinations with respect to the disk plane) has a stronger effect, leading to an increase in the outer disk particle inclinations and eccentricities.\ This result is similar to what @Heggie1996 found analytically when investigating the effect of encounters on the eccentricity of binaries.
Their analytical solution shows that the eccentricity change is smallest for retrograde encounters with orbital inclinations similar to what we find here (see their Figure 6), where the exact maximum also depends on the masses of the involved stars. Thus their analytical result, originally derived for binaries, can also be generalized to the fly-bys studied here. Dependence on argument of periapsis {#sec:orientation} ----------------------------------- Next we investigate the effect of three different orientations of the perturber orbit in the xy plane (disk plane), as discussed in . For most of the parameter space, we found only a small difference () in the disk sizes for the different arguments of periapsis of the perturber orbit. This confirms the expectations of @Hall1996.\ For example, shows the final disk size versus the periastron distance for (squares, solid line), (circles, dashed line), and (stars, dotted line). Here the dependence of the disk size on the argument of periapsis is discussed for the two cases of a prograde () and a retrograde () encounter. For the three different orientations, in the case of prograde encounters the disk size differs by $\leq$ 5 AU, and for retrograde encounters the difference in the disk size is less than 10 AU, considering the more complex structure discussed before.\ Although we do not find a significant difference in the disk size for different arguments of periapsis, we do find a difference in the outer disk particle inclinations and eccentricities for penetrating and grazing encounters. This is seen especially in the case of the orthogonal encounters, where the perturber passes through the disk for $r_{\mathrm{peri}} \leq r_{\mathrm{init}}$. This could have consequences in the context of the highly inclined Sedna-like bodies in our solar system and for wide-orbit extrasolar planets.
Dependence on mass ratio and periastron distance {#sec:massratio_rperi} ------------------------------------------------ In the following we take a closer look at the dependence on the mass ratio and the periastron distance. For mass ratios ($m_{\mathrm{12}}$) of 0.3 (dashed), 1.0 (solid), and 20.0 (dotted), Fig. \[fig:massratio\_rperi\]a shows the disk-size change versus the periastron distance scaled to the initial disk size (100 AU) for parabolic, coplanar prograde and retrograde encounters. shows a similar plot for parabolic, inclined prograde and retrograde encounters.\ A more massive perturber has a greater influence on the disk and results in smaller disk sizes. For example, an inclined, prograde encounter at and (Fig. \[fig:massratio\_rperi\]b, blue solid line) destroys roughly of the initial 100 AU disk, whereas for a higher mass perturber (Fig. \[fig:massratio\_rperi\]b, blue dotted line), of the initial disk is destroyed.\ As expected, we find that the closer the encounter distance, the more significant the disk truncation. For example, an inclined , prograde, equal-mass () penetrating encounter at destroys of the initial 100 AU disk, whereas a distant encounter at destroys only of the initial disk.\ The disk sizes for a fixed mass ratio ($m_{\mathrm{12}}$), for different encounter distances (${r_{\mathrm{peri}}}$) at different orbital inclinations, are tabulated in Appendix \[sec:appendixA\].\ Considering that in a star cluster an encounter with a perturber on orbits of different inclinations is equally probable, we calculated the mean disk size over all orbital inclinations for a fixed mass ratio and periastron distance. This is important, for example, if one post-processes cluster simulations to determine the average disk-size change due to encounters .
We found a dependence of the disk size on the mass ratio and the periastron distance which is represented by a fit formula of the form $$\begin{aligned} r_{\mathrm{final}} \approx \begin{cases} 1.6 \cdot {m_{\mathrm{12}}^{-0.2}} \cdot {r_{\mathrm{peri}}^{0.72}}, \hspace{2em} & \text{for} ~r_{\mathrm{final}} \leq r_{\mathrm{init}} \\ r_{\mathrm{init}}, & \text{otherwise}, \end{cases} \label{eq:discsize_formula} \end{aligned}$$ where the bottom line expresses that the final disk size is limited to the initial disk size. We note that we find a dependence on the periastron distance and the mass ratio similar to that obtained by @Breslau2014; however, our disk size definition can be applied to all encounter scenarios, taking into account both coplanar and inclined encounters. The fit to the data expressed by equation \[eq:discsize\_formula\] deviates from the simulation results by less than the statistical error.\ Previous studies of coplanar, prograde encounters have already indicated that the final disk size is fairly independent of the initial disk size. It is always the periastron distance and the mass ratio that determine the final disk size. As stated before, in studies where viscous forces and self-gravity can be neglected, the fly-by can be treated as a three-body encounter for each particle. This implies that the fate of an individual particle is independent of the remaining disk. Therefore, in this case the final disk size is independent of the initial disk size. This is confirmed by our simulation results shown in , where the final disk sizes for an initial 200 AU disk (red diamonds) are compared to those for an initial disk size of 100 AU (blue circles). The sizes are the same within the simulation error, as long as the final disk size is smaller than 100 AU.
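The fit formula above translates directly into code (a sketch; the function name is ours, radii are in AU, and $m_{\mathrm{12}}$ is the perturber-to-host mass ratio):

```python
def final_disc_size(m12, r_peri, r_init=100.0):
    """Mean final disk size (AU), averaged over all orbital inclinations,
    following the fit formula above; capped at the initial disk size."""
    r_final = 1.6 * m12 ** -0.2 * r_peri ** 0.72
    return min(r_final, r_init)
```

For instance, an equal-mass encounter at $r_{\mathrm{peri}}$ = 100 AU yields a mean final size of roughly 44 AU, while sufficiently distant or low-mass perturbers leave the disk at its initial size.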
The dashed lines represent the fit formula given by equation \[eq:discsize\_formula\].\ For example, for $m_{\mathrm{12}}$ = 0.3, as seen in , an encounter at $r_{\mathrm{peri,1}}$ = 150 AU = 1.5$\cdot r_{\mathrm{init,1}}$ gives a disk size of , whereas an encounter at the same relative periastron distance gives a resulting disk size of . These results are confirmed by our simulations. This means that the final disk size and the periastron distance can always be scaled to an arbitrary initial disk size. Discussion {#sec:discussion} ========== Some assumptions have been made in the studies described above. First, we model our disks using pure N-body methods and neglect the effects of viscosity and self-gravity, as described in \[sec:method\]. This is motivated by the fact that the observed disk masses are relatively small compared to the stellar masses.\ The relative importance of viscous forces also depends on what type of disk these results are applied to: young gas-rich disks, debris disks, or even evolved planetary systems. Viscosity plays an important role only in the case of young gas-rich disks, whereas for the other cases a purely gravitational treatment suffices. In young viscous disks there is one situation where viscosity can become important in terms of disk sizes. The relative velocities between the particles strongly perturbed by a passing star are greater than the sound velocity, and therefore the energy damping by shocks is non-negligible even on the encounter timescale. However, viscosity is a strong function of the radial distance to the central star. Only in the inner parts of the disk is viscosity strong enough to affect the obtained disk size. This means that only encounters leading to relatively small disk sizes are influenced by local viscosity effects. The actual value depends on the assumed disk viscosity, but for typical viscosity values only disks with final sizes smaller than approximately 20 AU will be noticeably affected.
Only in this case could the actual disk size after an encounter be larger than determined above.\ Viscosity enables recircularization of the remaining disk material after the encounter on long timescales . However, this does not affect the disk size, because recircularization by viscosity immediately after an encounter is only efficient in the inner disk regions (< 20 - 30 AU) on the timescales considered here . The disk size reduces to such small radii only in the case of penetrating encounters, which are relatively rare in most star clusters .\ Another effect of viscosity is that of disk spreading due to the redistribution of angular momentum in a highly viscous gaseous disk. On long timescales (> 0.5 Myr) this means that disks can have a larger size than immediately after the encounter. However, studies have found that material at such large radii is usually affected by distant encounters resulting in a truncated disk, which nullifies the effects of disk spreading . Further studies are required, since viscosity effects are currently not well constrained by observations. It is important to note that in our simulations the disk is represented by test particles without any gas, and hence viscosity effects can be safely neglected.\ Since our study is restricted to low-mass thin disks, we can neglect the influence of the test particles on each other (self-gravity). Our approximation of restricted three-body encounters is hence valid in the case of low-mass thin disks. Our studies may not apply to the massive disks usually found in the earlier stages of star formation, since in those cases viscosity and self-gravity effects must be taken into account.\ In order to simplify the investigation done here, only one of the stars is surrounded by a disk. In reality, in many cases – at least initially – both stars will be surrounded by a disk. The disk can be replenished by mass transfer between the two disks, which could in turn affect the disk size.
However, it has already been shown that most of the transferred mass is usually transported to the inner regions of the disk and that the captured material has very little influence on the disk size [@PfalznerUmbreit2005]. Hence the assumption of a star-disk encounter works well for the low-mass thin disks modeled in our studies.\ The disk size definition used here does not necessarily define an absolute limit for the matter bound to the star, since the steepest gradient in the surface density distribution used to define the disk size can vary to a certain extent. There is a small fraction of disk material outside this limit which is still bound to the star. In the case of an initial $r^{-1}$ distribution, the mass of the bound particles outside the determined disk size is usually less than 15 % of the total mass of bound particles. The disk sizes defined here can be used to determine the radius within which enough material would be available for the formation of planetary systems.\ In this work only parabolic encounters are considered, as they are the most destructive type of encounter owing to the longer interaction time compared to hyperbolic ($\mathrm{e_p} > 1$) encounters [@Clarke1993; @PfalznerUmbreit2005]. @VinckePfalzner_2015 have found that parabolic encounters dominate in low-mass clusters and clusters like the ONC, whereas hyperbolic encounters are predominant in denser clusters like the Arches cluster. Although hyperbolic encounters would lead to larger disk sizes than parabolic ones, it would be interesting to compare the dependence of the final disk size on the orbital inclination for the two cases. Effects of hyperbolic encounters on the disk size will be investigated in a follow-up study.
These results can also be applied directly to cluster simulations to determine the disk size distribution in different cluster environments.\ There have been studies of the effect of stellar encounters on the solar birth environment and on the dynamics of highly eccentric and inclined objects in our solar system [@Adams2001; @Kobayashi2005; @Adams2010; @Bailer2015; @Mamajek2015; @Jilkova2015; @Higuchi2015]. In our work, for an initial 100 AU disk and considering an equal-mass perturber, close stellar fly-bys at an encounter distance of would result in a disk the size of the solar system .\ Considering the fact that inclined encounters can lead to particles on highly inclined and eccentric orbits, in a follow-up study we will further investigate the implications of such encounters for highly inclined Sedna-like bodies in our solar system. Summary {#sec:Summary} ======= Depending on the cluster density, stellar encounters might have a strong effect on protoplanetary disks in star cluster environments, the dominant sites of star formation. In particular, the disk size might be strongly influenced by the presence of other cluster members [@Vincke2015]. Most of the investigations so far have considered the effect of parabolic, coplanar encounters on the disk size. However, inclined encounters are much more common in star clusters. Here, we investigated the effect of inclined stellar fly-bys with an emphasis on the disk size after such an encounter.\ We presented a parameter study covering orbital inclinations from , for different mass ratios in the range , and at periastron distances from , which span the range from penetrating to distant encounters. For comparison, we also studied encounters with perturbers on inclined orbits with different arguments of periapsis for cases where the periastron lies in the disk plane and outside the disk plane .
We summarize our results from this extensive parameter study as follows: - Our studies extend the results of @Breslau2014 for disk sizes after coplanar prograde encounters to inclined and retrograde encounters. The results obtained here show that coplanar prograde encounters have the strongest effect on the disk size, in comparison to inclined and retrograde encounters. However, even inclined encounters mostly have a strong influence on the disk size. A similar dominance of coplanar prograde encounters has already been found for disk-mass and angular momentum loss. - Although parabolic prograde encounters are the most destructive ones, retrograde encounters still have a significant effect on the disk size. The difference between the disk size due to prograde and retrograde encounters decreases with an increase in the perturber mass and decrease in the periastron distance. Hence the effect of retrograde encounters on disk-mass loss and angular momentum change should be studied for a larger parameter space. - We find that, averaged over all inclinations, the disk size after an encounter is a function of the periastron distance ($r_{\mathrm{peri}}$) and the mass ratio ($m_{\mathrm{12}}$) of the form $$\begin{aligned} r_{\mathrm{final}} \approx \begin{cases} 1.6 \cdot {m_{\mathrm{12}}^{-0.2}} \cdot {r_{\mathrm{peri}}^{0.72}}, \hspace{2em} & \text{for} ~r_{\mathrm{final}} \leq r_{\mathrm{init}} \\ r_{\mathrm{init}}, & \text{otherwise}. \end{cases}\end{aligned}$$ - The more massive the perturber, the stronger the effect on the disk size. - Penetrating encounters destroy most of the disk, whereas distant encounters mainly have a strong influence on the outer regions of the disk. - The disk size due to an encounter by a perturber on orbits with different arguments of periapsis ($\omega$) differs by $\leq 10\%$.
A change in $\omega$ of the perturber orbit mostly has a strong effect on the particle inclinations and eccentricities in the outer disk, which depends on the periastron distance, mass ratio, and orbital inclination. With the current ground-based and space-based missions providing a great deal of data, the work done here can prove to be a useful tool for tracing the possible encounter scenarios for the observed disk sizes. It could also be used to determine the disk sizes after binary captures. Final disk sizes {#sec:appendixA} ================ Here we present the values of the final disk sizes for an initial 100 AU disk around a 1 $\mathrm{M_{\odot}}$ star for different perturber masses in the range listed in the different tables. Every table contains the final disk size for different periastron distances ($r_{\mathrm{peri}}$) in the range , different perturber orbital inclinations in the range and for a fixed argument of periapsis. The effect of orbital inclinations, as discussed in \[sec:inclination\], can be compared for the different parameters studied here. [^1]: In their studies, @Heggie1996 derived analytical expressions for the change in orbital eccentricity of a binary due to a distant stellar encounter. [^2]: By a negligible effect we mean a change in disk size of less than 5%, which is smaller than the errors typical of this type of simulation.
--- abstract: 'The Einstein Telescope is a conceived third generation gravitational-wave detector that is envisioned to be an order of magnitude more sensitive than advanced LIGO, Virgo and KAGRA, and would be able to detect gravitational-wave signals from the coalescence of compact objects with waveforms starting at frequencies as low as 1Hz. With this level of sensitivity, we expect to detect sources at cosmological distances. In this paper we introduce an improved method for the generation of mock data and analyse it with a new low-latency compact binary search pipeline called `gstlal`. We present the results from this analysis with a focus on low frequency analysis of binary neutron stars. Despite compact binary coalescence signals lasting hours in the Einstein Telescope sensitivity band when starting at 5 Hz, we show that we are able to discern various overlapping signals from one another. We also determine the detection efficiency for each of the analysis runs conducted and show a proof of concept method for estimating the number of signals as a function of redshift. Finally, we show that our ability to recover the signal parameters has improved by an order of magnitude when compared to the results of the first mock data and science challenge. For binary neutron stars we are able to recover the total mass and chirp mass to within 0.5% and 0.05%, respectively.' author: - Duncan Meacher - Kipp Cannon - Chad Hanna - Tania Regimbau - 'B. S. Sathyaprakash' bibliography: - 'bibliography.bib' title: 'Second Einstein Telescope Mock Data and Science Challenge: Low Frequency Binary Neutron Star Data Analysis' --- Introduction {#sec:intro} ============ Second generation gravitational-wave (GW) detectors, aLIGO [@cqg.32.074001.15] and AdVirgo [@cqg.32.024001.15], are planned to improve the sensitivity over first generation detectors, LIGO [@rpp.72.076901.09] and Virgo [@aip.794.307.05], by an order of magnitude.
aLIGO has recently begun operations and AdVirgo is currently in the commissioning stage with plans to join operations in 2016. It is expected that the first direct detection of gravitational waves will be made before the end of this decade. The Einstein Telescope (ET) is a conceived third generation gravitational-wave detector that is currently in the design stage [@cqg.27.194002.10] and is planned to be operational after $\sim$ 2025. This detector will have an improvement in sensitivity of an order of magnitude over the second generation detectors, which will allow for the detection of a large number of GW signals from a variety of processes, out to large distances. These include, but are not limited to, events such as the formation of neutron stars or black holes from core collapse supernovae [@prd.72.084001.05; @prd.73.104024.06; @mnras.398.293.09; @mnrasl.409.L132.10], rotating neutron stars [@aap.376.381.01; @prd.86.104007.12], and the merger of compact binary systems [@prd.84.084004.11; @prd.84.124037.11]. ET is expected to yield a significant number of detections and the interpretation of the results will allow us to answer questions about astrophysics, cosmology and fundamental interactions [@cqg.29.124013.12]. In order to prepare for and test our ability to extract valuable information from the data, we initiated a series of mock data and science challenges (MDSCs), each subsequent challenge increasing in sophistication and complexity. These challenges consist of first simulating ET data that includes a population of sources expected to be detectable via different astrophysical models. This is then analysed with a variety of current data analysis algorithms, each searching for a specific signal type contained within the data. Unlike data from advanced detectors, ET data is expected to be dominated by many overlapping signals, which increases the complexity of the data analysis.
An important goal of the MDSC is to test the ability of different analysis algorithms to efficiently detect signals and discriminate between different signal populations. Finally we consider the interpretation of these results to investigate different areas of astrophysics and cosmology. For the first ET MDSC [@prd.86.122001.12], we produced one month of mock data containing simulated Gaussian coloured noise, produced using a plausible ET noise power spectral density (PSD), and the GW signals from a set of compact binary coalescences (CBCs), in this case a population of binary neutron stars (BNS) in the redshift range $z\in$\[0, 6\]. Using a modified version of the LIGO/Virgo data analysis pipeline `ihope` [@prd.79.122001.09; @prd.80.047101.09; @prd.82.102001.10; @prd.85.082002.12], which was the main matched filtering analysis pipeline during the initial detector era, we showed that it is possible to employ a matched filtering algorithm to search for GW signals when there is a large amount of overlap of their waveforms. Using this pipeline we were also able to recover the observed chirp mass ($\mathcal{M}_z$) and observed total mass ($M_z$) of the injected signals with errors of less than 1% and 5%, respectively[^1]. We also analysed the data with the standard isotropic cross-correlation statistic and measured the amplitude of an astrophysical stochastic GW background (SGWB) [@prd.59.102001.99; @prd.79.062002.09; @raap.11.369.11] created by the population of background BNS signals with an accuracy better than 5%. Finally, we were able to verify the existence of a *null stream*, created by the closed-loop detector layout, which results in the complete cancellation of GW signals and gives an acceptable estimate of the noise PSD of the detectors. By subtracting the null stream from the data, we showed that we could recover the expected shape of the PSD of the astrophysical SGWB.
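The chirp mass quoted above is the mass combination to which matched filtering is most sensitive, which is why it is recovered so much more precisely than the total mass. As a minimal illustration (our own helper functions, not part of the MDSC codebase), the standard definitions $\mathcal{M} = (m_1 m_2)^{3/5} / (m_1 + m_2)^{1/5}$ and $\mathcal{M}_z = (1+z)\mathcal{M}$ can be sketched as:

```python
def chirp_mass(m1, m2):
    """Source-frame chirp mass, in the same units as m1 and m2."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def observed_chirp_mass(m1, m2, z):
    """Redshifted (observer-frame) chirp mass, M_z = (1 + z) * M."""
    return (1.0 + z) * chirp_mass(m1, m2)

# A canonical 1.4 + 1.4 M_sun BNS has chirp mass ~1.22 M_sun;
# at z = 1 the observed chirp mass is exactly doubled.
```

For a canonical $1.4 + 1.4$ M$_\odot$ system this gives $\mathcal{M} \approx 1.22$ M$_\odot$, and at $z = 1$ the observed value is doubled, which is why the analyses throughout work with the redshifted masses $\mathcal{M}_z$ and $M_z$.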
After the success of the first challenge, we extended our data generation package to conduct a second MDSC. The second ET MDSC contains a larger selection of sources than the first, including BNS, neutron star-black holes (NSBH), binary black holes (BBH), binary intermediate mass black holes (IMBH) [@grg.43.485.11] as well as several burst sources. In the second MDSC we have taken the intrinsic mass distributions and time delays, the time between the formation and merger of the binary systems, from the population synthesis code `StarTrack` [@apj.572.407.02; @apjs.174.223.08; @apjl.715.L138.10; @apj.759.52.12], as opposed to the first MDSC, where the component masses were selected from a Gaussian distribution. With this mock data set several investigations have been carried out, each focusing on a different scientific aspect of the MDSC. The first of these investigations, on the measurement of a SGWB from astrophysical sources, has already been completed [@prd.89.084046.14], while others are ongoing. In this paper we investigate the application of a new low-latency matched filtering analysis pipeline, `gstlal` [@prd.82.044025.10; @prd.83.084053.11; @apj.784.136.12; @prd.88.024025.13], which is built using `gstreamer` multimedia processing technology. The analysis will be run multiple times, searching for low mass systems, using low frequency cut-offs of 25Hz, 10Hz and 5Hz, on both the main mock data set and a noise-only data set that is used to make estimates of the background. The 25Hz and 10Hz runs will be conducted on the full data set while the 5Hz analysis will be run on 10% of the data. This is due to the fact that starting at 5Hz, there are more templates produced for the analysis and the waveform for low mass systems will be of the order of a few hours long, both of which significantly increase the computational cost of the analysis.
Once the analyses have been run, we compare the list of detections that are reported in each of the three ET detectors against the list of injected signals. Using a small window in both coalescence time $(t_c)$ and the observed (redshifted) chirp mass ($\mathcal{M}_z$) we produce a list of matched detections. We then compare the recovered detection parameters ($t_c$, $\mathcal{M}_z$ and $M_z$) with the true injected parameters. The rest of this paper is divided into the following sections. In Section \[sec:MockData\] we introduce the methods by which we produce the mock data used for this investigation. In Section \[sec:analysis\] we discuss the analysis methods that are used as well as our reasons for choosing a new analysis pipeline. In Section \[sec:results\] we present our results from the analysis runs that are conducted, with a focus on both event detection and parameter measurements. In Section \[sec:futuredev\] we highlight possible areas that can be investigated in future MDSCs. Finally, in Section \[sec:conclusion\] we discuss these results and draw our conclusions. Mock Data {#sec:MockData} ========= In this section we describe how we generate the ET mock data used in this investigation. Here we use the same data generation package as was used in the first ET MDSC [@prd.86.122001.12], which has since been updated to simulate more sources [@prd.89.084046.14; @prd.92.063002.15]. We first explain the generation of the coloured noise and then we introduce and describe each of the steps that are used to simulate the GW inspiral signals that are injected into the noise. For this we describe how the cosmological model and star formation rate (SFR) are used to determine the rate of coalescence of compact binary objects as a function of redshift and how the signal parameters are selected as well as the waveform models used in the simulation.
Simulation of the Noise ----------------------- The current design of the Einstein Telescope is envisioned to consist of three independent V-shaped Michelson interferometers with 60 degree opening angles, arranged in a triangle configuration, and placed underground to reduce the influence of seismic noise [@cqg.26.085012.09; @ijmpd.22.1330010.13]. Here we make the assumption that there will be no instrumental or environmental correlated noise between the detectors so that the noise is simulated independently for each of the three ET detectors, E1, E2 and E3 [@prd.87.123009.13; @prd.90.023013.14]. This is done by generating a Gaussian time series that has a mean of zero and unit variance. This time series is then Fourier transformed into the frequency domain, coloured with the noise PSD of the ET detector, and then inverse Fourier transformed back into the time domain. In order to remove any potential discontinuities between adjacent data segments, we gradually taper away the noise spectral density to zero at frequencies above 4096Hz and below 5Hz, which we set as the low frequency cut-off for the generation of the noise and GW signals. For this MDSC, we consider the sensitivity given by ET-D rather than ET-B that was used in the first MDSC, as shown in the left-hand plot in Fig. \[fig:noise\]. ET-B is a simpler design with just one interferometer in each V of the equilateral triangle but due to high stored power it suffers from enhanced radiation pressure noise at lower frequencies. ET-D is a design that includes two interferometers in each V (a high-frequency, high-power interferometer to mitigate photon shot noise and a low-frequency, low-power, cryogenics interferometer to mitigate thermal noise) and achieves a very good high-frequency sensitivity without compromising on low-frequency sensitivity. 
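The colouring procedure described above (white Gaussian noise shaped in the frequency domain by the detector noise spectrum) can be sketched as follows. This is a simplified illustration with our own function names and an arbitrary spectral shape, not the actual MDSC data-generation package; in particular the overall normalisation conventions and the tapering at the band edges are glossed over here.

```python
import numpy as np

def colour_noise(white, asd):
    """Colour unit-variance white noise by an amplitude spectral
    density sampled at the rFFT frequencies of `white`."""
    spectrum = np.fft.rfft(white)                 # to the frequency domain
    spectrum *= asd                               # shape by the (relative) ASD
    return np.fft.irfft(spectrum, n=len(white))   # back to the time domain

rng = np.random.default_rng(0)
n = 4096
white = rng.standard_normal(n)

# With a flat (unit) ASD the operation is the identity, a quick sanity check;
# a real run would use the ET-D ASD evaluated on the rFFT frequency grid.
flat = np.ones(n // 2 + 1)
coloured = colour_noise(white, flat)
```

A real PSD falls steeply towards the 5Hz low-frequency cut-off, which is what makes the tapering step in the text necessary to avoid discontinuities between adjacent segments.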
![image](Figure1a){width="49.00000%"} ![image](Figure1b){width="49.00000%"} Simulation of the GW signals from BNS ------------------------------------- We employ Monte Carlo (MC) simulation techniques for the generation of the mock data. The process that we use to generate the various parameters is very similar to that used in the first ET MDSC [@prd.86.122001.12], except here we take the intrinsic mass distribution of the component masses, $m_1$ and $m_2$, and the time delay, $t_d$, i.e. the interval between the formation of a binary and its eventual merger, from the stellar evolution code `StarTrack` [@apj.572.407.02; @apjs.174.223.08; @apjl.715.L138.10; @apj.759.52.12]. As was done in the first MDSC, we adopt a $\Lambda$CDM cosmological model with the Hubble parameter $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.3$, and $\Omega_\Lambda = 0.7$ and the SFR of [@apj.651.142.06]. We first consider the coalescence rate for BNS per unit volume, as a function of redshift $$\dot{\rho}_c(z, t_d) \propto \frac{\dot{\rho}_\ast (z_f(z,t_d))}{1+z_f(z,t_d)} , \;\; \text{with} \;\; \dot{\rho}_c(0) = \dot{\rho} _0 ,$$ where $z$ is the redshift of the source at the point of coalescence, $z_f$ is the redshift of the source at the point at which the binary formed, $\dot{\rho}_\ast$ is the SFR and $\dot{\rho}_0$ is the local coalescence rate. A factor of $(1+z_f)^{-1}$ is used to convert the rate from the source’s frame of reference to the observer’s frame of reference.
The redshifts $z$ and $z_f$ are connected via the delay time, $t_d$: the total time from the initial formation of the binary system, through its evolution into a compact binary, to its final coalescence through the emission of gravitational radiation, $$t_d = \frac{1}{H_0} \int_z^{z_f} \frac{{\mathop{}\!\mathrm{d}}{z}'}{(1+{z}')E({z}')},$$ where $$\label{eq:EcosParam} E(z) = \sqrt{\Omega_m(1+z)^3 + \Omega_\Lambda}.$$ The coalescence rate per redshift bin is given by $$\label{eq:CoalescenceRate} \frac{{\mathop{}\!\mathrm{d}}R}{{\mathop{}\!\mathrm{d}}z}(z,t_d) = \dot{\rho}_c(z,t_d) \frac{{\mathop{}\!\mathrm{d}}V}{{\mathop{}\!\mathrm{d}}z}(z),$$ where ${\mathop{}\!\mathrm{d}}V / {\mathop{}\!\mathrm{d}}z$ is the comoving volume element given by $$\frac{{\mathop{}\!\mathrm{d}}V}{{\mathop{}\!\mathrm{d}}z}(z) = 4 \pi \frac{c}{H_0} \frac{r^2(z)}{E(z)},$$ where $c$ is the speed of light in vacuum and $r(z)$, the proper distance, is given by $$r(z) = \frac{c}{H_0} \int_0^z \frac{{\mathop{}\!\mathrm{d}}{z}'}{E({z}')} .$$ The average time between the arrival of events, which we define as $\lambda$, is given by the inverse of the coalescence rate, Eq. (\[eq:CoalescenceRate\]), integrated over all redshifts, $$\lambda = \left[ \int_0^{z_\mathrm{max}} \frac{{\mathop{}\!\mathrm{d}}R}{{\mathop{}\!\mathrm{d}}z} (z,t_d) {\mathop{}\!\mathrm{d}}z \right]^{-1}.$$ Once we have a value for the average waiting time between events we then produce the parameters for each CBC source as follows: - The arrival time, $t_c$, of injection $i$ is selected assuming a Poisson distribution, where the difference in arrival time, $\tau = t_c^i - t_c^{i-1}$, is drawn from an exponential distribution $P(\tau) = \exp(-\tau / \lambda)$.
- The average time between all events is set to $\lambda$ = 20 s, which is comparable to the realistic rate given in [@cqg.27.173001.10] where different coalescence rates for BNS, NSBH, BBH and IMBH are taken into account[^2]. This gives a total of 159,302 events, which are split up into the following proportions: 80.47% BNS (128,244), 2% NSBH (3190), 12.46% BBH (19,766), taken from Table 3 in [@aap.574.A58.15], and 5.07% IMBH (8102). - The binary’s component masses, $m_1$ and $m_2$, shown in Fig. \[fig:InjMass\], and the time delay, $t_d$, are selected from a list of compact binaries generated by `StarTrack`. For the given delay time and a particular model for the cosmic SFR, we construct a redshift probability distribution, $p(z, t_d)$, by normalising the coalescence rate in the interval $z$ = \[0, 10\], where $$p(z, t_d) = \lambda \dfrac{{\mathop{}\!\mathrm{d}}R}{{\mathop{}\!\mathrm{d}}z} (z,t_d) .$$ In the right-hand plot of Fig. \[fig:noise\] we show the normalised redshift distribution for BNS, produced by using redshift bins of size $\Delta z = 0.1$. - The sky position, $\hat{\Omega}$, the cosine of the inclination angle, $\iota$, the polarization angle, $\psi$, and the phase at the coalescence, $\phi_0$, are selected from uniform distributions. - The two GW polarisation amplitudes, $h_+(t)$ and $h_\times(t)$, and the antenna response functions to the two polarisations for each of the three ET detectors, $F^A_+(t,\hat{\Omega},\psi)$ and $F^A_\times(t,\hat{\Omega},\psi)$, where $A$ = 1, 2, 3 is the index representing one of the three ET detectors, are then calculated. The detector responses $$h^A(t) = F^A_+(t,\hat{\Omega},\psi) h_+(t) + F^A_\times(t,\hat{\Omega},\psi) h_\times(t) ,$$ are then added to the detector output time series for E1, E2 and E3, where the modulation of the signal due to the rotation of Earth is taken into account.
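The arrival-time prescription above amounts to drawing exponentially distributed waiting times with mean $\lambda$ and accumulating them. A minimal sketch (our own helper, with $\lambda$ = 20 s as in the MDSC):

```python
import numpy as np

def coalescence_times(n_events, mean_interval=20.0, seed=0):
    """Draw n_events arrival epochs t_c for a Poisson process:
    inter-arrival times are exponential with mean `mean_interval` (s)."""
    rng = np.random.default_rng(seed)
    tau = rng.exponential(mean_interval, size=n_events)  # waiting times
    return np.cumsum(tau)                                # arrival epochs

t_c = coalescence_times(100000)
# The empirical mean spacing converges to ~20 s for a long realisation.
```

With a mean spacing of 20 s and BNS waveforms lasting hours from 5Hz, hundreds of signals overlap at any instant, which is the regime the matched-filter analysis below must handle.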
In this MDSC we have chosen to use the TaylorT4 waveforms [@prd.80.084043.09], which are accurate to 3.5 post-Newtonian order in phase [@lrr.17.2.14] and to the dominant, lowest post-Newtonian order in amplitude, for the generation of the BNS and NSBH signals. For the BBH signals we choose the EOBNRv2 waveforms [@prd.87.082004.13], which include the merger and quasi-normal ring-down phases of the signal and are accurate to $4^{th}$ post-Newtonian order in phase and lowest order in amplitude [@prd.80.084043.09]. For the sake of testing, and to determine the number of background detections we might expect to have, we have also produced a second, noise-only data set containing the same Gaussian noise as the main data set. ![image](Figure2a){width="49.00000%"} ![image](Figure2b){width="49.00000%"} Analysis {#sec:analysis} ======== The analysis method used here to search for the CBC signals is generally the same as was used in the first MDSC, though we are now using a newly developed pipeline, `gstlal`. This is a coincident analysis pipeline where the data streams from each of the separate detectors are analysed individually via matched filtering with a large bank of templates. The template bank is produced using a TaylorF2 waveform [@prd.62.084036.00], which is generated in the frequency domain to the second post-Newtonian order and terminates at the frequency of the last stable circular orbit, where $f_\mathrm{lsco}~\simeq~\dfrac{c^3}{6^{3/2} \pi G M_z}$. This waveform family is selected as it is relatively fast to generate (compared to the TaylorT4 waveform) and reduces the computational cost of the analysis, which is performed in the frequency domain. The analysis produces a list of matched *triggers* that exceed a given SNR threshold, $\rho_\mathrm{T}$; each trigger contains the SNR and the parameters of the template that produced it, such as the epoch of merger and the component masses of the binary.
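For reference, the template termination frequency $f_\mathrm{lsco} \simeq c^3/(6^{3/2} \pi G M_z)$ quoted above can be evaluated directly; a quick sketch with SI constants (our own helper, not part of the pipeline):

```python
import math

C = 299792458.0   # speed of light, m/s
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg

def f_lsco(total_mass_msun):
    """Last-stable-circular-orbit frequency (Hz) for an observed
    total mass M_z given in solar masses."""
    m = total_mass_msun * M_SUN
    return C ** 3 / (6 ** 1.5 * math.pi * G * m)

# For a canonical 1.4 + 1.4 M_sun BNS this is roughly 1.6 kHz, so the
# templates terminate well inside the sensitive band; the frequency
# scales inversely with the redshifted total mass.
```

This inverse scaling with $M_z$ is why distant (more redshifted) systems terminate at lower frequencies despite identical source-frame masses.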
These are then checked against triggers from the other two detectors for coincidence. Any double or triple coincident triggers that result from the same template are then reported as potential GW detections, though in this investigation we only consider the results from triple coincident events. Analysis stages --------------- The different stages for this analysis pipeline are described here: - Estimation of PSD: The `gstlal` analysis estimates the noise PSD as a function of time during filtering. The method is a modified version of Welch’s method [@itae.15.70.67] with two main differences. First, each estimate is derived from the geometric mean of the last 7 periodograms, and second, the periodograms are weighted averages that weigh the present periodogram slightly more than past ones. The result is a PSD estimate with an effective average over a few hundred seconds with 1/16 Hz resolution. - Generation of template bank: A bank of GW inspiral templates is produced and used to search the data. This bank needs to cover the full mass parameter range that is being considered. Because we know the mass distributions of the signals being injected we are able to tailor the mass parameter limits that are used to generate the template banks in order to cover the full range of masses whilst keeping the number of templates produced to a minimum. A new template bank is generated for each search that is conducted, with the mass parameter ranges given in Table \[tab:searches\]. - Matched filtering: This is implemented with the LLOID (Low Latency Online Inspiral Detection) method, which uses singular value decomposition (SVD) to compress the waveform parameter space and multi-rate time domain filtering [@apj.784.136.12]. It provides the same result as standard matched filtering [@prd.85.122006.12] to within $< 1\%$. The matched filtering of each SVD bank against each detector data stream produces an SNR time series $\rho(t)$.
- Trigger generation: As templates are filtered against data streams, if any SNR time series passes a threshold value, $\rho_\mathrm{T}$, then it is recorded as a trigger. Generally, using a lower SNR threshold value is better as it allows for the possibility of detecting weaker signals, but it also results in an increase in the number of triggers produced from background noise. Here we set the single detector threshold to be SNR = 4 as this is the lowest we can go without having a trigger rate that becomes difficult to deal with. - Coincidence between detectors: Triggers from different detectors are then compared against each other. Any that are coincident in time, within a 5 ms window to account for small time delays from the time of flight between detectors, and have the same masses, are considered as either double or triple coincident triggers. The SNR for a network of detectors is given by $$\rho^2 = \sum_A \rho^2_A\,.$$ For triple coincident triggers this gives a minimum SNR of $\sim$ 6.928. - Clustering of triggers: The list of double and triple coincident triggers is then clustered, where any coincident events that occur within a 4 second time window of a coincident event with a higher SNR are deleted. This is done as the same event will be detected by multiple templates, some with a certain degree of mismatch in the signal parameters. This results in the reporting of the best matched template. The output of `gstlal`, containing all clustered triple coincident triggers, is then compared against the list of injections in order to “match” any potential detections. For this we apply a time and chirp mass window to each detection and if an injection is found within this two dimensional window then we determine it to be a found injection. If two injections are found within the same two dimensional window then the injection with the smaller redshift is assumed to be the more likely event.
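The quadrature combination of single-detector SNRs given above, and the quoted triple-coincidence minimum of $\sim$ 6.928, can be verified in a few lines (our own helper, not pipeline code):

```python
import math

def network_snr(snrs):
    """Combined network SNR: rho^2 is the sum of the single-detector rho_A^2."""
    return math.sqrt(sum(rho ** 2 for rho in snrs))

# Three detectors each sitting exactly at the single-detector threshold
# rho_T = 4 give the minimum triple-coincident network SNR sqrt(48) ~ 6.928.
floor = network_snr([4.0, 4.0, 4.0])
```

This floor is the "smallest possible network SNR threshold of 6.9" against which the trigger counts in the results section are quoted.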
The chirp mass is selected because, as was found in the first MDSC and as is shown later, it is better constrained by the analysis than the total mass. Here a time window of $\pm$100 ms and a chirp mass window of 1% of the observed chirp mass for BNS is used.

| Search | Data | f$_\mathrm{min}$ (Hz) | Length (s) | $M_\mathrm{total}$ range (M$_\odot$) | $\eta$ range | N$_\mathrm{templates}$ |
|--------|-----------------|----|---------|------------|---------------|-------|
| 1 | Noise + Signals | 25 | 3072000 | 2.6 - 12.3 | 0.2475 - 0.25 | 3603 |
| 2 | Noise + Signals | 10 | 3072000 | 2.6 - 12.3 | 0.2475 - 0.25 | 25252 |
| 3 | Noise + Signals | 5 | 307200 | 2.6 - 12.3 | 0.2475 - 0.25 | 87054 |
| 4 | Noise | 25 | 3072000 | 2.6 - 12.3 | 0.2475 - 0.25 | 3647 |
| 5 | Noise | 10 | 3072000 | 2.6 - 12.3 | 0.2475 - 0.25 | 26173 |
| 6 | Noise | 5 | 307200 | 2.6 - 12.3 | 0.2475 - 0.25 | 89495 |

Searches -------- Compared to the standard advanced detector searches there are several differences that we implement here. The first is the low frequency cut-off used to produce the signal templates. Advanced detectors will only be sensitive down to $\sim$ 20Hz for the first couple of years of operations, eventually reducing to $\sim$ 10Hz when the detectors begin to operate at the design sensitivity [@ObsScenario]. Starting at these frequencies, low mass systems will have waveform lengths of only a few minutes to tens of minutes. When considering ET, which is sensitive down to frequencies as low as 1-3Hz, depending on the final design configuration, signal templates can be of the order of hours to several days in length. In this investigation we will focus on the application of different low frequency cut-offs, running three searches using the same template mass range but different $f_\mathrm{min}$. We use low frequency cut-offs of 25Hz and 10Hz, for which we analyse the full mock data, and then analyse 10% of the data at 5Hz.
We select one analysis run at 25Hz so that we can make a direct comparison to the results from the first MDSC, and we choose to analyse only 10% of the data at 5Hz because of the high computational cost associated with this analysis. At this starting frequency, with the injected masses shown in Fig. \[fig:InjMass\], the template waveform lengths are already several hours long. Because of this we also impose a cut-off at a redshift of $z$ = 0.2, below which our search templates are not sensitive; instead we assume a detection efficiency of 100% for these nearby, loud sources. Beyond this redshift, the observed masses are redshifted by a factor of $(1+z)$ large enough that the template waveform lengths become computationally manageable. For these searches we set a minimum component mass of 1.3M$_\odot$, minimum total mass of 2.6M$_\odot$, a maximum component mass of 6.75M$_\odot$ and a maximum total mass of 12.3M$_\odot$ with a minimum symmetric mass ratio of $\eta = m_1m_2 / M^2 = 0.2475$. This minimum symmetric mass ratio is chosen to be as high as possible to reduce the number of templates being generated whilst still including most of the population of BNS, as can be seen in the right-hand plot of Fig. \[fig:InjMass\]. Already at this $\eta_\mathrm{min}$ we produce $\sim87000$ templates when starting at 5Hz. All the search parameters are displayed in Table \[tab:searches\]. All three analysis runs are repeated on the noise-only data sets in order to obtain an estimate of the number of background triggers one would expect in the main data set. From these results an SNR threshold value is set with which to make a cut on all triggers in the main data sets. For this we select the SNR of the 100th loudest event for the 25Hz and 10Hz runs, and the 10th loudest event for the 5Hz run.
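The symmetric mass-ratio bound quoted above can be checked directly; a small sketch (our own helper) showing that near-equal-mass BNS sit at the maximum $\eta = 0.25$ while strongly asymmetric systems fall below the $\eta_\mathrm{min} = 0.2475$ cut:

```python
def symmetric_mass_ratio(m1, m2):
    """eta = m1*m2 / (m1 + m2)^2; maximal (0.25) for equal masses."""
    return m1 * m2 / (m1 + m2) ** 2

ETA_MIN = 0.2475  # template-bank cut used in these searches

# Equal-mass binaries sit at the maximum eta = 0.25 and are retained,
# while a 1.3 + 6.75 M_sun system (the extreme corners of the component
# mass range) falls well below the cut and is excluded from the bank.
eta_equal = symmetric_mass_ratio(1.4, 1.4)
eta_asym = symmetric_mass_ratio(1.3, 6.75)
```

Since $\eta$ varies only between 0.2475 and 0.25 across the bank, the template placement is driven almost entirely by the chirp mass.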
At present there is no method for estimating the false alarm probability with ET, so the 100th (10th) loudest noise event is selected: it covers most of the population of background noise events whilst avoiding the statistical fluctuations that produce louder SNR events and may skew the background estimate. The results of this are presented in Table \[tab:detections\]. Results {#sec:results} ======= In this section we present the results from all the analysis runs carried out as part of this investigation, which is divided into four sub-sections. The first shows the number of detections made for each analysis run and the second details the detection efficiency. In the third we explore a proof of concept method for estimating the number of injected signals as a function of redshift and the fourth presents the accuracy with which we are able to recover the injection parameters.

| Search | N$_\mathrm{triggers}$ (noise only) | SNR (100th loudest) | N$_\mathrm{triggers}$ | N$_\mathrm{detections}$ | N$_\mathrm{triggers}$ ($\rho > \rho_\mathrm{T}$) | N$_\mathrm{detections}$ ($\rho > \rho_\mathrm{T}$) |
|---|--------|----------|--------|------|---------------|----------------|
| 1 | 74323 | 8.655 | 82322 | 5708 | 5670 (6.89%) | 4713 (82.57%) |
| 2 | 291319 | 8.904 | 341747 | 9956 | 15590 (4.56%) | 8138 (81.74%) |
| 3 | 45183 | 8.964[^3] | 63709 | 1242 | 7320 (11.49%) | 1095 (88.19%) |

`gstlal` analysis: Impact of the lower frequency cut-off on detection efficiency -------------------------------------------------------------------------------- The results for the different analysis runs with different low frequency cut-offs are summarized in Table \[tab:detections\].
Here the first column gives the search identity, the second column gives the number of triggers that were produced when analysing the noise-only data set, and the third column gives the SNR of the 100th (10th) loudest event. The fourth and fifth columns give the total number of triggers and resulting number of matched detections that are made with the smallest possible network SNR threshold of 6.9. The sixth and seventh columns again show the number of triggers and matched detections corresponding to an SNR threshold, $\rho_\mathrm{T}$, equal to the 100th (10th) loudest event from the noise-only data set. The number in brackets in the two right-hand columns indicates the fraction of triggers or matched detections that remain when the higher SNR threshold is used, as compared to the case of the smallest SNR threshold. The results from these three analysis runs are shown in Fig. \[fig:bnsfminDet\], where the SNR is plotted against the observed chirp mass. In each of the plots all the triple coincident triggers produced by `gstlal` when analysing the main data set are plotted in blue, with any of these triggers that are then matched to an injection being plotted in red and finally the triggers produced from the analysis of the noise-only data set are plotted in green. In the top plot we show the results from the 25Hz analysis, where it is easy to distinguish a number of BNS signal detections from those of background events. There is a very clear peak of triggers with low chirp masses, implying small distances, with very high SNRs. The lower SNR events (i.e. SNR $\leq 10$) are harder to differentiate from the background events and it is only by comparing them to the list of injections that we are able to identify them as true signal detections. There is a population of higher chirp mass, high SNR triggers that have not been matched to any BNS injections and clearly are not background events.
These are in fact due to the presence of GW signals from different types of CBC within the data, in this case the population of NSBH. This shows that the matched filtering method employed in this search is sensitive to CBC signals whose injection parameters lie outside of the search range. Even though these are not optimal matches (an optimally matched template would yield a louder SNR than shown here), they are still counted as detections. In these cases one would expect the recovered parameters to differ greatly from the true parameters because of the search parameter limits used when generating these template banks. Finally we observe a large number of triggers (74,323) obtained from the noise only data set, spread across all chirp masses, with the loudest trigger having an SNR = 9.37 and the 100th loudest having an SNR = 8.655. These are entirely caused by random fluctuations in the Gaussian noise data and are labelled as background events. In the middle plot we show the results from the 10Hz analysis. We first note the large increase in the total number of triggers produced (341,747), which is related to the increase in the number of templates for the 10Hz analysis runs (25,252) compared to that of the 25Hz run (3603). Here we clearly see the population of BNS detections, which both have higher SNRs and are detectable at higher observed chirp masses. We also note a large reduction in the number of high chirp mass, high SNR unmatched detections from non-BNS signals compared with the 25Hz analysis. From the analysis of the noise only data set, the loudest background event has an SNR = 9.53 and the 100th loudest has an SNR = 8.904. In the bottom plot we show the results from the 5Hz analysis. Again we clearly see the population of BNS signals, and we also find that the number of non-BNS triggers is very small. 
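The growth of SNR as the low-frequency cut-off is reduced can be illustrated with a toy calculation: the matched-filter SNR obeys $\rho^2 \propto \int_{f_\mathrm{min}}^{f_\mathrm{max}} |\tilde h(f)|^2/S_n(f)\,\drm f$, with $|\tilde h(f)|^2 \propto f^{-7/3}$ for an inspiral. Assuming, purely for illustration, a flat noise floor between $f_\mathrm{min}$ and the termination frequency (not the real ET-D curve, which rises steeply at low frequency), lowering $f_\mathrm{min}$ gives:

```python
# Toy illustration of SNR accumulation versus low-frequency cut-off,
# assuming |h(f)|^2 ~ f^(-7/3) and a *flat* noise floor (an assumption for
# illustration only; the real gain depends on the ET-D noise curve).

def relative_snr(f_min, f_max=1500.0):
    # closed form of Integral f^(-7/3) df = (3/4) * (f_min^(-4/3) - f_max^(-4/3))
    integral = 0.75 * (f_min ** (-4.0 / 3.0) - f_max ** (-4.0 / 3.0))
    return integral ** 0.5

for fmin in (25.0, 10.0, 5.0):
    gain = relative_snr(fmin) / relative_snr(25.0)
    print(f"f_min = {fmin:4.0f} Hz: SNR x {gain:.2f} relative to 25 Hz")
```

Under this crude assumption the gain from 25Hz to 5Hz is roughly a factor of three, qualitatively consistent with the increase in loud detections reported here.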
We should also note that the number of templates has again increased significantly (87,054) over that of the 10Hz analysis, but we do not see as large an increase in the number of detections because only 10% of the data were analysed. Scaling up, we would expect roughly ten times as many triggers and detections as given in Table \[tab:detections\], i.e. an estimate of $\sim$637,000 triggers and $\sim$12,400 detections from this mock data set. Finally we highlight the loudest BNS detection in each of the analysis runs on the main data set, all produced by the same event. At 25Hz it is detected with an SNR = 98.22, at 10Hz with an SNR = 122.46 and at 5Hz with an SNR = 134.97. This is a clear example of how analysing from lower frequencies builds up more SNR for each signal, which in turn increases the total number of detections we are able to make. ![Scatter plots of SNR against the observed chirp mass for the three different low frequency cut-offs used in the analysis with 25Hz (top), 10Hz (middle) and 5Hz (bottom). All triggers produced from the analysis of the main data set are shown in blue, with the triggers produced from the analysis of the noise only data set shown in green. Any of the triggers from the main data set that are then matched to an injection are then plotted in red. Finally the dashed horizontal line represents an SNR equal to the 100th (10th) loudest trigger from the noise only data set.[]{data-label="fig:bnsfminDet"}](Figure3a "fig:"){width="45.00000%"}\ ![Scatter plots of SNR against the observed chirp mass for the three different low frequency cut-offs used in the analysis with 25Hz (top), 10Hz (middle) and 5Hz (bottom). All triggers produced from the analysis of the main data set are shown in blue, with the triggers produced from the analysis of the noise only data set shown in green. 
Any of the triggers from the main data set that are then matched to an injection are then plotted in red. Finally the dashed horizontal line represents an SNR equal to the 100th (10th) loudest trigger from the noise only data set.[]{data-label="fig:bnsfminDet"}](Figure3b "fig:"){width="45.00000%"}\ ![Scatter plots of SNR against the observed chirp mass for the three different low frequency cut-offs used in the analysis with 25Hz (top), 10Hz (middle) and 5Hz (bottom). All triggers produced from the analysis of the main data set are shown in blue, with the triggers produced from the analysis of the noise only data set shown in green. Any of the triggers from the main data set that are then matched to an injection are then plotted in red. Finally the dashed horizontal line represents an SNR equal to the 100th (10th) loudest trigger from the noise only data set.[]{data-label="fig:bnsfminDet"}](Figure3c "fig:"){width="45.00000%"} Detection efficiency -------------------- The detection efficiency, as a function of redshift, for a given analysis is given by $$\epsilon (z) = \frac{N_\mathrm{det} (z)}{N_\mathrm{inj} (z)}, \label{eq:effCalc}$$ where $N_\mathrm{det}$ is the number of detected injections per redshift bin, $N_\mathrm{inj}$ is the total number of injections per redshift bin and the variance is given by [@cqg.25.105002.08] $$\sigma_{\epsilon}^2 (z) = \frac{\epsilon (z) (1-\epsilon (z))}{N_\mathrm{inj} (z)}. \label{eq:SigEff}$$ In the left-hand plot of Fig. \[fig:DetEfficiency\] we show the smoothed detection efficiencies for each of the analysis runs carried out, with the $\pm \;1\sigma$ limits contained within the shaded region. Here we have only considered found injections that have an SNR greater than the threshold set by the 100th loudest event from the analysis of the noise only data set. We clearly see that by lowering the cut-off frequency of the analysis we are able to increase our detection efficiency across all redshift bins. 
This is seen most clearly in the fact that the efficiency at $z=1$ doubles when going from 25Hz to 10Hz. It is also apparent that the uncertainty in the 5Hz efficiency is considerably larger than for the 25Hz or 10Hz runs: only 10% of the data are considered, and from Eq. (\[eq:SigEff\]) the variance decreases as the inverse of the number of injections per redshift bin. Rate estimation --------------- In the previous subsection we assumed that we know the true number and distribution of all the injections in order to calculate the efficiency. If instead the number of signals in the Universe is unknown, then, by rearranging Eq. (\[eq:effCalc\]), it is possible to estimate it from the number of detections as a function of redshift[^4] together with the detection efficiency, which can be determined from MC simulations with prior knowledge of the BNS mass distribution from the second generation of detectors [@jpcs.484.012008.14]. In the right-hand plot of Fig. \[fig:DetEfficiency\] we show this estimate of the number of injections per redshift bin for each of the detection efficiencies calculated previously. Here the uncertainties on the efficiencies have been propagated through. We see that all the analysis runs give comparable estimates of the number of events up to a redshift of $z\simeq1.5$. Between the 25Hz (blue) and 10Hz (red) analysis runs, which were conducted on the full data set, there is a clear difference in the distance out to which an estimate of the number of injected signals can be placed, with the 25Hz run extending to $z \sim 2$ and the 10Hz run to $z \sim 3$. This is directly related to the detection efficiency presented in the previous subsection, with the uncertainty of the estimate growing as the efficiency goes to zero. 
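The efficiency calculation of Eqs. (\[eq:effCalc\]) and (\[eq:SigEff\]) and the rate-inversion step described above can be sketched as follows; the per-bin counts here are invented purely for illustration:

```python
# Sketch of the per-redshift-bin efficiency, its binomial variance, and the
# rate-inversion estimate N_inj ~ N_det / eps. Bin counts are made up.

def efficiency(n_det, n_inj):
    """Eq. (effCalc) and the 1-sigma error from Eq. (SigEff)."""
    eps = n_det / n_inj
    sigma = (eps * (1.0 - eps) / n_inj) ** 0.5
    return eps, sigma

def estimate_injected(n_det, eps):
    """Rearranging eps = N_det / N_inj gives the estimate N_inj ~ N_det / eps."""
    return n_det / eps

eps, sig = efficiency(n_det=120, n_inj=400)   # one hypothetical redshift bin
print(f"eps = {eps:.3f} +/- {sig:.3f}")
print(f"estimated number of injected signals: {estimate_injected(120, eps):.0f}")
```

Note that as `eps` approaches zero the inversion blows up, which is why the estimate degrades at the highest redshifts in the right-hand plot.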
The 5Hz estimate appears larger than the 10Hz one, but this is a consequence of analysing only 10% of the data, which results in larger uncertainties in the efficiency and a smaller maximum redshift out to which an estimate can be made. ![image](Figure4a){width="49.00000%"} ![image](Figure4b){width="49.00000%"} Impact of lower frequency cut-off on parameter estimation --------------------------------------------------------- In this subsection we present the errors obtained in the measurement of the epoch of coalescence and of the binary’s chirp mass and total mass. We first look at the absolute error in the recorded time of coalescence, given by $\Delta t_c = t_{c,\,\mathrm{obs}} - t_{c,\,\mathrm{inj}}$, followed by the relative errors in total mass, $M_z$, and chirp mass[^5], $\mathcal{M}_z$. Table \[tab:errors\] lists the mean and standard deviation of all the errors shown in this section. ### Coalescence time In the first MDSC, when matching triggers to injections, we considered a time window of $\pm$30ms; in this investigation, as stated above, we have increased this to $\pm$100ms. In Fig. \[fig:abstcErr\] we show a normalised plot of the absolute error in the measured coalescence time, $t_c$, of all the detections made when investigating the low frequency cut-off. We find that for all three BNS runs there is a constant bias of a few ms, but nearly all detections are constrained very well to within $\pm$10ms. This is because both the injected waveform and the waveform used to search the data end at the same point, the $f_\mathrm{lsco}$. The $\pm$30ms window considered for the first MDSC is therefore suitable when considering BNS signals. 
![Normalised distribution of absolute error in recovered coalescence time for all matched detections given by `gstlal` for Search 2 at 25Hz (solid blue), Search 3 at 10Hz (dashed red), and Search 4 at 5Hz (dot-dashed green), using time bins of size $\Delta t$ = 1 ms, where different low frequency cut-offs were used.[]{data-label="fig:abstcErr"}](Figure5){width="50.00000%"} ### Masses {#sec:fminPE} We now look at the errors in the measurements of the mass parameters. In Fig. \[fig:fminError\] we show the impact of lowering the minimum search frequency. In the top left-hand plot of Fig. \[fig:fminError\] we show a normalised distribution of the relative error in measured total mass, with the results from the 25Hz analysis shown in blue, those from the 10Hz analysis in red and those from the 5Hz analysis in green. We first note that the error has decreased by an order of magnitude compared to the results from the first MDSC (see Fig. 7 of [@prd.86.122001.12]). There is also a constant systematic bias towards underestimating the total mass for all three analysis runs, with a sharp drop-off below 0.5%. The number of events for which the total mass is underestimated does decrease as the cut-off frequency of the analysis is lowered, but only by a small proportion. This bias was not observed in either the first MDSC or any of our initial analysis runs, where, in both cases, the component masses, $m_1$ and $m_2$, were drawn from the same distribution, which is not the case for this main mock data set. In the top right-hand figure we plot the relative error in total mass against the observed total mass, with the same colour coding. We clearly see the sharp cut-off at 0.5% apparent in the previous plot. 
We also see that at lower observed masses, which correspond to smaller distances, the error measurements cover a wide range of values; at higher masses this distribution narrows, leaving only the larger errors. This agrees with the expectation that our measurement errors increase with distance. In the bottom left-hand plot we show a normalised distribution of the relative error in measured chirp mass, again with the 25Hz results in blue, the 10Hz results in red and the 5Hz results in green. We first note that the spread of the error distribution has also decreased by a factor of $\sim$10 compared to the results from the first MDSC. Here we clearly see that as we decrease the cut-off frequency of the analysis we obtain a narrower distribution of the chirp mass error. We can also see from Table \[tab:errors\] that the deviation of the mean of the distribution from zero goes from 0.01% at 25Hz to 0.001% at 5Hz, showing that we are able to recover the chirp mass to a very high degree of accuracy in this part of the analysis. In the bottom right-hand figure we plot the relative error in chirp mass against the observed chirp mass, with the same colour coding. Here we clearly see that decreasing the cut-off frequency improves the chirp mass measurement, and also that the measured chirp mass error is related to the distance to the source. 
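For reference, the observed chirp mass entering these comparisons is $\mathcal{M} = (m_1 m_2)^{3/5}/(m_1+m_2)^{1/5}$ scaled by $(1+z)$ (see the footnote on observed masses); a minimal sketch of the relative-error bookkeeping, with component masses, redshift and recovery error chosen by us purely as examples:

```python
# Sketch of the quantities being compared: the redshifted chirp mass and a
# relative recovery error. All numerical inputs are example values only.

def chirp_mass(m1, m2):
    """Chirp mass Mc = (m1*m2)^(3/5) / (m1+m2)^(1/5), in the same units as m1, m2."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def relative_error(observed, injected):
    return (observed - injected) / injected

mc_inj = chirp_mass(1.4, 1.4) * (1.0 + 0.5)   # redshifted chirp mass at z = 0.5
mc_obs = mc_inj * 1.0001                      # an assumed 0.01% recovery error
print(f"injected Mc_z = {mc_inj:.4f} Msun, "
      f"relative error = {100 * relative_error(mc_obs, mc_inj):.3f}%")
```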
![image](Figure6a){width="49.00000%"} ![image](Figure6b){width="49.00000%"}\ ![image](Figure6c){width="49.00000%"} ![image](Figure6d){width="49.00000%"} Search $\Delta t_c$ (ms) Relative error $M$ Relative error $\mathcal{M}$ ---------- -------------------- ---------------------------------------------------- --------------------------------------------------- 2 (25Hz) -1.694 $\pm$ 3.314 -3.301 $\times 10^{-3} \pm$ 2.353 $\times 10^{-3}$ 0.115 $\times 10^{-3} \pm$ 0.369 $\times 10^{-3}$ 3 (10Hz) -1.541 $\pm$ 5.307 -3.213 $\times 10^{-3} \pm$ 2.550 $\times 10^{-3}$ 0.044 $\times 10^{-3} \pm$ 0.286 $\times 10^{-3}$ 4 (5Hz) -1.572 $\pm$ 5.856 -2.674 $\times 10^{-3} \pm$ 2.665 $\times 10^{-3}$ 0.012 $\times 10^{-3} \pm$ 0.289 $\times 10^{-3}$ Future Development {#sec:futuredev} ================== Future MDSCs should aim to address increasingly complex binary waveform models, improved detector noise models, the simulation of EM counterpart scenarios, and the inclusion of other third-generation detectors. There are still other GW sources that we can consider, such as continuous waves [@apj.785.119.14] from rapidly rotating galactic neutron stars [@prd.88.102002.13; @prd.90.062010.14]. The inclusion of one or more SGWBs of cosmological origin [@jcap.6.27.12], such as phase transitions [@prd.77.124015.08; @prd.79.083519.09; @jcap.12.024.09], cosmic (super)strings [@prd.71.063510.05; @prl.98.111101.07; @prd.81.104028.10; @prd.85.066001.12; @prl.112.131101.13] or pre-Big-Bang models [@app.1.317.93; @prd.55.3330.97; @prd.82.083518.10], would allow us to test whether we can distinguish between cosmological and astrophysical backgrounds [@prd.85.104024.12]. 
The waveform models that we choose to inject should also include additional features such as spin [@prd.72.084027.05; @prd.67.104025.06; @prd.74.029902.06] and tidal effects [@prd.77.021502.08; @prd.79.124033.09; @prd.81.123016.10; @prd.84.104017.11] for BNS and NSBH, and spin and precession [@prl.113.151101.14; @prd.91.024043.15] for BBH and IMBH, and a larger range of burst signal models should be used. The inspiral waveforms should be generated down to even lower frequencies, such as 3Hz or 1Hz, to investigate whether it is possible to push the low frequency cut-off used for the matched filtering past the 5Hz used here. At such frequencies the low mass waveforms will be of the order of hours to days long. These extensions would allow for investigations into areas such as rate estimation (both the SFR and the coalescence rate for various sources), measurement of the mass functions for NSBH and BBH, testing of general relativity, cosmological measurements, the investigation of different cosmological and astrophysical models, and tests of alternative theories of gravity. When generating the data we should also include the two LIGO detectors, using the LIGO 3 Strawman PSD [@StrawmanRed]. A smaller second data set should also be constructed using re-coloured aLIGO noise (which we would expect to have at that point) into which we inject coherent signals. This will allow us to study the behaviour of the null stream in the non-Gaussian case. It is impossible to obtain a redshift measurement directly from the detection of a GW, but it is possible to infer one through the use of an electromagnetic counterpart such as an sGRB [@apj.725.496.10], from an existing galaxy catalogue [@prd.86.043011.12], or by consideration of either the neutron star mass function [@prd.86.023502.12] or the EOS [@prl.108.091101.12]. None of these methods has yet been applied within an MDSC, but some of them, such as using sGRBs, the neutron star mass function, or the EOS, could easily be included in a future MDSC. 
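The quoted signal durations can be estimated from the leading-order (Newtonian) chirp time, $\tau = \frac{5}{256}(\pi f)^{-8/3}\left(G\mathcal{M}/c^3\right)^{-5/3}$; a sketch for a canonical $1.4+1.4\,M_\odot$ system (post-Newtonian corrections and redshifting of the masses are neglected here):

```python
import math

# Leading-order (Newtonian) time to coalescence from a starting GW frequency
# f_low: tau = (5/256) (pi f)^(-8/3) (G Mc / c^3)^(-5/3). Higher-order
# post-Newtonian corrections are neglected in this sketch.

G_MSUN_OVER_C3 = 4.925e-6  # G * M_sun / c^3 in seconds

def chirp_time(mchirp_msun, f_low):
    x = mchirp_msun * G_MSUN_OVER_C3
    return (5.0 / 256.0) * (math.pi * f_low) ** (-8.0 / 3.0) * x ** (-5.0 / 3.0)

mc = (1.4 * 1.4) ** 0.6 / 2.8 ** 0.2          # ~1.22 Msun for a 1.4+1.4 BNS
for f in (25.0, 10.0, 5.0):
    print(f"f_low = {f:4.0f} Hz: ~{chirp_time(mc, f) / 3600.0:.2f} h in band")
```

Since $\tau \propto f_\mathrm{low}^{-8/3}$, dropping the cut-off from 25Hz to 5Hz lengthens the in-band waveform by a factor of $5^{8/3} \approx 73$, consistent with the hours-long signals discussed in the text.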
Conclusion {#sec:conclusion} ========== In this investigation we have described the generation and analysis of the data for the second Einstein Telescope mock data and science challenge, with a focus on binary neutron stars. The data consisted of Gaussian noise, fitted to the expected ET-D sensitivity noise curve, into which a large number of GW signals from multiple sources were injected. The analysis was conducted with a new matched filtering pipeline that is able to analyse signals down to lower frequencies than has been considered before. Our motivation for this MDSC is to continue to explore the science potential of ET, increasing the complexity of the data analysis and of the science conducted with it. The analysis used in this investigation has far surpassed that carried out in the first MDSC. One of the main goals of this investigation was to show that it is possible to analyse gravitational-wave inspiral signals down to a frequency of 5Hz. Starting at this frequency, the lowest mass BNS systems considered here take over two hours to coalesce. We have shown that, while very computationally expensive, it is still possible to analyse data down to this frequency. Considering that in the few years since the first MDSC we have been able to push the limit of the analysis comfortably from 25Hz to 10Hz and have proven that 5Hz is achievable, we expect that in the next decade, when the Einstein Telescope is hoped to be built, Moore’s law should make it possible to push GW analysis to even lower frequency limits. In the analysis at lower frequencies we have also shown the improvement obtained in both detection efficiency and our ability to recover the injection parameters. By searching for signals from lower frequencies we are able to build up more SNR, which allows many more signals to become detectable as well as making the already detectable signals louder. 
The longer template waveforms also allow us to better match the GW signals, giving better accuracy in the measurement of the parameters. It has also been shown that analysing data at lower frequencies results in a higher rate of background detections with larger SNRs. Here we have simply used an SNR threshold equal to the 100th (10th) loudest background event from the analysis of the noise only data set to reduce the number of background events, but this has the drawback of also reducing the number of true detections that are made. In the future it is hoped that a method will be developed that implements the null stream to reject background events, thus lowering the false alarm probability and allowing a smaller SNR threshold to be used. We have also shown the difference in detection efficiencies obtained when using lower cut-off frequencies. From these, a proof-of-concept method has been demonstrated in which we attempt to estimate the number of injected signals as a function of redshift. This is a very basic method that makes several assumptions, chiefly that we know the true redshifts of the detected signals. More work is required to develop this method further so that it can account for different parameters as well as a distribution on the redshift of the detections. Finally, we have shown that our ability to measure the mass parameters has improved by an order of magnitude over that of the first MDSC in the case of BNS, as a result of using a 5Hz lower frequency cut-off instead of 25Hz. We are able to recover the observed total mass to within 0.5% and the observed chirp mass to within 0.05%. This work will now continue with an investigation of the parameter estimation for a small subset of the BNS detections. 
Acknowledgements ================ We thank Bruce Allen and the Albert Einstein Institute in Hannover, supported by the Max-Planck-Gesellschaft, for use of the Atlas high-performance computing cluster in the data generation and analysis, and Carsten Aulbert for technical advice and assistance. DM acknowledges PhD financial support from the Observatoire de la Côte d’Azur and the PACA region, and would also like to thank Cardiff University for funding under which part of this work was conducted. CH is supported by NSF grant PHY-1454389. BSS acknowledges the support of the LIGO Visitor Program through the National Science Foundation award PHY-0757058, the Max-Planck Institute of Gravitational Physics, Potsdam, Germany, and STFC grant ST/J000345/1. [^1]: The observed mass parameters, $M_z$ and $\mathcal{M}_z$, differ from the intrinsic parameters, $M$ and $\mathcal{M}$, by a factor of (1+$z$), due to the redshifting of the GW frequencies from the expansion of the Universe, which is equivalent to observing heavier masses. These are denoted with a subscript $z$, such that $\mathcal{M}_z \equiv \mathcal{M}(1+z)$. [^2]: The original data sets as presented in [@prd.89.084046.14] consisted of a year’s worth of data with an average time between injections of $\lambda$ = 200 s, provided from Table 3 in [@aap.574.A58.15] using the BZ model. In order to reduce the computational cost of running the analysis with a very low cut-off frequency we have reduced the amount of data by a factor of 10 while increasing the coalescence rate by the same factor. This means that the same injections are present within both sets, while the time of arrival between successive events has decreased, resulting in more overlap of the waveforms. It has already been shown in [@prd.86.122001.12] that this overlap does not affect the ability of a matched filtering algorithm to detect overlapping signals. 
[^3]: Due to the reduced amount of data that has been analysed at 5Hz, we have selected the SNR of the 10th loudest event from the noise only analysis run. [^4]: We again make the assumption that we know the true redshift of the detection. In reality we would not know the detection’s true redshift, though it is possible to derive estimates using the various methods detailed in Section \[sec:futuredev\]. [^5]: We note that in the case where we know the redshift of the source exactly, the relative error in the observed masses, $M_z$ and $\mathcal{M}_z$, is mathematically identical to the relative error in the intrinsic masses, $M$ and $\mathcal{M}$.
gr-qc/9802042\ Mod. Phys. Lett. A 13 (1998) 1419-1425\ [Konstantin G. Zloshchastiev]{}\ Department of Theoretical Physics, Dnepropetrovsk State University,\ Nauchniy lane 13, Dnepropetrovsk 320625, Ukraine.[^1]\ PACS numbers: 04.40.$-$b, 04.70.$-$s, 11.27.+d\ Keywords: general relativity, perfect fluid, black hole, thin shell\ Since the classical works [@dau; @isr] there has been a lot of progress in the investigation of thin shells in general relativity (see Refs. [@sat; @mae; @bkt]). Shells have been found to be both simple and instructive models of several dynamical and cosmological objects and processes. The formalism of thin-shell theory has been widely described in the literature (see Ref. [@mtw] for details); therefore, we shall only point out its most important features. In this letter we study a class of spherically symmetric shells of perfect fluids, paying particular attention to their behaviour near the event horizon. Consider a thin matter layer whose surface stress-energy tensor is that of a perfect fluid in the general case (we use the units $\gamma=c=1$, where $\gamma$ is the gravitational constant) $$S_{ab}=\sigma u_a u_b + p (u_a u_b +~ ^{(3)}\!g_{ab}),$$ where $\sigma$ and $p$ are the surface mass-energy density and pressure respectively, [**u**]{} is the time-like unit tangent vector, and $~^{(3)}g_{ab}$ is the three-metric on the shell. We suppose the metrics of the space-times outside ($\Sigma^+$) and inside ($\Sigma^-$) a spherically symmetric shell to be of the form $$\drm s_\pm^2 = -\left[1+\Phi^\pm(r)\right] \drm t^2_\pm + \left[1+\Phi^\pm(r)\right]^{-1} \drm r^2 + r^2 \drm\Omega^2, \label{eq1}$$ where $\drm\Omega^2$ is the metric of the unit two-sphere. It is possible to show that if one introduces the proper time $\tau$, the 3-metric of the shell can be written in the form $$^{(3)}\drm s^2 = -\drm\tau^2 + R^2 \drm\Omega^2, \label{eq2}$$ where $R(\tau)$ is the proper radius of the shell. 
Define a simple jump of the second fundamental forms across the shell as $[K^a_b]=K^{a+}_b - K^{a-}_b$, where $$K^{a\pm}_b = \lim\limits_{n\to \pm0} \frac{1}{2} ~^{(3)}\!g^{a c} \frac{\partial}{\partial n} ~^{(3)}\!g_{c b},$$ where $n$ is a proper distance (time-like or space-like in the general case; space-like in ours) in the normal direction. The Einstein equations on the shell then read [@bkt; @vis] $$\sigma = -\frac{1}{4\pi}\left[K^\theta_\theta\right], \label{eq3}$$ $$p = \frac{1}{8\pi}\left(\left[K^\tau_\tau\right] + \left[K^\theta_\theta\right]\right). \label{eq4}$$ Besides, an integrability condition of the Einstein equations is the energy conservation law for the shell matter. In terms of the proper time it can be written as $$\frac{\drm}{\drm\tau}\left(\sigma\, ^{(3)}\!g\right) + p\, \frac{\drm}{\drm\tau}\left(^{(3)}\!g\right) + \,^{(3)}\!g\, \left[ T \right] = 0, \label{eq5}$$ where $[ T ] = (T^\tau_n)^+ - (T^\tau_n)^-$, $T^\tau_n = T_\alpha^\beta u^\alpha n_\beta$ is the projection of the stress-energy tensors in the $\Sigma^\pm$ space-times on the tangent and normal vectors, and $^{(3)}\!g=\sqrt{-\det{(^{(3)}\!g_{ab})}} = R^2 \sin{\theta}$. In this equation the first term corresponds to a change in the shell’s internal energy, the second to the work done by the internal forces of the shell, and the third to the flux of energy across the shell. For clarity we study below the class of black shells with the simplest event horizons; therefore, we suppose the space-times (\[eq1\]) to be Schwarzschild (thereby the special case of flat Minkowski space-time is also included). We assume $$\Phi^\pm = -\frac{2 M_\pm}{r}, \label{eq6}$$ where $M_\pm$ is the total energy of the configuration with respect to static distant observers in the space-times $\Sigma^+$ and $\Sigma^-$ respectively. This corresponds to a body of mass $M_-$ surrounded by a shell. A computation of the extrinsic curvatures then yields $$R\, K^{\theta\pm}_\theta = \epsilon_\pm \sqrt{1+\dot R^2 - \frac{2 M_\pm}{R}}, \label{eq7}$$ $$K^{\tau\pm}_\tau = \frac{\drm}{\drm R}\left(R\, K^{\theta\pm}_\theta\right), \label{eq8}$$ and we can write Eq. 
(\[eq3\]) in the form \_+ - \_- = - , \[eq9\] where m = 4 R\^2 \[eq10\] is interpreted as the effective rest mass, $\dot R=\drm R/\drm\tau$ is a proper velocity of a shell, $\epsilon_\pm = \Sign{\sqrt{1+\dot R^2 - 2 M_\pm/R }}$. It is well-known that $\epsilon = +1$ if $R$ increases in the outward normal direction to the shell (e.g., it takes place in a flat space-time), and $\epsilon = -1$ if $R$ decreases (semiclosed world). Thus, only under the additional choice $\epsilon_+ = \epsilon_-=1$ we have an ordinary (black hole type) shell [@bkt; @bkkt; @gk]. It seems to be the most physical case. For definiteness in the letter we deal only with such shells, the rest of the cases can easily be considered by analogy. As for the conservation law (\[eq5\]), one can obtain that $[ T ]$ is identically zero for the Schwarzschild space-times (\[eq6\]). Further, we assume the equation of state of the shell matter to be that of some perfect fluid, viz., p=. \[eq11\] This equation includes the most studied cases: the dust shell $p=0$ [@isr; @bkkt; @hb], radiation fluid shell $\sigma - 2 p=0$ [@vis], and bubble $\sigma + p =0$ [@bkt; @cl; @lm]. If $\eta >0$, it can be interpreted as a square component of the vector of a speed of sound in the shell. Then for a spatially two-dimensional homogeneous fluid the square speed of sound is $2 \eta$. From the physical viewpoint some $\eta$ appear to be inadmissible. For instance, if a fluid is required to satisfy the dominant energy condition, $\sigma \geq |p|$, one obtains the constraint || 1. \[eq12\] If a fluid is required to satisfy the causality condition, we get the constraint 1/2, \[eq13\] where one takes into account spatial two-dimensionality of a fluid. Nevertheless, the aim of this letter is to study the general case of arbitrary $\eta$. So, solving the differential equation (\[eq5\]) with respect to $\sigma$, we obtain = R\^[-2 (+ 1)]{}, \[eq14\] where $C$ is the integration constant determined by the specific shell’s matter. 
The value of $C$ is closely related to the value of the surface mass density (or pressure) at fixed $R$. We consider ordinary shells, not wormholes [@bkt; @vis]; therefore $\sigma \geq 0$ is required. It should also be noted that from Eq. (\[eq9\]) at positive densities $\sigma$ it follows that $M_+ > M_-$ for any $R$, $\dot R$ and $m(R) \not = 0$; otherwise, matching of the space-times is impossible. Equations (\[eq9\]), (\[eq10\]) and (\[eq14\]), together with the choice of the signs $\epsilon_\pm$, completely determine the motion of the perfect fluid shell. We thus have all the equations necessary to consider the behaviour of the shells near the horizon. First of all, consider what an external observer will see when the shell collapses. The Lichnerowicz-Darmois-Israel formalism gives the relation between the time of a static observer in the external space-time $\Sigma^+$ and the proper time of the shell. Indeed, taking (\[eq6\]) into account, the normalization of the four-velocity yields $$-\left( 1 - \frac{2 M_+}{R} \right) \dot t_+^2 + \frac{\dot R^2}{1 - \frac{2 M_+}{R}} = -1,$$ and thus the shell’s radial velocity with respect to a static external observer is determined by the expression $$\left(\frac{\drm R}{\drm t_+}\right)^2 = \frac{\left(1 - \frac{2 M_+}{R}\right)^2}{1 - \frac{2 M_+}{R} + \dot R^2}\; \dot R^2. \label{eq15}$$ It can easily be seen that this velocity vanishes asymptotically at the horizon $R=2 M_+$; therefore, to a distant external observer the black shell appears static. Thus, we can also consider the black shell as a model of a black hole. What are the parameters of the objects obtained? What is the difference between black shells made of different fluids? These are the problems we must resolve. So, let us consider the fluid shell passing through the external event horizon $R=2 M_+$. Define the instantaneous radial velocity of the shell at the moment it reaches the horizon as $$v = \dot R |_{R=2 M_+}.$$ We assume $v \not = 0$, otherwise $K^\tau_\tau$ becomes infinite (\[eq8\]). 
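A quick numeric check of the horizon behaviour, assuming Eq. (\[eq15\]) takes the form $(\drm R/\drm t_+)^2 = (1-2M_+/R)^2\,\dot R^2/(1-2M_+/R+\dot R^2)$ (our reconstruction from the four-velocity normalization above):

```python
# Numeric check that the coordinate velocity of the shell, as seen by a
# static external observer, vanishes at the horizon R = 2 M_+. The formula
# below is our reconstruction of Eq. (15); units G = c = 1.

def coord_velocity_sq(R, M_plus, Rdot):
    f = 1.0 - 2.0 * M_plus / R
    return f * f * Rdot * Rdot / (f + Rdot * Rdot)

M_plus, Rdot = 1.0, -0.5               # a collapsing shell with fixed proper velocity
for R in (10.0, 4.0, 2.5, 2.01, 2.0001):
    print(f"R = {R:7.4f}: (dR/dt+)^2 = {coord_velocity_sq(R, M_plus, Rdot):.3e}")
```

The printed values decrease monotonically toward zero as $R \to 2M_+$, illustrating why the collapsing shell appears frozen to the external observer.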
Then the equation of motion (\[eq9\]) yields, in the general case, the equation with respect to the external mass $M_+$ of a black shell $$2 M_+ \sqrt{1+ v^2 - \frac{M_-}{M_+}} = m\, \Big|_{R=2 M_+}. \label{eq16}$$ For our black shells, taking into account Eqs. (\[eq10\]) and (\[eq14\]), we have $$M_- = M_+ (1+v^2) - C\, (2 M_+)^{-(4\eta+1)}. \label{eq17}$$ We obtain an algebraic equation with respect to $M_+$. Thus the dependence of the external (observable) total mass-energy on the internal one is nonlinear, and the total mass-energy $M_+$ is not a sum $M_- + f(C,\eta)$, as it would be in the non-relativistic case. This simply reflects the known fact that within the framework of general relativity the energy superposition principle is, as a rule, not valid. It can also readily be seen that when the black shell is formed, the kinetic energy is converted into the observable total mass as well. Let us analyse the obtained equation for several $\eta$. This comes easily because we can consider the inverse function $ M_- (C, \eta, v; M_+)$. Naturally, we divide all shells into five classes with respect to $\eta$. These cases are illustrated in Fig. \[fig1\]. The physical sector appears to be defined by the non-negative masses $M_+$, $M_-$ (though a theory of quantum gravity could give rise to space-times of negative mass, see Ref. [@man]); hence, in the figure the physical sector lies to the right of the $M_+$ axis. \(a) $\eta > -1/4$. This is the most physically admissible class, because it includes the class $\eta > 0$. It is easy to see that $\lim_{M_+ \to 0} M_- = -\infty$ and $\lim_{M_+ \to +\infty} M_- = (1+ v^2) M_+$, i.e., we have the behaviour qualitatively described by curve [*a*]{}, see Fig. \[fig1\]. It should be pointed out that $M_+ \not = 0$ at $M_- = 0$, i.e., the shells have a proper gravitational mass and can exist in the absence of the “stuffing” $M_-$ (the so-called hollow black shells). 
For instance, the proper Schwarzschild radii of the dust and radiation fluid shells are respectively $$2 M_+= \sqrt{ \frac{2 C}{1+v^2}},~~ 2 M_+= \sqrt[4]{ \frac{2 C}{1+v^2}}.$$ \(b) $\eta = -1/4$. In this case the function (\[eq17\]) is simply the line ([*b*]{}, Fig.\[fig1\]) $M_- = M_+ (1+v^2) - C$. The proper gravitational mass of this black shell is the observable mass of the corresponding hollow black shell, $M_+ = C/(1+v^2)$. \(c) $-1/2 < \eta < -1/4$. In this case the curve $M_- (M_+)$ has the single minimum point $$\label{eq18} M_{-\,{\rm min}} = -2 (2\eta+1)\, C^{\frac{1}{4\eta+2}} \left( \frac{1+v^2}{-2(4\eta+1)} \right)^{\frac{4\eta+1}{4\eta+2}},$$ at $$\label{eq19} M_{+\,{\rm min}} = \frac{1}{2} \left( \frac{-2 C (4\eta+1)}{1+v^2} \right)^{\frac{1}{4\eta+2}}.$$ One can see from Fig.\[fig1\] (the curve [*c*]{}) that $M_- = 0$ at the two values of $M_+$ $$\label{eq20} M_+ = 0, \qquad M_+ = \frac{1}{2} \left( \frac{2 C}{1+v^2} \right)^{\frac{1}{4\eta+2}}.$$ Thus, for this class of black shells we have an ambiguity in determining the mass. Of course, this ambiguity takes place only for hollow black shells, because for stuffed black shells it lies in the unphysical sectors $M_- < 0$ or $M_+ <0$. ([*d*]{}) $\eta = -1/2$. In this case we also have a line ([*d*]{}, Fig.\[fig1\]), $M_- = M_+ ( 1+v^2 - 2 C)$. It is of interest to note that the proper gravitational mass of such black shells is zero, i.e. they can be observable only at $M_- \not = 0$. In other words, if the space-time inside such shells is flat, then the external space-time will also be flat. This can easily be explained by the fact that matter with the equation of state $\sigma + 2 p =0$ has to be a two-dimensional analog of the three-dimensional global texture $^{(3)}\!\varepsilon + 3 ~^{(3)}\!p =0$, which is a topological defect having zero gravitational mass [@dad]. ([*e*]{}) $\eta < -1/2$. This case, including the (black) bubble at $\eta = - 1$, is a mirror reflection of the case ([*c*]{}). The curve $M_- (M_+)$ has the single maximum point given by the expressions (\[eq18\]), (\[eq19\]); one need only replace the subscript “min” by “max”.
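The extremum in case (c) can be cross-checked numerically. A sketch under two assumptions spelled out in the comments: the inverse-function form $M_-(M_+) = (1+v^2)M_+ - C(2M_+)^{-4\eta-1}$, which follows from squaring Eq. (17) and reproduces the straight lines of cases (b) and (d), and the candidate closed forms for the minimum point; the sample values of $C$, $\eta$, $v$ are hypothetical:

```python
# Scan M_-(M_+) on a fine grid for an eta in (-1/2, -1/4) and compare the
# numerical minimum with candidate closed-form expressions (assumptions).
C, eta, v = 1.0, -0.375, 0.0

def M_minus(M_plus):
    # assumed inverse-function form obtained by squaring eq. (17)
    return (1.0 + v**2) * M_plus - C * (2.0 * M_plus) ** (-4.0 * eta - 1.0)

grid = [1e-3 * k for k in range(1, 2001)]   # M_+ in (0, 2]
M_plus_num = min(grid, key=M_minus)

# candidate closed forms for the location and depth of the minimum
M_plus_min = 0.5 * (-2.0 * C * (4.0 * eta + 1.0)
                    / (1.0 + v**2)) ** (1.0 / (4.0 * eta + 2.0))
M_minus_min = (-2.0 * (2.0 * eta + 1.0)
               * C ** (1.0 / (4.0 * eta + 2.0))
               * ((1.0 + v**2) / (-2.0 * (4.0 * eta + 1.0)))
               ** ((4.0 * eta + 1.0) / (4.0 * eta + 2.0)))

print(M_plus_num, M_plus_min, M_minus(M_plus_num), M_minus_min)
```

For $\eta=-3/8$, $C=1$, $v=0$ the grid minimum and the closed forms agree at $M_+=1/2$, $M_-=-1/2$.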
Here we also have an ambiguity in determining the mass (except at the maximum point (\[eq18\]), (\[eq19\]) and at the $M_- = 0$ point (\[eq20\])), but now it lies in the physical sector $M_- > 0$, $M_+ > 0$ (see curve [*e*]{}). Therefore, these black shells can additionally be divided into two subtypes with respect to the two-valued mass. Finally we calculate the proper total mass of the black bubble. From Eq. (\[eq17\]) one obtains $$M_+ = \frac{1}{2} \sqrt{\frac{1+v^2}{2 C}},$$ where the second root $M_+ = M_- = 0$ was rejected as trivial. Thus, in this letter the classification of some barotropic perfect fluid black thin shells was considered by means of the standard Lichnerowicz-Darmois-Israel formalism, with the nonlinear behaviour of their total mass being of special interest. Appendix: Frequently Asked Question {#appendix-frequently-asked-question .unnumbered} =================================== After this paper was published, several people suggested to me that Eq. (16) was incorrectly derived from Eq. (9). At first glance this seems to be true. Indeed, when we blindly substitute $R=2 M_+$ into (9) we must obtain $$2 M_+ \left( \sqrt{1+ v^2 - \frac{M_-}{M_+}} - v \right) = m |_{R=2 M_+}$$ instead of (16). Let us show where the mistake appears. Eq. (9) follows from the equation $$(K_{\theta\theta})^+ - (K_{\theta\theta})^- = - m,$$ which, upon calculating the extrinsic curvature $K_{\theta\theta}$, can be written as $$\label{eq22} (n^r)^+ - (n^r)^- = - m/R,$$ where $(n^r)^\pm$ are the radial components of the normal vector. Each of them is obtained from the equations [@isr]: $$\begin{aligned} && g_{00} (u^t)^2 - g_{00}^{-1} (u^r)^2 = -1, \\ && u^t n_t + u^r n_r = 0, \\ && g_{00} (n^t)^2 - g_{00}^{-1} (n^r)^2 = 1, \end{aligned}$$ where $u^r=\dot R$, and $g_{00} =-(1-2M/R)$ for our case. Thus we have three equations for the three unknowns $u^t$, $n^t$, $n^r$. We find that $$\label{eq23} n^r = \sqrt{\dot R^2 + 1 - \frac{2 M}{R}},$$ and Eq. (9) is evident. However, at $R=2 M$ we have $g_{00}=0$, whereas the last expression was derived under the assumption $R \not= 2 M$.
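Away from the horizon, the solution of this three-equation system can be verified numerically. A sketch with hypothetical sample values ($M=1$, $R=4$, $\dot R=0.3$), assuming the Schwarzschild relation $g_{rr} = -g_{00}^{-1}$ used in the derivation above:

```python
import math

M, R, Rdot = 1.0, 4.0, 0.3          # sample point away from the horizon
g00 = -(1.0 - 2.0 * M / R)          # = -0.5 here
grr = -1.0 / g00                    # Schwarzschild: g_rr = -1/g_00

u_r = Rdot
u_t = math.sqrt(Rdot**2 - g00) / (-g00)        # from the u-normalization
n_r = math.sqrt(Rdot**2 + 1.0 - 2.0 * M / R)   # candidate solution (eq. 23)
n_t = u_r * n_r / (g00**2 * u_t)               # from the orthogonality condition

eq1 = g00 * u_t**2 - u_r**2 / g00              # should equal -1
eq2 = u_t * (g00 * n_t) + u_r * (grr * n_r)    # should equal 0
eq3 = g00 * n_t**2 - n_r**2 / g00              # should equal +1
print(eq1, eq2, eq3)
```

All three conditions are satisfied to machine precision at any $R > 2M$; the point of the appendix is that this algebra breaks down exactly at $g_{00}=0$.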
In other words, $g_{00}=0$ is a singular point of the system above and should be considered separately. We obtain $$\frac{(n^{r+})^2|_{R=2 M_+}}{0} = 1,$$ therefore $(n^r)^+ = 0$ at the horizon point $R=2 M_+$. This does not seem extraordinary: the surface of a black hole is known to be null, and the vectors [**n**]{} and [**u**]{} become degenerate. Thus, the first term in (\[eq22\]) vanishes whereas the second is obtained from (\[eq23\]), i.e. Eq. (16) is true. Therefore, the physical picture seems to be as follows. The hypersurfaces $R=\mbox{const}$ have to be timelike when $R > 2 M_+$, null at $R = 2 M_+$, and spacelike at $R < 2 M_+$. Therefore, if we wish to preserve the definition of the shell as the surface $R=R (s)$ (where $s$ is some evolution parameter) then we cannot rigidly fix whether a (two-sided) shell has timelike, spacelike or null surfaces for all $R$ (see also V. de la Cruz and W. Israel, Nuov. Cim. A 51 (1967) 744). Only then do we obtain a model of the true collapse of standard 3D matter, in which a continuous set of timelike matter layers forms a black hole having a null surface. [99]{} G. Dautcourt. (Freeman, San Francisco). P. Hájíček and J. Bičák; J. L. Friedman, J. Louko, and S. N. Winters-Hilt. S. Coleman and F. De Luccia. P. Laguna-Castillo and R. A. Matzner; B. Jensen. R. Mann. D. Notzold; N. Dadhich, Preprint No. IUCAA-60/97 (1997). [^1]: E-mail address: zlosh@usa.net
**Abstract** One school of thought on the black hole information-loss problem says that information encoded on infalling matter comes out subtly encoded in the Hawking radiation, which is only approximately thermal. The main problem faced by this proposal is that it produces a duplication of information: for a large black hole, the curvature at the horizon is small, so Alice should be able to cross the horizon unharmed. In addition to the original Alice, there will be a copy, whom I will call Abby, encoded in the Hawking radiation. This sort of duplication of information causes the evolution of pure states into mixed states, which is precisely the problem that was to be avoided. Black hole complementarity is a recent proposal by Susskind [*et al.*]{}$^{\complementarity}$ to avoid this difficulty. It states that the duplication of information is visible only to an unphysical superobserver. Attempts to bring the two copies together will either fail completely due to the causal structure of the black hole or will require Planck scale energies. However, Frolov and Novikov$^{\wormhole}$ showed that traversable wormholes could be used to significantly change the causal structure of a black hole with only small perturbations to the Schwarzschild metric. By letting one mouth cross the apparent event horizon while the other remains outside, we create timelike paths that cross the apparent horizon, pass through the wormhole, and then escape to infinity. If such a path does not pass very close to the singularity, at no time are large energies involved. This causal structure allows the violation of black hole complementarity, and in fact causes problems for any scheme that has infalling information duplicated in the Hawking radiation. Suppose Alice follows one of these time-like paths.
Bob, who has waited outside the black hole, can then meet Alice as she leaves the wormhole mouth and wait with her until the black hole has completely evaporated. By studying the Hawking radiation, Bob and Alice can reconstruct Abby and allow her to meet her clone, in violation of complementarity. There are only two basic ways to avoid this conclusion while still permitting traversable wormholes. Either the wormhole pinches off, preventing Alice from escaping from the black hole; or the introduction or use of the wormhole distorts the Hawking radiation, preventing Abby from appearing. In the case where the black hole mass and radius are much larger than the mass and radius of a wormhole mouth, neither of these is possible. In order to prevent Alice from escaping back across the event horizon, the wormhole must pinch off immediately upon crossing the horizon. If the wormhole is ever traversable with a mouth on each side of the horizon, Alice can escape the black hole and confront Abby. Classical effects will not cause it to pinch off, but perhaps some quantum mechanical effect enters that closes the wormhole. However, the curvature due to an arbitrarily large black hole is arbitrarily small at the horizon. If one mouth of the wormhole is just inside the horizon and the other is just outside, a region containing both wormhole mouths is only infinitesimally different from flat Minkowski space with a wormhole. If the wormhole can exist at all, it should not pinch off at the black hole’s horizon. It may pinch off at a later time, but it will already be too late to preserve complementarity. It is similarly impossible for the wormhole to distort the Hawking radiation sufficiently to completely destroy Abby. Although the causal structure of black hole plus wormhole will be significantly different from that of just a black hole, the metric will be very similar in the case of a small wormhole mouth.
The Hawking radiation should therefore change very little when a wormhole mouth is dropped into it. It cannot, for example, be generated just outside the true horizon, which in the presence of the wormhole will be drastically different from the apparent horizon, for then it could only escape through the wormhole; this would result in a drastic change in the Hawking radiation as seen at infinity. Also, sending an object through the wormhole cannot affect the Hawking radiation at all, since the outer surface of the apparent horizon, where the Hawking radiation is generated, is spatially separated from the mouth inside the horizon. The radiation can therefore only be perturbed slightly, but in order to preserve black hole complementarity, a great deal of information needs to be erased. While Alice herself need not contain much information, suppose Jack and Jill cross the event horizon nearby. We can assume Alice, Jack, and Jill are in independent, uncorrelated states, so the amount of information available from all three is exactly the sum of the information available from the three as individuals. The wormhole has no way of telling how many or which of the three will pass through it, so it needs to destroy not only Abby, but also Jack and Jill’s copies. Instead of only three people, we could have chosen an arbitrarily large number, so long as they can fit in the past light cone of the wormhole mouth. However, the wormhole is only slightly perturbing the Hawking radiation, so it should not be able to erase an arbitrarily large amount of information. Even if the wormhole is somehow able to erase all that information, a new problem arises, or rather an old one reappears. Alice crosses the horizon, leaving no Abby behind due to the influence of the wormhole. Suppose, though, that Alice changes her mind and decides not to enter the wormhole. Neither Abby nor Alice escapes the evaporation of the black hole, and Alice’s information is lost to poor forlorn Bob.
In fact, a similar problem afflicts any theory where information comes out in the Hawking radiation, even without complementarity. Suppose Bob, no doubt wondering what happened to Alice, decides to enter the black hole through the wormhole to look for her. Again, Bob’s passage cannot change the Hawking radiation if neither mouth is at the horizon. Bob never crossed the horizon, so no copy of him can be made. If Bob does not manage to leave back through the wormhole, but instead waits to hit the singularity, his information is lost completely to Cheryl, who remained outside waiting for them both. Bob can have mass much greater than the Planck mass, so we will eventually be left with a small black hole containing a large amount of information. This is not allowed in theories where information comes out in the radiation, but is not a problem for remnants$^{\remnants }$ or for theories where the information is stored near the singularity and comes out at the end. The only remaining resolution that permits black hole complementarity is the non-existence of traversable wormholes. It is not sufficient for it to be impossible to create a wormhole where there was none before – we can consider a universe that started out with traversable wormholes. This choice of initial condition should not affect the resolution of the information-loss problem. Therefore, no theory can simultaneously permit traversable wormholes and black hole complementarity. I would like to thank John Preskill for many helpful conversations. This material is based upon work supported under a National Science Foundation Graduate Research Fellowship and by the U.S. Dept. of Energy under Grant No. DE-FG03-92-ER40701.
--- abstract: 'Two string links are equivalent up to $2n$-moves and link-homotopy if and only if all their Milnor link-homotopy invariants are congruent modulo $n$. Moreover, the set of the equivalence classes forms a finite group generated by elements of order $n$. The classification implies that if two string links are equivalent up to $2n$-moves for every $n>0$, then they are link-homotopic.' address: - 'Institute for Mathematics and Computer Science, Tsuda University, 2-1-1 Tsuda-Machi, Kodaira, Tokyo, 187-8577, Japan' - 'Faculty of Education and Integrated Arts and Sciences, Waseda University, 1-6-1 Nishi-Waseda, Shinjuku-ku, Tokyo, 169-8050, Japan' - 'Faculty of Commerce, Waseda University, 1-6-1 Nishi-Waseda, Shinjuku-ku, Tokyo, 169-8050, Japan' author: - 'Haruko A. Miyazawa' - Kodai Wada - Akira Yasuhara title: | Classification of string links\ up to $2n$-moves and link-homotopy --- [^1] [^2] [^3] Introduction ============ In the 1950s, J. Milnor [@M54; @M57] defined a family of link invariants, known as [*Milnor ${\overline{\mu}}$-invariants*]{}. For an ordered oriented $m$-component link $L$ in the $3$-sphere $S^3$, the [*Milnor number $\mu_{L}(I)$*]{} $(\in\mathbb{Z})$ of $L$ is specified by a finite sequence $I$ of elements in $\{1,\ldots,m\}$. This number is well-defined only up to a certain indeterminacy $\Delta_L(I)$, i.e. the residue class ${\overline{\mu}}_{L}(I)$ of $\mu_{L}(I)$ modulo $\Delta_L(I)$ is a link invariant. The invariant ${\overline{\mu}}_{L}(ij)$ for a sequence $ij$ is just the linking number between the $i$th and $j$th components of $L$. This justifies regarding ${\overline{\mu}}$-invariants as “generalized linking numbers”. In [@HL] N. Habegger and X.-S. Lin defined Milnor numbers for [*string links*]{} and proved that Milnor numbers are well-defined invariants without any indeterminacy. These numbers are called [*Milnor $\mu$-invariants*]{}.
It is remarkable that $\mu$-invariants for non-repeated sequences classify string links up to link-homotopy [@HL] (whereas ${\overline{\mu}}$-invariants are not strong enough to classify links with four or more components up to link-homotopy [@L]). Here the [*link-homotopy*]{}, introduced by Milnor in [@M54], is the equivalence relation on (string) links generated by self-crossing changes and ambient isotopies. In addition to link-homotopy, there are various “geometric” equivalence relations on (string) links that are related to Milnor invariants, e.g. concordance [@S; @Casson], (self) $C_{k}$-equivalence [@H; @FY; @Yagt; @Ytrans; @MY10] and Whitney tower concordance [@CST1; @CST2; @CST3], etc. A [*$2n$-move*]{} is a local move illustrated in Figure \[n-move\], and the [*$2n$-move equivalence*]{} is the equivalence relation generated by $2n$-moves and ambient isotopies. The $2n$-moves were probably first studied by S. Kinoshita in 1957 [@K57]. It is known that several $2n$-move equivalence invariants are derived from polynomial invariants: the Alexander [@K80], Jones, Kauffman and HOMFLYPT polynomials [@P]. Besides polynomial invariants, Fox colorings and Burnside groups give $2n$-move equivalence invariants [@DP02; @DP04]. [Figure \[n-move\]: the $2n$-move, with the crossings labeled $1, 2, \ldots, 2n$.] Both Milnor invariants and $2n$-moves are well studied in Knot Theory. However, to the best of the authors’ knowledge, there are no research articles relating Milnor invariants and $2n$-moves (except for the easily observed fact that the linking numbers modulo $n$ are $2n$-move equivalence invariants). In this paper, we show the following theorem that establishes an unexpected relationship between Milnor link-homotopy invariants and $2n$-moves. \[th-sl\] Let $n$ be a positive integer. Two string links $\sigma$ and $\sigma'$ are $(2n+{\rm lh})$-equivalent if and only if $\mu_{\sigma}(I)\equiv\mu_{\sigma'}(I)\pmod{n}$ for any non-repeated sequence $I$.
Here, the [*$(2n+{\rm lh})$-equivalence*]{} is the equivalence relation generated by $2n$-moves, self-crossing changes and ambient isotopies. Note that “$2n+{\rm lh}$” stands for the combination of $2n$-move equivalence and link-homotopy. In order to prove Theorem \[th-sl\], we give a complete list of representatives for string links up to $(2n+{\rm lh})$-equivalence (Proposition \[prop-rep-2nlh\]). Let $\mathcal{SL}(m)$ denote the set of $m$-component string links. Since the set of link-homotopy classes of $\mathcal{SL}(m)$ forms a group [@HL], it is not hard to see that the set of $(2n+\rm{lh})$-equivalence classes is also a group. Moreover we have the following. \[cor-group\] The set of $(2n+\rm{lh})$-equivalence classes of $\mathcal{SL}(m)$ forms a finite group generated by elements of order $n$, and the order of the group is $n^{s_{m}}$, where $s_{m}=\sum_{r=2}^{m}(r-2)!\binom{m}{r}$. The link-homotopy, concordance and $C_k$-equivalence give group structures on those equivalence classes of $\mathcal{SL}(m)$, respectively [@HL; @H]. The set of link-homotopy classes is a torsion free group of rank $s_m$ (see [@HL Section 3]), and the concordance classes contain elements of order 2. It is still open if the concordance classes contain elements of order $\geq 3$ and if the $C_k$-equivalence classes have torsion elements. In contrast to these facts, Corollary \[cor-group\] implies that, for any integer $n\geq2$, the $(2n+\rm{lh})$-equivalence classes contain elements of order $n$. As a consequence of Theorem \[th-sl\], we obtain a necessary and sufficient condition for which a link in $S^3$ is $(2n+\rm{lh})$-equivalent to the trivial link by means of Milnor [*numbers*]{}. Let $n$ be a positive integer. An $m$-component link $L$ in $S^3$ is $(2n+{\rm lh})$-equivalent to the trivial link if and only if $\mu_L(I)\equiv0\pmod{n}$ for any non-repeated sequence $I$. In [@Fox], R. H. 
Fox introduced the notion of [*congruence classes modulo $(n,q)$*]{} of knots in $S^{3}$ for integers $n>0$ and $q\geq 0$, and asked whether the set of congruence classes of a knot determines the knot type. More precisely, he asked the following question: If two knots are congruent modulo $(n,q)$ for every $n$ and $q$, then are they ambient isotopic? We note that the notion of congruence and the question can be extended to (string) links. It is known in [@Fox; @NS; @N; @La] that the Alexander and Jones polynomials restrict the possible congruence classes. In particular, M. Lackenby proved that if two links are congruent modulo $(n,2)$ for every $n$, then they have the same Jones polynomial [@La Corollary 2.4]. Since the $2n$-move equivalence implies the congruence modulo $(n,2)$, it would be interesting to ask whether the set of $2n$-move equivalence classes of a (string) link determines the link type. Theorem \[th-sl\] implies that if two string links are $2n$-move equivalent for every $n$, then they share all Milnor invariants for non-repeated sequences. Combining this and the classification of string links up to link-homotopy [@HL], we have the following corollary. If two string links are $2n$-move equivalent for every $n$, then they are link-homotopic. In particular, if a $($string$)$ link $L$ is $2n$-move equivalent to the trivial one for every $n$, then $L$ is link-homotopically trivial. Preliminaries {#sec-sl} ============= In this section, we summarize the definitions of string links and their Milnor invariants from [@M57; @F; @HL; @Yagt]. String links and Milnor $\mu$-invariants ---------------------------------------- Let $\mathbb{D}^{2}$ be the unit disk in the plane equipped with $m$ points $x_{1},\ldots,x_{m}$ in its interior, lying in order on the $x$-axis. Let $I_{1},\ldots,I_{m}$ be $m$ copies of $[0,1]$.
An [*$m$-component string link*]{} is the image of a proper embedding $$\bigsqcup_{i=1}^{m}I_{i}\longrightarrow \mathbb{D}^{2}\times [0,1]$$ such that the image of each $I_{i}$ runs from $(x_{i},0)$ to $(x_{i},1)$. Each strand of a string link is oriented upward. The $m$-component string link $\{x_{1},\ldots,x_{m}\}\times [0,1]$ in $\mathbb{D}^{2}\times [0,1]$ is called the [*trivial $m$-component string link*]{}, and is denoted by $\mathbf{1}_{m}$. Given an $m$-component string link $\sigma$, let $G(\sigma)$ denote the fundamental group of the complement $(\mathbb{D}^{2}\times [0,1])\setminus\sigma$ with a base point on the boundary of $\mathbb{D}^{2}\times\{0\}$, and let $G(\sigma)_{q}$ denote the $q$th term of the lower central series of $G(\sigma)$. Let $\alpha_{i}$ and $l_{i}$ be the $i$th meridian and the $i$th longitude of $\sigma$, respectively, illustrated in Figure \[peripheral\]. Abusing notation, we still denote by $\alpha_{i}$ the image of $\alpha_{i}$ in the $q$th nilpotent quotient $G(\sigma)/G(\sigma)_{q}$. We assume that each $l_{i}$ is the [*preferred*]{} longitude, i.e. the zero-framed parallel copy of the $i$th component of $\sigma$. Since $G(\sigma)/G(\sigma)_{q}$ is generated by $\alpha_{1},\ldots,\alpha_{m}$ (see [@C; @S]), the $i$th longitude $l_{i}$ is expressed as a word in $\alpha_{1},\ldots,\alpha_{m}$ for each $i\in\{1,\ldots,m\}$. We denote by $\lambda_{i}$ this word. 
[Figure \[peripheral\]: the $i$th meridian $\alpha_{i}$ (left) and the $i$th preferred longitude $l_{i}$ (right), for a string link with endpoints $x_{1},\ldots,x_{m}$.] Let $\langle\alpha_{1},\ldots,\alpha_{m}\rangle$ denote the free group on $\{\alpha_{1},\ldots,\alpha_{m}\}$, and let $\mathbb{Z}\langle\langle X_{1},\ldots,X_{m}\rangle\rangle$ denote the ring of formal power series in non-commutative variables $X_{1},\ldots,X_{m}$ with integer coefficients. The [*Magnus expansion*]{} is a homomorphism $$E:\langle\alpha_{1},\ldots,\alpha_{m}\rangle \longrightarrow \mathbb{Z}\langle\langle X_{1},\ldots,X_{m}\rangle\rangle$$ defined by, for $1\leq i\leq m$: $$E(\alpha_{i})=1+X_{i}, \ E(\alpha_{i}^{-1})=1-X_{i}+X_{i}^{2}-X_{i}^{3}+\cdots.$$ Let $I=j_{1}j_{2}\ldots j_{k}i$ $(k<q)$ be a sequence of elements in $\{1,\ldots,m\}$. The coefficient of $X_{j_{1}}\cdots X_{j_{k}}$ in the Magnus expansion $E(\lambda_{i})$ is called the [*Milnor $\mu$-invariant*]{} for the sequence $I$ and is denoted by $\mu_{\sigma}(I)$ [@HL]. (We define $\mu_{\sigma}(i)=0$.) The length $|I|$ $(=k+1)$ of $I$ is called the [*length*]{} of $\mu_{\sigma}(I)$. Milnor’s algorithm {#subsec-algorithm} ------------------ To compute $\mu_{\sigma}(I)$ we need to obtain the word $\lambda_{i}$ in $\alpha_{1},\ldots,\alpha_{m}$ concretely, which represents the $i$th longitude $l_{i}$. In [@M57], Milnor introduced an algorithm to give $\lambda_{i}$ by using the Wirtinger presentation of $G(\sigma)$ and a sequence of homomorphisms $\eta_{q}$ as follows. (Although this algorithm was actually given for Milnor invariants of links in $S^{3}$, it can be applied to those of string links.) Given an $m$-component string link $\sigma$, consider its diagram $D_{1}\cup\cdots\cup D_{m}$.
Put labels $a_{i1},a_{i2},\ldots,a_{ir(i)}$ in order on all arcs of the $i$th component $D_{i}$, following its orientation from the initial arc, where $r(i)$ denotes the number of arcs of $D_{i}$ $(i=1,\ldots,m)$. Then the Wirtinger presentation of $G(\sigma)$ has the form $$\left\langle a_{ij}\ (1\leq i\leq m,1\leq j\leq r(i))~\vline~a_{ij+1}^{-1}u_{ij}^{-1}a_{ij}u_{ij}\ (1\leq i\leq m,1\leq j\leq r(i)-1)\right\rangle,$$ where the $u_{ij}$ are generators or inverses of generators which depend on the signs of the crossings. Here we set $$v_{ij}=u_{i1}u_{i2}\ldots u_{ij}.$$ Let $\overline{A}$ denote the free group on the Wirtinger generators $\{a_{ij}\}$, and let $A$ denote the free subgroup generated by $a_{11},a_{21},\ldots,a_{m1}$. A sequence of homomorphisms $\eta_{q}:\overline{A}\rightarrow A$ is defined inductively by $$\begin{aligned} \eta_{1}(a_{ij})=a_{i1},\ \eta_{q+1}(a_{i1})=a_{i1}, \\ \eta_{q+1}(a_{ij+1})=\eta_{q}(v_{ij}^{-1}a_{i1}v_{ij}). \end{aligned}$$ Let $\overline{A}_{q}$ denote the $q$th term of the lower central series of $\overline{A}$, and let $N$ denote the normal subgroup of $\overline{A}$ generated by the Wirtinger relations $\{a_{ij+1}^{-1}u_{ij}^{-1}a_{ij}u_{ij}\}$. Milnor proved that $$\label{eq-Milnor} \eta_{q}(a_{ij})\equiv a_{ij}\pmod{\overline{A}_{q}N}.$$ By the construction of the Wirtinger presentation, $a_{i1}$ represents the $i$th meridian of $\sigma$. Hence, we have the natural homomorphism $$\phi:A\longrightarrow\langle\alpha_{1},\ldots,\alpha_{m}\rangle$$ defined by $\phi(a_{i1})=\alpha_{i}$ $(i=1,\ldots,m)$. Since $v_{ir(i)-1}=u_{i1}\ldots u_{ir(i)-1}$ represents an $i$th longitude, for the preferred longitude $l_{i}$ we may write $l_{i}=a_{i1}^{s}v_{ir(i)-1}$ for some $s\in\mathbb{Z}$. Moreover, we can identify $\phi\circ\eta_{q}(l_{i})$ with $\lambda_{i}$ by Congruence (\[eq-Milnor\]).
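The Magnus-expansion arithmetic underlying this algorithm can be sketched with truncated non-commutative power series, representing each monomial $X_{j_1}\cdots X_{j_k}$ as a tuple of indices. A minimal illustration (not the full Wirtinger computation): the expansion of the commutator $\alpha_1\alpha_2\alpha_1^{-1}\alpha_2^{-1}$ has $X_1X_2$-coefficient $+1$ and $X_2X_1$-coefficient $-1$, which is how a $\mu$-invariant of a clasp-like longitude is read off.

```python
DEG = 3  # truncation degree

def mul(p, q):
    """Product of truncated non-commutative series (dicts monomial -> coeff)."""
    r = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = m1 + m2
            if len(m) <= DEG:
                r[m] = r.get(m, 0) + c1 * c2
    return {m: c for m, c in r.items() if c != 0}

def E_gen(i, sign):
    """Magnus expansion of alpha_i^{+1} or alpha_i^{-1}."""
    if sign == 1:
        return {(): 1, (i,): 1}                       # 1 + X_i
    return {(i,) * k: (-1) ** k for k in range(DEG + 1)}  # 1 - X_i + X_i^2 - ...

def E_word(word):
    """word = list of (generator index, sign) pairs."""
    p = {(): 1}
    for i, s in word:
        p = mul(p, E_gen(i, s))
    return p

# commutator alpha_1 alpha_2 alpha_1^{-1} alpha_2^{-1}
comm = E_word([(1, 1), (2, 1), (1, -1), (2, -1)])
print(comm.get((1, 2), 0), comm.get((2, 1), 0))  # 1, -1
```

Since truncation at degree $3$ exceeds the degree of the monomials inspected, these coefficients are exact.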
Milnor invariants and $2n$-moves ================================ In this section, we discuss the invariance of Milnor invariants under $2n$-moves. Milnor link-homotopy invariants and $2n$-moves ---------------------------------------------- The following theorem reveals how Milnor link-homotopy invariants, i.e. $\mu$-invariants for non-repeated sequences, behave under $2n$-moves. \[th-invariance\] Let $n$ be a positive integer. If two string links $\sigma$ and $\sigma'$ are $(2n+{\rm lh})$-equivalent, then $\mu_{\sigma}(I)\equiv\mu_{\sigma'}(I)\pmod{n}$ for any non-repeated sequence $I$. For $P,Q\in\mathbb{Z}\langle\langle X_{1},\cdots,X_{m}\rangle\rangle$, we use the notation $P\overset{(n)}{\equiv}Q$ if $P-Q$ is contained in the ideal generated by $n$. To show Theorem \[th-invariance\] we need the following lemma. \[lem-invariance\] Let $n\geq2$ be an integer and $\sigma$ an $m$-component string link. For any Wirtinger generators $a_{ij}$ and $a_{kl}$ of $G(\sigma)$, there exists $R(X_{i},X_{k})\in\mathbb{Z}\langle\langle X_{1},\cdots,X_{m}\rangle\rangle$ such that each term of $R(X_{i},X_{k})$ contains $X_{i}$ and $X_{k}$, and $$E\left(\phi\circ\eta_{q}\left(\left(a_{ij}^{{\varepsilon}} a_{kl}^{{\delta}}\right)^{\pm n}\right)\right)\overset{(n)}{\equiv}1+\binom{n}{2}R(X_{i},X_{k})+\mathcal{O}(2),$$ where ${\varepsilon},{\delta}\in\{1,-1\}$ and $\mathcal{O}(2)$ denotes $0$ or terms containing $X_{r}$ at least twice for some $r$ $(=1,\ldots,m)$. By the definition of $\eta_{q}$, $\phi\circ\eta_{q}\left(a_{ij}^{{\varepsilon}}\right)=w^{-1}\alpha_{i}^{{\varepsilon}}w$ for some word $w$ in $\alpha_{1},\ldots,\alpha_{m}$. Set $E(w)=1+W$ and $E(w^{-1})=1+\overline{W}$, where $W$ and $\overline{W}$ denote the terms of degree $\geq1$ such that $\left(1+\overline{W}\right)\left(1+W\right)=1$.
It follows that $$\begin{aligned} E\left(\phi\circ\eta_{q}\left(a_{ij}^{{\varepsilon}}\right)\right) &=&E\left(w^{-1}\alpha_{i}^{{\varepsilon}}w\right) \\ &=&\left(1+\overline{W}\right)\left(1+{\varepsilon}X_{i}\right)\left(1+W\right)+\mathcal{O}(2) \\ &=&1+{\varepsilon}X_{i}+{\varepsilon}X_{i}W+{\varepsilon}\overline{W}X_{i}+{\varepsilon}\overline{W}X_{i}W+\mathcal{O}(2) \\ &=&1+{\varepsilon}P(X_{i})+\mathcal{O}(2), \end{aligned}$$ where $P(X_{i})=X_{i}+X_{i}W+\overline{W}X_{i}+\overline{W}X_{i}W$. Note that each term in $P(X_{i})$ contains $X_{i}$. Similarly, we have $$E\left(\phi\circ\eta_{q}\left(a_{kl}^{{\delta}}\right)\right)=1+{\delta}Q(X_{k})+\mathcal{O}(2),$$ where $Q(X_{k})$ denotes the terms of degree $\geq1$, each of which contains $X_{k}$. Therefore we have the following. $$\begin{aligned} &&E\left(\phi\circ\eta_{q}\left(\left(a_{ij}^{{\varepsilon}} a_{kl}^{{\delta}}\right)^{n}\right)\right) \\ &&=\left(\left(1+{\varepsilon}P(X_{i})+\mathcal{O}(2)\right)\left(1+{\delta}Q(X_{k})+\mathcal{O}(2)\right)\right)^{n} \\ &&=\left(1+{\varepsilon}P(X_{i})+{\delta}Q(X_{k})+{\varepsilon}{\delta}P(X_{i})Q(X_{k})+\mathcal{O}(2)\right)^{n} \\ &&=1+\sum_{r=1}^{n}\binom{n}{r} \left({\varepsilon}P(X_{i})+{\delta}Q(X_{k})+{\varepsilon}{\delta}P(X_{i})Q(X_{k})+\mathcal{O}(2)\right)^{r} \\ &&\overset{(n)}{\equiv}1+\binom{n}{2} \left(P(X_{i})+Q(X_{k})+P(X_{i})Q(X_{k})+\mathcal{O}(2)\right)^{2}+\mathcal{O}(2) \\ &&=1+\binom{n}{2} \left(P(X_{i})Q(X_{k})+Q(X_{k})P(X_{i})+\mathcal{O}(2)\right)+\mathcal{O}(2) \\ &&=1+\binom{n}{2} \left(P(X_{i})Q(X_{k})+Q(X_{k})P(X_{i})\right)+\mathcal{O}(2). 
\end{aligned}$$ Similarly, we have $$\begin{aligned} E\left(\phi\circ\eta_{q}\left(\left(a_{ij}^{{\varepsilon}} a_{kl}^{{\delta}}\right)^{-n}\right)\right) &=&E\left(\phi\circ\eta_{q}\left(\left(a_{kl}^{-{\delta}}a_{ij}^{-{\varepsilon}} \right)^{n}\right)\right) \\ &\overset{(n)}{\equiv}& 1+\binom{n}{2}\left(Q(X_{k})P(X_{i})+P(X_{i})Q(X_{k})\right) +\mathcal{O}(2) \\ &=&1+\binom{n}{2} \left(P(X_{i})Q(X_{k})+Q(X_{k})P(X_{i})\right)+\mathcal{O}(2). \end{aligned}$$ Setting $R(X_{i},X_{k})=P(X_{i})Q(X_{k})+Q(X_{k})P(X_{i})$, we obtain the desired congruence. The statement is obvious for $n=1$, and hence we consider the case $n\geq2$. Since $\mu$-invariants for non-repeated sequences are link-homotopy invariants, we show that their residue classes modulo $n$ are preserved under $2n$-moves. Assume that two $m$-component string links $\sigma$ and $\sigma'$ are related by a single $2n$-move. A $2n$-move involving two strands of a single component is realized by link-homotopy. Furthermore, a $2n$-move whose two strands are oriented antiparallel is generated by link-homotopy and a $2n$-move whose strands are oriented parallel, see Figure \[anti-parallel\]. (Note that $2$-component string links having the same linking number are link-homotopic.) Thus, we may assume that the two strands performing the $2n$-move, which relates $\sigma$ to $\sigma'$, are oriented parallel and belong to different components. [Figure \[anti-parallel\]: a $2n$-move on antiparallel strands is generated by isotopy, link-homotopy, and a $2n$-move on parallel strands.] There are diagrams $D$ and $D'$ of $\sigma$ and $\sigma'$, respectively, which are identical except in a disk $\Delta$ where they differ as illustrated in Figure \[2n-move\]. (It can be seen that the move in the disk $\Delta$ of Figure \[2n-move\] is equivalent to a $2n$-move.)
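The congruence of Lemma \[lem-invariance\] can be spot-checked in the simplest situation, a word with trivial conjugating part, i.e. $E((\alpha_1\alpha_2)^n)$: modulo $n$, the linear coefficients vanish and the degree-$(1,1)$ coefficients reduce to $\binom{n}{2}$, coming from $R(X_1,X_2)=X_1X_2+X_2X_1$ at lowest order. A sketch with truncated series (monomials as tuples):

```python
from math import comb

DEG = 2  # keep monomials of total degree at most 2

def mul(p, q):
    """Product of truncated non-commutative series (dicts monomial -> coeff)."""
    r = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = m1 + m2
            if len(m) <= DEG:
                r[m] = r.get(m, 0) + c1 * c2
    return r

n = 6
ab = {(): 1, (1,): 1, (2,): 1, (1, 2): 1}   # E(alpha_1 alpha_2) = (1+X_1)(1+X_2)
p = {(): 1}
for _ in range(n):                           # E((alpha_1 alpha_2)^n), truncated
    p = mul(p, ab)

coeffs_mod_n = {m: c % n for m, c in p.items()}
print(coeffs_mod_n)
```

Here the $X_1X_2$-coefficient is $n + \binom{n}{2}$ (one factor contributing $X_1X_2$, or two factors contributing $X_1$ then $X_2$), which is $\binom{n}{2}$ modulo $n$, exactly as the lemma predicts.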
Put labels $a_{ij}$ $(1\leq i\leq m$, $1\leq j\leq r(i))$ on all arcs of $D$ as described in Section \[subsec-algorithm\], and put labels $a'_{ij}$ on all arcs in $D'\setminus\Delta$ which correspond to the arcs labeled $a_{ij}$ in $D\setminus\Delta$. Also put labels $b'_{1},\ldots,b'_{2n},c'_{1},\ldots,c'_{2n}$ on the arcs of $D'$ in $\Delta$ as illustrated in Figure \[2n-move\]. Let $\overline{A'}$ be the free group on $\{a'_{ij}\}\cup\{b'_{1},\ldots,b'_{2n},c'_{1},\ldots,c'_{2n}\}$ and $A'$ the free subgroup on $\{a'_{11},a'_{21},\ldots,a'_{m1}\}$. Let $\eta'_{q}:\overline{A'}\rightarrow A'$ denote the sequence of homomorphisms associated with $D'$ given in Section \[subsec-algorithm\], and define a homomorphism $\phi':A'\rightarrow\langle\alpha_{1},\ldots,\alpha_{m}\rangle$ by $\phi'(a'_{i1})=\alpha_{i}$ $(i=1,\ldots,m)$. [Figure \[2n-move\]: the diagrams $D$ (top) and $D'$ (bottom), which coincide outside the disk $\Delta$; the arcs entering $\Delta$ are $a_{kl}$, $a_{gh}$ in $D$ and $a'_{kl}$, $a'_{gh}$ in $D'$, and the arcs of $D'$ inside $\Delta$ are labeled $b'_{1},\ldots,b'_{2n}$ and $c'_{1},\ldots,c'_{2n}$.] For the $i$th preferred longitudes $l_{i}$ and $l'_{i}$ associated with $D$ and $D'$, respectively, it is enough to show that $$\label{eq-longitude} E\left(\phi\circ\eta_{q}\left(l_{i}\right)\right) \overset{(n)}{\equiv}E\left(\phi'\circ\eta'_{q}\left(l'_{i}\right)\right)+\mathcal{O}(2)+\mathcal{P}(X_{i})$$ for any $1\leq i\leq m$, where $\mathcal{P}(X_{i})$ denotes the terms containing $X_{i}$. To show the congruence above, we need the following claim.
\[eq-generator\] For any $1\leq i\leq m$ and $1\leq j\leq r(i)$, we have $$E\left(\phi\circ\eta_{q}\left(a_{ij}\right)\right) \overset{(n)}{\equiv}E\left(\phi'\circ\eta'_{q}\left(a'_{ij}\right)\right)+\mathcal{O}(2).$$ Before showing Claim \[eq-generator\], we observe that it implies Congruence (\[eq-longitude\]). Without loss of generality we may assume that $i=1$, i.e. we compare the preferred longitudes $l_{1}=a_{11}^{s}v_{1r(1)-1}$ and $l'_{1}={a'}^{t}_{11}v'_{1r(1)-1}$ $(s,t\in\mathbb{Z})$. Since the two strands in $\Delta$ belong to different components, we only need to consider two cases. If neither of the two strands in $\Delta$ belongs to the $1$st component, then $s=t$ and $l'_{1}$ is obtained from $l_{1}$ by replacing $u_{1j}$ with $u'_{1j}$ $(j=1,\ldots,r(1)-1)$ and $a_{11}$ with $a'_{11}$. Therefore, Congruence (\[eq-longitude\]) follows from Claim \[eq-generator\]. If one of the two strands in $\Delta$ belongs to the $1$st component, then Figure \[2n-move\] indicates that $l_{1}$ and $l'_{1}$ can be written respectively in the forms $$l_{1}=a_{11}^{s}u_{11}\ldots u_{1h-1}u_{1h}\ldots u_{1r(1)-1}$$ and $$l'_{1}={a'}^{s-n}_{11}u'_{11}\ldots u'_{1h-1}\left(a'_{1h}a'_{kl}\right)^{n}u'_{1h}\ldots u'_{1r(1)-1}.$$ Both $E\left(\phi\circ\eta_{q}\left(a_{11}^{s}\right)\right)$ and $E\left(\phi'\circ\eta'_{q}\left({a'}^{s-n}_{11}\right)\right)$ have the form $1+\mathcal{P}(X_{1})$. Furthermore, by Lemma \[lem-invariance\], we have $$E\left(\phi'\circ\eta'_{q}\left(\left(a'_{1h}a'_{kl}\right)^{n}\right)\right) \overset{(n)}{\equiv}1+\binom{n}{2}R(X_{1},X_{k})+\mathcal{O}(2) =1+\mathcal{P}(X_{1})+\mathcal{O}(2).$$ Therefore, this together with Claim \[eq-generator\] proves Congruence (\[eq-longitude\]). Now, we turn to the proof of Claim \[eq-generator\]. The proof is done by induction on $q$. The assertion certainly holds for $q=1$.
Recall that $$\phi\circ\eta_{q+1}\left(a_{ij+1}\right)=\phi\circ\eta_{q}\left(v_{ij}^{-1}a_{i1}v_{ij}\right)$$ and $$\phi'\circ\eta'_{q+1}\left(a'_{ij+1}\right)=\phi'\circ\eta'_{q}\left({v'_{ij}}^{-1}a'_{i1}v'_{ij}\right).$$ If $v_{ij}$ does not pass through $\Delta$, then it is clear that $v'_{ij}$ is obtained from $v_{ij}$ by replacing $a_{ij}$ with $a'_{ij}$, and hence $E\left(\phi\circ\eta_{q}\left(v_{ij}\right)\right)\overset{(n)}{\equiv}E\left(\phi'\circ\eta'_{q}\left(v'_{ij}\right)\right)+\mathcal{O}(2)$ by the induction hypothesis. This implies that $$\begin{aligned} E\left(\phi\circ\eta_{q+1}\left(a_{ij+1}\right)\right) &=&E\left(\phi\circ\eta_{q}\left(v_{ij}^{-1}a_{i1}v_{ij}\right)\right) \\ &\overset{(n)}{\equiv}&E\left(\phi'\circ\eta'_{q}\left({v'_{ij}}^{-1}a'_{i1}v'_{ij}\right)\right)+\mathcal{O}(2) \\ &=&E\left(\phi'\circ\eta'_{q+1}\left(a'_{ij+1}\right)\right)+\mathcal{O}(2). \end{aligned}$$ If $v_{ij}$ passes through $\Delta$, then $v_{ij}$ and $v'_{ij}$ can be written respectively in the forms $$v_{ij}=u_{i1}\ldots u_{ih-1}u_{ih}\ldots u_{ij}$$ and $$v'_{ij}=u'_{i1}\ldots u'_{ih-1}\left(a'_{ih}a'_{kl}\right)^{n}u'_{ih}\ldots u'_{ij}.$$ Set $E\left(\phi\circ\eta_{q}\left(u_{i1}\ldots u_{ih-1}\right)\right)=1+F$, $E\left(\phi\circ\eta_{q}\left(\left(u_{i1}\ldots u_{ih-1}\right)^{-1}\right)\right)=1+\overline{F}$, $E\left(\phi\circ\eta_{q}\left(u_{ih}\ldots u_{ij}\right)\right)=1+G$ and $E\left(\phi\circ\eta_{q}\left(\left(u_{ih}\ldots u_{ij}\right)^{-1}\right)\right)=1+\overline{G}$, where $F,\overline{F},G$ and $\overline{G}$ denote the terms of degree $\geq1$. 
Then we have $$E\left(\phi\circ\eta_{q+1}\left(a_{ij+1}\right)\right) =\left(1+\overline{G}\right)\left(1+\overline{F}\right)\left(1+X_{i}\right)\left(1+{F}\right)\left(1+G\right).$$ It follows from the induction hypothesis that $$\begin{aligned} E\left(\phi'\circ\eta'_{q+1}\left(a'_{ij+1}\right)\right) &\overset{(n)}{\equiv}&\left(1+\overline{G}\right)E\left(\phi'\circ\eta'_{q}\left(\left(a'_{ih}a'_{kl}\right)^{-n}\right)\right) \left(1+\overline{F}\right)\left(1+X_{i}\right) \\ &&\times\left(1+F\right) E\left(\phi'\circ\eta'_{q}\left(\left(a'_{ih}a'_{kl}\right)^{n}\right)\right)\left(1+G\right)+\mathcal{O}(2). \end{aligned}$$ Lemma \[lem-invariance\] implies that $$\begin{aligned} E\left(\phi'\circ\eta'_{q+1}\left(a'_{ij+1}\right)\right) &\overset{(n)}{\equiv}&\left(1+\overline{G}\right) \left(1+\binom{n}{2} R(X_{i},X_{k})\right) \left(1+\overline{F}\right)\left(1+X_{i}\right) \\ &&\times\left(1+F\right) \left(1+\binom{n}{2} R(X_{i},X_{k})\right) \left(1+G\right)+\mathcal{O}(2). \end{aligned}$$ In particular, we have the following. $$\begin{aligned} &&\left(1+\binom{n}{2} R(X_{i},X_{k})\right) \left(1+\overline{F}\right)\left(1+X_{i}\right) \left(1+F\right)\left(1+\binom{n}{2} R(X_{i},X_{k})\right) \\ &&=\left(1+\binom{n}{2} R(X_{i},X_{k})\right) \left(1+\left(1+\overline{F}\right)X_{i}\left(1+F\right)\right)\left(1+\binom{n}{2} R(X_{i},X_{k})\right) \\ &&=1+\left(1+\overline{F}\right)X_{i}\left(1+F\right)+2\binom{n}{2} R(X_{i},X_{k})+\mathcal{O}(2) \\ &&\overset{(n)}{\equiv} 1+\left(1+\overline{F}\right)X_{i}\left(1+F\right)+\mathcal{O}(2) \\ &&=\left(1+\overline{F}\right)\left(1+X_{i}\right)\left(1+{F}\right)+\mathcal{O}(2). \end{aligned}$$ This proves Claim \[eq-generator\], and hence completes the proof of Theorem \[th-invariance\]. Milnor isotopy invariants and $2p$-moves ---------------------------------------- For Milnor isotopy invariants, i.e. $\mu$-invariants possibly with [*repeated*]{} sequences, we have the following. 
\[prop-inv-prime\] Let $p$ be a prime number. If two string links $\sigma$ and $\sigma'$ are $2p$-move equivalent, then $\mu_{\sigma}(I)\equiv\mu_{\sigma'}(I)\pmod{p}$ for any sequence $I$ of length $\leq p$. (1)  The restriction on the length of sequences in Proposition \[prop-inv-prime\] is indeed necessary, as the following example shows: Let $\sigma=\sigma_{1}^{4}$, where $\sigma_{1}$ is the generator of $2$-braids. We can verify that $\mu_{\sigma}(112)=1$ by using a computer program written by Takabatake, Kuboyama and Sakamoto [@TKS].[^4] While $\sigma$ is $4$-move equivalent to $\mathbf{1}_{2}$, $\mu_{\sigma}(112)$ is not congruent to $0$ modulo $2$. (2)  Proposition \[prop-inv-prime\] cannot be extended to the $2n$-move equivalence classes of string links for a nonprime number $n$. For example, let $\sigma=\sigma^{8}_{1}$; then the computer program of Takabatake, Kuboyama and Sakamoto gives $\mu_{\sigma}(211)=10$. While $\sigma$ is $8$-move equivalent to $\mathbf{1}_{2}$, $\mu_{\sigma}(211)$ is not congruent to $0$ modulo $4$. Let $D$ and $D'$ be diagrams of $m$-component string links $\sigma$ and $\sigma'$, respectively. Assume that $D$ and $D'$ are related by a single $2p$-move whose strands are oriented parallel. (In the case where the orientations of the two strands of a $2p$-move are antiparallel, the proof is strictly similar, and hence we omit it.) We use the same notation as in the proof of Theorem \[th-invariance\]. It is enough to show that, for any $1\leq i\leq m$, $$E\left(\phi\circ\eta_{q}\left(l_{i}\right)\right) \overset{(p)}{\equiv}E\left(\phi'\circ\eta'_{q}\left(l'_{i}\right)\right)+(\text{terms of degree $\geq p$}).$$ By arguments similar to those in the proof of Theorem \[th-invariance\], $l'_{i}$ is obtained from $l_{i}$ by replacing $a_{kl}$ with $a'_{kl}$ for some $k,l$ and inserting the $p$th powers of elements in the free group $\overline{A'}$ on the Wirtinger generators of $G(\sigma')$.
The following claim completes the proof. \[claim-prime\] (1)  For any word $w$ in $\alpha_{1},\ldots,\alpha_{m}$, we have $$E\left(w^{p}\right) \overset{(p)}{\equiv} 1+(\text{terms of degree $\geq p$}).$$ (2)  For any $1\leq i\leq m$ and $1\leq j\leq r(i)$, we have $$E\left(\phi\circ\eta_{q}\left(a_{ij}\right)\right) \overset{(p)}{\equiv} E\left(\phi'\circ\eta'_{q}\left({a'}_{ij}\right)\right) \overset{(p)}{\equiv} 1+(\text{terms of degree $\geq p$}).$$ Set $E\left(w\right)=1+W$, where $W$ denotes the terms of degree $\geq1$. Then $E\left(w^{p}\right)=\left(1+W\right)^{p}\overset{(p)}{\equiv}1+W^{p}$, since the binomial coefficients $\binom{p}{j}$ are divisible by $p$ for $0<j<p$. This proves Claim \[claim-prime\] (1). By arguments similar to those in the proof of Claim \[eq-generator\], $\eta'_{q+1}\left(a'_{ij}\right)$ is obtained from $\eta_{q+1}\left(a_{ij}\right)$ by replacing $\eta_{q}\left(a_{kl}\right)$ with $\eta'_{q}\left(a'_{kl}\right)$ for some $k,l$ and inserting $\eta'_{q}\left(w^{p}\right)$ for some elements $w$ in $\overline{A'}$. Therefore, using Claim \[claim-prime\] (1), we complete the proof of Claim \[claim-prime\] (2) by induction on $q$. Claspers {#sec-clasper} ======== To show Theorem \[th-sl\], we will use the theory of claspers introduced by K. Habiro in [@H]. In this section, we briefly recall the basic notions of clasper theory from [@H]. We only need the notion of $C_{k}$-tree in this paper, and refer the reader to [@H] for the general definition of claspers. Definitions ----------- Let $\sigma$ be a string link in $\mathbb{D}^{2}\times [0,1]$. An embedded disk $T$ in $\mathbb{D}^{2}\times [0,1]$ is called a [*tree clasper*]{} for $\sigma$ if it satisfies the following: 1. $T$ decomposes into disks and bands. 2. Bands are called [*edges*]{} and each of them connects two distinct disks. 3. Each disk has either one or three incident edges, and is then respectively called a [*disk-leaf*]{} or [*node*]{}. 4.
$\sigma$ intersects $T$ transversely and the intersections are contained in the union of the interior of the disk-leaves. We say that $T$ is a [*$C_{k}$-tree*]{} if the number of disk-leaves of $T$ is $k+1$, and is [*simple*]{} if each disk-leaf of $T$ intersects $\sigma$ at a single point. (Note that a tree clasper is called a [*strict tree clasper*]{} in [@H].) We will make use of the drawing convention for claspers of [@H Figure $7$] except for the following: a (resp. ) on an edge represents a positive (resp. negative) half-twist. This replaces the circled $S$ (resp. $S^{-1}$) notation used in [@H]. Given a $C_k$-tree $T$ for a string link $\sigma$, there is a procedure to construct a zero-framed link $\gamma(T)$ in the complement of $\sigma$. [*Surgery along $T$*]{} means surgery along $\gamma(T)$. Since surgery along $\gamma(T)$ preserves the ambient space, surgery along the $C_k$-tree $T$ can be regarded as a local move on $\sigma$ in $\mathbb{D}^{2}\times [0,1]$. Denote by $\sigma_{T}$ the string link in $\mathbb{D}^{2}\times [0,1]$ which is obtained from $\sigma$ by surgery along $T$. Similarly, we define the string link $\sigma_{T_1\cup\cdots \cup T_r}$ obtained from $\sigma$ by surgery along a disjoint union of tree claspers $T_1\cup\cdots \cup T_r$. A $C_k$-tree $T$ having the shape of the tree clasper in Figure \[linear\] (with possibly some half-twists on the edges of $T$) is called a [*linear $C_k$*]{}-tree. As illustrated in Figure \[linear\], the string link obtained by surgery along a simple linear $C_{k}$-tree for $\sigma$ is ambient isotopic to a band sum of $\sigma$ and the $(k+1)$-component Milnor link[^5]  (see [@M54 Fig. 7]). [linear.eps]{} The [*$C_{k}$-equivalence*]{} is the equivalence relation on string links generated by surgery along $C_{k}$-trees and ambient isotopies.
Habiro proved that two string links $\sigma$ and $\sigma'$ are $C_{k}$-equivalent if and only if there exists a disjoint union of simple $C_{k}$-trees $T_{1}\cup\cdots\cup T_{r}$ such that $\sigma'$ is ambient isotopic to $\sigma_{T_{1}\cup\cdots\cup T_{r}}$ [@H Theorem 3.17]. This implies that surgery along any $C_{k}$-tree can be replaced with surgery along a disjoint union of [*simple*]{} $C_{k}$-trees. Hereafter, by a $C_{k}$-tree we mean a simple $C_{k}$-tree. Some technical lemmas --------------------- This subsection gives some lemmas, which will be used to show Theorem \[th-sl\]. Given a $C_k$-tree $T$ for an $m$-component string link $\sigma=\sigma_{1}\cup\cdots\cup \sigma_{m}$, the set $\{\ i~\vline~\sigma_{i}\cap T \neq \emptyset, 1\leq i\leq m\}$ is called the [*index*]{} of $T$ and is denoted by ${\rm Ind}(T)$. The following is a direct consequence of [@FY Lemma 1.2]. \[lem-self\] Let $T$ be a $C_{k}$-tree for a string link $\sigma$ with $|{\rm Ind}(T)|\leq k$. Then $\sigma_{T}$ is link-homotopic to $\sigma$. The set of ambient isotopy classes of $m$-component string links has a monoid structure under the [*stacking product*]{} “$*$”, with the trivial $m$-component string link $\mathbf{1}_{m}$ as the unit element. Combining Lemma \[lem-self\] and [@Ytrans Lemma 2.4], we have the following. \[lem-halftwist\] Let $T$ be a $C_{k}$-tree for $\mathbf{1}_{m}$, and let $\overline{T}$ be a $C_{k}$-tree obtained from $T$ by adding a half-twist on an edge. Then $(\mathbf{1}_{m})_{T}*(\mathbf{1}_{m})_{\overline{T}}$ is link-homotopic to $\mathbf{1}_{m}$. By Lemma \[lem-self\] together with [@MY12 Lemma 2.2 (2) and Remark 2.3], we have the following. \[lem-cc\] Let $T_{1}$ be a $C_{k}$-tree for a string link $\sigma$, and $T_{2}$ a $C_{l}$-tree for $\sigma$. Let $T'_{1}\cup T'_{2}$ be obtained from $T_{1}\cup T_{2}$ by changing a crossing of an edge of $T_{1}$ and that of $T_{2}$.
Then $\sigma_{T_{1}\cup T_{2}}$ is link-homotopic to $\sigma_{T'_{1}\cup T'_{2}}$. Here, by [*parallel*]{} tree claspers we mean a family of $r$ parallel copies of a tree clasper $T$ for some $r\geq1$. We call $r$ the [*multiplicity*]{} of the parallel clasper. The following can be proved by Lemma \[lem-self\] and  [@MY12 Lemma 2.2 (1) and Remark 2.3]. \[lem-sliding\] Let $T_{1}$ be a $C_{k}$-tree for a string link $\sigma$, and $T_{2}$ a parallel $C_{l}$-tree with multiplicity $r$ for $\sigma$. Let $T'_{1}\cup T'_{2}$ be obtained from $T_{1}\cup T_{2}$ by sliding a leaf $f$ of $T_{1}$ over $r$ parallel leaves of $T_{2}$ $($see Figure $\ref{sliding}$$)$. Then $\sigma_{T_{1}\cup T_{2}}$ is link-homotopic to $\sigma_{T'_{1}\cup T'_{2}\cup Y}$, where $Y$ denotes the parallel $C_{k+l}$-tree with multiplicity $r$ obtained by inserting a vertex $v$ in the edge $e$ of $T_{2}$ and connecting $v$ to the edge incident to $f$ as illustrated in Figure $\ref{sliding}$. [sliding.eps]{} Proof of Theorem \[th-sl\] ========================== This section is devoted to the proof of Theorem \[th-sl\]. Habegger and Lin [@HL] proved that Milnor link-homotopy invariants classify string links up to link-homotopy. In [@Ytrans], the third author gave an alternative proof for this by using clasper theory. Actually, he constructed explicit representatives, determined by Milnor link-homotopy invariants, for the link-homotopy classes as follows. Let $\pi:\{1,\ldots,k\}\rightarrow\{1,\ldots,m\}$ $(2\leq k\leq m)$ be an injection such that $\pi(i)<\pi(k-1)<\pi(k)$ $(i=1,\ldots,k-2)$, and let $\mathcal{F}_{k}$ be the set of such injections.
Given $\pi\in\mathcal{F}_{k}$, let $T_{\pi}$ and $\overline{T}_{\pi}$ be linear $C_{k-1}$-trees with index $\{\pi(1),\ldots,\pi(k)\}$ illustrated in the left- and right-hand sides of Figure \[representative\], respectively. Here, Figure \[representative\] describes the images of homeomorphisms from neighborhoods of $T_{\pi}$ and $\overline{T}_{\pi}$ to the $3$-ball. Setting $V_{\pi}=(\mathbf{1}_{m})_{T_{\pi}}$ and $V^{-1}_{\pi}=(\mathbf{1}_{m})_{\overline{T}_{\pi}}$, we have the following. [representative.eps]{} \[th-representative\] Let $\sigma$ be an $m$-component string link. Then $\sigma$ is link-homotopic to $\sigma_{1}*\cdots*\sigma_{m-1}$, where for each $k$, $$\sigma_{k}=\prod_{\pi\in\mathcal{F}_{k+1}}V_{\pi}^{x_{\pi}},$$ $$x_{\pi}=\left\{ \begin{array}{lll} \mu_{\sigma}(\pi(1)\pi(2)) & (k=1), \\ \mu_{\sigma}(\pi(1)\ldots\pi(k+1)) -\mu_{\sigma_{1}*\cdots*\sigma_{k-1}}(\pi(1)\ldots\pi(k+1)) & (k\geq 2). \end{array} \right.$$ The following is the key lemma to show Theorem \[th-sl\]. \[lem-del-clasper\] Let $n$ be a positive integer and ${\varepsilon}\in\{1,-1\}$. Then, for any $\pi\in\mathcal{F}_{k+1}$ $(1\leq k\leq m-1)$, $V_{\pi}^{{\varepsilon}n}$ is $(2n+{\rm lh})$-equivalent to $\mathbf{1}_{m}$. Since $V_{\pi}^{-n}*V_{\pi}^{n}$ is link-homotopic to $\mathbf{1}_{m}$ by Lemma \[lem-halftwist\], it is enough to show the case ${\varepsilon}=1$, i.e. for any $\pi\in\mathcal{F}_{k+1}$, $V_{\pi}^{n}$ is $(2n+{\rm lh})$-equivalent to $\mathbf{1}_{m}$. For the case $k=1$, we see that $V_{\pi}^{n}$ and $\mathbf{1}_{m}$ are related by a single $2n$-move. Assume that $k\geq2$.
Let $T_{1}$ be the linear $C_{k-1}$-tree for $\mathbf{1}_{m}$ of Figure \[del-clasper\] (a) with index $\{\pi(1),\ldots,\pi(k)\}$, and let $\overline{T}_{1}$ be obtained from $T_{1}$ by adding a positive half-twist on an edge. Then $\mathbf{1}_{m}$ is link-homotopic to $(\mathbf{1}_{m})_{\overline{T}_{1}\cup T_{1}}$ by Lemma \[lem-halftwist\]. Let $T_{2}$ be the parallel $C_{1}$-tree of Figure \[del-clasper\] (b) with multiplicity $n$. Since surgery along $T_{2}$ is realized by a $2n$-move, $(\mathbf{1}_{m})_{\overline{T}_{1}\cup T_{1}}$ is $2n$-move equivalent to $(\mathbf{1}_{m})_{\overline{T}_{1}\cup T_{1}\cup T_{2}}$ in Figure \[del-clasper\] (b). Let $T'_{1}\cup T'_{2}$ be obtained from $T_{1}\cup T_{2}$ by sliding a leaf of $T_{1}$ over $n$ parallel leaves of $T_{2}$, and let $Y$ be the parallel $C_{k}$-tree with multiplicity $n$ as illustrated in Figure \[del-clasper\] (c). It follows from Lemmas \[lem-cc\] and \[lem-sliding\] that $(\mathbf{1}_{m})_{\overline{T}_{1}\cup T_{1}\cup T_{2}}$ is link-homotopic to $(\mathbf{1}_{m})_{\overline{T}_{1}\cup T'_{1}\cup T'_{2}\cup Y}$. Furthermore, by Lemma \[lem-halftwist\], $(\mathbf{1}_{m})_{\overline{T}_{1}\cup T'_{1}\cup T'_{2}\cup Y}$ is $(2n+{\rm lh})$-equivalent to $(\mathbf{1}_{m})_{Y}=V_{\pi}^{n}$. This completes the proof. [del-clasper.eps]{} [del-clasper2.eps]{} Combining Theorem \[th-representative\] and Lemma \[lem-del-clasper\], we give a complete list of representatives for string links up to $(2n+{\rm lh})$-equivalence as follows.
\[prop-rep-2nlh\] Let $\sigma$ be an $m$-component string link and $x_{\pi}$ as in Theorem $\ref{th-representative}$. Then $\sigma$ is $(2n+{\rm lh})$-equivalent to $\tau_{1}*\cdots*\tau_{m-1}$, where for each $k$, $$\tau_{k}=\prod_{\pi\in\mathcal{F}_{k+1}}V_{\pi}^{y_{\pi}}$$ with $0\leq y_{\pi}<n$ and $y_{\pi}\equiv x_{\pi}\pmod{n}$. It follows from Theorem \[th-representative\] that $\sigma$ is link-homotopic to $\sigma_{1}*\cdots*\sigma_{m-1}$, where $$\sigma_{k}=\prod_{\pi\in\mathcal{F}_{k+1}}V_{\pi}^{x_{\pi}}.$$ By Lemmas \[lem-del-clasper\] and \[lem-halftwist\], we can insert/delete $V_{\pi}^{\pm n}$ and remove $V_{\pi}^{{\varepsilon}}*V_{\pi}^{-{\varepsilon}}$ up to $(2n+{\rm lh})$-equivalence $({\varepsilon}\in\{1,-1\})$. Therefore, $\sigma_{k}$ is $(2n+{\rm lh})$-equivalent to $\tau_{k}$ for each $k$. This follows from Theorem \[th-invariance\] and Proposition \[prop-rep-2nlh\]. By combining Theorem \[th-sl\], Lemma \[lem-del-clasper\] and Proposition \[prop-rep-2nlh\], we have the corollary. Theorem \[th-sl\] characterizes Milnor link-homotopy invariants modulo $n$ by two local moves, the $2n$-move and self-crossing change. In [@ABMW], B. Audoux, P. Bellingeri, J.-B. Meilhan and E. Wagner defined Milnor invariants, denoted by $\mu^{{\rm w}}$, for [*welded string links*]{} and proved that $\mu^{{\rm w}}$-invariants for non-repeated sequences classify welded string links up to [*self-crossing virtualization*]{}. (Later, this classification led to a link-homotopy classification of [*$2$-dimensional string links*]{} in $4$-space [@AMW]). For welded string links, we can show a similar result to Theorem \[th-sl\] that characterizes $\mu^{{\rm w}}$-invariants for non-repeated sequences modulo $n$ in terms of the $2n$-move and self-crossing virtualization. 
While the idea of the proof is similar to that of Theorem \[th-sl\], we need [*arrow calculus*]{} and representatives for welded string links up to self-crossing virtualization given in [@MY19] instead of clasper calculus and representatives for string links up to link-homotopy. We will give the details in a future paper. Links in $S^{3}$ {#sec-link} ================ In the previous sections, we have studied [*string links*]{}. We now address the case of [*links*]{} in $S^{3}$. Given an $m$-component string link $\sigma$, its [*closure*]{} is an $m$-component link in $S^{3}$ obtained from $\sigma$ by identifying points on the boundary of $\mathbb{D}^{2}\times [0,1]$ with their images under the projection $\mathbb{D}^{2}\times [0,1]\rightarrow \mathbb{D}^{2}$. The link inherits an ordering and orientation from $\sigma$. Note that every link can be represented by the closure of some string link. Habegger and Lin proved that for two link-homotopic links $L$ and $L'$, and for a string link $\sigma$ whose closure is $L$, there exists a string link $\sigma'$ whose closure is $L'$ such that $\sigma'$ is link-homotopic to $\sigma$ [@HL Lemma 2.5]. Similarly we have the following. \[lem-2nlh\] Let $n$ be a positive integer. Let $L$ and $L'$ be $(2n+{\rm lh})$-equivalent $($resp. $2n$-move equivalent$)$ links and $\sigma$ a string link whose closure is $L$. Then there exists a string link $\sigma'$ whose closure is $L'$ such that $\sigma'$ is $(2n+{\rm lh})$-equivalent $($resp. $2n$-move equivalent$)$ to $\sigma$. The proof is strictly similar to that of [@HL Lemma 2.5], and hence we omit it. Let $\sigma$ be a string link. We define $\Delta_{\sigma}(I)$ to be the greatest common divisor of all $\mu_{\sigma}(J)$ such that $J$ is obtained from $I$ by removing at least one index and permuting the remaining indices cyclically. 
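The definition of $\Delta_{\sigma}(I)$ above is directly algorithmic: enumerate every sequence obtained by deleting at least one index and cyclically permuting the rest, then take the gcd of the corresponding $\mu$-values. As an illustration, here is a minimal Python sketch; the table `mu` of Milnor numbers is a hypothetical input (e.g. produced by the Takabatake–Kuboyama–Sakamoto program mentioned earlier), not part of this paper.

```python
from itertools import combinations
from math import gcd

def cyclic_subsequences(I):
    """All sequences J obtained from I by removing at least one index
    and permuting the remaining indices cyclically."""
    out = set()
    for r in range(1, len(I)):                 # keep 1 <= r < len(I) indices
        for keep in combinations(range(len(I)), r):
            J = tuple(I[k] for k in keep)      # order-preserving deletion
            for s in range(len(J)):            # all cyclic rotations of J
                out.add(J[s:] + J[:s])
    return out

def Delta(mu, I):
    """Greatest common divisor of mu(J) over all such J, with gcd over the
    empty/zero set taken as 0.  mu is a dict of (hypothetical) Milnor
    numbers; missing entries default to 0, which is harmless since
    gcd(g, 0) == g."""
    g = 0
    for J in cyclic_subsequences(I):
        g = gcd(g, abs(mu.get(J, 0)))
    return g
```

Milnor numbers of length-one sequences vanish, so omitting them from the table is consistent with the `mu.get(J, 0)` default; if every proper cyclic subsequence of $I$ has vanishing $\mu$-invariant, `Delta` returns $0$.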
It is known in [@HL] that the integer $\Delta_{\sigma}(I)$ and the residue class of $\mu_{\sigma}(I)$ modulo ${\Delta_{\sigma}(I)}$ are invariants of the closure of $\sigma$. For a link $L$, we define $\Delta_{L}^{(n)}(I)$ to be $\gcd\{\Delta_{\sigma}(I),n\}$ and ${\overline{\mu}}_{L}^{(n)}(I)$ to be the residue class of $\mu_{\sigma}(I)$ modulo $\Delta_{L}^{(n)}(I)$ for a string link $\sigma$ whose closure is $L$. Obviously, $\Delta_{L}^{(n)}(I)$ and ${\overline{\mu}}_{L}^{(n)}(I)$ are invariants of $L$. Moreover we have the following. \[prop-inv-link\] Let $L$ and $L'$ be links. The following [(1)]{} and [(2)]{} hold: 1. Let $n$ be a positive integer. If $L$ and $L'$ are $(2n+{\rm lh})$-equivalent, then $\Delta_{L}^{(n)}(I)=\Delta_{L'}^{(n)}(I)$ and ${\overline{\mu}}_{L}^{(n)}(I)={\overline{\mu}}_{L'}^{(n)}(I)$ for any non-repeated sequence $I$. 2. Let $p$ be a prime number. If $L$ and $L'$ are $2p$-move equivalent, then $\Delta_{L}^{(p)}(I)=\Delta_{L'}^{(p)}(I)$ and ${\overline{\mu}}_{L}^{(p)}(I)={\overline{\mu}}_{L'}^{(p)}(I)$ for any sequence $I$ of length $\leq p$. Let $\sigma$ be a string link whose closure is $L$. By Lemma \[lem-2nlh\], there exists a string link $\sigma'$ whose closure is $L'$ such that $\sigma'$ is $(2n+{\rm lh})$-equivalent to $\sigma$. By Theorem \[th-invariance\], for any non-repeated sequence $I$, $\mu_{\sigma}(I)\equiv\mu_{\sigma'}(I)\pmod{n}$. Therefore, $$\Delta_{L}^{(n)}(I)=\gcd{\{\Delta_{\sigma}(I),n\}}=\gcd{\{\Delta_{\sigma'}(I),n\}}=\Delta_{L'}^{(n)}(I).$$ Since $\Delta_{L}^{(n)}(I)$ divides $n$, it follows that $$\mu_{\sigma}(I)\equiv\mu_{\sigma'}(I)\pmod{\Delta_{L}^{(n)}(I)}.$$ This completes the proof of Proposition \[prop-inv-link\] (1). Using Proposition \[prop-inv-prime\] instead of Theorem \[th-invariance\], Proposition \[prop-inv-link\] (2) is similarly shown. Proposition \[prop-inv-link\] (1) together with Theorem \[th-sl\] implies the following. 
\[th-link\] Let $n$ be a positive integer, and let $L$ and $L'$ be $m$-component links. Assume that $\Delta_{L}^{(n)}(I)=\Delta_{L'}^{(n)}(I)=n$ for any non-repeated sequence $I$ of length $m$. Then, $L$ and $L'$ are $(2n+{\rm lh})$-equivalent if and only if ${\overline{\mu}}_{L}^{(n)}(I)={\overline{\mu}}_{L'}^{(n)}(I)$ for any non-repeated sequence $I$ of length $m$. Since the “only if” part directly follows from Proposition \[prop-inv-link\] (1), it is enough to show the “if” part. Let $\sigma$ and $\sigma'$ be string links whose closures are $L$ and $L'$, respectively. Since $\Delta_{L}^{(n)}(I)=\Delta_{L'}^{(n)}(I)=n$ for any non-repeated sequence $I$ of length $m$, it follows that $$\mu_{\sigma}(J)\equiv\mu_{\sigma'}(J)\equiv0\pmod{n}$$ for any non-repeated sequence $J$ of length $< m$. Furthermore, since ${\overline{\mu}}_{L}^{(n)}(I)={\overline{\mu}}_{L'}^{(n)}(I)$ for any non-repeated sequence $I$ of length $m$, we have $$\mu_{\sigma}(I)\equiv\mu_{\sigma'}(I)\pmod{n}.$$ Therefore, $\sigma$ and $\sigma'$ are $(2n+{\rm lh})$-equivalent by Theorem \[th-sl\]. This completes the proof. As a consequence of Theorem \[th-link\], we have the following. \[cor-trivial\] Let $n$ be a positive integer. An $m$-component link $L$ is $(2n+{\rm lh})$-equivalent to the trivial link if and only if $\Delta_{L}^{(n)}(I)=n$ and ${\overline{\mu}}_{L}^{(n)}(I)=0$ for any non-repeated sequence $I$ of length $m$. This follows from Proposition \[prop-inv-link\] (1) and Theorem \[th-link\]. [99]{} B. Audoux, P. Bellingeri, J.-B. Meilhan, E. Wagner, [*Homotopy classification of ribbon tubes and welded string links*]{}, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) [**17**]{} (2017) 713–761. B. Audoux, J.-B. Meilhan, E. Wagner, [*On codimension two embeddings up to link-homotopy*]{}, J. Topol. [**10**]{} (2017) 1107–1123. A. J. Casson, [*Link cobordism and Milnor’s invariant*]{}, Bull. London Math. Soc. [**7**]{} (1975) 39–40. K. T. Chen, [*Commutator calculus and link invariants*]{}, Proc. 
Amer. Math. Soc. [**3**]{} (1952) 44–55. J. Conant, R. Schneiderman, P. Teichner, [*Higher-order intersections in low-dimensional topology*]{}, Proc. Natl. Acad. Sci. USA [**108**]{} (2011) 8131–8138. J. Conant, R. Schneiderman, P. Teichner, [*Whitney Tower concordance of classical links*]{}, Geom. Topol. [**16**]{} (2012) 1419–1479. J. Conant, R. Schneiderman, P. Teichner, [*Milnor invariants and twisted Whitney towers*]{}, J. Topol. [**7**]{} (2014) 187–224. M. K. Dabkowski, J. H. Przytycki, [*Burnside obstructions to the Montesinos-Nakanishi $3$-move conjecture*]{}, Geom. Topol. [**6**]{} (2002) 355–360. M. K. Dabkowski, J. H. Przytycki, [*Unexpected connections between Burnside groups and knot theory*]{}, Proc. Natl. Acad. Sci. USA [**101**]{} (2004) 17357–17360. R. H. Fox, [*Congruence classes of knots*]{}, Osaka Math. J. [**10**]{} (1958) 37–41. R. Fenn, [*Techniques of geometric topology*]{}, London Mathematical Society Lecture Note Series [**57**]{} (Cambridge University Press, Cambridge, 1983). T. Fleming, A. Yasuhara, [*Milnor’s invariants and self $C_{k}$-equivalence*]{}, Proc. Amer. Math. Soc. [**137**]{} (2009) 761–770. N. Habegger, X.-S. Lin, [*The classification of links up to link-homotopy*]{}, J. Amer. Math. Soc. [**3**]{} (1990) 389–419. K. Habiro, [*Claspers and finite type invariants of links*]{}, Geom. Topol. [**4**]{} (2000) 1–83. S. Kinoshita, [*On Wendt’s theorem of knots*]{}, Osaka Math. J. [**9**]{} (1957) 61–66. S. Kinoshita, [*On the distribution of Alexander polynomials of alternating knots and links*]{}, Proc. Amer. Math. Soc. [**79**]{} (1980) 644–648. M. Lackenby, [*Fox’s congruence classes and the quantum-$SU(2)$ invariants of links in $3$-manifolds*]{}, Comment. Math. Helv. [**71**]{} (1996) 664–677. J. P. Levine, [*An approach to homotopy classification of links*]{}, Trans. Amer. Math. Soc. [**306**]{} (1988) 361–387. J.-B. Meilhan, A. Yasuhara, [*Characterization of finite type string link invariants of degree $<5$*]{}, Math. 
Proc. Cambridge Philos. Soc. [**148**]{} (2010) 439–472. J.-B. Meilhan, A. Yasuhara, [*Milnor invariants and the HOMFLYPT polynomial*]{}, Geom. Topol. [**16**]{} (2012) 889–917. J.-B. Meilhan, A. Yasuhara, [*Arrow calculus for welded and classical links*]{}, Algebr. Geom. Topol. [**19**]{} (2019) 397–456. J. Milnor, [*Link groups*]{}, Ann. of Math. (2) [**59**]{} (1954) 177–195. J. Milnor, [*Isotopy of links*]{}, from: Algebraic geometry and topology; A Symposium in Honor of S. Lefschetz (Princeton University Press, Princeton, NJ, 1957) 280–306. Y. Nakanishi, [*On Fox’s congruence classes of knots. II*]{}, Osaka J. Math. [**27**]{} (1990) 207–215. Y. Nakanishi, S. Suzuki, [*On Fox’s congruence classes of knots*]{}, Osaka J. Math. [**24**]{} (1987) 217–225. J. H. Przytycki, [*$t_{k}$ moves on links*]{}, from: Braids (Santa Cruz, CA, 1986), Contemp. Math. [**78**]{} (Amer. Math. Soc., Providence, RI, 1988) 615–656. J. Stallings, [*Homology and central series of groups*]{}, J. Algebra [**2**]{} (1965) 170–181. Y. Takabatake, T. Kuboyama, H. Sakamoto, [*stringcmp: Faster calculation for Milnor invariant*]{}, available at https://code.google.com/archive/p/stringcmp/ A. Yasuhara, [*Classification of string links up to self delta-moves and concordance*]{}, Algebr. Geom. Topol. [**9**]{} (2009) 265–275. A. Yasuhara, [*Self delta-equivalence for links whose Milnor’s isotopy invariants vanish*]{}, Trans. Amer. Math. Soc. [**361**]{} (2009) 4721–4749. [^1]: [^2]: The second author was supported by a Grant-in-Aid for JSPS Research Fellow (\#17J08186) of the Japan Society for the Promotion of Science. [^3]: The third author was partially supported by a Grant-in-Aid for Scientific Research (C) (\#17K05264) of the Japan Society for the Promotion of Science and a Waseda University Grant for Special Research Projects (\#2018S-077). 
[^4]: Using the technique of “grammar compression”, Takabatake, Kuboyama and Sakamoto [@TKS] wrote a computer program in C++, based on Milnor’s algorithm, which can compute $\mu$-invariants for sequences of length up to at least $16$. [^5]: Also referred to as the [*Sutton Hoo link*]{} because of a cauldron chain from the Sutton Hoo exhibited in the British Museum [@F page 222].
--- abstract: 'We show that the supersymmetric Wilson loops in the IIB matrix model give a transition operator from reduced supersymmetric Yang-Mills theory to supersymmetric space-time theory. In comparison with the Green-Schwarz superstring we identify the supersymmetric Wilson loops with the asymptotic states of the IIB superstring. It is pointed out that the supersymmetry transformation law of the Wilson loops is the inverse of that for the vertex operators of massless modes in the $U(N)$ open superstring with Dirichlet boundary condition.' --- hep-th/9706187\ [^1] [*National Laboratory for High Energy Physics (KEK),*]{}\ [*Tsukuba, Ibaraki 305, Japan*]{} Introduction ============ For a long time it has been hoped that large $N$ gauge theory [@t] will give a nonperturbative definition of string theory. In the beginning of the 1990s the 2D string was solved exactly in terms of matrix models [@bk] and many works have been carried out in this field [@bw]. The identification with the continuum theory was done in the direct calculations of amplitudes [@gl] and then was completed using the $W_{\infty}$ symmetry [@ha]. Recently more realistic matrix models called M(atrix) theory [@bfss] and IIB matrix theory [@ikkt] (and also see \[8 – 15\]) have been proposed, which are described in terms of the D-particles and the D-instantons [@p; @gg]. In this paper we study the IIB matrix model, which is hoped to give the type IIB superstring. In this case, unlike the 2D string, oscillation modes with continuous momenta will arise. The aim of this paper is to clarify how the oscillation modes arise in the IIB matrix theory. We study this issue using supersymmetry. The Wilson loops will describe the operators which create and annihilate strings [@ikkt; @fkkt]. We here introduce supersymmetric Wilson loops in the IIB matrix theory and identify them with the asymptotic states of superstring.
To carry out the program we first study the supersymmetry transformation law of the wave function of the IIB superstring, which is constructed by acting with the vertex operator of the D-instanton [@hb] on the boundary state [@gg]. We then show that the supersymmetric Wilson loop has exactly the same property as the superstring state, with the supersymmetry transformation of the reduced super Yang-Mills theory acting on it as the counterpart of that of the world-sheet theory. The state of IIB superstring ============================ We first construct the eigenstate of the Hamiltonian using the Green-Schwarz superstring quantized in the light-cone gauge [@gsb; @gsw] and then discuss its supersymmetry transformation law. Let us consider the cylinder frame with a Dirichlet boundary at $\t=0$. The boundary state is defined by the conditions X\^()|B> = 0  , S\^[+a]{}()|B> = 0  , where $S^{\pm a}= \fr{1}{\sq 2}( S^a \pm i\tS^a )$ and $\mu = (+,-, I) \quad I=1, \cdots ,8$. This is the D-instanton state discussed in [@gg; @hb]. In the following we mainly use the notations and conventions of ref. [@hb]. The conditions can be solved easily and we obtain |B> = |B\_0>  , where $|B_0>=|I>|I> -i|\da>| \da>$. The mode expansions of the string coordinates are defined by && X\^I (,) = x\^I + p\^I + \_[n 0]{} ( \^I\_n \^[-2in(-)]{} +\^I\_n \^[-2in(+)]{} )  ,\ && S\^a(,) = \_n S\^a\_n \^[-2in(-)]{}  ,\ && \^a(,) = \_n \^a\_n \^[-2in(+)]{}  . The vertex operator of a (single) D-instanton [@hb] is defined in terms of the broken currents $\tpd X^{\mu}$ and $S^-$ for translational invariance and supersymmetry in the form V (x\^(),() ) = \^\_0 { x\_() - i [|]{}()\_ } X\^  , where $ \theta = (\pconst 2i \sq{\momp})^{-1}\Gm^+ S^-$. We here consider the $\s$-dependent functions $x^{\mu}(\s)$ and $\eta(\s)$. In the light-cone gauge defined by $x^+(\s)= x^+ = \t$ and $\Gm^+ \eta=0$, the vertex operator reduces to the simple form V (x\^,) = \^\_0 { x\^I ()X\^I - x\^-() + i \^a()S\^[-a]{} }  .
Let us consider the Wilson loop operator $w =\exp(-iV)$ and act with it on the boundary state. Using the Baker-Campbell-Hausdorff formula we can obtain the following state: && |x,> = w |B>\ &&   = (-ix\^I\_0 p\^I +ix\^-\_0 +\^a\_0 S\^[-a]{}\_0 )\ &&\ && |B\_0>  , where the mode expansions of $x$ and $\eta$ are defined by x\^I()=\_n x\^I\_n \^[2in]{}  , \^a()=\_n \^a\_n \^[2in]{}  . This state satisfies the boundary conditions $X^I(\s)|x,\eta>=x^I(\s)|x,\eta>$ and $S^{+a}(\s)|x,\eta>=-2^{-\fr{1}{4}} \hbox{$\sq{\momp}$}\eta^a(\s)|x,\eta>$. From the expression of the vertex operator, the operators $\tpd X^I(\s)$ and $S^{-a}(\s)$ are described in terms of the functional derivatives with respect to $x^I(\s)$ and $\eta^a (\s)$, respectively. The light-cone Hamiltonian is given by H = \^\_0  . Therefore the state $|x,\eta,\t> =\e^{i\t H}|x,\eta>$ satisfies the Schrödinger equation -i |x,,> = H |x,,> = h |x,,> where && h = \^\_0 and the wave function is defined by $\Phi^{\star}(x,\eta,\t)=<\Phi |x,\eta,\t>$. At the end of this section we discuss the supersymmetry transformation law of the state. The supersymmetry transformation of the vertex operator with respect to the unbroken supercharge $Q^+$ is translated into the supersymmetry transformation of $x^{\mu}$ and $\eta$. Using the equations && \^[(+)]{}\_(X\^I)= \^\^I\_[a]{}S\^[-a]{}  ,\ && \^[(+)]{}\_ S\^[-a]{}= \^a +\^\^I\_[a]{}X\^I  , where $\hdelta^{(+)}_{\a}=[\a^a Q^{+a} +\a^{\da} Q^{+\da}, \quad]$, we obtain the equation \^[(+)]{}\_ V(x\^, ) = V(\_x\^, \_)  , where && \_x\^I() =-i \^ \^I\_[a]{}\^a()  ,\ && \_x\^-\_0 =i\^a \^a\_0  , \[eq:transf-c\]\ && \_\^a () = -\^\^I\_[a]{}x\^I()  . Using this we obtain $\hdelta^{(+)} w =\bdelta w$. Thus the Wilson loop gives the transition operator from the world-sheet theory to the space-time theory. Supersymmetric Wilson loops in IIB matrix model =============================================== The world-sheet is regulated in the large $N$ picture.
The reduced supersymmetric Yang-Mills theory plays a fundamental role in forming the world-sheet. The identification of gauge fields with space-time coordinates gives a non-perturbative definition of the type IIB superstring [@ikkt]. The states in the IIB matrix model should have the same supersymmetry transformation law as the continuum theory. In this section we show that the supersymmetric Wilson loop has exactly the expected property, so we identify it with the IIB superstring state (in momentum space). The supersymmetric Wilson loop operator we introduce here is w(C) = tr \^M\_[j=1]{} U\_j  , U\_j = \^[-iV\_j]{}  , where $V_j$ is defined using the superfield in the form $V_{j} = k^{\mu}_j {\cal A}_{\mu}(\lam_j)$, where ${\cal A}_{\mu}(\lam_j) = \e^{{\bar \lam}_j G} A_{\mu}\e^{-{\bar \lam}_j G}$. $G$ is the generator of the supersymmetric Yang-Mills transformation \_ A\_ = i[|]{}\_ , \_ = -\[A\_ , A\_\]\^ . Thus we define && V\_j = k\^\_j ( A\_ -i[|]{} \_ \_j + \[A\_ ,A\_\] [|]{}\_j \_\^ \_j\ && - \[ A\_ ,[|]{}\]\_\_j [|]{}\_j \_\^\_j + )  . In the following we work in the light-cone gauge [^2] k\^+\_j =k\^+  , \^+ \_j =0  . Then we can see that the following supersymmetry transformation is realized: \_ w(C) = \_ w(C)  , where && \_k\^I\_j = 2i \^ \^I\_[a]{}\^a\_j  ,\ && \_ k\^+ =0  , \[eq:transf-mom\]\ && \_ \^a\_j = \^a + k\^I\_j \^I\_[a]{} \^ and $\Delta \lam^a_j = \fr{1}{\eps}(\lam^a_{j+1}-\lam^a_j )$. The transformation of $k^-_j$ is defined through the equation k\^-\_j = ( k\^I\_j )\^2 - i\^a\_j \^a\_j \[eq:constraint\] in the form $\bdelta_{\a} k^-_j =-i\twosq \a^a \Delta \lam^a_j -i \Delta (\fr{1}{k^+} k^I_j \lam^a_j \gm^I_{a\da} \a^{\da} )$. In the continuum limit $k^{\mu}_j \rightarrow k^{\mu}(\s)$ and $\Delta \lam^a_j \rightarrow \spd \lam^a (\s)$, this just corresponds to the supersymmetry transformation in momentum space derived in the continuum theory (\[eq:transf-c\]) [^3].
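As a purely illustrative toy (not the matrix-model computation itself), the structure of the discretised Wilson loop $w(C)=\mathrm{tr}\,\prod_j U_j$ with $U_j=e^{-iV_j}$ can be evaluated numerically for the bosonic truncation $V_j = k_j^\mu A_\mu$; the fermionic terms in $V_j$ are dropped, and random Hermitian matrices stand in for the gauge fields (both are assumptions made only for this sketch):

```python
import numpy as np

# Toy sketch: discretised Wilson loop w(C) = tr prod_j exp(-i V_j) with the
# bosonic truncation V_j = k_j^mu A_mu.  The A_mu are random Hermitian
# matrices standing in for the matrix-model gauge fields (illustration only;
# fermionic terms and the Yang-Mills action are omitted).
rng = np.random.default_rng(0)
N, M, D = 8, 16, 10          # matrix size, number of loop points, dimensions

A = []
for _ in range(D):
    H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    A.append((H + H.conj().T) / 2)           # Hermitian A_mu

def expm_i_herm(V):
    """exp(-i V) for Hermitian V via eigendecomposition (result is unitary)."""
    w, U = np.linalg.eigh(V)
    return (U * np.exp(-1j * w)) @ U.conj().T

k = rng.standard_normal((M, D))
k -= k.mean(axis=0)                          # crude closed-loop condition sum_j k_j = 0

def wilson_loop(k, A):
    W = np.eye(len(A[0]), dtype=complex)
    for kj in k:
        V = sum(kj[mu] * A[mu] for mu in range(len(A)))   # V_j = k_j^mu A_mu
        W = W @ expm_i_herm(V)
    return np.trace(W) / len(A[0])

w = wilson_loop(k, A)        # |w| <= 1 since each U_j is unitary
```

Since each factor $U_j$ is unitary, the normalised trace is bounded by one, which is a useful sanity check on any such discretisation.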
In this case the supersymmetric Yang-Mills transformation $\delta$ just corresponds to the variation of world-sheet theory $\hdelta^{(+)}$. The constraint (\[eq:constraint\]) corresponds to the boundary condition $\tpd X^- |B> = \fr{1}{2\momp}[(\tpd X^I)^2 -i S^{-a}\spd S^{-a}] |B> $. The above transformation law can be proved in the following. In the light-cone gauge the matrix $V_j$ is described in $SO(8)$ notation as V\_j = V\_j\^0 +V\_j\^1 + V\_j\^2 + V\_j\^3 +  , where && V\_j\^0 = k\^I\_j A\^I -k\^+ A\^- -k\^-\_j A\^+  ,\ && V\_j\^1 = -ik\^+ \^a\_j \^a -ik\^I\_j \^a\_j \^I\_[a]{} \^  ,\ && V\_j\^2 = \[A\^I, A\^J\] \^a\_j \^[IJ]{}\_[ab]{}\^b\_j - k\^I\_j \[A\^+ ,A\^J\] \^a\_j (\^I \^J)\_[ab]{} \^b\_j  ,\ && V\_j\^3 = - k\^+ \[A\^I , \^\^J\_[a]{} \^a\_j \] \^b\_j \^[IJ]{}\_[bc]{} \^c\_j + k\^I\_j \[A\^+ , \^\^J\_[a]{} \^a\_j \] \^b\_j (\^I \^J)\_[bc]{} \^c\_j  . The supersymmetry transformation is described in the $SO(8)$ notation as && \_ A\^I = -i\^ \^I\_[a]{}\^a -i\^a \^I\_[a]{}\^  ,\ && \_ A\^+ = i \^ \^  ,\ && \_ A\^- = i \^a \^a  ,\ && \_ \^a = \[A\^I , A\^J\] \^[IJ]{}\_[ab]{}\^b - i\^I\_[a]{}\^ + i\[A\^+ , A\^-\] \^a  ,\ && \_ \^ = \[A\^I , A\^J\] \^[IJ]{}\_\^ - i\^I\_[a]{}\^a - i\[A\^+ , A\^-\] \^  . Let us first consider the variation of $V^0_j$ under the supersymmetry transformation $\delta$. We can easily obtain the following equation: \_ V\^0\_j = \_(V\^0\_j + V\^1\_j ) + f\^0\_j  , where f\^0\_j = -2i \^\^I\_[a]{} \^a\_j A\^I - i\^a \^a\_j A\^+ + k\^I\_j \^ \^I\_[a]{}\^a\_j A\^+  . In the next step we obtain \_ (V\^0\_j + V\^1\_j) = \_(V\^0\_j + V\^1\_j + V\^2\_j ) + Y\^0\_j + (f\^0\_j +f\^1\_j)  , where Y\^0\_j = i\[f\^0\_j ,V\^0\_j\] \[eq:y-term\] and f\^1\_j = i\[A\^+ ,A\^J\] \^\^I\_[a]{}\^a\_j \^b\_j (\^I \^J)\_[bc]{}\^c\_j  . In general we will obtain the equation \_ V\_j = \_V\_j + Y\_j + f\_j  , \[eq:general-eq\] where $Y_j =Y^0_j + Y^1_j + \cdots$ and $f_j = f^0_j + f^1_j + \cdots$. 
We will see that the extra term $Y_j$ is canceled in the Wilson loop. Let us consider the supersymmetry transformation of the Wilson loop. Using the relation (\[eq:general-eq\]) we obtain \_ w(C) & =& - tr \^M\_[l=1]{} ( \^[l-1]{}\_[j=1]{}U\_j ) i\_ V\_l ( \^M\_[j=l]{}U\_j )\ & =& - tr \^M\_[l=1]{} ( \^[l-1]{}\_[j=1]{}U\_j ) i( \_V\_l +Y\_l +f\_l ) ( \^M\_[j=l]{}U\_j )  . Noting that $\Delta f_l = \fr{1}{\eps}(f_{l+1}-f_l)$ and that $f_l$ is a matrix such that $\fr{1}{\eps}f_l U_{l-1} = U_{l-1} (\fr{1}{\eps}f_l - i[f_l , V_{l-1}] +o(\eps) )$, we get the following expression: \_ w(C) = - tr \^M\_[l=1]{} ( \^[l-1]{}\_[j=1]{}U\_j ) i( \_V\_l +Y\_l - i\[f\_l ,V\_[l-1]{}\] ) ( \^M\_[j=l]{}U\_j )  . As shown in the above calculation (\[eq:y-term\]), $Y_l$ cancels $i[f_l , V_{l-1}]$ iteratively in the continuum limit. This cancellation is analogous to the cancellation by contact terms in the open superstring with Chan-Paton factors [@hb; @gs]. Thus we can prove the supersymmetry transformation law of the Wilson loop. This is the inverse picture of the supersymmetry transformation law of the vertex operator for massless modes in the $U(N)$ Dirichlet open superstring derived in [@hb], where the roles of the world-sheet theory and the space-time theory are exchanged. Supersymmetric Yang-Mills theory now plays the role of the world-sheet theory, not of the space-time theory. The S-matrix ============ In the previous section we discussed the supersymmetric Wilson loop in IIB matrix theory. We proposed that, from the point of view of symmetry, it corresponds to the asymptotic state of the IIB superstring. The correlation function of the Wilson loops is defined by <w(k\_1,\_1) w(k\_L,\_L)> = dA dw(C\_1) w(C\_L) (-S) where $S$ is the reduced supersymmetric Yang-Mills action. The continuum limit is defined by $M\eps =1$ and $g^2 N =1$, where $g$ is the gauge coupling, which behaves as $g \sim \eps$. The momentum conservation comes from the integration over the $U(1)$ part of the $U(N)$ matrices.
The $U(1)$ part of the $A^-$ integral gives the delta function $\delta (k^+_1 + \cdots + k^+_L)$ and the others give $\delta ( \eps\sum^M_{j=1}k^{\mu}_{1j} + \cdots + \eps\sum^M_{j=1}k^{\mu}_{Lj})$, where $\mu=-,I$. In the continuum limit these give the momentum conservation of the zero-modes, $k_0^{\mu}=\int \fr{d\s}{\pi} k^{\mu}(\s)$. Thus the $S$-matrix is defined by attaching the wave functions of the oscillation modes $\Phi(k,\lam)$ in the form: S\_[i f]{} = \^L\_[q=1]{} \[D\^k\^I\_q\]\[D\^a\_q\] (k\_q,\_q) < w(k\_1,\_1) w(k\_L,\_L)> where $[D^{\pp}k^I ][D \lam^a]$ means the integration over transverse oscillation modes and the prime stands for the exclusion of the zero-modes. Incoming states (outgoing states) are defined by the Wilson loops with $k^+$ positive (negative). Finally we briefly comment on the Schwinger-Dyson equation of the following type: dA d \^M\_[l=1]{}tr ( \^[l-1]{}\_[j=1]{} U\_j   t\^  \^M\_[j=l]{} U\_j ) (-S) =0  . This is likely to correspond to the equation $<0| H |\Phi>=0$ in the continuum theory. Here the effects of the terms corresponding to $(\spd x^I)^2$ and $\eta^a \spd \eta^a$ in the Hamiltonian will be included in the derivative of the action with respect to $A^+$. This is similar to the picture of the Hartle-Hawking wave function. [**Acknowledgements**]{} I wish to thank my colleagues at KEK, especially N. Ishibashi, H. Kawai and A. Tsuchiya, for discussions. [99]{} G. ’t Hooft, Nucl. Phys. [**B72**]{} (1974) 461. E. Brezin and V. Kazakov, Phys. Lett. [**B236**]{} (1990) 144; M. Douglas and S. Shenker, Nucl. Phys. [**B335**]{} (1990) 635; D. Gross and A. Migdal, Phys. Rev. Lett. [**64**]{} (1990) 127. E. Brezin and S. R. Wadia, [*The large N expansion in quantum field theory and statistical physics: from spin systems to two-dimensional gravity*]{}, World Scientific (1993). M. Goulian and M. Li, Phys. Rev. Lett. [**66**]{} (1991) 2051; Y. Kitazawa, Phys. Lett. [**B265**]{} (1991) 262; P. Di Francesco and D. Kutasov, Phys. Lett.
[**B261**]{} (1991) 385; V. Dotsenko, Mod. Phys. Lett. [**A6**]{} (1991) 3601; K. Aoki and E. D’Hoker, Mod. Phys. Lett. [**A7**]{} (1992) 235. K. Hamada, Phys. Lett. [**B324**]{} (1994) 141; [**B462**]{} (1996) 192. T. Banks, W. Fischler, S. H. Shenker and L. Susskind, Phys. Rev. [**D55**]{} (1997) 5112, hep-th/9610043. N. Ishibashi, H. Kawai, Y. Kitazawa and A. Tsuchiya, [*A Large-N Reduced Model as Superstring*]{}, hep-th/9612115. V. Periwal, Phys. Rev. [**D55**]{} (1997) 1711, hep-th/9611103. M. Li, [*Strings from IIB Matrices*]{}, hep-th/961222. A. Tseytlin, [*On Non-abelian Generalization of Born-Infeld Action in String Theory*]{}, hep-th/9701125; I. Chepelev and A. Tseytlin, [*Interactions of Type IIB D-branes from D-instanton Matrix Model*]{}, hep-th/9705120. I. Chepelev, Y. Makeenko and K. Zarembo, [*Properties of D-branes in Matrix Model of IIB Superstring*]{}, hep-th/9701151. A. Fayyazuddin and D. J. Smith, [*P-Brane Solutions in IKKT IIB Matrix Theory*]{}, hep-th/9701168. A. Fayyazuddin, Y. Makeenko, P. Olesen, D. J. Smith and K. Zarembo, [*Towards a Non-perturbative Formulation of IIB Superstrings by Matrix Models*]{}, hep-th/9703038. T. Yoneya, [*Schild Action and Space-Time Uncertainty Principle in String Theory*]{}, hep-th/9703078. M. Fukuma, H. Kawai, Y. Kitazawa and A. Tsuchiya, [*String Field Theory from IIB Matrix Model*]{}, hep-th/9705128. J. Polchinski, [*TASI Lectures on D-branes*]{}, hep-th/9611050. M. B. Green and M. Gutperle, Nucl. Phys. [**B476**]{} (1996) 484, hep-th/9604091. K. Hamada, [*Vertex Operators for Super Yang-Mills and Multi D-branes in Green-Schwarz Superstring*]{}, hep-th/9612234, to appear in Nucl. Phys. B. M. B. Green, J. H. Schwarz and L. Brink, Nucl. Phys. [**B219**]{} (1983) 437. M. B. Green, J. H. Schwarz and E. Witten, [*Superstring theory*]{} (CUP, 1987). M. B. Green and N. Seiberg, Nucl. Phys. [**B299**]{} (1988) 559. 
[^1]: E-mail address: hamada@theory.kek.jp [^2]: The covariant description of the supersymmetry does not work well, since the covariant counterpart of the constraint equation (\[eq:constraint\]) is not known. [^3]: The transition function from coordinate space to momentum space is given by $w_f = \exp \{ i (-x^+ k^-_0 -x^-_0 k^+ +\int \fr{d\s}{\pi} x^I (\s)k^I(\s) +i\twosq k^+ \int \fr{d\s}{\pi} \eta^a(\s)\lam^a(\s) ) \}$ such that $\bdelta^{(c)} w_f=\bdelta^{(m)} w_f$, where $\bdelta^{(c)}$ and $\bdelta^{(m)}$ are defined by (\[eq:transf-c\]) and (\[eq:transf-mom\]), respectively.
--- abstract: 'Low-$Q^2$ photons do not resolve partons in the proton, which gives problems when applying the deep inelastic scattering formalism, such as an unphysical, negative gluon density extracted from data. Considering instead hadronic fluctuations of the photon, we show that the generalised vector meson dominance model (GVDM) gives a good description of the measured cross section at low $Q^2$, [*i.e.*]{} reproduces $F_2(x,Q^2)$, using only a few parameters with essentially known values. Combining GVDM and parton density functions makes it possible to obtain a good description of $F_2$ data over the whole range of $x$ and $Q^2$.' --- TSL/ISV-2004-0277\ June 2004\ [**Interpretation of electron-proton scattering at low $Q^2$**]{}\ [[**J. Alwall$^a\,$**]{}[^1], [**G. Ingelman$^{a,b}\,$**]{}[^2]]{}\ \ [$^b$ Deutsches Elektronen-Synchrotron DESY, D-22603 Hamburg, Germany]{}\ Introduction ============ Experimental measurements on electron-proton ($ep$, and also $\mu p$) scattering are usually interpreted in terms of the theoretical formalism for deep inelastic scattering (DIS). The differential cross section is then expressed in terms of proton structure functions given by the density functions for different partons, [*i.e.*]{} $q(x,Q^2)$ and $g(x,Q^2)$ for quarks and gluons carrying a fraction $x$ of the proton’s energy-momentum when probed with the scale $Q^2$. The structure function $F_2$, which gives the dominant contribution to the cross section, is in leading order given by $F_2(x,Q^2) = \sum_q e_q^2 \left( xq(x,Q^2) + x\bar{q}(x,Q^2)\right)$ while the gluon density enters indirectly via the logarithmic $Q^2$ dependence of perturbative QCD. This formalism has also been applied to $F_2$ data at low photon virtuality $Q^2$, where the exchanged photon is not far from being on-shell.
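For illustration, the leading-order relation $F_2 = \sum_q e_q^2 (xq + x\bar{q})$ can be sketched numerically. The valence and sea shapes below are invented for the example and are not fitted parton densities:

```python
# Toy sketch of F2(x) = sum_q e_q^2 [x q(x) + x qbar(x)] at leading order.
# The valence/sea shapes are illustrative assumptions, not a PDF fit.

def x_valence(x, norm, a, b):
    return norm * x**a * (1 - x)**b          # x * q_v(x)

def x_sea(x, norm=0.1, lam=0.2, b=7.0):
    return norm * x**(-lam) * (1 - x)**b     # x * qbar(x), rising at small x

EQ2 = {"u": 4/9, "d": 1/9, "s": 1/9}         # squared quark charges

def f2_lo(x):
    xu = x_valence(x, 2.0, 0.5, 3.0) + x_sea(x)   # x(u_v + u_sea)
    xd = x_valence(x, 1.0, 0.5, 4.0) + x_sea(x)
    xs = x_sea(x)
    # antiquark densities: sea only
    return (EQ2["u"] * (xu + x_sea(x))
            + EQ2["d"] * (xd + x_sea(x))
            + EQ2["s"] * (xs + x_sea(x)))
```

With any small-$x$-rising sea, $F_2$ grows towards small $x$ and vanishes as $x\to 1$, the qualitative behaviour seen in the data discussed below.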
Parametrising such $F_2$ data in terms of quark and gluon density functions results in gluon distributions that tend to be negative for small $x$ at small $Q^2$ ([*e.g.*]{} $x\sim 10^{-4}$, $Q^2\sim 2\, \rm{GeV}^2$) [@negative-gluon; @mrst]. The reason for this is that the DGLAP evolution, driven primarily by the gluon at small $x$, otherwise gives too large parton densities and thereby a poor fit to $F_2$ in the genuine DIS region at large $Q^2$. Although one may argue that the gluon density is not a directly observable quantity and hence might be negative, it certainly is in conflict with the interpretation as the probability of finding a gluon with momentum fraction $x$ in the proton. In particular, such a gluon distribution may be just an effective description substituting for a more proper theoretical understanding. It need not have the same universality as proper parton density functions, thus giving incorrect results when applied to other interactions. For example, differences in the predicted Higgs production cross section (dominated by $gg\rightarrow H$) at the Tevatron and LHC arise depending on whether the gluon parametrisation is forced to be positive definite or allowed to be negative at small $x$ [@mrst]. In this Letter, we argue that the root of the problem is the application of the formalism for DIS also in the low-$Q^2$ region, where the momentum transfer is not large enough that the parton structure of the proton is clearly resolved. The smallest distance that can be resolved is basically given by the momentum transfer of the exchanged photon through $d=0.2/\sqrt{Q^2}$, where $d$ is in fermi if $Q^2$ is in GeV$^2$. This indicates that partons are resolved only for $Q^2\gtrsim 1\, \rm{GeV}^2$. For $Q^2\lesssim 1\, \rm{GeV}^2$, there is no hard scale involved and a parton basis for the description is not justified. Instead, the interaction is here of a soft kind between the nearly on-shell photon and the proton.
The cross section is then dominated by the process where the photon fluctuates into a virtual vector meson state which then interacts with the proton through a strong interaction. This is the essence of the vector meson dominance model (VDM), for a review see [@VDM]. In the following we use the original generalised vector meson dominance model (GVDM) [@GVDM] for $ep$ scattering at low $Q^2$. We show that it gives a good description of the recent HERA data extending the $Q^2$ region to very low values, which are of particular importance for the GVDM approach (for a review of GVDM models, see [@donnachie-shaw]). Furthermore, the GVDM model based on hadronic fluctuations of the photon is natural to combine with our model [@EI] for hadronic fluctuations of the target proton, which has been used to derive the non-perturbative $x$-shape of the proton’s parton density functions. Combining parton density functions including DGLAP evolution [@DGLAP] with GVDM gives a good description of data over the full $Q^2$ region. This extends earlier work [@badelek; @sch-sp] on applying GVDM and is complementary to theoretical developments where GVDM is connected with a QCD dipole approach [@Nikolaev:1990ja; @Frankfurt:1997zk; @Cvetic:2001ie; @Kuroda:2003np]. Vector meson dominance model for $ep$ at low $Q^2$ ================================================== The occurrence of quantum fluctuations implies that a photon may also appear as a vector meson such that the quantum state should be expressed as $$\label{eq:photon-fluctuation} |\gamma\rangle = C_0|\gamma_0\rangle + \sum_V \frac{e}{f_V}|V\rangle + \int_{m_0}dm (\cdots)$$ The first vector meson dominance model included only the sum over the vector meson states $V = \rho^0, \omega, \phi \ldots$, whereas the generalised model [@GVDM] also includes the integral over a continuous mass spectrum (not written out explicitly in eq. (\[eq:photon-fluctuation\])).
This hadronic fluctuation of the photon then interacts with the target proton with a normal hadronic cross section dominated by soft processes without any hard scale involved. Total cross sections for different beam hadrons at different energies are well measured and given by standard parametrisations to be discussed below. The overall cross section is then a convolution of the photon-to-meson fluctuation probability with the meson propagator and the meson-proton cross section. In $ep$ scattering[^3] data is given in terms of the proton structure function $F_2$ extracted from the differential cross section $d\sigma/dxdQ^2$ for electromagnetic interactions (one-photon exchange), since the weak interactions are completely negligible for $Q^2\ll m_{Z,W}^2$. The structure function $F_2$ can be expressed as [@hand; @VDM] $$\label{F2-sigmaTL} F_2(x, Q^2) = \frac{Q^2 (1-x)}{4\pi^2\alpha \left( 1+4x^2m_p^2/Q^2 \right)} [\sigma_T(x, Q^2)+\sigma_L(x, Q^2)]$$ in terms of the total cross sections $\sigma_T$ and $\sigma_L$ for transverse and longitudinal virtual photons. These cross sections are obtained by squaring the amplitude involving expression (\[eq:photon-fluctuation\]) whose continuous part results in a double mass integral $\int_{m_0^2}dm^2d{m'}^2 \frac{\tilde\rho_{T,L}(W^2,m^2,{m'}^2)m^2{m'}^2}{(m^2+Q^2)({m'}^2+Q^2)}$ [@GVDM]. Off-diagonal contributions having $m\ne m^\prime$ [@Fraas:gh] are normally neglected in phenomenological studies on nucleons, although they cannot be neglected for nuclei [@Nikolaev:1990ja; @Shaw]. Since we here only consider nucleons, we take this integral to be diagonal, [*i.e.*]{}$\tilde\rho_{T,L}(W^2,m^2,{m'}^2) = \rho_{T,L}(W^2,m^2)\delta(m^2-{m'}^2)$. The spectral weight function $\rho_T(W^2,m^2)$ is phenomenologically chosen to fit data, [*e.g.*]{}$\rho_T=m_0^2/m^4$ to obtain scaling at larger $Q^2$, while $\rho_L=\xi_C\frac{Q^2}{m^2}\rho_T$. 
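Eq. (\[F2-sigmaTL\]) is straightforward to evaluate numerically; a small sketch, with the unit-choice assumption that the cross sections are given in GeV$^{-2}$:

```python
import math

ALPHA = 1 / 137.036      # fine-structure constant
MP2 = 0.938**2           # proton mass squared in GeV^2

def f2_from_sigma(x, q2, sigma_t, sigma_l):
    """F2 from the transverse/longitudinal virtual-photon cross sections,
    as in eq. (F2-sigmaTL); sigma_t, sigma_l in GeV^-2, q2 in GeV^2."""
    return (q2 * (1 - x)
            / (4 * math.pi**2 * ALPHA * (1 + 4 * x**2 * MP2 / q2))
            * (sigma_t + sigma_l))
```

At small $x$ the target-mass term $4x^2m_p^2/Q^2$ is negligible and the prefactor reduces to $Q^2/(4\pi^2\alpha)$, the approximation used later in the text.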
In this GVDM approach, the resulting cross sections are [@GVDM] $$\begin{aligned} \label{sigmaTL-GVDM} \sigma_T^\textrm{\tiny GVDM} &=& \sum_{V} \frac{4\pi\alpha}{f_V^2} \left(\frac{m_V^2}{Q^2+m_V^2}\right)^2 \sigma_{V p} \;\; + \;\; \frac{m_0^2}{Q^2+m_0^2 }\sigma_{C p} \\ \sigma_L^\textrm{\tiny GVDM} &=& \sum_{V} \frac{4\pi\alpha}{f_V^2} \frac{Q^2}{m_V^2} \left(\frac{m_V^2}{Q^2+m_V^2}\right)^2 \xi_V \sigma_{V p} \nonumber \\ & & \;\; + \;\; \left(\frac{m_0^2}{Q^2} \ln\left(1 + \frac{Q^2}{m_0^2}\right) - \frac{m_0^2}{Q^2+m_0^2}\right)\xi_C \sigma_{C p}\end{aligned}$$ In the sums over the discrete vector meson states one recognises the well-known factors $4\pi\alpha/f_V^2$ (involving the vector meson decay constant $f_V$) which give the probabilities of the fluctuations $\gamma \to V$ for real photons, followed by the squared propagator of the meson with mass $m_V$ and the meson-proton total cross section $\sigma_{V p}$. The terms proportional to $\sigma_{C p}=r_C\, \sigma_{\gamma p}$ (defined exactly below) originate from the integral over the continuous vector meson mass spectrum with a lower limit given by the parameter $m_0$. The parameters $\xi_V = \sigma^L_{Vp}/\sigma^T_{V p}$ and $\xi_C = \sigma^L_{C p}/\sigma^T_{C p}$ account for the possibility of different cross sections for transverse and longitudinal polarisation states. It is assumed that they are independent of $x$ and $Q^2$ and expected that they are less than unity. The total cross sections $\sigma_{V p}$ and $\sigma_{\gamma p}$ can be directly taken as the well-known and generally used parametrisation [@DL] $$\label{sigma-total} \sigma(ip\rightarrow X) = A_i s^\epsilon + B_i s^{-\eta}$$ for the total cross section of a particle $i$ on a proton. The first term is for pomeron exchange and the second one for reggeon exchange.
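A numerical sketch of eq. (\[sigma-total\]): the defaults below are the classic Donnachie-Landshoff values, consistent with the $\epsilon\approx 0.08$ and $\eta\approx 0.45$ quoted in the text, and the photon-proton normalisations $A_\gamma \approx 71\,\mu\rm{b}$, $B_\gamma \approx 90\,\mu\rm{b}$ are the ones quoted later for the fits:

```python
def sigma_total(s, A, B, eps=0.0808, eta=0.4525):
    """Donnachie-Landshoff form: pomeron term A*s^eps plus reggeon term
    B*s^-eta.  s in GeV^2; A, B carry the cross-section unit (microbarn here)."""
    return A * s**eps + B * s**(-eta)

# photon-proton case with the normalisations quoted in the text (microbarn)
sigma_lowE = sigma_total(10.0, 71.0, 90.0)   # fixed-target energies: reggeon term matters
sigma_highE = sigma_total(1e4, 71.0, 90.0)   # HERA energies: pomeron term dominates
```

The slow $s^\epsilon$ rise at high energies and the rapid decay of the reggeon term are exactly the features exploited in the small-$x$ approximations that follow.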
The energy dependence is given by the parameters $\epsilon\approx 0.08$ and $\eta\approx 0.45$ which are universal and obtained from fits to a wealth of data on total cross sections, whereas the normalisation parameters $A_i,B_i$ are different for different particles. At high energies the reggeon term can be neglected in comparison to the dominating pomeron term. This parametrisation applies not only to the vector mesons ($i=V$) but also to photons ($i=\gamma$) which are on-shell or nearly so. Thus we have $\sigma_{V p} = A_V s_\gamma^\epsilon + B_V s_\gamma^{-\eta}$ and $\sigma_{\gamma p} = A_\gamma s_\gamma^\epsilon + B_\gamma s_\gamma^{-\eta}$. The fractions of the $\gamma p$ cross section accounted for by the discrete vector mesons $V$ are then $r_V=\frac{4\pi\alpha}{f_V^2}\cdot \frac{A_V}{A_\gamma}$, and we can specify $r_C = 1- \sum_V r_V$ as the fraction from the continuous mass spectrum. Inserting these GVDM expressions for $\sigma_{T,L}$ in eq. (\[F2-sigmaTL\]) one obtains $$\begin{aligned} \label{F2-GVDM} F_2(x,Q^2) & = & \frac{(1-x)Q^2}{4\pi^2\alpha} \left\{ \sum_{V=\rho, \omega, \phi} r_V \left(\frac{m_V^2}{Q^2 + m_V^2}\right)^2 \left(1 + \xi_V\frac{Q^2}{m_V^2}\right) \right. \nonumber \\ & & \left. +\; r_C\left[ (1-\xi_C)\frac{m_0^2}{Q^2 + m_0^2} + \xi_C \frac{m_0^2}{Q^2}\ln{(1 + \frac{Q^2}{m_0^2})} \right] \right\} A_\gamma \frac{Q^{2\epsilon}}{x^\epsilon}\end{aligned}$$ where the following approximations, which are justified for the region of $x$ and $Q^2$ of HERA data, have been made: In the prefactor the term $4x^2m_p^2/Q^2\ll 1$ and is hence neglected. The last factor originating from $\sigma_{Vp}$ and $\sigma_{Cp}$ only includes the pomeron term, since the reggeon term is negligible, and the energy variable is $s_{\gamma p} = Q^2\: \frac{1-x}{x} + m_p^2 \approx Q^2/x$ at small-$x$. The parameters involved in eq. (\[F2-GVDM\]) are all essentially known from GVDM phenomenology. 
The values $r_{V=\rho,\omega,\phi,C} = 0.67, 0.062, 0.059, 0.21$ are quite well determined [@VDM]. Although $m_0\approx 1$ GeV is expected [@sch-sp], it is not well known and is here taken as a free parameter. The parameters $\xi_V$ are assumed to be the same for $V=\rho,\omega,\phi$ and expected to be $\xi_V\approx 0.25$ based on the early study in [@GVDM] and supported by [@Kuroda:2003np] including recent HERA data. A similar magnitude is expected for $\xi_C$. Lacking established numbers and wanting to have as few parameters as possible, we use the common parameter $\xi=\xi_V=\xi_C$ as a free parameter to be fitted. For the pomeron intercept parameter the value $\epsilon=0.09$ has been obtained in recent fits [@cudell], but we take it as a free parameter in order to check the expected consistency with this universal value. Also the overall normalisation constant $A_\gamma$ of the photon-proton cross section is taken as a free parameter. Thus, we have the four parameters $\xi,m_0,\epsilon,A_\gamma$ to be fitted to data. Comparison to $F_2$ data ======================== ![$F_2$ at low $Q^2$: HERA $ep$ data from ZEUS [@ZEUS] compared to GVDM as in eq. (\[F2-GVDM\]) (full curves). Model results are also given when the longitudinal contribution of the continuum is excluded ($\xi_C=0$) and when excluding the continuous contribution altogether (setting $r_C=0$) giving VDM.[]{data-label="fig:lowQ2"}](fig1.eps){width="11cm"} The GVDM expression for $F_2$ in eq. (\[F2-GVDM\]) gives a very good description of the HERA data on $F_2$ at low $Q^2$, as shown in Fig. \[fig:lowQ2\]. The fit gives $\chi^2/\rm{d.o.f.} = 87/(70-4) = 1.3$ with parameter values as expected: $\epsilon = 0.091$, $\xi = 0.34$, $m_0=1.5$ GeV just above the discrete vector meson masses and $A_\gamma = 71\, \mu\rm{b}$ in accordance with the measured photon-proton cross section (cf. [@sch-sj]). 
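Eq. (\[F2-GVDM\]) with the fitted parameter values just quoted can be evaluated directly. A sketch, assuming standard vector-meson masses and the unit conversion 1 GeV$^{-2}$ = 389.4 $\mu$b (both assumptions of this example, not stated in the text):

```python
import math

# Numerical sketch of eq. (F2-GVDM) with the fitted parameters quoted in the
# text: eps = 0.091, xi = 0.34, m0 = 1.5 GeV, A_gamma = 71 microbarn.
ALPHA = 1 / 137.036
MUB_PER_GEV2 = 389.4                  # microbarn per GeV^-2 (hbar^2 c^2)

R_V = {"rho": 0.67, "omega": 0.062, "phi": 0.059}
M2_V = {"rho": 0.775**2, "omega": 0.783**2, "phi": 1.019**2}   # GeV^2 (PDG masses)
R_C, M0SQ = 0.21, 1.5**2
XI, EPS, A_GAMMA = 0.34, 0.091, 71.0  # A_gamma in microbarn

def f2_gvdm(x, q2):
    brace = 0.0
    for v, rv in R_V.items():         # discrete vector-meson terms
        m2 = M2_V[v]
        brace += rv * (m2 / (q2 + m2))**2 * (1 + XI * q2 / m2)
    # continuum terms (transverse + longitudinal)
    brace += R_C * ((1 - XI) * M0SQ / (q2 + M0SQ)
                    + XI * (M0SQ / q2) * math.log(1 + q2 / M0SQ))
    a_gamma = A_GAMMA / MUB_PER_GEV2  # convert to GeV^-2 so F2 is dimensionless
    return ((1 - x) * q2 / (4 * math.pi**2 * ALPHA)
            * brace * a_gamma * (q2 / x)**EPS)

f2 = f2_gvdm(1e-4, 0.5)   # roughly 0.3, in the ballpark of the low-Q^2 HERA data
```

The evaluation also makes explicit the property noted below: this $F_2$ keeps rising with $Q^2$ at fixed $x$, which is why the original GVDM must be modified or phased out at large $Q^2$.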
This demonstrates that for $Q^2$ clearly below 1 GeV$^2$ the HERA $ep$ cross section can be fully accounted for by GVDM using parameter values as determined from old investigations related to fixed target data. For completeness, both the transverse and longitudinal contributions to the integral over the continuous mass spectrum are here included, although the latter is numerically small as demonstrated in Fig. \[fig:lowQ2\]. VDM, which lacks the continuum part, falls below the data and decreases too fast with $Q^2$. This $Q^2$ behaviour becomes even worse if the longitudinal contribution is neglected ([*i.e.*]{} $\xi_V=0$), as is done in some simplified treatments of VDM. The $Q^2$ dependence of these different contributions is shown in Fig. \[fig:SLAC\]. ![The $Q^2$ dependence of $F_2$ from GVDM (full curve) with its contributions from transverse (T) and longitudinal (L) parts of the discrete vector meson spectrum (VDM) and the continuous (Cont.) mass spectrum. Data from SLAC [@slac] are included for comparison.[]{data-label="fig:SLAC"}](fig2.eps){width="8cm"} We have also compared with data on $F_2$ from SLAC [@slac] and NMC [@nmc]. Due to the lower energies of these fixed target experiments, one must here include also the reggeon term in the Donnachie-Landshoff parameterisation of the total cross section and we use $\eta=0.45, B_\gamma=90\, \mu\rm{b}$ ([*cf.*]{} [@sch-sj; @DL]). Keeping the values of the other parameters fixed, we obtain good agreement as long as $x$ and $Q^2$ are not too large ([*cf.*]{} [@Donnachie:2001xx]). At larger $Q^2$, this original GVDM does not have the correct behaviour since $F_2$ in eq. (\[F2-GVDM\]) increases with $Q^2$ for all $x$. This can be cured phenomenologically by introducing for the spectral weight function mentioned above a suitable form $\rho_T = N \ln{(W^2/am^2)}/m^4$ [@sch-sp]. With suitable values of the free parameters $m_0,N,a$ it is then possible to reproduce HERA $F_2$ data also at larger $Q^2$. 
A theoretically more advanced alternative is to instead include off-diagonal contributions [@Nikolaev:1990ja; @Cvetic:2001ie]. This connects naturally to the dipole formalism of DIS and includes effects of perturbative QCD evolution. This off-diagonal GVDM framework should then apply in the full $Q^2$ region, as long as $x$ is sufficiently small, and HERA data can here be reproduced [@Cvetic:2001ie]. At high $Q^2$ the conventional description is in terms of parton density functions, which also includes the large-$x$ valence region. As argued above, this approach does not apply at very small $Q^2$ and one must therefore complement it with GVDM to account for this region. To cover the full $x$ and $Q^2$ region one should combine these two descriptions, but due to the confinement problem, there is no proper theoretical way to make the transition from GVDM formulated in a hadron basis to the parton model in a parton basis. Although GVDM can be extended to large $Q^2$, this would imply double counting if combined with the conventional parton description. To use the latter one must, therefore, phase out GVDM. Thinking in terms of the resolution scale discussed above, it is quite natural that the original hadron-based GVDM only applies at low $Q^2$ and there should be a transition to the DIS formalism of resolved partons at high $Q^2$. In particular, the total cross sections $\sigma_{Vp},\sigma_{Cp}$ used in GVDM apply to soft hadronic processes for (nearly) on-shell particles. It is therefore very reasonable to phase out GVDM at larger $Q^2$ by applying a form factor suppression. A factor like $m_V^2/(m_V^2 + Q^2)$ [@f-sj] would, however, ruin the very good description at low $Q^2$ seen in Fig. \[fig:lowQ2\]. Instead, a sharper transition to DIS in the region $Q^2 = 0.6 - 1.5 \rm{\,GeV}^2$ is required.
This is in accordance with the rather abrupt change of the slope parameter $\lambda$ in $F_2(x)\sim x^{-\lambda}$ observed in HERA data at $Q^2\approx 1\, \rm{GeV}^2$ [@lambda] and may be seen more generally as a rather sharp transition from soft, non-perturbative to hard, perturbative QCD dynamics. We therefore introduce the phenomenological form factor $(Q^2_C/Q^2)^a$ for $Q^2>Q^2_C$ to phase out GVDM above a critical $Q^2_C$. As shown in Fig. \[fig:intermediateQ2\], a good description of HERA $F_2$ data at intermediate $Q^2$ can then be obtained by combining GVDM and parton density functions that fit HERA $F_2$ data at larger $Q^2$. This requires $Q^2_C\approx 1\, \rm{GeV}^2$ as expected from the discussed transition, and $a\approx 2$ giving $\sim Q^{-4}$ as a reasonable form factor damping. The exact values of the parameters are fitted and depend on the details of the DIS parton densities. With such a form factor suppression, the GVDM contribution is negligible for $Q^2\gtrsim 4\, \rm{GeV}^2$ (see fig. \[fig:intermediateQ2\]), where DIS parton density parametrisations are usually considered trustworthy. Any parametrisation of parton densities which is good enough to reproduce the measured $F_2$ in the DIS region can be used, provided the GVDM component is taken into account when low-$Q^2$ data are included in the fits. ![$F_2$ at intermediate $Q^2$: contribution of GVDM with a form factor $(1.24/Q^2)^{1.63}$ (full curve) and the complete model (dashed curve), including also DIS parton density functions from our model, compared to H1 data [@H1].[]{data-label="fig:intermediateQ2"}](fig3.eps){width="10cm"} For Fig. \[fig:intermediateQ2\] we have, however, used a physically motivated model [@EI] where the parton momentum distributions are obtained from gaussian fluctuations having widths related to the uncertainty relation and the proton size. 
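The phase-out just described can be sketched as a multiplicative suppression of the GVDM term, using the fitted values from the caption of Fig. \[fig:intermediateQ2\] ($Q^2_C = 1.24\, \rm{GeV}^2$, $a=1.63$); the DIS piece would be supplied by any standard parton-density parametrisation and is passed in as a number here:

```python
def gvdm_suppression(q2, q2_c=1.24, a=1.63):
    """Form factor (Q_C^2/Q^2)^a applied above Q_C^2, unity below it
    (fitted values taken from the figure caption)."""
    return 1.0 if q2 <= q2_c else (q2_c / q2) ** a

def f2_two_component(q2, gvdm_part, dis_part):
    # two-component model: damped GVDM plus the DIS parton-density contribution
    return gvdm_suppression(q2) * gvdm_part + dis_part
```

The factor is continuous and equal to one at $Q^2_C$, so the very good GVDM description at low $Q^2$ is untouched, while by $Q^2 \approx 4\, \rm{GeV}^2$ the damping (together with GVDM's own fall-off) makes the GVDM contribution negligible, as stated in the text.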
Valence distributions arise from the ‘bare’ proton, whereas sea distributions originate from mesons in hadronic fluctuations of the proton $|p\rangle = \alpha_0|p_0\rangle + \alpha_{p\pi}|p_0\pi^0\rangle + \alpha_{n\pi}|n \pi^+\rangle + \ldots + \alpha_{\Lambda K}|\Lambda K^+\rangle + \ldots$ This gives the $x$-shape of the parton densities at $Q^2_0\approx 1\, \rm{GeV}^2$ and the DGLAP equations are then used to evolve to larger $Q^2$, resulting in a good fit to HERA $F_2$ data using only six parameters with physically motivated values [@EI]. Furthermore, this model gives [@AI] $u_v(x)\ne d_v(x)$ and $\bar{u}(x)\ne \bar{d}(x)$ in qualitative agreement with data, as well as $s(x)\ne \bar{s}(x)$ of interest for the NuTeV anomaly [@nutev]. It is interesting that combining these models involving quantum fluctuations of both the photon and the target proton results in a good description of the $ep$ cross section, or equivalently $F_2$, at both low and high $Q^2$. Conclusions =========== The conventional parton model formulation of deep inelastic scattering is not applicable at very low $Q^2$, where no hard scale is available to resolve the partons. Instead, HERA $F_2$ data are here well reproduced by the original generalised vector meson dominance model, including contributions from a continuous mass spectrum and longitudinal polarisation states, and using parameter values in agreement with old analyses at fixed target energies. At large $Q^2$, GVDM with off-diagonal contributions can be used as long as $x$ is small. To cover the full $x$-region, including the valence part, the proton structure must be introduced via parton density functions in the conventional DIS formalism. We have shown that one can combine the GVDM and parton density descriptions in a two-component phenomenological model.
GVDM then accounts fully for the cross section for $Q^2\lesssim 1\, \rm{GeV}^2$, but although it contributes also at large $Q^2$ it must here be phased out in order to avoid double counting with the standard parton density formulation. We have found that a form factor damping of GVDM gives a smooth transition into the deep inelastic region described by parton distribution functions. Here, any good parametrisation of parton densities can be used, provided the GVDM component is taken into account at low $Q^2$ as shown above when fitting the parameters. In this way one obtains a good overall result at both low and high $Q^2$. In particular, there is no need for a negative gluon density in the region of low $x$ and low $Q^2$. The reason is that the cross section is here dominated by the GVDM contribution, which is based on fundamental quantum fluctuations that should not be neglected. [**Acknowledgments:**]{} We are grateful to Dieter Schildknecht for interesting and helpful discussions and to Johan Rathsman for a critical reading of the manuscript. [22]{} A.D. Martin [*et al.*]{}, Eur. Phys. J. [**C23**]{} (2002) 73;\ J. Pumplin [*et al.*]{}, JHEP [**0207**]{} (2002) 012. A. D. Martin [*et al.*]{}, Eur. Phys. J. C [**28**]{} (2003) 455; arXiv:hep-ph/0308087. T.H. Bauer [*et al.*]{}, Rev. Mod. Phys. [**50**]{} (1978) 261. J.J. Sakurai and D. Schildknecht, Phys. Lett. [**B40**]{} (1972) 121. A. Donnachie and G. Shaw in [*Electromagnetic Interactions of Hadrons*]{}, vol. 2, editors A. Donnachie and G. Shaw, Plenum Press, New York, 1978. A. Edin and G. Ingelman, Phys. Lett. [**B432**]{} (1998) 402; Nucl. Phys. Proc. Suppl. B [**79**]{} (1999) 189. V. N. Gribov and L. N. Lipatov, Sov. J. Nucl. Phys.  [**15**]{}, 438 (1972);\ G. Altarelli and G. Parisi, Nucl. Phys. B [**126**]{}, 298 (1977);\ Yu. L. Dokshitzer, Sov. Phys. JETP [**46**]{}, 641 (1977). B. Badelek and J. Kwiecinski, Rev. Mod. Phys.  [**68**]{} (1996) 445 and references therein. D. Schildknecht and H.
Spiesberger, arXiv:hep-ph/9707447. N. N. Nikolaev and B. G. Zakharov, Z. Phys. C [**49**]{} (1991) 607. L. Frankfurt, V. Guzey and M. Strikman, Phys. Rev. D [**58**]{} (1998) 094039. G. Cvetic, D. Schildknecht, B. Surrow and M. Tentyukov, Eur. Phys. J. C [**20**]{} (2001) 77. M. Kuroda and D. Schildknecht, arXiv:hep-ph/0309153. L.N. Hand, Phys. Rev. [**129**]{} (1963) 1834. H. Fraas, B. J. Read and D. Schildknecht, Nucl. Phys. B [**86**]{} (1975) 346. G. Shaw, Phys. Lett. B [**228**]{} (1989) 125; Phys. Rev. D [**47**]{} (1993) 3676. A. Donnachie and P.V. Landshoff, Phys. Lett. B [**296**]{} (1992) 227. J. R. Cudell [*et al.*]{}, Phys. Rev. D [**61**]{} (2000) 034019 \[Erratum-ibid. D [**63**]{} (2001) 059901\];\ J. R. Cudell [*et al.*]{}, Phys. Lett. B [**395**]{} (1997) 311. G.A. Schuler and T. Sjöstrand, Nucl. Phys. B [**407**]{} (1993) 539. J. Breitweg [*et al.*]{} \[ZEUS Collaboration\], Phys. Lett. B [**487**]{} (2000) 53. L.W. Whitlow [*et al.*]{}, Phys. Lett. [**B282**]{} (1992) 475 (data on $F_2^p$ obtained from the collaboration). M. Arneodo [*et al.*]{}, Nucl. Phys. [**B483**]{} (1997) 3. A. Donnachie and P. V. Landshoff, Phys. Lett. B [**518**]{} (2001) 63. C. Friberg and T. Sjöstrand, JHEP [**0009**]{} (2000) 010. J. Breitweg [*et al.*]{}, Eur. Phys. J. [**C7**]{} (1999) 609. C. Adloff [*et al.*]{}, Eur. Phys. J. [**C21**]{} (2001) 33. J. Alwall and G. Ingelman, in preparation. G. P. Zeller [*et al.*]{} \[NuTeV Collaboration\], Phys. Rev. Lett. [**88**]{} (2002) 091802. [^1]: E-mail: johan.alwall@tsl.uu.se [^2]: E-mail: gunnar.ingelman@tsl.uu.se [^3]: The DIS variables are defined through $Q^2=-q^2=-(p_e-p_e^\prime)^2$, $x=Q^2/2P\cdot q$, $y=P\cdot q/P\cdot p_e$ in terms of the four-momenta $P,p_e, p_e^\prime, q$ of the incoming proton, incoming and scattered electron and the exchanged photon, respectively.
--- abstract: 'A speculative agent with Prospect Theory preference chooses the optimal time to purchase and then to sell an indivisible risky asset so as to maximize the expected utility of the round-trip profit net of transaction costs. The optimization problem is formulated as a sequential optimal stopping problem and we provide a complete characterization of the solution. Depending on the preference and market parameters as well as the initial price of the asset, the optimal strategy can be “buy and hold”, “buy low sell high”, “buy high sell higher” or “no trading”. Transaction costs do not necessarily curb speculative trading. For example, while a large proportional transaction cost on sale can unambiguously suppress trading participation, introducing a fixed market entry fee will indeed encourage trading when the asset price level is high.' author: - 'Alex S.L. Tse and Harry Zheng' bibliography: - 'ref.bib' title: 'Speculative Trading, Prospect Theory and Transaction Costs' --- Introduction {#sect:intro} ============ When it comes to modeling trading behaviors, the standard economic paradigm is the maximization of risk-averse agents’ expected utility in a frictionless market. This criterion, however, has been criticized on many levels. In terms of trading environment, financial friction is omnipresent in reality where transactions are subject to various costs; in terms of agents’ preferences, behavioral economics literature suggests that many individuals do not make decisions in accordance with expected utility theory. First, utilities are not necessarily derived from final wealth but typically what matters is the change in wealth relative to some reference point. Second, individuals are usually risk-averse over the domain of gains but risk-seeking over the domain of losses - this can be captured by an S-shaped utility function.
Finally, individuals may fail to take portfolio effects into account when making investment decisions and this phenomenon is known as narrow framing. These psychological ideas are explored for example in the seminal work of [@kahneman-tversky79], [@tversky-kahneman81], [@tversky-kahneman92] and [@kahneman-lovallo93]. We develop a simple dynamic trading model which captures a number of stylized behavioral biases of individuals and market friction. In our setup, trading is costly due to proportional transaction costs as well as a fixed market entry fee. The goal of an agent is to find the optimal time to buy and then to sell an indivisible risky asset to maximize the expected utility of the round-trip profit under the Prospect Theory preference of [@tversky-kahneman92]. While a realistic economy can consist of multiple assets, we can interpret the assumption of a single indivisible asset as a manifestation of narrow framing such that the trading decision associated with one particular unit of asset can be completely isolated from the other investment opportunities. We believe the model is best suited to describe the trading behaviors of speculative agents. These “less-than-fully” rational agents purchase and sell an asset with a narrow objective of making a one-off round-trip profit rather than supporting consumption or pursuing long-term portfolio growth. We solve a sequential optimal stopping problem under an S-shaped utility function to identify the agent’s entry into and exit from the market. In the first stage of the problem we focus on the exit strategy of the agent: Conditional on the ownership of the asset purchased at a given price level (which determines the agent’s reference point), the optimal liquidation problem is solved. Then the value function of the exit problem reflects the utility value of purchasing the asset at different price levels.
Upon comparison against the utility value of inaction, this constitutes the payoff function of the real option to purchase the asset which is then used in the second stage problem concerning the entry decision of the agent: The agent picks the optimal time to enter the trade so as to maximize the expected payoff of this real option to purchase the asset. The martingale method is employed to solve the underlying optimal stopping problems, which has an important advantage over the HJB equation approach. The traditional route to solve an optimal stopping problem is to first conjecture a candidate optimal stopping rule; the dynamic programming principle is then invoked to derive a free boundary value problem that the value function should satisfy. This approach works as long as we are able to identify the correct form of the optimal stopping rule, but this exercise may not be trivial. As it turns out, the optimal continuation region of our entry problem can be either connected or disconnected depending on the level of transaction costs. It is thus difficult to adopt the HJB equation approach since we do not know the correct form of the optimal stopping rule upfront. In contrast, the martingale method does not require any a priori conjecture on the optimal strategy. The optimal continuation/stopping set can be deduced directly by studying the smallest concave majorant to a suitably scaled payoff function. Despite its relatively simple nature, our model is capable of generating a rich variety of trading behaviors such as “buy and hold”, “buy low sell high”, “buy high sell higher” and “no trading”. The risk-seeking preference of a behavioral agent over the loss domain will typically induce him to enter the trade at all price levels. His trading behaviors can be heavily influenced by transaction costs, but the precise effect crucially depends on the nature of these costs.
Generally speaking, a high proportional (fixed) transaction cost discourages trading at a high (low) nominal price. When proportional costs are high and the asset is expensive, the agent prefers waiting until the price level declines – hence he is more inclined to consider a “buy low sell high’’ strategy. But if instead the fixed entry fee is high and the asset is cheap, the agent might prefer delaying the purchase decision until the asset reaches a higher price level, and this leads to a trading pattern of “buy high sell higher’’. The subtle impact of transaction costs leads to interesting policy implications on how speculative trading can be curbed effectively. For example, a surprising result is that imposing a fixed market entry fee might indeed accelerate rather than cool down trading participation. Our paper is closely related to the literature on optimal stopping under S-shaped utility functions. [@kyle-ouyang-xiong06] and [@henderson12] consider a one-off optimal liquidation problem in which the agent solves for the optimal time to liquidate an endowed risky asset to maximize the expected Prospect Theory utility. They do not consider the purchase decision and the reference point is taken as some exogenously given status quo. Our mathematical setup follows that of [@henderson12] featuring a geometric Brownian motion price process and the piecewise power utility function of [@tversky-kahneman92]. A main contribution of our paper is that we further endogenize the reference point which depends on the purchase price of the asset, and the optimal purchase price must be determined as a part of the optimization problem. In addition, we also highlight the roles of transaction costs in the agents’ trading behaviors. Another very relevant class of work is the realization utility model which further incorporates reinvestment possibility within a behavioral optimal stopping model.
In [@barberis-xiong12], [@ingersoll-jin13] and [@he-yang19], the agent repeatedly purchases and sells an asset to maximize the sum of utility bursts realized from the gain and loss associated with each round-trip transaction. In a certain sense, their models consider an endogenized reference point which is continuously updated based on the most recent purchase price of the asset. However, the purchase decision is exogenously given in many of these models where the agent is simply assumed to buy the asset again immediately after a sale. The only exception is [@he-yang19] who carefully analyze the purchase decision of the agent, but in any case they find that the purchase strategy is trivial: the agent either buys the asset immediately after a sale or never enters the trade again. Our model differs from the realization utility models in that we do not consider perpetual reinvestment opportunities (which can be understood as narrow framing in that the agent only focuses on a single episode of the trading experience when evaluating the entry and exit strategies). Nonetheless, the optimal purchase region of our model is non-trivial under typical parameters and encapsulates many realistic trading strategies. Beyond the context of behavioral economics, there are a few works attempting to model the sequential purchase and sale decisions under a stochastic control framework. However, identification of a modeling setup which can generate reasonable trading patterns proves to be much more difficult than expected. On the one hand, Zervos et al. ([-@zervos-johnson-alazemi13], p.561) report that “...the prime example of an asset price process, namely, the geometric Brownian motion, does not allow for optimal buying and selling strategies that have a sequential nature”.
Indeed, existing literature which gives “buy low sell high” as an optimal trading strategy often relies on extra statistical features of the asset price process such as mean reversion.[^1] See for example [@zhang-zhang08], [@song-yin-zhang09], [@leung-li-wang15] and [@leung-li15]. On the other hand, momentum-based trading strategy is also rarely studied in mathematical finance literature. The scarce examples include the work of [@dai-zhang-zhu10] and [@dai-yang-zhang-zhu16] who find that trend-following strategy is optimal under a regime-switching model of asset price. We contribute to this strand of literature by showing that a trading model based on a simple geometric Brownian motion can also generate many realistic trading patterns including both reversal strategy (buy low sell high) and momentum strategy (buy high sell higher). This is achieved via incorporating elements of behavioral preferences and market friction. The rest of the paper is organized as follows. Section \[sect:prob\] provides a description of the model and the underlying optimization problem. We outline the solution methods to solve the optimal stopping problems in Section \[sect:solmethod\]. The main results and the economic intuitions are discussed in Section \[sect:main\]. Section \[sect:conc\] concludes. The technical proofs are collected in the appendix. Problem description {#sect:prob} =================== Trading environment and agent’s preference {#sect:env} ------------------------------------------ Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\},\mathbb{P})$ be a filtered probability space satisfying the usual conditions which supports a one-dimensional Brownian motion $B=(B_t)_{t\geq 0}$. There is a single indivisible risky asset in the economy. 
Its price process $P=(P_t)_{t\geq 0}$ is modeled by a one-dimensional diffusion with state space $\mathcal{J}\subseteq \mathbb{R}_{+}$ and dynamics of $$\begin{aligned} dP_t=\mu(P_t)dt+\sigma(P_t)dB_t,\end{aligned}$$ where $\mu:\mathcal{J}\to\mathbb{R}$ and $\sigma:\mathcal{J}\to(0,\infty)$ are Borel functions. $\mathcal{J}$ is assumed to be an interval with endpoints $0\leq a_{\mathcal{J}}<b_{\mathcal{J}}\leq \infty$, and $P$ is assumed to be regular in $(a_{\mathcal{J}},b_{\mathcal{J}})$. We assume that the interest rate is zero in our exposition. For the non-zero interest rate case one can interpret the process $P$ as the numeraire-adjusted price of the asset. Then the drift term $\mu(\cdot)$ can be viewed as the instantaneous excess return of the risky asset. Trading in the asset is costly. If the agent wants to purchase the asset at its current price $p$, he will need to pay $\lambda p + \Psi$ to initiate the trade where $\lambda\in[1,\infty)$ is the proportional transaction cost on purchase and $\Psi\geq 0$ represents a fixed market entry fee. When the agent sells the asset at price $p$, he will only receive $\gamma p$ where $\gamma\in(0,1]$ is the proportional transaction cost on sale.[^2] Suppose the agent has executed a round-trip trade where he bought and then sold the asset at times $\tau$ and $\nu$ (with $\tau\leq \nu$), the financial payoff of the trade net of all transaction costs is $$\begin{aligned} \gamma P_{\nu}-\lambda P_{\tau}-\Psi. \label{eq:payoff}\end{aligned}$$ Preference of the agent is described by Prospect Theory of [@tversky-kahneman92]. Under this framework, utility is derived from gains and losses relative to some reference point rather than the total wealth. Individuals are typically risk-averse over the domain of gains and risk-seeking over the domain of losses. This can be captured by an S-shaped utility function $U:\mathbb{R}\to\mathbb{R}$ with $U(0)=0$ and that $U$ is concave (resp. convex) over $\mathbb{R}_{+}$ (resp. $\mathbb{R}_{-}$).
Finally, individuals also exhibit loss-aversion such that the negative utility brought by a unit of loss is much larger in magnitude than the positive utility from a unit of gain. Denote by $R$ the constant reference point of the agent. $R$ can simply be an exogenously given constant outside the model specification, but it can also be interpreted as a preference parameter of the agent which reflects his “aspiration level” in the sense of [@lopes-oden99] where a motivated agent will set a higher economic benchmark. The Prospect Theory value of a random variable $X$ is evaluated by $\mathbb{E}[U(X-R)]$. Agent’s objective {#sect:obj} ----------------- Loosely speaking, the objective of the agent is to find the optimal time to buy and then to sell the asset to maximize the Prospect Theory value of the speculative trading payoff net of transaction costs. The precise problem formulation is given in the following two subsections. Similar two-stage sequential stopping problems are considered by [@leung-li-wang15] and [@leung-li15] as well. ### Exit problem {#sect:exit} Suppose for the moment that the agent is endowed with one unit of the asset to begin with, and we also temporarily relabel the reference point of the agent as a constant $H$. The goal of the agent in the exit problem is to find the optimal time to sell the endowed asset. When the asset is sold at time $\nu$, the utility of gain and loss relative to the reference point is $G_1(P_{\nu};H):=U(\gamma P_{\nu}-H)$ after taking the transaction costs into account. The agent solves for the optimal selling time $\nu$ to maximize the expected Prospect Theory utility of the proceeds, which involves solving an optimal stopping problem of $$\begin{aligned} V_1(p; H):=\sup_{\nu\in\mathcal{T}}\mathbb{E}\left[G_1(P_{\nu};H)\Bigl | P_0=p\right]=\sup_{\nu\in\mathcal{T}}\mathbb{E}\left[U(\gamma P_{\nu}-H)\Bigl | P_0=p\right] \label{eq:exit}\end{aligned}$$ over the set of $\{\mathcal{F}_t\}$-stopping times denoted by $\mathcal{T}$.
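For a concrete feel of the exit problem, the expected utility of a simple threshold sale rule $\nu_b:=\inf\{t\geq 0: P_t\geq b\}$ can be estimated by Monte Carlo. The sketch below anticipates the specialization of Section \[sect:main\] (geometric Brownian motion price, piecewise power utility); all numerical values are hypothetical, and paths not stopped by the truncation horizon are scored as a total loss $U(-H)$, a finite-horizon approximation.

```python
import numpy as np

# Hypothetical parameter values for illustration only
alpha, k = 0.88, 2.25        # Tversky-Kahneman utility estimates
gamma_ = 0.99                # proportional transaction cost on sale
mu, sigma = 0.001, 0.2       # GBM drift/volatility, so beta = 1 - 2*mu/sigma**2 = 0.95

def U(x):
    """Piecewise power S-shaped utility."""
    return x**alpha if x > 0 else -k * (-x)**alpha

def exit_utility_mc(p0, H, b, T=20.0, dt=0.01, n_paths=5000, seed=0):
    """Estimate E[U(gamma*P_nu - H)] for the threshold rule
    nu_b = inf{t >= 0 : P_t >= b}. Paths that have not reached b by the
    horizon T are scored as a total loss U(-H) (finite-horizon cutoff)."""
    rng = np.random.default_rng(seed)
    logp = np.full(n_paths, np.log(p0))
    alive = np.ones(n_paths, dtype=bool)     # paths that have not yet sold
    payoff = np.full(n_paths, U(-H))         # default outcome: never sold
    for _ in range(int(T / dt)):
        z = rng.standard_normal(alive.sum())
        logp[alive] += (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
        hit = alive & (logp >= np.log(b))
        payoff[hit] = U(gamma_ * b - H)      # sale executed at the threshold
        alive &= ~hit
        if not alive.any():
            break
    return payoff.mean()

est = exit_utility_mc(p0=1.0, H=1.0, b=2.0)
```

The estimate necessarily lies between $U(-H)$ and $U(\gamma b-H)$; maximizing it over $b$ approximates the value function $V_1$ under the truncation.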
Write the optimizer of the exit problem as $\nu^*(p)$, which in general depends on the initial price level $p$. ### Entry problem {#sect:entry} Now we assume that the agent does not own any asset to begin with. His economic objective is to determine the optimal times to purchase (and then to sell) the risky asset to maximize the expected utility of the profit of this round-trip trade under Prospect Theory preference. At a given current asset price level $p$, there are two possible actions for the agent. First, he can opt to initiate the speculative trade by buying the asset now and sell it in the future. When the asset is liquidated at his choice of the sale time $\nu$, the profit-and-loss of the trade (relative to his reference level $R$) is $\gamma P_{\nu}-\lambda p-\Psi-R$. The agent can find the best time of sale to maximize his expected utility by solving the exit problem on setting $H=\lambda p + \Psi+R$.[^3] Then the best possible expected utility he can attain is $$\begin{aligned} \sup_{\nu\in\mathcal{T}}\mathbb{E}\left[U(\gamma P_{\nu}-\lambda p-\Psi-R)\Bigl | P_0=p\right]=V_{1}(p; \lambda p+\Psi+R)\end{aligned}$$ if he decides to enter the trade at the given price of $p$. Alternatively, the agent can forgo the opportunity to enter the trade. In this case, the financial payoff of this decision is zero. After taking the reference point $R$ into account, the utility he will receive is simply $U(-R)$. Therefore, the opportunity to enter the speculative trade can be viewed as a real option. At a given price level $p$ the agent is willing to enter the trade if and only if the maximal expected utility of trading is not less than that of inaction, i.e. $V_{1}(p; \lambda p+\Psi+R)\geq U(-R)$. This is similar to a financial option being in-the-money. The payoff of this real option to the agent in utility terms as a function of price level $p$ is given by $$\begin{aligned} G_2(p):=\max\left\{V_1(p;\lambda p+\Psi+R),U(-R)\right\}.
\label{eq:G2}\end{aligned}$$ The entry problem for the agent is to find the optimal time to initiate the trade so as to maximize the expected value of this payoff. It is equivalent to solving $$\begin{aligned} V_2(p):=\sup_{\tau\in\mathcal{T}}\mathbb{E}\left[G_2(P_{\tau})|P_0=p\right]=\sup_{\tau\in\mathcal{T}}\mathbb{E}\left[\max\left\{V_1(P_\tau;\lambda P_\tau+\Psi+R),U(-R)\right\}|P_0=p\right] \label{eq:entry}\end{aligned}$$ provided that the exit problem value function $V_1$ is well defined. Let the optimizer of the entry problem be $\tau^*(p)$. With $p$ being the initial price of the asset at $t=0$, the agent will purchase the asset at the stopping time $t=\tau^*(p)$. Then conditional on the realization of the entry price level $P_{\tau^*(p)}$, the agent solves the exit problem under initial value $P_{\tau^*(p)}$. The corresponding optimizer $\nu^*(P_{\tau^*(p)})$ reflects the time lapse between the initiation and closure of the trade. In particular, the agent will sell the asset at the stopping time $t=\tau^*(p)+\nu^*(P_{\tau^*(p)})$. This gives the complete characterization of the optimal entry and exit strategy of the agent.[^4] The solution methods {#sect:solmethod} ==================== In this section we discuss the martingale approach to solve a general one-dimensional stopping problem without discounting,[^5] which is based on [@dynkin-yushkevich69] and [@dayanik-karatzas03]. As explained in the introduction, the key advantage of this method over the classical HJB equation approach is that we do not have to provide an a priori conjecture of the form of the optimal stopping rule. Consider a general optimal stopping problem of the form $$\begin{aligned} V(p)=\sup_{\tau\in\mathcal{T}}\mathbb{E}[G(P_{\tau})|P_0=p]\end{aligned}$$ for some payoff function $G$. Under standard theory of optimal stopping, the optimal stopping time can be characterized by the first exit time of the process from some open set $\mathcal{C}$, i.e.
the optimal stopping time has the form $\tau=\inf\{t\geq 0: P_t\notin \mathcal{C}\}$. In a one-dimensional diffusion setting, it is sufficient to consider stopping times of the class $\tau_{a,b}:=\tau_a \wedge \tau_b$ where $\tau_a:=\inf\{t\geq 0: P_t=a\}$ and $\tau_b:=\inf\{t\geq 0: P_t=b\}$ with $a\leq p\leq b$. Here $[a,b]\subseteq \mathcal{J}$ is the unknown interval to be identified (and it depends on $p$ in general). Let $s(\cdot)$ be the scale function of process $P$ (which is unique up to affine transformation) defined as a strictly increasing function such that $\Theta_t:=s(P_t)$ is a local martingale. A simple application of Itô’s lemma shows that $s(\cdot)$ should solve the second order differential equation $$\begin{aligned} \frac{\sigma^2(p)}{2}s''(p)+\mu(p)s'(p)=0. \label{eq:scaleode}\end{aligned}$$ Let $\theta:=s(p)$. Then $$\begin{aligned} J(p;\tau_{a,b}):=\mathbb{E}[G(P_{\tau_{a,b}})|P_{0}=p] &=\mathbb{E}[G(s^{-1}(\Theta_{\tau_{a,b}}))|\Theta_{0}=\theta]\\ &=\mathbb{E}[\phi(\Theta_{\tau_{a,b}})|\Theta_{0}=\theta]\\ &=\mathbb{P}(\tau_a<\tau_b|\Theta_{0}=\theta)\phi(s(a))+\mathbb{P}(\tau_b<\tau_a|\Theta_{0}=\theta)\phi(s(b))\\ &=\frac{s(b)-\theta}{s(b)-s(a)}\phi(s(a))+\frac{\theta-s(a)}{s(b)-s(a)}\phi(s(b))\end{aligned}$$ where $\phi:=G\circ s^{-1}$. The above can be maximized with respect to $a$ and $b$. Moreover, the dummy variables $a$ and $b$ can be replaced by $a'=s(a)$ and $b'=s(b)$. Hence $$\begin{aligned} V(p)=\sup_{a,b:a\leq p\leq b}J(p;\tau_{a,b})=\sup_{a',b':a'\leq \theta\leq b'}\left[\frac{b'-\theta}{b'-a'}\phi(a')+\frac{\theta-a'}{b'-a'}\phi(b')\right]=:v(\theta)\end{aligned}$$ and thus $V(p)=v(s(p))$. The scaled value function $v(\theta)$ can be characterized by the smallest concave majorant to the scaled payoff function $\phi(\theta)=G(s^{-1}(\theta))$ over $s(\mathcal{J})$, which is defined as an interval with endpoints $s(a_\mathcal{J})$ and $s(b_\mathcal{J})$.
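On a finite grid, the smallest concave majorant of the scaled payoff is simply the upper convex hull of the points $(\theta_i,\phi(\theta_i))$, computable with Andrew's monotone chain; a minimal sketch (the grids and payoffs used are illustrative):

```python
import numpy as np

def concave_majorant(theta, phi):
    """Smallest concave majorant of the points (theta_i, phi_i),
    obtained as the upper convex hull via Andrew's monotone chain.
    theta must be sorted in increasing order."""
    hull = []
    for p in zip(theta, phi):
        # pop while the last hull point lies on or below the chord
        # from hull[-2] to p, i.e. while the turn is not strictly concave
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    return np.interp(theta, hx, hy)

x = np.linspace(0.0, 2.0, 9)
chord = concave_majorant(x, x**2)   # majorant of a convex payoff is the chord
```

The continuation set is then read off on the grid as $\{\theta: v(\theta)>\phi(\theta)\}$.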
The continuation set associated with the optimal stopping time is given by $\mathcal{C}=\{p\in\mathcal{J}:v(s(p))>\phi(s(p))=G(p)\}$. This provides an algorithm to solve the exit and entry problems sequentially. Recall that the payoff function of the exit problem is $G_1(p; H)=U(\gamma p-H)$. The value function of the exit problem is then given by $V_1(p; H)=\bar{g}_1(s(p); H)$ where $\bar{g}_1=\bar{g}_1(\theta;H)$ is the smallest concave majorant of $$\begin{aligned} g_1(\theta;H):=G_1(s^{-1}(\theta); H)=U(\gamma s^{-1}(\theta)-H).\end{aligned}$$ In turn, the payoff function of the entry problem is $G_2(p):=\max\{V_1(p; \lambda p+\Psi + R),U(-R)\}$. We identify $\bar{g}_2=\bar{g}_2(\theta)$ as the smallest concave majorant of $$\begin{aligned} g_2(\theta):=G_2(s^{-1}(\theta))&=\max\{V_1(s^{-1}(\theta); \lambda s^{-1}(\theta)+\Psi + R),U(-R)\}\\ &=\max\{\bar{g}_1(\theta; \lambda s^{-1}(\theta)+\Psi + R),U(-R)\}.\end{aligned}$$ Then the value function of the entry problem is $V_2(p)=\bar{g}_2(s(p))$. Main results {#sect:main} ============ The procedure described in Section \[sect:solmethod\] is generic and can be applied to solve the sequential optimal stopping problem under a range of model specifications. To derive stronger analytical results, in the rest of this paper we specialize to the piecewise power utility function of [@tversky-kahneman92] of the form $$\begin{aligned} U(x)= \begin{cases} x^\alpha,&x>0; \\ -k |x|^\alpha,& x\leq 0. \end{cases}\end{aligned}$$ Here $\alpha\in(0,1)$, where $1-\alpha$ measures the level of risk-aversion over gains and risk-seeking over losses, and $k>1$ controls the degree of loss-aversion. Experimental results of [@tversky-kahneman92] give estimates of $\alpha=0.88$ and $k=2.25$.
The price process of the risky asset $P=(P_t)_{t\geq 0}$ is assumed to be a geometric Brownian motion $$\begin{aligned} dP_t=P_t(\mu dt +\sigma dB_t)\end{aligned}$$ with $\mu\geq 0$ and $\sigma>0$ being the constant drift and volatility of the asset. Define $\beta:=1-\frac{2\mu}{\sigma^2}\leq 1$; then, by substituting $\mu(p)=\mu p$ and $\sigma(p)=\sigma p$ into the scale-function equation, a scale function of $P$ can be found as $$\begin{aligned} s(x)= \begin{cases} x^{\beta},& \beta > 0;\\ \ln x,& \beta=0;\\ -x^{\beta},& \beta<0. \end{cases}\end{aligned}$$ Finally, we assume $R>0$ so that the reference point of the agent is always positive. This is not unreasonable since the reference point in the context of investment is usually taken as some performance benchmark that an agent wants to outperform and such a goal is typically a positive one. We first state the solution to the exit problem, which is based on [@henderson12]. For the exit problem: 1. If $\beta\leq 0$ or $0<\beta<\alpha<1$, the exit problem is ill-posed and the agent will never sell the asset. 2. If $\alpha<\beta\leq 1$ or $\alpha=\beta<1$, the agent will sell the asset when its price level first reaches $\frac{cH}{\gamma}$ or above where $c>1$ is a constant given by the solution to the equation $$\begin{aligned} \frac{\alpha}{\beta} c(c-1)^{\alpha-1}-(c-1)^{\alpha}-k=0. \label{eq:eqC} \end{aligned}$$ The value function is given by $$\begin{aligned} V_1(p; H)= \begin{cases} \frac{\alpha}{\beta} c^{1-\beta}(c-1)^{\alpha-1}H^{\alpha-\beta}(\gamma p)^\beta-kH^{\alpha},&p< \frac{cH}{\gamma}; \\ (\gamma p-H)^{\alpha},&p\geq \frac{cH}{\gamma}. \end{cases} \label{eq:exitvalufun} \end{aligned}$$ \[lem:henderson\] The exit problem is ill-posed under the parameter combinations in case (1) of Lemma \[lem:henderson\], which arises when the performance of the asset is too good relative to the agent’s risk-aversion level over gains.
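The constant $c$ has no closed-form expression but is easy to compute: the left-hand side of its defining equation tends to $+\infty$ as $c\downarrow 1$ and to $-\infty$ as $c\to\infty$ when $\alpha<\beta$, so a root $c>1$ can be bracketed and bisected. A sketch with the Tversky–Kahneman estimates and a hypothetical $\beta=0.95$ (so that $\alpha<\beta\leq 1$):

```python
alpha, k = 0.88, 2.25   # Tversky-Kahneman estimates
beta = 0.95             # hypothetical value satisfying alpha < beta <= 1

def g(c):
    """Left-hand side of (alpha/beta)*c*(c-1)**(alpha-1) - (c-1)**alpha - k = 0."""
    return (alpha / beta) * c * (c - 1.0)**(alpha - 1.0) - (c - 1.0)**alpha - k

# g(c) -> +inf as c -> 1+ and g(c) -> -inf as c -> inf: bracket, then bisect
lo, hi = 1.0 + 1e-12, 2.0
while g(hi) > 0.0:
    hi *= 2.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
c = 0.5 * (lo + hi)
```

The gain-exit target then follows as $cH/\gamma$ for any reference level $H$ and sale cost $\gamma$.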
In particular, one can consider a sequence of sale strategies of the form $\nu_n:=\inf\{t\geq 0: P_t\geq n\}$ and then the agent’s expected utility will tend to infinity when $n\to\infty$. This corresponds to a strategy that the agent never sells the endowed asset. In the non-degenerate case (2), the optimal sale strategy is a gain-exit rule where the agent is looking to sell the asset at a profit without considering stop-loss. Note that the gain-exit target $\frac{cH}{\gamma}$ is increasing in transaction costs (i.e. decreasing in $\gamma$). It means that the agent tends to delay the sale decision in a more costly trading environment. Under the geometric Brownian motion model of the price process, $P_t=P_0\exp\left[\left(\mu-\frac{\sigma^2}{2}\right)t+\sigma B_t\right]=P_0\exp\left[\sigma\left(-\frac{\sigma}{2}\beta t+ B_t\right)\right]$. If $\beta>0$, the Brownian motion in the exponent has negative drift so that $P$ may not reach any arbitrarily given level above its starting value in finite time and we have $\lim_{t\to\infty}P_t=0$ almost surely. If the initial price of the asset is below the gain-exit target then there is a strictly positive probability that the asset is never sold. Moreover, the agent who fails to sell the asset at his target gain-exit level will suffer a total loss in the long run. \[remark:bm\] The expression of the target gain-exit threshold and in turn the value function of the exit problem are available in closed-form, thanks to the specialization that the degree of risk-aversion over gains is the same as that of risk-seeking over losses. This allows us to make a lot of analytical progress when studying the entry problem. We will also lose the closed-form expressions in Lemma \[lem:henderson\] if a fixed transaction cost on sale is introduced: In this case the agent will only sell the asset when the utility of the sale proceeds $U(\gamma p - H-\Gamma)$ is larger than that of inaction $U(-H)$ where $\Gamma\geq 0$ represents a fixed market exit fee.
Then the payoff function of the exit problem will become $G_1(p;H):=\max\{U(\gamma p - H-\Gamma),U(-H)\}$. \[remark:modelassump\] We now proceed to describe the optimal entry strategy of the agent. Suppose $\alpha\leq \beta< 1$. For the entry problem : 1. If $\frac{\lambda}{\gamma}\leq\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$, there exists $p_1^{*}\in[0,\infty)$ such that the agent will enter the trade when the asset price is at or above $p_1^*$. 2. If $\frac{\lambda}{\gamma}> \left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$, there exists a constant $C\in(0,\infty)$ independent of $\Psi$ and $R$ such that: 1. If $\frac{\Psi}{R}< C$, there exists $0\leq p_1^{*}<p_2^{*}<\infty$ such that the agent will enter the trade when the asset price is within the interval $[p_1^{*},p_2^{*}]$. 2. If $\frac{\Psi}{R}\geq C$, the agent will never enter the trade. In all cases, the value function is given by $V_2(p)=\bar{g}_2(p^{\beta})$ where $\bar{g}_2=\bar{g}_2(\theta)$ is the smallest concave majorant to $$\begin{aligned} g_2(\theta)&:=\max\left\{v_1(\theta),-kR^{\alpha}\right\}:=\max\left\{(R+\Psi)^{\alpha}f\left(\left(\frac{\gamma}{R+\Psi}\right)^{\beta}\theta\right),-kR^{\alpha}\right\} \label{eq:g2}\end{aligned}$$ with $$\begin{aligned} f(x):=\frac{\frac{\alpha}{\beta}c^{1-\beta}(c-1)^{\alpha-1}x-k(\frac{\lambda}{\gamma} x^{1/\beta}+1)^{\beta}}{(\frac{\lambda}{\gamma} x^{1/\beta}+1)^{\beta-\alpha}}. \label{eq:f}\end{aligned}$$ \[prop:entry\] Suppose $\alpha< \beta= 1$. For the entry problem : 1. If $\frac{\lambda}{\gamma}\leq \frac{\alpha}{k}(c-1)^{\alpha-1}$, there exists $p_1^{*}\in[0,\infty)$ such that the agent will enter the trade when the asset price is at or above $p_1^*$. 2. If $\frac{\alpha}{k}(c-1)^{\alpha-1}<\frac{\lambda}{\gamma}<\frac{(c-1)^{\alpha-1}}{k}$, there exists a constant $C\in(0,\infty)$ independent of $\Psi$ and $R$ such that: 1. 
If $\frac{\Psi}{R}< C$, there exists $0\leq p_1^{*}<p_2^{*}<\infty$ such that the agent will enter the trade when the asset price is within the interval $[p_1^{*},p_2^{*}]$. 2. If $\frac{\Psi}{R}\geq C$, the agent will never enter the trade. 3. If $\frac{\lambda}{\gamma}\geq \frac{(c-1)^{\alpha-1}}{k}$, the agent will never enter the trade. The value function has the same form as in Proposition \[prop:entry\] upon setting $\beta=1$. \[prop:entryspecial\] The value function of the entry problem is characterized by the smallest concave majorant of the payoff function defined in . Indeed, the function $v_1$ defined in is simply the scaled value function of the exit problem such that $v_1(\theta)=V_1(\theta^{1/\beta}; \lambda \theta^{1/\beta}+\Psi+R)$. At a mathematical level, the various cases arising in Propositions \[prop:entry\] and \[prop:entryspecial\] are due to the different possible shapes of $v_1$ under different combinations of parameters. Some illustrations are given in Figure \[fig:valfun\]. Economically, the optimal entry strategy crucially depends on the level of transaction costs relative to the market and preference parameters. A fixed market entry fee in general discourages trading when the asset price is low. Paying a flat fee of \$10 to purchase an asset at \$20 is much less appealing than when the asset is trading at \$1000, because in the former case the asset has to increase in value by 50% just to break even against the fixed transaction fee paid. Meanwhile, proportional transaction costs are most significant for assets trading at a high nominal price. A 10% transaction fee charged on a property worth a million is much more expensive in monetary terms than the same percentage fee charged on a penny stock. In case (1) of both Propositions \[prop:entry\] and \[prop:entryspecial\], the proportional transaction costs are relatively low. Hence the agent does not mind purchasing the asset at a high nominal price.
He will just avoid purchasing the asset when its price is very low, due to the fixed transaction costs, and therefore the purchase region is of the form $[p_1^*,\infty)$. In case (2)(a), proportional transaction costs start becoming significant. On the one hand, the agent avoids initiating the trade when the asset price is too low since the fixed entry cost will be too large relative to the size of the trade. On the other hand, the agent does not want to trade an expensive asset when the proportional costs are large. Upon balancing these two factors, the agent will wait when the asset price is either too low or too high, and will only purchase the asset when the price first enters an interval $[p_1^*, p_2^*]$. A very interesting feature of the optimal entry strategy is that the waiting region here is disconnected. In case (2)(b) of Proposition \[prop:entry\], or cases (2)(b) and (3) of Proposition \[prop:entryspecial\], the overall level of transaction costs is too high and hence the agent is discouraged from entering the trade in the first place. The key difference between Propositions \[prop:entry\] and \[prop:entryspecial\] is that when the asset has a strictly positive drift ($\beta<1\iff \mu>0$), one must impose a strictly positive fixed entry cost in order to stop the agent from trading at all price levels (if $\Psi=0$, then either case (1) or (2)(a) applies, in which case the agent is willing to enter the trade at a certain price level). When the asset is a statistically fair gamble ($\beta=1\iff \mu=0$), a high proportional transaction cost alone is sufficient to discourage the agent from trading. It is interesting to note that the trading decision also depends on the agent’s reference point $R$. Comparing case (2)(a) and case (2)(b) in Propositions \[prop:entry\] and \[prop:entryspecial\], a low value of $R$ will more often lead to the “no trading” case. The economic interpretation is that an agent with a low aspiration level (i.e.
a low target benchmark) is less likely to participate in trading, especially when the (proportional) costs of trading are high. When viewed in conjunction with the optimal exit strategy (as per Lemma \[lem:henderson\]), our model can encapsulate many styles of trading behavior. If $\beta\leq 0$ or $0<\beta<\alpha<1$, so that the exit problem is ill-posed, then any purchase strategy can lead to infinite expected utility in the entry problem . For example, one can purchase the asset at time zero (i.e. the choice of entry time is $\tau=0$) and then consider a sequence of sale times $\nu_n:=\inf\{t\geq 0: P_t\geq n\}$. As $n\to \infty$, the expected utility approaches infinity. This corresponds to a “buy and hold” strategy. In case (1) or (2)(a) of Propositions \[prop:entry\] and \[prop:entryspecial\], if the asset price starts below $p_1^*$ at time zero, then the agent will purchase the asset when its price level increases to $p_1^*$.[^6] The agent will then seek to sell this asset later when its price level further increases to $\frac{c(\lambda p_1^*+\Psi+R)}{\gamma}$. This trading rule is thus a momentum strategy of the form “buy high, sell higher”. If the asset price starts above $p_2^*$ at time zero in case (2)(a), then the agent will buy the asset when its price level drops to $p_2^*$ and later sell it when the price increases to $\frac{c(\lambda p_2^*+\Psi+R)}{\gamma}$. This is a counter-trend trading strategy of the form “buy low, sell high”. Finally, in the high transaction cost cases (case (2)(b) of Proposition \[prop:entry\], and case (2)(b) or (3) of Proposition \[prop:entryspecial\]) the agent will never participate in trading at any asset price level. The various cases above are generated by different levels of transaction costs relative to the other model parameters. The following two corollaries highlight the role of transaction costs in relation to the optimal trading strategies.
If $\lambda=\gamma=1$, the agent will purchase the asset when its price level is at or above $p_1^*$ for some $p_1^*\in[0,\infty)$. \[cor:noprop\] Under parameter combinations such that $p_1^*$ is well defined, if $\Psi=0$ then $p_1^*=0$. \[cor:nofix\] From Corollary \[cor:noprop\], if there is no proportional transaction cost then the agent does not mind entering the trade at a high nominal price level, because he no longer needs to worry about the large trading fees arising from the proportional nature of the transaction costs. Hence “buy low, sell high” will not be observed as an optimal strategy. Similarly, Corollary \[cor:nofix\] suggests that in the absence of a fixed market entry fee the agent is happy to purchase the asset at an arbitrarily low price (in the non-degenerate case), since he no longer needs to weigh the size of the trade against a fixed cost for break-even considerations. Thus “buy high, sell higher” will not be an optimal strategy in this special case. The critical trading boundaries in Propositions \[prop:entry\] and \[prop:entryspecial\], although not available in closed form in general, can be characterized easily, and in turn we can deduce some useful comparative statics. In case (1) or case (2)(a) of Propositions \[prop:entry\] and \[prop:entryspecial\], $p_1^*=\frac{R+\Psi}{\gamma} (x_1^*)^{1/\beta}$ where $x^*_1$ is the (smaller) solution to $$\begin{aligned} \left(1+\frac{\Psi}{R}\right)^{\alpha}\frac{k\left(\frac{\lambda}{\gamma}x^{\frac{1}{\beta}}+1\right)^\beta\left[\frac{\lambda }{\gamma}\left(1-\frac{\alpha}{\beta}\right)x^{\frac{1}{\beta}}+1\right]-\frac{\alpha}{\beta}c^{1-\beta}(c-1)^{\alpha-1}\frac{\lambda}{\gamma}\left(1-\frac{\alpha}{\beta}\right)x^{\frac{1}{\beta}+1}}{\left(\frac{\lambda}{\gamma}x^{\frac{1}{\beta}}+1\right)^{\beta-\alpha+1}}=k.
\label{eq:p1eq}\end{aligned}$$ In case (2)(a) of Propositions \[prop:entry\] and \[prop:entryspecial\], $p_2^*=\frac{R+\Psi}{\gamma}(x_2^*)^{1/\beta}$ where $x_2^*$ is the unique solution to $$\begin{aligned} c^{1-\beta}(c-1)^{\alpha-1}\left(x^{-\frac{1}{\beta}}+\frac{\lambda \alpha}{\gamma\beta}\right)-\frac{k\lambda}{\gamma} \left(x^{-\frac{1}{\beta}}+\frac{\lambda}{\gamma}\right)^{\beta}=0. \label{eq:p2eq}\end{aligned}$$ In the special case of $\alpha< \beta=1$, we have $$\begin{aligned} p_2^{*}=\frac{(R+\Psi)[(c-1)^{\alpha-1}-k\lambda/\gamma]}{\lambda[k\lambda/\gamma-\alpha(c-1)^{\alpha-1}]}.\end{aligned}$$ \[prop:tradingregion\] Under the parameter combinations such that $p_1^*$ and/or $p_2^*$ are well defined, we have: 1. $p_1^*$ is decreasing in $\gamma$ and increasing in $\Psi$. 2. $p_2^*$ is decreasing in $\lambda$, increasing in $\gamma$ and increasing in $\Psi$. \[prop:compstat\] Figure \[fig:compgamma\] shows the critical purchase boundaries $p_1^*$ and $p_2^*$ as $\gamma$ varies. For large values of $\gamma$ such that the condition in case (1) of Proposition \[prop:entry\] holds, the optimal strategy is to buy the asset whenever its price exceeds $p_1^*$; the agent is willing to enter the trade no matter how high the price is. Once $\gamma$ is smaller than a certain critical value (marked by the vertical dotted line in the figure), the parameter condition in case (2)(a) of Proposition \[prop:entry\] applies. The optimal strategy is now to purchase the asset only when its price is within a bounded range $[p_1^*,p_2^*]$. As $\gamma$ decreases further, $p_1^*$ increases while $p_2^*$ decreases, so that the purchase region $[p_1^*,p_2^*]$ shrinks. Once $\gamma$ reaches another critical value, $p_1^*$ and $p_2^*$ converge and the purchase region vanishes entirely.
This corresponds to case (2)(b) of Proposition \[prop:entry\], in which the agent will not enter the trade at any price level.[^7] We do not mention in Proposition \[prop:compstat\] the effect of $\lambda$ on $p_1^*$. While the example in Figure \[fig:complambda\] shows that $p_1^*$ is increasing in $\lambda$, numerical results show that $p_1^*$ is not monotonic in $\lambda$ in general. See Figure \[fig:lam\_countereg\]. Hence, when viewed in conjunction with $p_2^*$, the purchase region $[p_1^*, p_2^*]$ does not necessarily shrink uniformly when the proportional cost on purchase increases; i.e. the agent does not necessarily delay the purchase decision. Similar observations regarding the potential non-monotonicity of trading decisions with respect to (proportional) transaction costs are made by [@hobson-tse-zhu19a] and [@hobson-tse-zhu19b] in the context of portfolio optimization. Similarly, we can also examine the impact of the fixed market entry cost on the purchase decision. As shown in Figure \[fig:comppsi\], $p_1^*$ and $p_2^*$ are both increasing in $\Psi$. The agent in general is looking to buy the asset at a low price and then sell it at a high price to make a profit. However, the fixed entry cost makes it less appealing to trade an asset with a low nominal price. As a result, the purchase region $[p_1^*,p_2^*]$ shifts upwards as $\Psi$ increases, and thus the agent will only enter the trade when the price level is reasonably high relative to the fixed cost. Once $\Psi$ reaches a critically high value, $p_1^*$ and $p_2^*$ coincide and the trading region vanishes. This reflects the high fixed transaction cost scenario in case (2)(b) of Proposition \[prop:entry\]. Suppose there is a policy maker who wants to discourage the agent from purchasing the asset (for example, as a means to cool down a highly speculative real estate market). A natural measure to curb trading participation is to increase transaction costs.
However, Figure \[fig:compstat\] reveals that there is a subtle difference between the impact of proportional and fixed transaction costs on the agent’s trading behavior. From Figure \[fig:compgamma\], the effect of increasing the proportional transaction cost on sale (i.e. decreasing $\gamma$) is “monotonic” in terms of changing the trading decision of the agent. At any given current asset price level, decreasing $\gamma$ can only take the agent from the purchase region to the no trade region. Increasing the proportional transaction cost on sale can therefore unambiguously suppress trading activity in the market. In contrast, the impact of the fixed market entry cost is somewhat unclear. Take Figure \[fig:comppsi\] as an example and suppose the current price of the asset is \$100. If there is no fixed market entry fee initially (i.e. $\Psi=0$), the agent will not participate in trading as he is in the no trade region. However, a policy of increasing $\Psi$ from zero to \$4 will now put the agent in the purchase region, such that he is willing to purchase the asset immediately. This is exactly the opposite of the intended outcome of the policy, because the increase in $\Psi$ actually encourages trading participation. The rationale behind this phenomenon is as follows: without any fixed transaction cost, the agent in general wants to wait when the asset price is high so as to secure a lower entry level (and to mitigate the proportional transaction costs when the asset is expensive). When the fixed entry cost increases, purchasing the asset at a low price is no longer favorable and hence the agent may not want to delay the purchase decision anymore. Of course, as the fixed cost increases further, say from $\Psi=4$ to $\Psi=8$, the agent will eventually enter the no trade region again.
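In the special case $\alpha<\beta=1$ the closed-form expression for $p_2^*$ makes such comparative statics easy to check directly. A hedged numerical sketch, with illustrative parameter values chosen to satisfy the case (2) condition of Proposition \[prop:entryspecial\]:

```python
# Closed-form upper purchase boundary for alpha < beta = 1:
#   p2* = (R + Psi) * (A - k*xi) / (lam * (k*xi - alpha*A)),  A = (c-1)**(alpha-1),
# valid when alpha*A/k < xi < A/k with xi = lam/gamma.

def exit_multiple(alpha, beta, k):
    # bisection for c > 1 solving (alpha/beta)c(c-1)^(alpha-1) - (c-1)^alpha - k = 0
    def phi(c):
        return (alpha / beta) * c * (c - 1) ** (alpha - 1) - (c - 1) ** alpha - k
    lo, hi = 1 + 1e-12, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(lo) * phi(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def p2_star(alpha, k, lam, gamma, R, Psi):
    c = exit_multiple(alpha, 1.0, k)
    A = (c - 1) ** (alpha - 1)
    xi = lam / gamma
    assert alpha * A / k < xi < A / k, "outside the case (2) parameter regime"
    return (R + Psi) * (A - k * xi) / (lam * (k * xi - alpha * A))

base = p2_star(0.5, 1.5, 1.5, 1.0, 10.0, 2.0)
# p2* grows with the fixed entry fee Psi and shrinks as the purchase cost lam rises
```

With these (made-up) parameters, raising $\Psi$ pushes $p_2^*$ up and raising $\lambda$ pulls it down, in line with Proposition \[prop:compstat\].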
Nonetheless, when the economy consists of multiple agents with heterogeneous preference parameters, it is unclear from the outset whether increasing the fixed transaction costs can uniformly discourage trading participation for all agents. Similarly, the non-monotonicity of $p_1^*$ with respect to $\lambda$, the proportional transaction cost on purchase, also implies that an increase in $\lambda$ can potentially bring certain agents from the no trade region to the purchase region. Our results suggest that the proportional transaction cost on sale can serve as a superior tool to control speculative trading in a market, as its effect is unambiguous. Concluding remarks {#sect:conc} ================== This paper considers a dynamic trading model under Prospect Theory preferences with transaction costs. By solving a sequential optimal stopping problem, we find that the optimal trading strategy can take various forms depending on the model parameters and the price level of the asset. The impact of transaction costs is subtle. In contrast to conventional wisdom, increasing the fixed market entry cost does not necessarily deter economic agents from trading participation. These results could potentially help policy makers better understand how undesirable speculative trading behaviors in certain markets can be effectively curbed. Our key mathematical results are derived under a somewhat stylized modeling specification. In particular, asymmetry in the degrees of risk-aversion/seeking over gains/losses, fixed transaction costs on sale, negative aspiration levels and depreciating assets are currently omitted from the analysis. While these simplifications allow us to derive very sharp characterizations and comparative statics of the optimal trading rules, it will nonetheless be constructive to extend the model to further examine the impact of other economic factors. For example, the feature of stop-loss is currently absent from all the non-trivial strategies derived in our model.
Inspired by [@henderson12], voluntary stop-loss can be observed in this style of optimal stopping model when the excess return of the asset is negative. A more ambitious goal is to further incorporate probability weighting within our continuous-time optimal stopping model (as per [@xu-zhou13] and [@henderson-hobson-tse18]) to fully reflect the features of the Prospect Theory framework of [@tversky-kahneman92]. However, technical subtleties are likely to arise due to the time inconsistency brought about by probability weighting. Precise formulation of the problem, as well as development of the appropriate mathematical techniques, should prove to be another interesting avenue for future research. Appendix: proof of main results {#appendix-proof-of-main-results .unnumbered} =============================== This largely follows from Henderson (2012) and here we provide a quick sketch of the proof. If $\beta\leq 0$, the problem is clearly ill-posed since $P_t=P_0\exp\left[\left(\mu-\frac{\sigma^2}{2}\right)t+\sigma B_t\right]=P_0\exp\left[\sigma\left(-\frac{\sigma}{2}\beta t+ B_t\right)\right]$ and the drift term in the exponent is non-negative. The price process $P$ will hence reach any strictly positive level in finite time almost surely. For example, the sequence of stopping rules $\nu_{n}:=\inf\{t>0:P_t\geq n\}$ gives $\mathbb{E}[G_1(P_{\nu_n};H)]=U(\gamma n - H)\to \infty$ as $n\to \infty$. For $\beta>0$, the scaled payoff function is given by $$\begin{aligned} g_1(\theta)=g_1(\theta;H)=G_1(s^{-1}(\theta);H)&=U(\gamma \theta^{\frac{1}{\beta}} -H) = \begin{cases} (\gamma \theta^{\frac{1}{\beta}} -H)^{\alpha},& \theta\geq \left(\frac{H}{\gamma}\right)^\beta; \\ -k(H - \gamma \theta^{\frac{1}{\beta}})^{\alpha},& 0\leq \theta< \left(\frac{H}{\gamma}\right)^\beta.
\end{cases}\end{aligned}$$ It is straightforward to work out the derivatives of $g_1$ as $$\begin{aligned} g_1'(\theta)= \begin{cases} \frac{\alpha\gamma}{\beta}\theta^{\frac{1}{\beta}-1}(\gamma \theta^{\frac{1}{\beta}} -H)^{\alpha-1},& \theta\geq \left(\frac{H}{\gamma}\right)^\beta; \\ \frac{k\alpha\gamma}{\beta}\theta^{\frac{1}{\beta}-1}(H - \gamma \theta^{\frac{1}{\beta}})^{\alpha-1},& 0\leq \theta< \left(\frac{H}{\gamma}\right)^\beta, \end{cases}\end{aligned}$$ and $$\begin{aligned} g_1''(\theta)= \begin{cases} \frac{\alpha\gamma}{\beta}\theta^{\frac{1}{\beta}-2}(\gamma\theta^{\frac{1}{\beta}}-H)^{\alpha-2}\left[\frac{\gamma(\alpha-\beta)}{\beta}\theta^{\frac{1}{\beta}}-\frac{1-\beta}{\beta}H\right],& \theta\geq \left(\frac{H}{\gamma}\right)^\beta; \\ \frac{k\alpha\gamma}{\beta}\theta^{\frac{1}{\beta}-2}(H-\gamma\theta^{\frac{1}{\beta}})^{\alpha-2}\left[\frac{\gamma(\beta-\alpha)}{\beta}\theta^{\frac{1}{\beta}}+\frac{1-\beta}{\beta}H\right],& 0\leq \theta< \left(\frac{H}{\gamma}\right)^\beta. \end{cases}\end{aligned}$$ When $0<\beta<\alpha<1$, over $\theta>\left(\frac{H}{\gamma}\right)^\beta$ the function $g_1$ is first increasing concave and then increasing convex with $\lim_{\theta\to\infty} g_1'(\theta)=\infty$. The smallest concave majorant of $g_1$ is not well defined in this case. Then again a sequence of stopping times $\nu_n$ with $\nu_n\uparrow \infty$ can be constructed which yields infinite expected utility. If $\alpha<\beta\leq 1$ or $\alpha=\beta<1$, then $g_1$ is increasing concave on $\theta>\left(\frac{H}{\gamma}\right)^\beta$ and is increasing convex on $0\leq \theta<\left(\frac{H}{\gamma}\right)^\beta$. The smallest concave majorant can be formed by drawing a straight line from $(0,g_1(0))$ which touches $g_1$ at some $\theta^{*}>\left(\frac{H}{\gamma}\right)^\beta$. In particular, $\theta^*$ is a solution to $\frac{g_1(\theta)-g_1(0)}{\theta}=g_1'(\theta)$ on $\theta>\left(\frac{H}{\gamma}\right)^\beta$, i.e.
$$\begin{aligned} \frac{\alpha\gamma}{\beta}\theta^{\frac{1}{\beta}-1}(\gamma \theta^{\frac{1}{\beta}}-H)^{\alpha-1}=\frac{(\gamma \theta^{\frac{1}{\beta}} -H)^{\alpha}+kH^\alpha}{\theta}.\end{aligned}$$ Conjecture a solution of the form $\theta^*=c^\beta \left(\frac{H}{\gamma}\right)^\beta$ for some constant $c>1$; direct substitution then shows that the constant $c$ should solve . The smallest concave majorant of $g_1$ is then $$\begin{aligned} \bar{g}_1(\theta)&= \begin{cases} g_1(\theta),& \theta>\theta^*\\ g_1(0)+\theta g_1'(\theta^*),& 0\leq \theta<\theta^* \end{cases}\\ &= \begin{cases} (\gamma \theta^{\frac{1}{\beta}} -H)^{\alpha},& \theta>c^\beta \left(\frac{H}{\gamma}\right)^\beta\\ -kH^{\alpha}+\frac{\alpha}{\beta}H^{\alpha-\beta}c^{1-\beta}(c-1)^{\alpha-1}\gamma^\beta\theta,&0\leq \theta<c^\beta \left(\frac{H}{\gamma}\right)^\beta \end{cases}\end{aligned}$$ The value function is given by $V_1(p;H)=\bar{g}_1(s(p))=\bar{g}_1(p^\beta)$ leading to . The corresponding optimal stopping time is $\tau=\inf\left\{t\geq 0:\Theta_t=\theta^{*}\right\}=\inf\left\{t\geq 0:P_t=c \left(\frac{H}{\gamma}\right)\right\}$. We start with two useful lemmas before proving Propositions \[prop:entry\] and \[prop:entryspecial\]. Write $\xi:=\frac{\lambda}{\gamma}$. For the function $f$ defined in we have $$\begin{aligned} \lim_{x\to +\infty}f(x)= \begin{cases} +\infty,& \xi<\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}};\\ 0,& \xi=\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}};\\ -\infty,& \xi>\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}. \end{cases} \end{aligned}$$ Moreover: 1. Suppose $\alpha<\beta<1$: 1. If $\xi\leq\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$, then $f$ is an increasing concave function. 2.
If $\xi>\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$, then $f$ is concave increasing on $[0,x_2^*]$, concave decreasing on $[x_2^*,\tilde{x}]$ and convex decreasing on $[\tilde{x},\infty)$. Here $x_2^*$ and $\tilde{x}$ are respectively the solutions to the equation $$\begin{aligned} c^{1-\beta}(c-1)^{\alpha-1}\left(x^{-\frac{1}{\beta}}+\frac{\xi \alpha}{\beta}\right)-k\xi\left(x^{-\frac{1}{\beta}}+\xi\right)^\beta=0 \label{eq:turnpt} \end{aligned}$$ and $$\begin{aligned} c^{1-\beta}(c-1)^{\alpha-1}\left[-\frac{\xi \alpha}{ \beta^2}(\beta-\alpha)+\frac{1}{\beta}\left(\frac{\alpha}{\beta}-\beta+\alpha-1\right)x^{-\frac{1}{\beta}}\right]-k\left(\xi+x^{-\frac{1}{\beta}}\right)^\beta\left[-\xi\left(1-\frac{\alpha}{\beta}\right)+\left(\frac{1}{\beta}-1\right)x^{-\frac{1}{\beta}}\right]=0. \label{eq:inflexpt} \end{aligned}$$ 2. Suppose $\alpha<\beta=1$: 1. If $\xi\leq\frac{\alpha}{k}(c-1)^{\alpha-1}$, then $f$ is an increasing concave function. 2. If $\frac{\alpha}{ k}(c-1)^{\alpha-1}<\xi\leq\frac{1}{ k}(c-1)^{\alpha-1}$, then $f$ is concave increasing on $[0,x_2^*]$, concave decreasing on $[x_2^*,\tilde{x}]$ and convex decreasing on $[\tilde{x},\infty)$ with $$\begin{aligned} x^*_2:=\frac{(c-1)^{\alpha-1}-k\xi}{\xi\left[k\xi-\alpha(c-1)^{\alpha-1}\right]},\qquad \tilde{x}:=\frac{2(c-1)^{\alpha-1}-k\xi}{\xi\left[k\xi-\alpha(c-1)^{\alpha-1}\right]}. \end{aligned}$$ 3. If $\xi>\frac{1}{ k}(c-1)^{\alpha-1}$, then $f$ is a decreasing function. \[lem:shapef\] We can rewrite $f$ as $$\begin{aligned} f(x)=\frac{\frac{\alpha}{\beta}c^{1-\beta}(c-1)^{\alpha-1}-k(\xi +x^{-\frac{1}{\beta}})^{\beta}}{(\xi +x^{-\frac{1}{\beta}})^{\beta-\alpha}}x^{\frac{\alpha}{\beta}} \end{aligned}$$ such that $\displaystyle \lim_{x\to \infty}f(x)=\pm \infty$ when $\xi \gtrless \left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$. 
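Before handling the corner case, these tail limits are easy to confirm numerically. A minimal sketch with illustrative parameters $\alpha=0.5$, $\beta=0.8$, $k=1.5$ (the bisection for $c$ mirrors the proof of Lemma \[lem:henderson\]):

```python
# Evaluate f at a large argument for xi on either side of the critical level
# [alpha/(beta*k) * c**(1-beta) * (c-1)**(alpha-1)]**(1/beta).

def exit_multiple(alpha, beta, k):
    def phi(c):
        return (alpha / beta) * c * (c - 1) ** (alpha - 1) - (c - 1) ** alpha - k
    lo, hi = 1 + 1e-12, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(lo) * phi(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

alpha, beta, k = 0.5, 0.8, 1.5
c = exit_multiple(alpha, beta, k)
A = (alpha / beta) * c ** (1 - beta) * (c - 1) ** (alpha - 1)
xi_crit = (A / k) ** (1 / beta)   # threshold separating the two limits

def f(x, xi):
    y = xi * x ** (1 / beta) + 1
    return (A * x - k * y ** beta) / y ** (beta - alpha)

# f(., xi) eventually turns positive below the threshold, negative above it
low_tail, high_tail = f(1e8, 0.9 * xi_crit), f(1e8, 1.1 * xi_crit)
```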
The corner case of $\xi = \left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$ can be analyzed by a simple application of L’Hospital’s rule. We now derive the shapes of $f$ by first focusing on the case of $\beta\neq 1$. Direct differentiation gives $$\begin{aligned} f'(x)&=\frac{\alpha}{\beta}\frac{c^{1-\beta}(c-1)^{\alpha-1}\left(\frac{\xi\alpha}{\beta}x^{\frac{1}{\beta}}+1\right)-k\xi x^{\frac{1}{\beta}-1}\left(\xi x^{\frac{1}{\beta}}+1\right)^\beta}{\left(\xi x^{\frac{1}{\beta}}+1\right)^{\beta-\alpha+1}} =\frac{\alpha x^{\frac{1}{\beta}}h_1(x^{-\frac{1}{\beta}})}{\beta\left(\xi x^{\frac{1}{\beta}}+1\right)^{\beta-\alpha+1}} \label{eq:f_firstder} \end{aligned}$$ with $$\begin{aligned} h_1(z):=c^{1-\beta}(c-1)^{\alpha-1}\left(z+\frac{\xi \alpha}{\beta}\right)-k\xi\left(z+\xi\right)^\beta, \end{aligned}$$ and $$\begin{aligned} f''(x)&=\frac{\xi\alpha x^{\frac{1}{\beta}-2}}{\beta\left(\xi x^{\frac{1}{\beta}}+1\right)^{\beta-\alpha+2}}\Biggl\{c^{1-\beta}(c-1)^{\alpha-1}x\left[-\frac{\xi\alpha}{\beta^2}(\beta-\alpha)x^{\frac{1}{\beta}}+\frac{1}{\beta}\left(\frac{\alpha}{\beta}-\beta+\alpha-1\right)\right] \\ &\qquad -k\left(\xi x^{\frac{1}{\beta}}+1\right)^\beta\left[-\xi\left(1-\frac{\alpha}{\beta}\right)x^{\frac{1}{\beta}}+\frac{1}{\beta}-1\right]\Biggl\} \\ &=\frac{\xi\alpha x^{\frac{2}{\beta}-1}h_2(x^{-\frac{1}{\beta}})}{\beta\left(\xi x^{\frac{1}{\beta}}+1\right)^{\beta-\alpha+2}} \end{aligned}$$ where $$\begin{aligned} h_2(z):=c^{1-\beta}(c-1)^{\alpha-1}\left[-\frac{\xi \alpha}{ \beta^2}(\beta-\alpha)+\frac{1}{\beta}\left(\frac{\alpha}{\beta}-\beta+\alpha-1\right)z\right]-k\left(\xi+z\right)^\beta\left[-\xi\left(1-\frac{\alpha}{\beta}\right)+\left(\frac{1}{\beta}-1\right)z\right]. \end{aligned}$$ We first investigate the convexity/concavity of $f$ by studying the sign of $f''(x)$, which is determined by that of $h_2(x^{-\frac{1}{\beta}})$. 
Check that $$\begin{aligned} h_2(0)&=\xi\left(1-\frac{\alpha}{\beta}\right)\left[-\frac{\alpha}{\beta}c^{1-\beta}(c-1)^{\alpha-1}+k\xi^\beta\right], \\ h_2'(0)&=c^{1-\beta}(c-1)^{\alpha-1}\frac{1}{\beta}\left(\frac{\alpha}{\beta}-\beta+\alpha-1\right)-k\xi^{\beta}\left(\frac{1}{\beta}-\beta+\alpha-1\right) \end{aligned}$$ and $$\begin{aligned} h_2''(z)=-k\left(\xi+z\right)^{\beta-2}\left[(1-\beta)(1+\beta)z+\xi(1-\beta)(2+\beta-\alpha)\right]<0 \end{aligned}$$ for all $z>0$ since $\alpha\leq \beta< 1$, and thus $h_2$ is concave. Then there are two possibilities. Suppose $\xi\leq\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$; then $h_2(0)\leq 0$ and $$\begin{aligned} h_2'(0)&=c^{1-\beta}(c-1)^{\alpha-1}\frac{1}{\beta}\left(\frac{\alpha}{\beta}-\beta+\alpha-1\right)-k\xi^{\beta}\left(\frac{1}{\beta}-\beta+\alpha-1\right) \\ &<c^{1-\beta}(c-1)^{\alpha-1}\frac{1}{\beta}\left(\frac{\alpha}{\beta}-\beta+\alpha-1\right)-k\xi^{\beta}\left(\frac{\alpha}{\beta}-\beta+\alpha-1\right) \\ &=k\left(\frac{\alpha}{\beta}-\beta+\alpha-1\right)\left[\frac{c^{1-\beta}(c-1)^{\alpha-1}}{\beta k}-\xi^\beta\right] \\ &\leq k\left(\frac{\alpha}{\beta}-\beta+\alpha-1\right)\left[\frac{\alpha c^{1-\beta}(c-1)^{\alpha-1}}{\beta k}-\xi^\beta\right] \\ &\leq 0 \end{aligned}$$ where we have used the facts that $\alpha<1$ and $\frac{\alpha}{\beta}-\beta+\alpha-1<\alpha-\beta\leq 0$. Since $h_2$ is concave, we must have $h_2(z)\leq 0$ for all $z>0$. Hence $f''(x)\leq 0$ for all $x\geq 0$, i.e. $f$ is a concave function. Suppose instead $\xi>\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$; then $h_2(0)> 0$ and $$\begin{aligned} \lim_{z\to\infty}\frac{h_2(z)}{z^{\beta+1}}=-k\left(\frac{1}{\beta}-1\right)<0 \end{aligned}$$ such that $h_2(z)\to -\infty$ as $z\to\infty$. As $h_2$ is concave, $h_2(z)$ must down-cross zero exactly once on $(0,\infty)$.
Hence $f''(x)\propto h_2(x^{-\frac{1}{\beta}})$ has exactly one sign change from negative to positive, i.e. $f$ is concave for small $x$ and convex for large $x$ with a unique inflexion point $\tilde{x}$ which is given by the solution to $h_2(x^{-\frac{1}{\beta}})=0$. This corresponds to equation . Now we look at the monotonicity of $f$ via the sign of $f'(x)$, which in turn is determined by that of $h_1(x^{-\frac{1}{\beta}})$. Check that $$\begin{aligned} h_1(0)&=\xi\left[\frac{\alpha}{\beta}c^{1-\beta}(c-1)^{\alpha-1}-k\xi^\beta\right],\\ h_1'(z)&=c^{1-\beta}(c-1)^{\alpha-1}-\frac{k\xi\beta}{\left(z+\xi\right)^{1-\beta}}. \end{aligned}$$ Observe that $h_1'$ is increasing and thus $h_1$ is convex. There are two cases. Suppose $\xi\leq\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$; then $h_1(0)\geq 0$ and $$\begin{aligned} h_1'(0)&=c^{1-\beta}(c-1)^{\alpha-1}-k\beta\xi^\beta \geq \alpha c^{1-\beta}(c-1)^{\alpha-1}-k\beta\xi^\beta \geq 0. \end{aligned}$$ As $h_1$ is convex, we must have $h_1(z)\geq 0$ for all $z>0$. Hence $f'(x)\geq 0$ for all $x\geq 0$, i.e. $f$ is an increasing function. Together with the consideration of $f''$ in this parameter regime, $f$ is an increasing concave function. Suppose $\xi>\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$; then we have $h_1(0)<0$ instead. We also have $$\begin{aligned} \lim_{z\to\infty}\frac{h_1(z)}{z}=c^{1-\beta}(c-1)^{\alpha-1}>0 \label{eq:h1infty} \end{aligned}$$ and hence $h_1(z)\to \infty$ as $z\to\infty$. Since $h_1$ is convex, $h_1$ must up-cross zero exactly once on $(0,\infty)$. Therefore $f'(x)\propto h_1(x^{-\frac{1}{\beta}})$ changes sign exactly once, from which we conclude that $f$ is first increasing and then decreasing with a unique turning point $x_2^*$. Moreover, $x_2^*$ is the solution to $h_1(x^{-\frac{1}{\beta}})=0$, which is equivalent to .
Taking the behavior of $f''$ into consideration, we conclude that $f$ is increasing concave on $[0,x_2^*]$, decreasing concave on $[x_2^*,\tilde{x}]$ and decreasing convex on $[\tilde{x},\infty)$. The case of $\beta=1$ can be handled similarly. The key difference is that the limit in no longer holds when $\beta=1$; instead we have $$\begin{aligned} \lim_{z\to\infty}\frac{h_1(z)}{z}=(c-1)^{\alpha-1}-k\xi \end{aligned}$$ which can be either positive or negative. In the case of $\xi>\frac{(c-1)^{\alpha-1}}{k}$, we have $h_1(z)\to -\infty$ as $z\to \infty$. We can then deduce that $f'(x)$ is negative for all $x$ and thus $f$ is decreasing. If $\xi:=\frac{\lambda}{\gamma}>\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$, then $f(x)<0$ for all $x$ where $f$ is defined in . \[lem:zerobound\] The result follows directly from the definition of $f$: $$\begin{aligned} f(x)&=\left[\frac{\alpha}{\beta}c^{1-\beta}(c-1)^{\alpha-1}x\left(\frac{\lambda}{\gamma} x^{1/\beta}+1\right)^{-\beta}-k\right]\left(\frac{\lambda}{\gamma} x^{1/\beta}+1\right)^{\alpha} \\ &<\left[k\left(\frac{\lambda}{\gamma}\right)^\beta x\left(\frac{\lambda}{\gamma} x^{1/\beta}+1\right)^{-\beta}-k\right]\left(\frac{\lambda}{\gamma} x^{1/\beta}+1\right)^{\alpha}\\ &<\left[k\left(\frac{\lambda}{\gamma}\right)^\beta x\left(\frac{\lambda}{\gamma} x^{1/\beta}+0\right)^{-\beta}-k\right]\left(\frac{\lambda}{\gamma} x^{1/\beta}+1\right)^{\alpha}=0.\end{aligned}$$ From the discussion in Section \[sect:solmethod\], we have to identify the smallest concave majorant of the function $$\begin{aligned} g_2(\theta):=\max\{V_1(s^{-1}(\theta); \lambda s^{-1}(\theta)+\Psi+R),U(-R)\}=\max\{V_1(\theta^{\frac{1}{\beta}}; \lambda \theta^{\frac{1}{\beta}}+\Psi+R),U(-R)\} \end{aligned}$$ where $V_1$ is the value function of the exit problem given in Lemma \[lem:henderson\].
Since we assume $R> 0$, $c>1$, $\Psi\geq 0$ and $\gamma\leq 1\leq \lambda$, we have $c\left(\frac{\lambda \theta^{\frac{1}{\beta}}+\Psi+R}{\gamma}\right)\geq \theta^{\frac{1}{\beta}}$ and hence the first case in the definition of will always apply when evaluating $V_1(\theta^{\frac{1}{\beta}}; \lambda \theta^{\frac{1}{\beta}}+\Psi+R)$, i.e. $$\begin{aligned} v_1(\theta)&:=V_1(\theta^{\frac{1}{\beta}}; \lambda \theta^{\frac{1}{\beta}}+\Psi+R) \nonumber\\ &=-k(\lambda \theta^{\frac{1}{\beta}}+\Psi+R)^{\alpha}+\frac{\alpha}{\beta}(\lambda \theta^{\frac{1}{\beta}}+\Psi+R)^{\alpha-\beta}c^{1-\beta}(c-1)^{\alpha-1}\gamma^\beta\theta \nonumber\\ &=R^\alpha\left(1+\frac{\Psi}{R}\right)^\alpha\left[\frac{\alpha}{\beta}c^{1-\beta}(c-1)^{\alpha-1}\left(\frac{\lambda}{\gamma}\frac{\gamma \theta^{\frac{1}{\beta}}}{R+\Psi}+1\right)^{\alpha-\beta}\left(\frac{\gamma}{R+\Psi}\right)^\beta\theta-k\left(\frac{\lambda}{\gamma}\frac{\gamma \theta^{\frac{1}{\beta}}}{R+\Psi}+1\right)^\alpha\right] \nonumber \\ &=R^\alpha\left(1+\frac{\Psi}{R}\right)^{\alpha}f\left(\left(\frac{\gamma}{R+\Psi}\right)^{\beta}\theta\right) \label{eq:v1rep}\end{aligned}$$ where $f$ is defined in . The shape of $f$ under different parameter combinations is given by Lemma \[lem:shapef\] and thus we have the following cases. When $\xi:=\frac{\lambda}{\gamma}\leq\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$, $f$ is increasing concave with $\displaystyle \lim_{x\to \infty}f(x)= +\infty$. These properties are inherited by $v_1$. Furthermore, $v_1(0)=R^{\alpha}(1+\Psi/R)^{\alpha}f(0)=-kR^{\alpha}(1+\Psi/R)^{\alpha}\leq -kR^{\alpha}$ and $\displaystyle \lim_{\theta\to \infty}v_1(\theta)> 0>-kR^{\alpha}$. Thus $g_2$ is constructed by truncating an increasing concave function from below at $-kR^{\alpha}$. The smallest concave majorant of $g_2$ is formed by drawing a tangent line passing through $(0,-kR^{\alpha})$ which touches $v_1$ at some $\theta^*_1$. See Figure \[fig:case1\].
The optimal strategy is to purchase the asset when its transformed price $\Theta_t$ first reaches $\theta_1^*$ or above. The corresponding threshold in the original price scale is given by $p_1^*:=s^{-1}(\theta_1^*)=(\theta_1^*)^{1/\beta}$. When $\xi=\frac{\lambda}{\gamma}>\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$, Lemma \[lem:shapef\] implies that $v_1$ is first concave increasing, reaching a global maximum at some $\theta_2^*$, then concave decreasing and finally convex decreasing with $\displaystyle \lim_{\theta\to \infty}v_1(\theta)= -\infty$. There are two further possibilities. If $v_1(\theta_2^*)> -kR^{\alpha}$, then there must exist $0\leq \hat{\theta}_1<\hat{\theta}_2$ such that $g_2(\theta)=-kR^{\alpha}$ on $[0,\hat{\theta}_1]\cup [\hat{\theta}_2,\infty)$ and $g_2(\theta)=v_1(\theta)$ on $[\hat{\theta}_1,\hat{\theta}_2]$. The smallest concave majorant of $g_2(\theta)$ is formed by a chord passing through $(0,-kR^\alpha)$ which touches $v_1$ at some $\theta_1^*<\theta_2^*$, and a horizontal line at level $g_2(\theta_2^*)$ on $\theta>\theta_2^*$. See Figure \[fig:case2\]. The optimal strategy is to purchase the asset when its transformed price $\Theta_t$ first enters the interval $[\theta_1^*,\theta_2^*]$. The boundary of the purchase regions in the original price scale can be recovered via $p_i^*=(\theta_i^*)^{1/\beta}$ for $i=1,2$. If $v_1(\theta_2^*)\leq -kR^{\alpha}$ instead, then $v_1(\theta)\leq -kR^{\alpha}$ for all $\theta$. Thus $g_2(\theta)=-kR^{\alpha}$, which is a flat horizontal line, and it is also the smallest concave majorant of itself. The optimal strategy is not to trade at all at any price level, so that the utility received is always $U(-R)=-kR^{\alpha}$. See Figure \[fig:case3\].
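The tangent construction in the first case can be illustrated numerically: the contact point $\theta_1^*$ is the maximizer of the chord slope $(v_1(\theta)+kR^{\alpha})/\theta$, and at that point the chord slope agrees with $v_1'$. The Python sketch below uses illustrative parameters of our choice ($\alpha=0.5$, $\beta=0.8$, $c=2$, $\lambda=\gamma=1$, $R=1$, $\Psi=0.5$, $k=\frac{\alpha}{\beta}c(c-1)^{\alpha-1}-(c-1)^{\alpha}$), which put us in the small-cost regime where $f$ is increasing concave.

```python
# Illustrative parameters (our choice, not from the text).
alpha, beta, c = 0.5, 0.8, 2.0
lam, gam, R, Psi = 1.0, 1.0, 1.0, 0.5
k = (alpha / beta) * c * (c - 1) ** (alpha - 1) - (c - 1) ** alpha
B0 = c ** (1 - beta) * (c - 1) ** (alpha - 1)

def f(x):
    u = (lam / gam) * x ** (1.0 / beta) + 1.0
    return ((alpha / beta) * B0 * x * u ** (-beta) - k) * u ** alpha

def v1(theta):
    # v1(theta) = R^a (1 + Psi/R)^a f((gamma/(R+Psi))^b theta)
    return R ** alpha * (1 + Psi / R) ** alpha * f((gam / (R + Psi)) ** beta * theta)

thetas = [0.01 * i for i in range(1, 20001)]             # grid on (0, 200]
slope = [(v1(t) + k * R ** alpha) / t for t in thetas]   # chord slope from (0, -kR^a)
i_star = max(range(len(slope)), key=slope.__getitem__)
theta1 = thetas[i_star]

h = 1e-6
v1p = (v1(theta1 + h) - v1(theta1 - h)) / (2 * h)        # v1' at the contact point
print(theta1, slope[i_star], v1p)
```

The maximizer lies strictly inside the grid and the chord slope there coincides with the derivative of $v_1$, which is exactly the tangency condition defining $\theta_1^*$.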
The “never purchase” case arises if and only if $v_1(\theta_2^*)\leq -kR^{\alpha}$ or equivalently $$\begin{aligned} R^\alpha\left(1+\frac{\Psi}{R}\right)^{\alpha}f\left(\left(\frac{\gamma}{R+\Psi}\right)^{\beta}\theta_2^*\right) \leq -kR^{\alpha} \iff \left(1+\frac{\Psi}{R}\right)^{\alpha}f(x_2^*) \leq -k\end{aligned}$$ where $x_2^*$ is the maximizer of $f$ introduced in Lemma \[lem:shapef\] and it is independent of $\frac{\Psi}{R}$. Using the fact that $f(0)=-k$ and Lemma \[lem:zerobound\], we have $-k<f(x_2^*)<0$ and hence there must exist $C:=\left[-\frac{k}{f(x_2^*)}\right]^{1/\alpha}-1>0$ such that $\left(1+\frac{\Psi}{R}\right)^{\alpha}f(x_2^*) \leq -k$ if and only if $\Psi/R\geq C$. Omitted since it is largely the same as the proof of Proposition \[prop:entry\]. The result will follow if we can show that $\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}>1$ such that case (1) of Propositions \[prop:entry\] and \[prop:entryspecial\] always applies when $\lambda=\gamma=1$. The required inequality is $$\begin{aligned} \left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}>1 &\iff \frac{\alpha}{\beta }c^{1-\beta}(c-1)^{\alpha-1}>k \\ &\iff \frac{\alpha}{\beta }c^{1-\beta}(c-1)^{\alpha-1}>\frac{\alpha}{\beta} c(c-1)^{\alpha-1}-(c-1)^{\alpha} \\ &\iff F(c):= (c-1)^{\alpha}- \frac{\alpha}{\beta }c(c-1)^{\alpha-1}(1-c^{-\beta})>0 \end{aligned}$$ where we have used . Using simple calculus we can show that $F(x)>0$ for all $x>1$. This concludes the proof. This will follow immediately from Proposition \[prop:tradingregion\] by observing that $x=0$ is the solution to when $\Psi=0$. Recall from the proof of Proposition \[prop:entry\] that $\theta_1^*$ is the point of contact of the tangent line to $v_1$ which passes through $(0,-kR^{\alpha})$. Hence $\theta_1^*$ should solve $$\begin{aligned} v_1'(\theta)-\frac{v_1(\theta)+kR^{\alpha}}{\theta}=0.
\label{eq:theta1eq}\end{aligned}$$ Furthermore, we can deduce from a graphical inspection that the solution to is a down-crossing.[^8] Using the representation of $v_1(\theta)$ in , can be rewritten as $$\begin{aligned} R^\alpha\left(1+\frac{\Psi}{R}\right)^{\alpha}\left(\frac{\gamma}{R+\Psi}\right)^{\beta}f'\left(\left(\frac{\gamma}{R+\Psi}\right)^{\beta}\theta\right)-\frac{R^\alpha\left(1+\frac{\Psi}{R}\right)^{\alpha}f\left(\left(\frac{\gamma}{R+\Psi}\right)^{\beta}\theta\right)+kR^{\alpha}}{\theta}=0.\end{aligned}$$ A further substitution of $x=\left(\frac{\gamma}{R+\Psi}\right)^\beta \theta$ leads to $$\begin{aligned} \left(1+\frac{\Psi}{R}\right)^{\alpha}[xf'(x)-f(x)]=k. \label{eq:x1}\end{aligned}$$ Then $p_1^*=(\theta_1^*)^{1/\beta}=\frac{R+\Psi}{\gamma}(x_1^*)^{1/\beta}$ where $x_1^*$ is defined as the solution to which is equivalent to . Recall from the proof of Proposition \[prop:entry\] as well that $\theta_2^*$ is the maximizer of $v_1(\theta)$. Using the representation of , $\theta_2^*$ should then solve $f'\left(\left(\frac{\gamma}{R+\Psi}\right)^{\beta}\theta\right)=0$. Using , $x_2^*:=\left(\frac{\gamma}{R+\Psi}\right)^\beta \theta_2^*$ is a solution to $$\begin{aligned} h_1(x^{-\frac{1}{\beta}})=c^{1-\beta}(c-1)^{\alpha-1}\left(x^{-\frac{1}{\beta}}+\frac{\xi \alpha}{\beta}\right)-k\xi\left(x^{-\frac{1}{\beta}}+\xi\right)^\beta=0.\end{aligned}$$ Then the result follows since $p_2^*=(\theta_2^*)^{1/\beta}=\frac{R+\Psi}{\gamma}(x_2^*)^{1/\beta}$. From the proof of Proposition \[prop:tradingregion\], the required solution to equation is a down-crossing. Then given that the left hand side of is increasing in $\Psi$ (when evaluated at $x=x_1^*$) we can deduce that $x_1^*$ and in turn $p_1^*$ are both increasing in $\Psi$. To show that $p_1^*$ is decreasing in $\gamma$, consider a substitution of $q=\frac{x^{1/\beta}}{\gamma}$.
Then $p_1^*=(R+\Psi)q_1^*$ where $q_1^*$ is the solution to $$\begin{aligned} \left(1+\frac{\Psi}{R}\right)^{\alpha}\frac{k\left(\lambda q+1\right)^\beta\left[\lambda\left(1-\frac{\alpha}{\beta}\right)q+1\right]-\frac{\alpha}{\beta}c^{1-\beta}(c-1)^{\alpha-1}\lambda \gamma^{\beta}\left(1-\frac{\alpha}{\beta}\right)q^{\beta+1}}{\left(\lambda q+1\right)^{\beta-\alpha+1}}=k \label{eq:qeq1}\end{aligned}$$ where the left hand side of is decreasing in $\gamma$. Hence $q_1^*$ and in turn $p_1^*$ are both decreasing in $\gamma$. The monotonicity of $p_2^*$ with respect to $\Psi$ is trivial because equation which defines $x_2^*$ does not depend on $\Psi$. To check the monotonicity with respect to $\gamma$, consider a substitution of $q=\frac{x^{1/\beta}}{\gamma}$ again so that $p_2^*=(R+\Psi)q_2^*$ where $q_2^*$ is defined as the solution to $$\begin{aligned} c^{1-\beta}(c-1)^{\alpha-1}\left(\frac{1}{q}+\frac{\lambda \alpha}{\beta}\right)-\frac{k\lambda}{\gamma^{\beta}}\left(\frac{1}{q}+\lambda\right)^\beta=0. \label{eq:qeq}\end{aligned}$$ From the proof of Lemma \[lem:shapef\], the solution to $h_1(x^{-\frac{1}{\beta}})=0$ is a down-crossing. This property is inherited by . Moreover, the left hand side of is increasing in $\gamma$. Hence $q_2^*$ and in turn $p_2^*$ are both increasing in $\gamma$. Similarly, consider a substitution of $y=\lambda^\beta x$. Then $p_2^*=\frac{R+\Psi}{\lambda \gamma }(y_2^*)^{1/\beta}$ where $y_2^*$ is defined as the solution to $$\begin{aligned} c^{1-\beta}(c-1)^{\alpha-1}\left(y^{-\frac{1}{\beta}}+\frac{ \alpha}{\beta\gamma}\right)-\frac{k\lambda^\beta}{\gamma}\left(y^{-\frac{1}{\beta}}+\frac{1}{\gamma}\right)^\beta=0. \label{eq:y}\end{aligned}$$ The left hand side of is decreasing in $\lambda$ and hence $y_2^*$ is decreasing in $\lambda$. Therefore $p_2^*$ is decreasing in $\lambda$ as well.
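The comparative statics of $p_1^*$ in $\Psi$ can be checked numerically. The Python sketch below (illustrative parameters of our choice, in the small-cost regime $\xi=\lambda/\gamma=1$, with $k=\frac{\alpha}{\beta}c(c-1)^{\alpha-1}-(c-1)^{\alpha}$) locates the first down-crossing of $(1+\Psi/R)^{\alpha}[xf'(x)-f(x)]$ through the level $k$ for two values of $\Psi/R$ and confirms that the root moves to the right as $\Psi/R$ increases.

```python
# Illustrative parameters (our choice, not from the text).
alpha, beta, c = 0.5, 0.8, 2.0
k = (alpha / beta) * c * (c - 1) ** (alpha - 1) - (c - 1) ** alpha
B0 = c ** (1 - beta) * (c - 1) ** (alpha - 1)
xi = 1.0                                   # lambda/gamma = 1: small-cost regime

def f(x):
    u = xi * x ** (1.0 / beta) + 1.0
    return ((alpha / beta) * B0 * x * u ** (-beta) - k) * u ** alpha

def fprime(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)  # central finite difference

def x1_star(psi_over_R):
    # first down-crossing of (1 + Psi/R)^a [x f'(x) - f(x)] through k
    pref = (1 + psi_over_R) ** alpha
    x = 1e-3
    while pref * (x * fprime(x) - f(x)) > k:
        x += 1e-3
    return x

x_lo, x_hi = x1_star(0.1), x1_star(0.5)
print(x_lo, x_hi)
```

At $x=0$ the bracket equals $k$ (since $f(0)=-k$), so the prefactor $(1+\Psi/R)^{\alpha}>1$ pushes the crossing to a strictly positive $x$, and a larger $\Psi/R$ pushes it further out, in line with the monotonicity of $p_1^*$ in $\Psi$.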
[^1]: The optimal investment rule in the classical Merton ([-@merton69], [-@merton71]) portfolio selection problem can also be viewed as a buy low sell high strategy: Since the agent keeps a constant fraction of wealth invested in the risky asset, extra units of the risky asset are sold (purchased) when the price increases (falls), ceteris paribus. In our paper, we focus on a single indivisible asset and do not consider portfolio effects. [^2]: We do not consider a fixed transaction cost on sale. Part of the reason is to simplify the mathematical analysis in the forthcoming sections (see Remark \[remark:modelassump\] in Section \[sect:main\]). For many practical applications, it is also reasonable to assume that the fixed cost on market entry is much more significant than the fixed exit cost. [^3]: In the exit problem, conditional on the purchase price $p$ of the asset, the reference point $H=\lambda p + \Psi+R$ can be viewed as an exogenously given constant. But in the second stage of the optimization, we are going to determine the optimal purchase price. Hence the reference point in our Prospect Theory trading model is indeed an endogenous one. [^4]: It is possible to have $\mathbb{P}(\tau^*(p)<\infty)<1$. In other words, there is a possibility that the entry strategy is not executed in finite time, and hence there is no decision to sell. The economic payoff in this scenario is zero. See the discussion in Section \[sect:main\]. [^5]: Similar to [@henderson12], [@xu-zhou13] and [@henderson-hobson-tse18], we do not explicitly consider subjective discounting. Under discounting the agent is much more inclined to delay losses and to realize profits earlier; this will lead to an extreme *disposition effect* which is not consistent with the empirical trading pattern of retail investors. See the discussion in [@henderson12]. [^6]: Similar to Remark \[remark:bm\], the price process $P$ may not reach a fixed level $p_1^*>P_0$ in finite time.
In this case the entry strategy will not be executed and the payoff to the agent is zero. [^7]: Note that the constant $C$ in case (2) of Propositions \[prop:entry\] and \[prop:entryspecial\] depends on $\lambda$ and $\gamma$. Increasing $\frac{\lambda}{\gamma}$ will result in a switch from case (2)(a) to case (2)(b). [^8]: In the case of large proportional transaction costs, $\xi=\frac{\lambda}{\gamma}>\left[\frac{\alpha}{\beta k}c^{1-\beta}(c-1)^{\alpha-1}\right]^{\frac{1}{\beta}}$, the straight line passing through $(0,-kR^{\alpha})$ can touch $v_1$ at two distinct locations. A simple geometric inspection will tell us that the required root is the smaller one.
--- abstract: 'Momentum decoupling develops when forward scattering dominates the pairing interaction and implies a tendency for decorrelation between the physical behavior in the various regions of the Fermi surface. In this regime it is possible to obtain anisotropic s- or d-wave superconductivity even with isotropic pairing scattering. We show that in the momentum decoupling regime the distortion of the $CuO_2$ planes is enough to explain the experimental reports of s-wave mixing in the dominantly d-wave gap of $YBa_2Cu_3O_7$. In the case of spin-fluctuation-mediated pairing, instead, a large part of the condensate must be located in the chains in order to understand the experiments.' --- [ **Orthorhombicity mixing of s- and d- gap\ components in $YBa_2Cu_3O_7$\ without involving the chains**]{} [**G. Varelogiannis**]{} [*Institute of Electronic Structure and Laser\ Foundation for Research and Technology - Hellas\ P.O. Box 1527, Heraklion Crete 71110, Greece*]{} PACS numbers: 74.25.-q The issue of the symmetry of the order parameter in the oxides has motivated intense investigations [@review]. Advanced phase-sensitive experiments have been developed recently that made it possible to establish that the order parameter in $YBa_2Cu_3O_7$ reverses its sign on the Fermi surface, indicating d-wave symmetry [@phaseD]. This symmetry is generally believed to indicate spin-fluctuation-mediated superconductivity. The presence of nodes in the gap of $YBa_2Cu_3O_7$ is confirmed by the linear temperature dependence of the penetration depth in the low-temperature regime [@nodesD]. However, there are also results that are in clear conflict with a simple d-wave picture [@Chaudhari]. In particular, c-axis Josephson tunneling experiments on $YBa_2Cu_3O_7$ indicated the existence of a significant s-component [@DynesPb].
This latter conclusion is reinforced by the relative insensitivity of the superconducting critical temperature to the presence of non-magnetic impurities or defects [@DynesImp]. It appears experimentally that the gap has a dominant d-wave component but also a significant s-wave component. It has been argued that this behavior may indicate the existence of two different condensates [@Muller]. The mixing of s and d components arises naturally when the lattice is orthorhombically distorted [@James]. Large orthorhombic distortions have therefore been invoked in order to understand the experimental conflicts in $YBa_2Cu_3O_7$ [@Maki; @Pokrovsky; @Jules]. However, the orthorhombic distortion of the $CuO_2$ planes in the case of $YBa_2Cu_3O_7$ is only a few percent ($\approx 3\%$), and such a small distortion cannot induce significant mixing of s-components in a d-wave spin-fluctuation-mediated pairing state. To reconcile the large orthorhombicity effects required by the phenomenology with spin-fluctuation pairing, it has been argued that the $Cu-O$ chains are involved in superconductivity and at least $25\%$ of the condensate is located there [@Jules]. Since the chain band concerns only one direction in the $ab$ plane, if chains are involved, large in-plane anisotropies are reasonable. Large anisotropies between the $a$ and $b$ directions are also reported in microwave penetration depth measurements [@penab] and in dc resistivity measurements [@Friedmann]. On the other hand, supposing that the chains contain a large part of the condensate and are therefore crucially involved in the pairing mechanism is difficult to reconcile with the fundamental similarities of superconductivity in $YBCO$ with that of the other high-$T_c$ cuprates where the chains are absent. Whether the chains are involved in the pairing or not is not yet a definitely answered question; there are nevertheless strong arguments supporting the view that only the $CuO_2$ planes are involved in the interesting physics [@PWA].
An alternative to the spin-fluctuation mechanism for anisotropies and gap symmetry transitions, involving isotropic scattering, has recently been proposed and named Momentum Decoupling (MD) [@meMD1; @meMD2; @meMD3]. When the characteristic momenta exchanged in the pairing interaction are small compared to the characteristic momenta of the variations of the electronic density of states, there is a tendency for decorrelation between the physical behavior in the different regions of the Fermi surface. In particular, couplings become proportional to the angular resolved electronic density of states (ARDOS) $N(E_F,\vec{k})= |\upsilon_F(\vec{k})|^{-1}$ at each region of the Fermi surface, and therefore anisotropies are driven by the electronic density of states and not by the scattering [@meMD1]. Taking into account the conventional Coulomb pseudopotential $\mu^*$, the d-wave and s-wave (both ARDOS driven anisotropic) states become energetically degenerate [@meMD2]. The presence of different gap symmetries in different oxides as well as the d-s gap symmetry transition upon overdoping $Bi_2Sr_2CaCu_2O_8$ [@Kelley] are natural consequences of MD [@meMD2; @meMD3]. The temperature enhancement of the anisotropy [@Ma] and the behavior of the anomalous dip above the gap in the electronic density of states [@meDIP] are qualitatively puzzling aspects of the phenomenology of $Bi_2Sr_2CaCu_2O_8$ that also indicate MD [@meMD1]. Dominance of forward scattering in the pairing could result from the vicinity of the strongly correlated electronic system to a phase separation instability that could be driven by magnetic fluctuations [@Marder] or even by phonons [@Hubb]. The interlayer tunneling mechanism proposed by Anderson is effectively $q\approx 0$ pairing and could be at the origin of MD [@PWAtun]. The same holds for the charge transfer resonance pairing mechanism [@CTR], which also involves small-momentum-transfer processes [@Littlewood].
Notice that dominantly forward scattering has unexpected implications even for the normal-state properties, implications that have not yet been fully explored, such as the possibility of a linear $T$-dependent dc resistivity despite electron scattering with high-energy phonons [@dc]. We report here that the orthorhombic distortion of the $CuO_2$ planes in $YBa_2Cu_3O_7$ produces an effect an order of magnitude larger in the case of MD than in the case of spin-fluctuation pairing and could, therefore, explain the experimental reports of significant mixing of s-components in the dominantly d-wave gap without the need to involve the chains. We solve the BCS equations on a two-dimensional lattice that might simulate the $CuO_2$ planes of $YBCO$. The gap is obtained from $$\Delta(\vec{k})=-\sum_{\vec{p},|\xi_{\vec{p}}|<\Omega_D} {\Lambda(\vec{k}-\vec{p})\Delta(\vec{p})\over 2\sqrt{\xi_{\vec{p}}^2+\Delta(\vec{p})^2}} \tanh\biggl( {\sqrt{\xi_{\vec{p}}^2+\Delta(\vec{p})^2}\over 2T}\biggr) \eqno(1)$$ The materials characteristics enter through the dispersion $\xi_{\vec{k}}$. The effect of orthorhombicity on the $CuO_2$ plane is to make the $a$ and $b$ axes inequivalent, and in $YBa_2Cu_3O_7$ the difference in these lattice constants is less than $\approx 3.5\%$. For such small variations we can consider that in a tight-binding dispersion the hopping along the two different axes will be inequivalent, with differences of the same order. We consider in fact a simple next-nearest-neighbor tight-binding fit to LDA calculations of the $CuO_2$ band in $YBCO$ [@OKAnd] $$\xi_{\vec{k}}=-2t [\cos(k_x) + (1+\beta)\cos(k_y)]-4t' \cos(k_x)\cos(k_y) - \mu \eqno(2)$$ where $t=0.25eV$, $t'/t=-0.45$ and $\mu=-0.44 eV$. This type of dispersion produces a van Hove peak in the density of states about $10 meV$ below the Fermi level. The relevant parameter for our discussion is $\beta$, which characterizes the orthorhombic distortion.
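A quick numerical illustration of Eq. (2) (Python sketch; the sampling points are our arbitrary choice, while $t$, $t'/t$ and $\mu$ are the values quoted above): the distortion parameter $\beta$ makes the band, and hence the Fermi velocities along the $a$ and $b$ directions, inequivalent.

```python
import math

# Band parameters of Eq. (2): t = 0.25 eV, t'/t = -0.45, mu = -0.44 eV.
t, tp, mu = 0.25, -0.45 * 0.25, -0.44

def xi(kx, ky, beta=0.0):
    return -2 * t * (math.cos(kx) + (1 + beta) * math.cos(ky)) \
        - 4 * tp * math.cos(kx) * math.cos(ky) - mu

# Tetragonal case: the band is symmetric under kx <-> ky ...
d_tet = abs(xi(0.3, 1.1, beta=0.0) - xi(1.1, 0.3, beta=0.0))
# ... while a few-percent distortion makes the a and b axes inequivalent.
d_orth = abs(xi(0.3, 1.1, beta=0.04) - xi(1.1, 0.3, beta=0.04))

# The band velocities along the two axes differ accordingly (finite differences).
h = 1e-6
def vx(kx, ky, beta): return (xi(kx + h, ky, beta) - xi(kx - h, ky, beta)) / (2 * h)
def vy(kx, ky, beta): return (xi(kx, ky + h, beta) - xi(kx, ky - h, beta)) / (2 * h)
ratio = vy(1.2, 1.2, 0.04) / vx(1.2, 1.2, 0.04)
print(d_tet, d_orth, ratio)
```

Since the ARDOS $\propto |\upsilon_F(\vec{k})|^{-1}$ drives the anisotropy in the MD picture, this velocity inequivalence is the seed of the s-component discussed below.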
The scattering amplitude $\Lambda(\vec{k}-\vec{p})$ in equation (1) contains the physics of the pairing mechanism. The two different situations of Momentum Decoupling and spin fluctuation pairing that we consider here correspond to two different characteristic structures of $\Lambda(\vec{k}-\vec{p})$. In the momentum decoupling regime the pairing scattering is isotropic, taking at small momenta a Lorentzian form $$\Lambda(\vec{k}-\vec{p})=-\Lambda^o\biggl( 1+{|\vec{k}-\vec{p}|^2\over q_c^2}\biggr)^{-1}+\mu^* \eqno(3)$$ where the first term concerns the pairing and $q_c$ plays the role of a momentum cutoff. This type of Lorentzian form is found to occur in the scattering of the electronic system with any bosonic system, including phonons, provided the electronic system is close to the phase separation instability [@Hubb]. The Coulomb pseudopotential $\mu^*$ is the effective repulsion of the paired electrons and is not necessarily momentum independent. We are in the MD regime provided the characteristic momenta of the variations of $\mu^*$ are large compared to $q_c$. The interaction of equation (3) leads to either s- or d-wave superconductivity, depending on parameters that are marginal for the pairing, like the magnitude of $\mu^*$ and its characteristic momentum range. Considering for $\mu^*$ a Lorentzian structure like that of the pairing amplitude, we were able to plot a phase diagram of the energetically favorable (having the lowest free energy) gap symmetry (s-wave or d-wave) on a plane defined by the ratio of the characteristic cut-off of $\mu^*$ over that of the pairing amplitude and the magnitude of $\mu^*$, for an electronic structure similar to that of the oxides [@meMD2; @meMD3]. What is relevant for our discussion here is that a dominantly d-wave gap, as reported by phase-sensitive and node-sensitive experiments on $YBCO$, arises naturally for conventional values of $\mu^*$ with a pairing amplitude as in equation (3) [@meMD2; @meMD3].
The alternative “conventional” mechanism for d-waves is the scattering with spin fluctuations that has been extensively discussed in the literature. As an example of this second approach we consider the phenomenological Millis, Monien and Pines (MMP) scattering with spin fluctuations [@MMP] in the static limit $$\Lambda(\vec{k}-\vec{p})\approx {-\Lambda_o\over 1 + \xi^2_M(\vec{k}-\vec{p}-\vec{Q})^2} \eqno(4)$$ where $\vec{Q}=(\pi,\pi)$, the coherence range of the antiferromagnetic spin fluctuations $\xi_M$ is taken to be on the order of three lattice spacings, as in the experiment [@MMP], and the Coulomb pseudopotential is neglected. In the orthorhombically distorted case the $a$ and $b$ directions are not equivalent and, since the Fermi velocities are different in these two directions, one would expect different magnitudes of the gap. The difference between the absolute values of the gap along $a$ and along $b$ is therefore a measure of the orthorhombicity effect. We plot in figure (1a) the evolution of the ratio $\Delta_a^2/\Delta_b^2$ with $\beta$. In the tetragonal case $\beta=0$ this ratio is of course equal to unity. However, as we switch on the distortion $\beta$, the maximum absolute values of the gap we obtain near the $(0,\pi)$ and $(\pi,0)$ points become appreciably different. The full line in figure 1a corresponds to the MD regime with a scattering amplitude as in equation (3) and the dashed line to the MMP scattering amplitude given in Eq. (4). In both cases the energetically favorable d-wave channel is considered and therefore the gap changes sign between $(0,\pi)$ and $(\pi,0)$. We can already conclude from figure 1a that in the case of MD the effect of orthorhombicity is an order of magnitude larger than in the case of spin fluctuations. Let us illustrate now that, in the MD case, the distortion of the $CuO_2$ planes may be sufficient to understand the experiments.
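The structural difference between the two scattering amplitudes, Eqs. (3) and (4), is simply where the attraction peaks in momentum space. A minimal check (Python; the values of $q_c$, $\xi_M$ and the unit coupling constants are illustrative choices, not fits):

```python
import math

qc, xiM = 0.3, 3.0          # momentum cutoff and AF coherence length (lattice units)
Q = (math.pi, math.pi)

def lam_md(qx, qy):
    # Eq. (3) with Lambda^o = 1 and mu* dropped: forward-scattering Lorentzian
    return -1.0 / (1.0 + (qx ** 2 + qy ** 2) / qc ** 2)

def lam_mmp(qx, qy):
    # Eq. (4) with Lambda_o = 1: Lorentzian centered at Q = (pi, pi)
    return -1.0 / (1.0 + xiM ** 2 * ((qx - Q[0]) ** 2 + (qy - Q[1]) ** 2))

qs = [(0.02 * math.pi * i, 0.02 * math.pi * j) for i in range(51) for j in range(51)]
q_md = min(qs, key=lambda q: lam_md(*q))    # most attractive momentum transfer
q_mmp = min(qs, key=lambda q: lam_mmp(*q))
print(q_md, q_mmp)
```

The MD kernel is most attractive at $q=0$ (forward scattering), the MMP kernel at $q=(\pi,\pi)$; it is this difference that decides whether the gap anisotropy is driven by the ARDOS or by the scattering itself.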
We first consider the London penetration depth along the two different directions at zero temperature $$\lambda_{k_x(k_y)}^{-2}\propto \sum_{\vec{k}}\upsilon_{k_x(k_y)}^2 \biggl( \partial f (E_{\vec{k}}) / \partial E_{\vec{k}}\biggr) \eqno(5)$$ where $E_{\vec{k}}=\sqrt{\xi_{\vec{k}}^2+\Delta_{\vec{k}}^2}$. The experimental results of Ref. [@penab] indicate a large in-plane anisotropy of the penetration depth $\lambda_a/\lambda_b\approx 1.6$. We show in figure (1b) the dependence of the penetration depth in-plane anisotropy $\lambda^{-2}_a/\lambda^{-2}_b$ on the distortion parameter $\beta$. The full line corresponds to the MD regime while the dashed line to the MMP spin-fluctuation scattering. We see that in the MD regime an in-plane distortion of the expected order $\beta\approx 0.03-0.04$ could be sufficient to produce the experimental in-plane anisotropy of the penetration depth, while for the MMP interaction the resulting in-plane anisotropy of $\lambda$ is an order of magnitude smaller than in the experiment. The same can be said for the c-axis Josephson tunneling results of Dynes and collaborators [@DynesPb]. In fact they observed Josephson tunneling currents in c-axis $Pb$/insulator/$YBa_2Cu_3O_7$ tunnel junctions.
According to Ambegaokar and Baratoff [@Ambegaokar] the Josephson current is given by $$JR={2\pi T \over N_1N_2}{1\over \pi} \sum_{n=0}^{\infty} \sum_{\vec{k}}{\Delta_1(\vec{k})\over \xi_1(\vec{k})^2+\Delta_1(\vec{k})^2+ \omega_n^2} \sum_{\vec{k'}} {\Delta_2(\vec{k'})\over \xi_2(\vec{k'})^2+\Delta_2(\vec{k'})^2+ \omega_n^2} \eqno(6)$$ At zero temperature the sum over the fermion Matsubara frequencies becomes an integral that can be performed straightforwardly, leading to the following expression for the Josephson current at $T=0$: $$J(T=0)R={1\over 2\pi}{1\over N_1N_2} \sum_{\vec{k}\vec{k'}}\Delta_1(\vec{k})\Delta_2(\vec{k'}) {1\over \sqrt{\xi_1(\vec{k})^2+\Delta_1(\vec{k})^2} \sqrt{\xi_2(\vec{k'})^2+\Delta_2(\vec{k'})^2}} \times$$ $$\times {1\over \sqrt{\xi_1(\vec{k})^2+\Delta_1(\vec{k})^2}+ \sqrt{\xi_2(\vec{k'})^2+\Delta_2(\vec{k'})^2}} \eqno(7)$$ where $R$ is the junction resistance and $N_i(0)$ are the densities of states at the Fermi level. It is clear that if $\Delta_1$ and $\Delta_2$ are orthogonal (they belong to different irreducible representations of the point group), there should not be any Josephson current in the junction. Therefore, since the gap of $Pb$ is known to be s-wave, the observation of the Josephson current seems to exclude a purely d-wave gap in $YBCO$, and a significant s-component is necessary in order to have Josephson coupling between the two condensates. For the $Pb$/insulator/$YBCO$ junction, if we suppose that the $Pb$ gap is isotropic, then in equation (6) the sum over $\vec{k}$ becomes trivial, leading to a term proportional to the density of states of lead. At zero temperature the Matsubara frequency sum becomes a frequency integral of the form $\int_0^\infty d\omega F(\omega)G(\omega)$ where $F(\omega)=(\Delta_{Pb}^2+\omega^2)^{-1/2}$ and $G(\omega)= (\xi_Y(\vec{k})^2+\Delta_Y(\vec{k})^2+\omega^2)^{-1}$. This integral is calculated numerically. In Ref.
[@DynesPb] a Josephson current along the c axis is reported that was about $10\%$ of what would be expected from the isotropic Ambegaokar-Baratoff formula [@Ambegaokar] if the gap of $YBCO$ were taken equal to $1.76 T_c$ as expected in weak-coupling BCS theory. The weakness of the supercurrent could show that the d-components are dominant in $YBCO$ [@Clemm]. In our approach the gap in $YBCO$ is indeed dominantly d-wave, yet because of the orthorhombic distortion there is also an s-component that is responsible for the Josephson coupling with the condensate of lead. To show that this approach could reasonably account for the results of [@DynesPb] we take two different cases. In the first case we consider that the gap of $YBCO$ is isotropic, and in the second case we obtain the gap from the solution of the BCS equations as previously. In both cases we adjust the $YBCO$ gap to a value about $15$ times larger than the gap of $Pb$. We also adjust the isotropic gap we take for $YBCO$ in the first case to be equal to $(1/2)(|\Delta_a|+|\Delta_b|)$. What is comparable to the findings of Ref. [@DynesPb] is the ratio of the Josephson current that results using the anisotropic gap obtained in the MD regime by solving the BCS equations as previously, over the supercurrent obtained in the isotropic case, which corresponds to the Ambegaokar-Baratoff expectations. We plot in figure (1c) the evolution of this ratio with the distortion parameter $\beta$. When $\beta = 0$ we have no Josephson supercurrent, and as the distortion parameter reaches values as high as $\beta=0.04$ in the case of MD (full line) we can have appreciable supercurrents, of the order of $15\%$ of what would be expected in a junction between isotropic superconductors, in agreement with the results of [@DynesPb]. With the MMP interaction instead, the supercurrent is here also about an order of magnitude smaller than the experimental report.
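The orthogonality argument behind Eq. (7) can be made concrete with a small numerical sketch (Python; the gap magnitudes, the model Pb band and the grid size are our illustrative assumptions, not values from the text): with a tetragonal ($\beta=0$) $CuO_2$ band, a pure d-wave $\Delta_2$ makes the double sum vanish by symmetry, while adding an s-component restores a finite current.

```python
import math

N = 16                                      # coarse Brillouin-zone grid (illustrative)
ks = [(-math.pi + 2 * math.pi * i / N, -math.pi + 2 * math.pi * j / N)
      for i in range(N) for j in range(N)]
t, tp, mu = 0.25, -0.1125, -0.44            # band parameters of Eq. (2)

def xi2(kx, ky):                            # tetragonal CuO2 band (beta = 0)
    return -2 * t * (math.cos(kx) + math.cos(ky)) \
        - 4 * tp * math.cos(kx) * math.cos(ky) - mu

d_pb, d_d, d_s = 0.0014, 0.020, 0.004       # Pb gap and d-, s-components (eV, assumed)

def current(delta2):
    # zero-temperature double sum of Eq. (7), arbitrary units
    side2 = []
    for kx, ky in ks:
        d2 = delta2(kx, ky)
        side2.append((d2, math.hypot(xi2(kx, ky), d2)))
    J = 0.0
    for kx, ky in ks:                       # model free-electron-like Pb band (assumed)
        e1 = math.hypot(0.15 * (math.cos(kx) + math.cos(ky)), d_pb)
        for d2, e2 in side2:
            J += d_pb * d2 / (e1 * e2 * (e1 + e2))
    return J

J_d = current(lambda kx, ky: d_d * (math.cos(kx) - math.cos(ky)))
J_sd = current(lambda kx, ky: d_d * (math.cos(kx) - math.cos(ky)) + d_s)
print(J_d, J_sd)
```

For the pure d-wave gap the contributions of $(k_x,k_y)$ and $(k_y,k_x)$ cancel pairwise, which is the momentum-space statement that the two order parameters belong to different irreducible representations.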
A fundamental qualitative difference therefore emerges between MD and spin-fluctuation pairing. In the latter case, if the orthorhombic-distortion interpretation of the $s$ and $d$ mixing in $YBCO$ makes sense, the chains participate fundamentally in the pairing and at least about $25\%$ of the condensate should be located there. On the other hand, in the case of MD, the orthorhombic distortion of the $CuO_2$ planes is sufficient. We have therefore proposed a mechanism that explains the puzzle of significant s-wave components in $YBa_2Cu_3O_7$ without contradicting the strong arguments [@PWA] supporting that the relevant physics happens in the $CuO_2$ planes. Discussions with J.F. Annett and E.N. Economou are gratefully acknowledged. [999]{} The subject is reviewed by J.F. Annett, N. Goldenfeld, and A.J. Leggett, in Physical Properties of High-$T_c$ Superconductors, Vol. VI, Editor D. Ginzberg, World Scientific (1996) and J. of Low Temp. Phys. [**105**]{}, 473 (1996) D.A. Wollman et al., Phys. Rev. Lett. [**71**]{}, 2134 (1993); D.A. Brawner and H.R. Ott, Phys. Rev. B [**50**]{}, 6530 (1994); C.C. Tsuei et al., Phys. Rev. Lett. [**73**]{}, 593 (1994) W.N. Hardy et al., Phys. Rev. Lett. [**70**]{}, 3999 (1993) P. Chaudhari and S.-Y. Lin, Phys. Rev. Lett. [**72**]{}, 1084 (1994) A.G. Sun et al., Phys. Rev. Lett. [**72**]{}, 2667 (1994) A.G. Sun et al., Phys. Rev. B [**50**]{}, 3266 (1994) K.A. Müller, Nature [**377**]{}, 133 (1995) J.F. Annett, Adv. in Phys. [**39**]{}, 83 (1990) K. Maki and N.T. Beal-Monod, Phys. Lett. A [**208**]{}, 365 (1995) S.V. Pokrovsky and V.L. Pokrovsky, Phys. Rev. Lett. [**75**]{}, 1150 (1995); Phys. Rev. B [**54**]{}, 13275 (1996) C. O’Donovan et al., Phys. Rev. B [**51**]{}, 6588 (1995); D. Branch and J.P. Carbotte, Phys. Rev. B [**52**]{}, 603 (1995); C. O’Donovan and J.P. Carbotte, [*ibid*]{}, 4568 (1995) K. Zhang et al., Phys. Rev. Lett. [**73**]{}, 2484 (1994) T.A. Friedmann et al., Phys. Rev. B [**42**]{}, 6217 (1990); R.
Gagnon et al., Phys. Rev. B [**50**]{}, 3458 (1994) P.W. Anderson, Science [**256**]{}, 1526 (1992) G. Varelogiannis et al., Phys. Rev. B [**54**]{}, R6877 (1996) G. Varelogiannis, [*Marginality of the superconducting gap symmetry in the oxides*]{}, preprint cond-mat/9511139 G. Varelogiannis and M. Peter, Czech. J. of Phys. [**46**]{}, Suppl. [**S2**]{}, p. 1047 (1996) R.J. Kelley et al., Science [**271**]{}, 1255 (1996) J. Ma et al., Science [**267**]{}, 862 (1995) G. Varelogiannis, Phys. Rev. B [**51**]{}, R1381 (1995); Phys. Rev. Lett. [**76**]{}, 3236 (1996) M. Marder, N. Papanicolaou and G.C. Psaltakis, Phys. Rev. B [**41**]{}, 6920 (1990); V.J. Emery, S.A. Kivelson and Q. Lin, Phys. Rev. Lett. [**64**]{}, 475 (1990); A.N. Andriotis et al., Phys. Rev. B [**47**]{}, 9208 (1993) G. Varelogiannis, [*Superconductivity in Hubbard Fermi-Liquids coupled to phonons*]{}, preprint P.W. Anderson, Science [**268**]{}, 1154 (1995) P.B. Littlewood, C.M. Varma, and E. Abrahams, Phys. Rev. Lett. [**63**]{}, 2602 (1989) P.B. Littlewood, Phys. Rev. B [**42**]{}, 10075 (1990) G. Varelogiannis and E.N. Economou, [*Small-q electron-phonon scattering and linear dc resistivity in high-$T_c$ oxides*]{}, preprint O.K. Andersen et al., Phys. Rev. B [**49**]{}, 4145 (1994) A.J. Millis, H. Monien and D. Pines, Phys. Rev. B [**42**]{}, 167 (1990) V. Ambegaokar and A. Baratoff, Phys. Rev. Lett. [**10**]{}, 486 (1963); [**11**]{}, 104 (1963) An alternative interpretation is the possibility of an s-wave gap in $YBa_2Cu_3O_7$ that is significantly depleted in the external layer: M. Ledvij and R.A. Clemm, Phys. Rev. B [**52**]{}, 12552 (1995) [**Figure Captions:**]{} [**Figure 1:**]{} (a): The ratio of the gaps along the $a$ and $b$ directions $\Delta_a^2/\Delta_b^2$ as a function of the distortion parameter $\beta$. (b): The London penetration depth in-plane anisotropy $\lambda_b^2/\lambda_a^2$ as a function of the distortion parameter $\beta$.
(c): The ratio of the supercurrent obtained from a Josephson junction of $Pb$ with anisotropic $YBCO$ to that expected from a junction of lead with isotropic $YBCO$ with gap magnitude $(1/2)(|\Delta_a|+|\Delta_b|)$. In all cases the full lines correspond to the MD regime as described in the text and the dashed lines to the MMP spin-fluctuation scattering amplitude with the same dispersion conditions.
--- abstract: 'We explore the flux-jump regime in type-II Pb thin films with a periodic array of antidots by means of magneto-optical measurements. A direct visualization of the magnetic flux distribution allows us to identify a rich morphology of flux penetration patterns. We determine the phase boundary $H^*(T)$ between dendritic penetration at low temperatures and a smooth flux invasion at high temperatures and fields. For the whole range of fields and temperatures studied, guided vortex motion along the principal axes of the square pinning array is clearly observed. In particular, the branching process of the dendrite expansion is fully governed by the underlying pinning topology. A comparative study between macroscopic techniques and direct local visualization sheds light on the puzzling $T-$ and $H-$independent magnetic response observed at low temperatures and fields. Finally, we find that the distribution of avalanche sizes at low temperatures can be described by a power law with exponent $\tau \sim 0.9(1)$.' author: - 'M. Menghini, R. J. Wijngaarden' - 'A. V. Silhanek[^1]' - 'S. Raedts' - 'V. V. Moshchalkov' title: Dendritic flux penetration in Pb films with a periodic array of antidots --- Introduction ============ Flux penetration in a type-II superconductor in the mixed state is usually described by the Bean critical state model. In this approximation it is assumed that a balance between the pinning force and the external magnetic pressure leads to a constant flux gradient.[@bean] Like a sand pile, this vortex distribution is metastable and is therefore bound to decay to a lower energy configuration.
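The constant-gradient profile of the Bean critical state can be written down in a few lines. The following sketch (with purely illustrative parameter values, not those of the samples studied below) shows the clamped linear profile for a semi-infinite slab:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T m/A)

def bean_profile(x, H_applied, Jc):
    """Bean critical-state profile for a semi-infinite slab: B falls
    linearly from the edge with the constant gradient mu0*Jc set by the
    pinning, and is zero beyond the penetration depth x = H/Jc."""
    B = MU0 * H_applied - MU0 * Jc * np.asarray(x, float)
    return np.clip(B, 0.0, None)

x = np.linspace(0.0, 40e-6, 9)               # depth into the sample (m)
B = bean_profile(x, H_applied=2e3, Jc=1e8)   # H in A/m, Jc in A/m^2
# flux front sits at x = H/Jc = 20 um; B is linear before it, zero after
```

The gradient $|dB/dx| = \mu_0 J_c$ is the same everywhere flux has penetrated, which is the defining feature of the critical state.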
The dynamic evolution towards the equilibrium state is generally described as flux creep, where thermal or quantum fluctuations are needed to overcome a current-dependent pinning barrier $U(j)$.[@anderson] If this process takes place under isothermal conditions the creep is logarithmic in time and the field penetration is smooth, with a flat flux front.[@forkl; @koblishka] In contrast, if the process is perfectly adiabatic, the heat dissipation $\delta Q$ produced by the vortex motion will give rise to a local increase of the temperature $\delta T=\delta Q / C$, where $C$ is the specific heat of the superconducting material. Since typically $dJ_c/dT<0$, this local rise of temperature implies a reduction of the critical current, which in turn promotes further vortex motion, thus yielding a vortex avalanche. In this scenario, the field penetration is abrupt, giving rise to jumps in the magnetization, and develops much faster than the creep relaxation process. These avalanches (or flux-jumps) occur at low temperatures, where critical currents are high and the specific heat is small, thus severely undermining the potential technological applications of superconducting materials.[@mints-review] In most cases, flux penetration experiments are performed on thin superconducting samples of slab geometry exposed to a field perpendicular to the sample plane. It has been observed for many samples that in this configuration the field penetrates via highly branched expansions, giving rise to a dendritic pattern of flux channels.[@duran; @johansen] Theoretical studies as well as numerical simulations [@aranson; @johansen] have reproduced the observed flux penetration patterns in thin films, giving support to a thermo-magnetic origin for this type of instability. On the other hand, the influence of the pinning landscape on the morphology and local characteristics of dendritic flux penetration has not yet been fully addressed.
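The runaway loop just described (heat release $\rightarrow \delta T = \delta Q/C \rightarrow$ lower $J_c \rightarrow$ more flux motion) can be caricatured by a one-line recursion. All parameter values and the linear $J_c(T)$ and heat-release laws below are illustrative assumptions, not a model of the actual samples:

```python
def temperature_trace(C, kick=0.01, k=1.0, jc0=1.0, tc=7.2, t0=4.0, steps=200):
    """Toy thermomagnetic feedback loop.  A small temperature kick lowers
    Jc (dJc/dT < 0 via Jc = jc0*(1 - T/tc)), the flux released by that
    drop dissipates heat dQ = k*dJc, and dT = dQ/C closes the loop.
    The loop amplifies (avalanche: T runs away towards Tc) when
    k*jc0/(tc*C) > 1 and dies out (smooth creep) otherwise.
    Returns the final temperature."""
    T, dT = t0, kick
    for _ in range(steps):
        dJc = (jc0 / tc) * dT   # Jc reduction caused by the last temperature rise
        dT = k * dJc / C        # heat from the extra flux motion, over specific heat
        T += dT
        if T >= tc or dT < 1e-12:
            break
    return T

# small specific heat (low T): thermal runaway; large specific heat: stable
T_runaway = temperature_trace(C=0.05)
T_stable = temperature_trace(C=1.0)
```

The dimensionless gain $k\,j_{c0}/(T_c C)$ plays the role of the flux-jump stability criterion: avalanches appear exactly where $C$ is small, as stated above.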
A model system to investigate this effect can be realized by tailoring a periodic pinning array in superconducting samples. Vortex avalanches have been previously detected [@terentiev; @hebert; @silhanek; @vlasko-vlasov] in thin films with periodic pinning arrays. Global magnetization measurements[@terentiev; @hebert; @silhanek] in samples with arrays of antidots have shown that the region of the $H-T$ phase diagram dominated by flux-jumps is more extended than in plain films. In addition, commensurability of the flux-jumps with the matching field of the pinning lattice, as well as invariance of the magnetization and of the flux-jump size distribution at low temperatures and fields, were observed. Moreover, magneto-optical (MO) imaging [@vlasko-vlasov] of patterned Nb films has shown that vortex avalanches along the principal directions of the pinning lattice take place in zero-field-cooling (ZFC) experiments. From the theoretical point of view, Aranson [*et al.*]{}[@aranson] have shown that a periodic spatial modulation of the critical current gives rise to a branching pattern of the local temperature that follows the symmetry of the underlying pinning array. In this work we study the flux-jump regime in Pb thin films with periodic pinning by means of MO imaging. In order to separate and clearly identify the effects of the engineered pinning potential, similar experiments were performed on plain Pb films. The characteristics of the samples and the MO technique are described in the following section. Subsequently, the different types of flux penetration in ZFC experiments at different temperatures are described, and the phase boundary separating dendritic from smooth penetration is determined. Finally, we present an analysis of the evolution of dendrites with field and we study the avalanche size distribution as a function of temperature.
Samples and Experimental Procedure ================================== The experiments were conducted on Pb thin films with a square array of antidots. The dimensions and critical temperature of each sample are summarized in Table \[table\]. In all patterned samples the square antidot array consists of square holes with lateral dimension $b = 0.8~\mu$m and period $d=1.5~\mu$m, which corresponds to a first matching field $\mu_0 H_1=0.92$ mT. Simultaneously with each patterned film we also deposited an unpatterned reference film on a SiO$_2$ substrate, which allows us to perform a direct comparison in order to ascertain the effects of the pinning array (see Table I). Due to their geometrical characteristics these Pb thin films are type-II superconductors. [@dolan; @rodewald] From the temperature dependence of the upper critical field $H_{c2}(T)$ of the plain films we have estimated a superconducting coherence length $\xi(0) = 33 \pm 3$ nm. A more detailed description of the sample preparation can be found in Ref.\[\].

  Sample   $w_1$ (mm)   $w_2$ (mm)   $t$ (nm)   $T_c$ (K)
  -------- ------------ ------------ ---------- -----------
  AD75     1.6          2.9          75         7.21
  AD65     2.3          2.5          65         7.21
  AD15     1.9          2.0          13.5       7.10
  PF15     2.2          3.1          13.5       7.10

  \[table\]

  : Lateral dimensions ($w_1$ and $w_2$), thickness ($t$), and critical temperatures ($T_c$) for all the films studied. AD indicates a square array of antidots and PF a plain film.

The local magnetic induction, $B$, just above the surface of the sample was measured using a magneto-optical image lock-in amplifier technique as described in Ref.\[\]. The magnetic induction was sensed using an indicator with in-plane magnetization and a large Faraday effect mounted on top of the sample. The sample and the indicator were mounted together in a specially designed cryogenic polarization microscope. The experiments were performed in a commercial Oxford Instruments 7 T vector magnet system.
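As a quick consistency check, the first matching field quoted above corresponds to one flux quantum per unit cell of the square array:

```python
PHI0 = 2.067833848e-15  # magnetic flux quantum h/2e (Wb)

def matching_field(d):
    """First matching field of a square pinning array of period d:
    one flux quantum per unit cell, i.e. mu0*H1 = Phi0 / d**2 (in T)."""
    return PHI0 / d**2

mu0_H1 = matching_field(1.5e-6)  # period d = 1.5 um used for the antidot arrays
print(round(mu0_H1 * 1e3, 2))    # -> 0.92 (mT)
```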
Results and Discussion ====================== Dendrite morphology ------------------- Magneto-optical imaging of Pb films with a square array of antidots reveals a rich variety of magnetic flux penetration morphologies in ZFC experiments as a function of temperature. Fig.\[images\] summarizes the different morphologies of flux penetration observed in these samples. The brighter regions correspond to high magnetic fields while the dark ones indicate zero field. In the bottom part of Figs.\[images\] (a)-(d) magnetic domains from the magneto-optical garnet show up as a saw-tooth-like boundary between regions with different contrast. These domains do not seem to influence the flux pattern inside the sample and are irrelevant for the discussion below. At low $T$ and $H$, finger-like dendrites elongated in the direction perpendicular to the sample’s border are formed (Fig.\[images\](a)). As the temperature increases to 5.5 K the dendrites become considerably larger and more branched (see Fig.\[images\](b)). In the range $5.5 < T \leq 6$ K, the magnetic field first penetrates smoothly up to approximately $1/4$ of the sample width ($\mu_0H \sim 1.5$ mT) and then suddenly a highly branched dendrite is formed. An example of this behavior, also predicted theoretically [@aranson], is shown in Fig.\[images\](c). In the present sample we found highly branched (tree-like) dendrites for applied fields up to 3 mT; for higher fields the penetration becomes uniform. It is noteworthy that in the finger-like regime the maximum length of the dendrites is limited by the half width of the sample, whereas in the regime of highly branched or tree-like dendrites, vortices can extend much further into the sample. Finally, for $T>6\,$K a smooth flux penetration is observed (Fig.\[images\](d)) in the whole range of fields investigated and a Bean-like pattern develops.
Within the regime dominated by avalanches it is found that the main core of the dendrites and their ramifications are oriented along the principal directions of the square array of antidots. However, the influence of the underlying periodic pinning array is not confined to the flux-jump regime but can also be seen in the smooth Bean-like penetration pattern. Indeed, a closer look at the flux front for $T>6\,$K shows clear streaks aligned with the pinning array as a result of preferential or guided motion of vortices.[@pannetier] This result is consistent with previous reports on low-temperature as well as high-temperature superconductor thin films with periodic pinning.[@pannetier; @radu; @marco] On the other hand, it has been theoretically shown that, in samples with random disorder, hot magnetic filaments propagate from the border of the sample during the initial ramping of the field. This could also lead to streaks in the field penetration.[@aranson] However, since we have observed filamentary penetration only in the patterned sample and not in the plain film, we can rule out this possibility and attribute the observed effect to the periodic pinning potential. The influence of the square lattice of antidots on the flux penetration becomes more evident when comparing the previous results with those obtained in Pb plain films (PF) (Figs.\[images\] (e) and (f)). In this case, ZFC MO experiments show that the vortex avalanche regime is confined to a smaller region of the $H-T$ phase diagram (see Fig.\[hstar\](b)). In addition, as can be clearly seen in the flux patterns formed at low temperatures (Fig.\[images\](e)), the morphology of the dendrites is quite different from the one described above for antidot samples. In PF we observe that the magnetic field bursts into highly disordered dendrites with no particular orientation (other than the average imposed by the screening currents) and with no characteristic size.
These features are similar to those previously reported for Nb and MgB$_2$ plain films.[@duran; @johansen] Finally, a smooth penetration is found at high temperatures and fields, as in the case of the patterned sample. In all cases, we have observed that the dendrites develop rather abruptly, with a velocity $v > 10$ m/s, the limit imposed by our experimental temporal resolution. Previously, it was shown that this velocity can indeed be much higher. [@leiderer] Furthermore, dendrites nucleate at the edge of the sample at random positions that are not reproduced if the experiment is repeated. This indicates that their appearance is an intrinsic property of the system rather than a consequence of imperfections in the sample’s border.[@comment] Phase diagram ------------- The transition line $H^*(T)$ from the avalanche to the smooth flux penetration regime in Pb films with antidots was previously determined from dc-magnetization and ac-susceptibility measurements.[@hebert; @silhanek] In the former case, the vortex avalanche regime manifests itself as a jumpy response of the magnetization, whereas in ac-susceptibility measurements the signature of the transition between the different flux penetration regimes is a local paramagnetic reentrance in the ac-screening.[@silhanek] In Fig.\[hstar\] we plot the $H^*(T)$ lines previously reported using ac-susceptibility together with those determined by ZFC MO measurements. Fig.\[hstar\](a) shows the phase boundary obtained for samples with the same antidot array and a slightly ($15\%$) different thickness. The remarkable agreement between these two types of experiments reinforces the interpretation of the reentrance in the ac-screening as the onset of dendritic vortex avalanches. For comparison, the boundary lines corresponding to samples with and without antidots are shown in Fig.\[hstar\](b). In this case both samples were deposited simultaneously and have the same thickness.
The $H^*(T)$ line for the non-patterned sample was determined by MO imaging, whereas the boundary for the antidot sample was obtained by ac-susceptibility and is the same as that already shown in Ref.\[\]. In Fig.\[hstar\](b) we can clearly see that the flux-jump regime covers a larger portion of the phase diagram for the patterned sample than for the plain film, in agreement with previous reports.[@hebert; @silhanek] Hébert [*et al.*]{}[@hebert] proposed that the larger extension of the avalanche regime in the presence of antidots can be related to the formation of a multi-terrace critical state[@cooley; @vvmprb] in this kind of sample. Within this model, the main precursor of avalanches is the abrupt local change $\delta B(x)$ between terraces of constant $B$. However, the direct observation of vortex dendrites indicates that this scenario is not appropriate to describe the observed extension of the flux-jump regime. Dendrite field evolution ------------------------ In order to gain more insight into the dynamics of the avalanches we studied the magnetic induction profile inside the dendrites in ZFC experiments in which the external field is increased in discrete steps. In Fig.\[profiles\](a) a 3D image of the field distribution near one edge of the sample with antidots at $T = 4$ K and $\mu_0H= 1.8$ mT is shown. Fig.\[profiles\](b) shows the magnetic field profile inside the dendrite along the line indicated by A-A’ in Fig.\[profiles\](a) for different applied fields. During the experiment the external field was increased in steps of $\delta H=0.2$ mT, but for clarity Fig.\[profiles\](b) shows only curves every $\delta H=0.4$ mT. In both figures a maximum of $B$ at the edge of the sample is clearly seen, as expected for a thin film in a transverse magnetic field due to demagnetization effects. In general, we observe that once a dendrite develops, its [*shape*]{} remains practically unchanged as the field is further increased.
Additionally, as can be seen in Fig.\[profiles\](b), the internal field $B$ along a dendrite (A-A’ line) increases as one moves from the edge towards the center of the sample. Moreover, the magnitude of the magnetic induction inside the dendrite can be even higher than the field at the edge of the sample. After a dendrite has formed, the initial deficiency of vortices near the edge of the sample is progressively filled by new avalanches as the external field is ramped up (see for example the field profiles for $H \geq 2.4$ mT in Fig.\[profiles\](b)). As already pointed out by Barkov [*et al.*]{},[@shantsev] the initial inhomogeneous distribution of vortices along the dendrites can be attributed to the field induced by the screening currents that flow around the dendrite. The field lines associated with these currents are more dense near the front of the dendrite giving rise to higher local field at that point. Before the avalanche event occurs, the field penetrates following a Bean-like profile ($H<1.2$ mT in Fig.\[profiles\](b)). Interestingly, right after the avalanche develops, this slope relaxes, as expected for a field-cooling process, and for higher applied fields it recovers again. A similar effect was previously observed in Nb films.[@welling] The evolution of the shape of the dendrites in their transverse direction is shown in Fig.\[profiles\](c). These profiles are calculated along the line defined by B-B’ in Fig.\[profiles\](a) where no side branches of the dendrites are crossed. For the sake of clarity the curves have been displaced vertically. Naturally, the appearance of new peaks as the field is increased corresponds to the formation of new dendrites. The average width of the core of the dendrites is $w \sim 45 \mu$m $\sim 30 d$, thus involving many unit cells of the periodic pinning array. From this sequence of profiles it can be seen that [*the width of the peaks does not change with field*]{}. 
Also, no clear temperature dependence of the dendrite width has been observed. Avalanche size distribution --------------------------- Since MO imaging maps the spatial distribution of $B$ inside the sample, we can calculate not only the total magnetic flux involved in all avalanches but also the number of vortices involved in a single avalanche. To do so, we subtract two consecutive MO images such that only relative changes are recorded. We then identify all avalanches that took place at a given change in magnetic field and calculate the area, $A_i$, and the magnetic flux, $\Phi_i$, involved in each single avalanche event. We also sum all the events for a given field step. The resulting values $\Phi_T=\sum \Phi_i$ and $A_T=\sum A_i$ for all fields and at three different temperatures are shown in Figs.\[aval\](a) and (b). Since avalanches stop at a temperature-dependent field $H^*(T)$, the analysis is significant only up to a certain field, which is smaller for higher temperatures, as can be seen from Fig.\[hstar\]. It is interesting to note that the avalanches start at a field $\mu_0H \sim 0.7$ mT (see the vertical dotted line in Fig.\[aval\]). This minimum magnetic field is independent of the magnetic field step used and of temperature, thus indicating that it is a characteristic field of this type of instability. The existence of a minimum field for the development of the first avalanche, or first flux-jump, was predicted theoretically and has been observed in many experiments. [@alsthuler] This feature was also found in recent numerical and analytical studies of instabilities of field penetration in thin films with random disorder.[@aranson] In Fig.\[aval\](a) we observe a noisy behavior of $\Phi_T$ as $H$ increases, in agreement with the observed jumpy magnetization.[@hebert] Fig.\[aval\](b) shows the area of the sample invaded by vortex avalanches for each step of $\mu_0H$.
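The image-differencing step described above can be sketched in a few lines. This is a schematic pure-NumPy version (4-connected flood fill; the actual analysis pipeline may differ in details such as thresholding and noise rejection):

```python
import numpy as np

def avalanche_events(B_prev, B_next, pixel_area, threshold=0.0):
    """Identify avalanche events between two consecutive MO frames:
    difference the images, keep pixels where the flux increased beyond
    `threshold`, group them into 4-connected regions, and return for
    each region its area A_i and flux Phi_i = sum(dB) * pixel_area."""
    dB = np.asarray(B_next, float) - np.asarray(B_prev, float)
    mask = dB > threshold
    seen = np.zeros_like(mask, dtype=bool)
    events = []
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not seen[i, j]:
                stack, region = [(i, j)], []
                seen[i, j] = True
                while stack:  # flood-fill one connected region
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                area = len(region) * pixel_area
                flux = float(sum(dB[p] for p in region)) * pixel_area
                events.append((area, flux))
    return events
```

Summing the per-event values over one field step then gives the totals $\Phi_T$ and $A_T$ used in the figures.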
Within the inherent noise due to the avalanche behavior, the data in both figures collapse onto a single curve for all temperatures. From the data shown in these figures, we can roughly estimate the internal field increment $\delta B$ at each external $\delta H=0.4$ mT step. Indeed, from Fig.\[aval\](a) we have $\Phi_T \sim 8 \times 10^{-10}$ Tm$^2$ and from Fig.\[aval\](b), $A_T \sim 0.3$ mm$^2$; thus we obtain $\delta B = \Phi_T / A_T \sim 2.6$ mT within the avalanches, which is approximately 6 times larger than the change in the applied field. This difference, of course, is a consequence of the inhomogeneous distribution of vortices in the avalanche regime, leading to a strongly focused flux penetration. Averaging over the whole sample gives $\delta B \sim \delta H$. The observation of a $\delta B$ independent of $H$ and $T$ is consistent with the previously reported temperature-independent flux-jumps in similar samples.[@hebert; @silhanek] We pointed out in Section III.A that dendrites at $T=5.5$ K exhibit more branching than at lower temperatures (see Fig.\[images\]). This effect becomes evident in Fig.\[aval\](c), where the average avalanche size, $<\Phi>$, is plotted as a function of $\mu_0H$ for the same three temperatures as in (a) and (b). The average is calculated for each applied field as $<\Phi>=\sum \Phi_i / N$, where $N$ is the number of avalanches that take place at that field. From the figure it is evident that at $T=$3.5 K and 4.5 K the average avalanche sizes are similar, whereas at $T=$5.5 K there is a substantial increase in $<\Phi>$. This result is consistent with a scenario where finger-like dendrites of a well-defined size dominate at low $T$ whereas large, highly branched tree-like dendrites, with no characteristic size, dominate at high $T$. The present analysis of avalanche sizes shows that even though the average size of the dendrites depends on temperature, the total flux involved in all avalanches remains approximately constant.
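The focusing estimate is a one-line division; spelled out with the values read off the figures (the quotient is $\approx 2.7$ mT, consistent with the quoted $\sim 2.6$ mT to the precision of the read-off):

```python
phi_total = 8e-10    # total avalanche flux per field step, Tm^2 (Fig. [aval](a))
area_total = 0.3e-6  # total invaded area per field step, m^2 = 0.3 mm^2 (Fig. [aval](b))
dH_step = 0.4e-3     # external field step, T

dB = phi_total / area_total  # internal field increment inside the avalanches
focusing = dB / dH_step      # how strongly the flux entry is focused

print(round(dB * 1e3, 1), round(focusing, 1))  # -> 2.7 6.7
```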
In addition to the analysis of the magnetic flux and area of avalanches presented above, which is analogous to a bulk magnetization measurement, the identification of each avalanche event allows us to analyze the [*distribution*]{} of individual avalanche sizes in the whole dendritic penetration regime. Presently, there is controversy over whether the critical state in superconductors is a self-organized critical (SOC) system or not. [@radovan] In the former case the size distribution should be described by a power law, since avalanches of all sizes are expected. In our samples we find that the avalanche size distribution (we define the size as the total amount of moved flux) is consistent with power-law behavior for $T < 5.5$ K (see Fig.\[counts\]). However, at large avalanche sizes the data depart from linear behavior on the log-log scale. This is due to a finite-size effect, since the length of the dendrites is limited by the size of the sample[@authors2]. We clearly observe power-law behavior over one and a half decades, consistent with SOC behavior. At $T=5.5$ K a reliable fit is not possible, since the small number of avalanches results in very poor statistics (in this case there are of the order of 80 avalanches, while at 4.5 K the number is 200). The power-law exponent extracted from the fit of the low-temperature data is $\tau \sim 0.9(1)$. A similar value ($\tau =1.09$) has been obtained recently by Radovan and Zieve[@radovan] in Pb plain films by analyzing the size of the magnetization jumps using local Hall probe measurements. For YBa$_2$Cu$_3$O$_7$, Aegerter [*et al.*]{} [@wellingepl] found a slightly larger value, $\tau =1.29(2)$. Conclusions =========== We have studied magnetic flux penetration in Pb thin films with antidots by means of MO imaging. At low temperatures and fields the penetration is dominated by vortex avalanches, while at higher $T$ and $H$ a rather smooth and flat flux front is observed.
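An exponent of this kind is typically extracted from a straight-line fit to a logarithmically binned histogram on a log-log scale. A minimal sketch on synthetic data (not the measured avalanches; bin count and sample generation are illustrative choices):

```python
import numpy as np

def powerlaw_exponent(sizes, nbins=12):
    """Estimate tau in N(Phi) ~ Phi**(-tau) by a least-squares line fit
    to the log-log histogram of avalanche sizes, using logarithmic bins
    and normalizing each count by its bin width (per-size density)."""
    sizes = np.asarray(sizes, float)
    edges = np.logspace(np.log10(sizes.min()), np.log10(sizes.max()), nbins + 1)
    counts, _ = np.histogram(sizes, bins=edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
    keep = counts > 0
    density = counts[keep] / widths[keep]
    slope, _ = np.polyfit(np.log10(centers[keep]), np.log10(density), 1)
    return -slope

# synthetic check: inverse-CDF sampling of a density ~ Phi**(-1) over 2 decades
rng = np.random.default_rng(0)
phi = 100.0 ** rng.random(20000)  # sizes in [1, 100) with tau = 1
tau = powerlaw_exponent(phi)      # should recover tau close to 1
```

Normalizing by the bin width matters: with logarithmic bins, fitting the raw counts instead of the density would shift the apparent exponent by one.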
We have found that the avalanches develop in the form of dendrites, similarly to previous observations in Nb films with a periodic antidot array. The morphology of the dendrites changes with temperature, from finger-like at low $T$ to tree-like at high $T$. For all $H$ and $T$ we observe that the vortex motion is guided by the pinning potential generated by the antidots. In general, new dendrites are formed far from old dendrites, in regions where no invasion of vortices has previously taken place, indicating that they interact repulsively. This occurs until there is no room for a new dendrite. As a consequence, the emergence of new dendrites leads to a more uniform magnetic field distribution. The boundary between dendritic and smooth penetration as determined by MO imaging is in very good agreement with the results obtained from ac-susceptibility measurements in similar samples, see Fig.\[hstar\]. We have also corroborated that in the film with antidots the vortex avalanche regime extends to higher temperatures and fields as compared to the case of unpatterned films. The detection of dendritic penetration in the flux-jump regime shows that the proposed model [@hebert; @silhanek] of multi-terrace formation for flux penetration is not applicable at low temperatures for Pb films with antidots. Magnetic field profiles inside the dendrites indicate that the field at the tip of a dendrite is of the order of, or even higher than, the field at the edge of the sample. This is due to the high field induced by the screening currents, which make a hairpin bend at the end of the dendrite. A relaxation of the magnetic field slope at the edge of the sample due to the avalanche is observed. The avalanche size distribution analysis shows that the sum of the flux over all avalanches remains constant with temperature. This accounts for the observed temperature-independent magnetization at low $H$ and $T$. However, we find that the average size of the dendrites depends on temperature.
Thus, a detailed knowledge of the morphology of the avalanches is necessary for a complete description of flux penetration in these superconducting thin films. Besides this global analysis of vortex avalanches, we have studied the size distribution of individual avalanches, taking advantage of the local character of our technique. We find that the size distribution of individual avalanches is consistent with power-law behavior over more than a decade of avalanche sizes at low temperatures. However, the absence of a finite-size scaling analysis [@authors2] does not allow us to draw a definite conclusion on whether the system is SOC or not. Recently, it was demonstrated that the coupling of nonlocal flux diffusion with local thermal diffusion can account for dendritic penetration in plain films. We believe that a similar analysis for the case of samples with periodic pinning would be very helpful for a complete understanding of magnetic flux instabilities in superconducting samples. We would like to thank R. Jonckheere for the fabrication of the resist patterns. This work was supported by the Belgian Interuniversity Attraction Poles (IUAP), Research Fund K.U.Leuven GOA/2004/02, the Fund for Scientific Research Flanders (FWO), the ESF “VORTEX” program, and by FOM (Stichting voor Fundamenteel Onderzoek der Materie), which is financially supported by NWO (Nederlandse Organisatie voor Wetenschappelijk Onderzoek). [10]{} C. P. Bean, Phys. Rev. Lett. [**8**]{}, 250 (1962) P. W. Anderson, Phys. Rev. Lett. [**9**]{}, 309 (1962) A. Forkl, H. -U. Habermeier, R. Knorpp, H. Theuss, and H. Kronmüller, Physica C [**211**]{}, 121 (1993). M. R. Koblischka, Th. Schuster, B. Ludescher, and H. Kronmüller, Physica C [**190**]{}, 557 (1992). R. G. Mints and A. L. Rakhmanov, Rev. Mod. Phys. [**53**]{}, 551 (1981). C. A. Durán, P. L. Gammel, R. E. Miller, and D. J. Bishop, Phys. Rev. B [**52**]{}, 75 (1995). T. H. Johansen, M. Baziljevich, D. V. Shantsev, P. E. Goa, Y. M. Galperin, W. N. Kang, H. J. Kim, E. M.
Choi, M. S. Kim, S. I. Lee, Supercond. Sci. Tech. [**14**]{}, 726 (2001). I. Aranson, A. Gurevich, and V. Vinokur, Phys. Rev. Lett. [**87**]{}, 067003 (2001); I. S. Aranson, A. Gurevich, M. S. Welling, R. J. Wijngaarden, V. K. Vlasko-Vlasov, V. M. Vinokur, and U. Welp, submitted for publication (2004). V. Vlasko-Vlasov, U. Welp, V. Metlushko, G. W. Crabtree, Physica C [**341**]{}, 1281 (2000). A. Terentiev, D. B. Watkins, L. E. De Long, L. D. Cooley, D. J. Morgan, and J. B. Ketterson, Phys. Rev. B [**61**]{}, R9249 (2000). S. Hébert, L. Van Look, L. Weckhuysen, and V. V. Moshchalkov, Phys. Rev. B [**67**]{}, 224510 (2003). A. V. Silhanek, S. Raedts, and V. V. Moshchalkov, to be published in Phys. Rev. B. G. J. Dolan and J. Silcox, Phys. Rev. Lett. [**30**]{}, 603 (1973). W. Rodewald, Phys. Lett. [**55A**]{}, 135 (1975). S. Raedts, A. V. Silhanek, M. J. Van Bael, and V. V. Moshchalkov, Physica C [**404**]{}, 298 (2004). R. J. Wijngaarden, K. Heeck, M. Welling, R. Limburg, M. Pannetier, K. van Zetten, V. L. Roorda, A. R. Voorwinden, Rev. Sci. Instrum. [**72**]{}, 2661 (2001). M. Pannetier, R. J. Wijngaarden, I. Fl$\o$an, J. Rector, B. Dam, R. Griessen, P. Lahl, and R. Wördenweber, Phys. Rev. B [**67**]{}, 212501 (2003). M. S. Welling, R. J. Wijngaarden, C. M. Aegerter, R. Wördenweber, and P. Lahl, Physica C [**404**]{}, 410 (2004). R. Surdeanu, R. J. Wijngaarden, J. Einfeld, R. Wördenweber, and R. Griessen, Europhysics Lett. [**54**]{}, 682 (2001). P. Leiderer, J. Boneberg, P. Brüll, V. Bujok, and S. Herminghaus, Phys. Rev. Lett. [**71**]{}, 2646 (1993). Small imperfections such as indentations in the sample’s border lead to a deformation of the screening currents that can trigger or promote dendrite nucleation. In our particular case one of the substrate edges was deliberately cut to create a rough sample edge; however, no clear evidence of a higher dendrite density was detected. L. D. Cooley and A. M. Grishin, Phys. Rev. Lett. [**74**]{}, 2788 (1995). V. V. Moshchalkov, M.
Baert, V. V. Metlushko, E. Rosseel, M. J. Van Bael, K. Temst, R. Jonckheere, and Y. Bruynseraede, Phys. Rev. B [**57**]{}, 3615 (1998). F. L. Barkov, D. V. Shantsev, T. H. Johansen, P. E. Goa, W. N. Kang, H. J. Kim, E. M. Choi, S. I. Lee, Phys. Rev. B [**67**]{}, 064513 (2003). M. S. Welling, R. J. Westerwaal, W. Lohstroh, and R. J. Wijngaarden, Physica C [**411**]{}, 11 (2004). E. Altshuler and T. H. Johansen, Rev. Mod. Phys. [**76**]{}, 471 (2004) and references therein. H. A. Radovan and R. J. Zieve, Phys. Rev. B [**68**]{}, 224509 (2003). We did not perform a finite-size scaling analysis because the anisotropic shape of the avalanches (elongated in one direction) introduces artifacts in the size distribution when selecting a system length, L, smaller than the size of the sample. C. M. Aegerter, M. S. Welling, and R. J. Wijngaarden, Europhys. Lett. [**65**]{}, 753 (2004). Figure Captions =============== [^1]: present address: MST-NHMFL, MS E536, Los Alamos National Laboratory, Los Alamos, NM 87544, USA.
--- abstract: 'This paper investigates downlink transmission over a quasi-static fading Gaussian broadcast channel (BC), to model delay-sensitive applications over slowly time-varying fading channels. System performance is characterized by outage achievable rate regions. In contrast to most previous work, here the problem is studied under the key assumption that the transmitter only knows the probability distributions of the fading coefficients, but not their realizations. For scalar-input channels, two coding schemes are proposed. The first scheme is called blind dirty paper coding (B-DPC), which utilizes a robustness property of dirty paper coding to perform precoding at the transmitter. The second scheme is called statistical superposition coding (S-SC), in which each receiver adaptively performs successive decoding with the process statistically governed by the realized fading. Both B-DPC and S-SC schemes lead to the same outage achievable rate region, which always dominates that of time-sharing, irrespective of the particular fading distributions. The S-SC scheme can be extended to BCs with multiple transmit antennas.' author: - | [Wenyi Zhang, [*Member, IEEE*]{}, Shivaprasad Kotagiri, [*Student Member, IEEE*]{},\ and J. Nicholas Laneman, [*Senior Member, IEEE*]{}]{} [^1] [^2] [^3] bibliography: - 'v095.bib' title: '[Outage-Efficient Downlink Transmission Without Transmit Channel State Information]{}' --- Broadcast channel, (blind) dirty paper coding, downlink, non-ergodic fading, outage achievable rate region, quasi-static fading, (statistical) superposition coding Introduction {#sec:intro} ============ In downlink transmission, a centralized transmitter needs to simultaneously communicate with multiple receivers. Each receiver can only decode its message from its own received signal, without access to the other receivers’ signals. 
Such systems are usually modeled as broadcast channels (BC) with Gaussian noises, which have been studied extensively since the development of superposition coding [@cover72:it]; see also [@cover98:it] and references therein for an overview of early results on BCs. For a Gaussian BC with scalar inputs and outputs, superposition coding achieves a rate region which dominates that of time-sharing [@bergmans74:it-2], and in fact yields the capacity region [@bergmans74:it-1]. If the transmitter and receivers are equipped with multiple antennas, the resulting vector Gaussian BC is generally non-degraded and superposition coding turns out to be suboptimal; instead, dirty paper coding (DPC), originally proposed in [@costa83:it] for single-user Gaussian channels with Gaussian interference non-causally known at the transmitter, can be utilized to maximize the throughput [@caire03:it]. This observation has stimulated a line of work on vector Gaussian BCs [@caire03:it]-[@weingarten06:it], and it has recently been shown that DPC achieves the capacity region of vector Gaussian BCs [@weingarten06:it]. A central assumption in the aforementioned results is that the transmitter has perfect knowledge of the channel state information (CSI), namely, the channel gains, be they constant or random (say, due to fading). For scalar Gaussian BCs with fading, if the transmitter and all the receivers have perfect CSI, both the ergodic capacity region and the outage capacity region are known [@li01:it-1; @li01:it-2]; however, without transmit CSI, neither is known. For ergodic fading BCs without transmit CSI, an achievable rate region has been obtained in [@tuninetti03:isit]. In this paper, we investigate quasi-static fading Gaussian BCs without transmit CSI. 
The motivation is to model downlink transmission in delay-sensitive applications over slowly time-varying fading channels, and the lack of transmit CSI serves as the worst case for practical systems in which an adequate feedback link may not be available. Due to the non-ergodic nature of quasi-static fading, it is generally impossible for a coding scheme to achieve any strictly positive information rate under all fading realizations. We therefore focus on outage achievable rate regions, as will be formally introduced in Section \[sec:model\]. Lack of transmit CSI seems to pose a fundamental difficulty in broadcast settings. If the transmitter has CSI, the standard BC model is stochastically degraded conditioned upon the fading realizations, because the transmitter can sort the receivers according to their realized signal-to-noise ratios (SNR). Superposition coding is thus optimal for each channel realization, and achieves the outage capacity region when combined with dynamic power allocation [@li01:it-2]. However, without transmit CSI, the transmitter has no way to predict the ordering of the received signals. Conventional superposition coding therefore would not appear to be effective for this model. Generally speaking, a quasi-static fading Gaussian BC without transmit CSI belongs to the class of “mixed channels” [@han03:book], for which no computable, single-letter characterization of the $\epsilon$-capacity region, [*i.e.*]{}, outage capacity region, has been obtained (cf. [@iwata05:ieice]). Even though conventional superposition coding is not effective, there exist efficient approaches in terms of outage achievable rate region. In this paper, we identify two such coding schemes, and show that they both lead to the same outage achievable rate region, which always dominates that of time-sharing, irrespective of the particular fading distributions. 
The first scheme is called blind dirty paper coding (B-DPC), which utilizes a robustness property of DPC to perform precoding at the transmitter. The second scheme is called statistical superposition coding (S-SC), in which each receiver adaptively performs successive decoding with the process statistically governed by the realized fading. B-DPC is a transmit-centric approach, because the transmitter needs to invoke dirty paper codes $K$ times in a progressive way, while each receiver only needs to decode its own message directly. In contrast, S-SC is more of a receive-centric approach, because the transmitter simply adds up $K$ independently coded streams as in conventional superposition coding, while each receiver (except the $K$th one) needs to execute a successive interference cancellation procedure. The remainder of this paper is organized as follows. Section \[sec:model\] presents the channel model and problem formulation. Section \[sec:main\] gives the main result which characterizes the outage achievable rate region, and shows that it always dominates that of time-sharing. Sections \[sec:bdpc\] and \[sec:ssc\] show how the region of Section \[sec:main\] is achieved by B-DPC and S-SC, respectively. Finally, Section \[sec:conclude\] concludes the paper. Channel Model and Problem Formulation {#sec:model} ===================================== In this section, we summarize the $K$-user scalar Gaussian BC model with quasi-static fading. The input-output relationship of the channel satisfies $$\begin{aligned} \label{eqn:k-user} \rvy_k[n] = \rvh_k \rvx[n] + \rvz_k[n], \quad k = 1, \ldots, K,\;\; n = 1, \ldots, N.\end{aligned}$$ At discrete-time index $n$, the channel takes a scalar input $\rvx[n] \in \mathbb{C}$ from the transmitter, and produces a scalar output $\rvy_k[n] \in \mathbb{C}$ at the $k$th receiver. 
The channel input $\rvx[\cdot]$ has an average power constraint $P$, given as $$\begin{aligned} \frac{1}{N}\sum_{n = 1}^N |\rvx[n]|^2 \leq P\end{aligned}$$ over the coding block of length $N$. The channel noise samples $\rvz_k[\cdot]$ are independent, identically distributed (i.i.d.) and circularly symmetric complex Gaussian, with mean zero and variance $N_0$, denoted $\rvz_k[\cdot] \sim \mathcal{CN}(0, N_0)$. For scalar fading channels with perfect receive CSI, as will be assumed in this paper, there is no loss of generality in considering only fading magnitudes. So we assume that the squared channel fading coefficient $\rva_k := |\rvh_k|^2$ has a probability density function (PDF) $f_k(a)$ for $a \in [0, \infty)$, and remains constant over the entire coding block, so that the resulting BC is called quasi-static. We denote the cumulative distribution function (CDF) of $\rva_k$ by $F_k(a) := \mathbb{P}[\rva_k \leq a]$, and the corresponding inverse cumulative distribution function (ICDF), or, the so-called quantile function, by $G_k(t)$. For every $t \in [0, 1]$, $G_k(t)$ is the supremum of the set $\{a: F_k(a) = t\}$. We assume that, for each coding block, the realization of $\rvh_k$ is known perfectly at the $k$th receiver, but not at the transmitter or any other receiver. Such a situation may arise in practical systems in which receivers are able to estimate their channels with satisfactory accuracy, but the transmitter cannot, for lack of an adequate feedback link. Although in practice the receivers’ estimates of the channels are noisy due to limited channel training, we assume the receive CSI is perfect, in order to simplify analysis and provide useful insights into the more general case. In the sequel, we will frequently make use of the average SNR defined as ${\rho}:= P/N_0$, and without loss of generality normalize the channel equation (\[eqn:k-user\]) such that $P = {\rho}$ and $N_0 = 1$. 
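The quantiles $G_k(\epsilon_k)$ drive all of the rate expressions in the sequel. As a side illustration (not part of the model), for Rayleigh fading the squared magnitude $\rva_k$ is exponentially distributed and $G_k(t)$ has a closed form; a minimal Python sketch, with illustrative mean values:

```python
import math

def quantile_exponential(t, mean):
    """ICDF G(t) of an exponentially distributed squared fading magnitude
    with the given mean: solves F(a) = 1 - exp(-a/mean) = t for a."""
    return -mean * math.log(1.0 - t)

# 1%-outage fading thresholds for a near-far pair of Rayleigh-fading users
# (means 10 and 1 are illustrative values, matching the two-user example
# discussed in Section "An Outage Achievable Rate Region").
G1 = quantile_exponential(0.01, 10.0)   # = -10 ln(0.99), about 0.1005
G2 = quantile_exponential(0.01, 1.0)    # = -ln(0.99),    about 0.01005
```

Note that the quantile scales linearly with the mean fading power, so a 10 dB near-far gap translates directly into a factor-of-ten gap between $G_1(\epsilon)$ and $G_2(\epsilon)$ at equal outage targets.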
(Figure: block diagram of the $K$-user broadcast channel, in which the encoder ENC maps the messages $\{\rvm_1,\rvm_2,\ldots,\rvm_K\}$ to the channel input, and decoder DEC $k$ produces $\hat{\rvm}_k$ from $\rvy_k[\cdot]$.) For one coding block, the encoder maps $K$ mutually independent messages, each for one individual user, altogether into a codeword of length $N$, [*i.e.*]{}, $$\begin{aligned} \{\rvx[n]\}_{n = 1}^N = \varphi^{(N)}\left(\left\{\rvm_k\right\}_{k = 1}^K\right).\end{aligned}$$ Note that the encoding function $\varphi^{(N)}(\cdot)$ does not depend upon the realization of the fading coefficients $\{\rva_k\}_{k = 1}^K$. The $k$th message, $\rvm_k$, is uniformly chosen from $\{1, \ldots, \left\lceil \exp(N R_k)\right\rceil\}$ where $R_k \geq 0$ is the target rate for the $k$th user. The $k$th decoder maps its received signal along with its fading coefficient into a message index in $\{1, \ldots, \left\lceil \exp(N R_k)\right\rceil\}$, as $$\begin{aligned} \hat{\rvm}_k = \psi^{(N)}_k\left(\left\{\rvy_k[n]\right\}_{n = 1}^N, \rva_k\right).\end{aligned}$$ For a sequence of encoder-decoders tuples $\{\varphi^{(N)}(\cdot), \psi^{(N)}_1(\cdot), \ldots, \psi^{(N)}_K(\cdot)\}$, indexed by the coding block length $N$, and an outage probability vector $\underline{\epsilon} = (\epsilon_1, \ldots, \epsilon_K) \in [0, 1]^K$, we say that a rate vector $\underline{R} = (R_1, \ldots, R_K)$ is $\underline{\epsilon}$-outage achievable if the outage probability of the $k$th user satisfies $$\begin{aligned} \limsup_{N \rightarrow \infty} \mathbb{P}\left[\hat{\rvm}_k \neq \rvm_k\right] \leq \epsilon_k,\end{aligned}$$ simultaneously for $k = 1, \ldots, K$. 
The $\underline{\epsilon}$-outage capacity region $\mathcal{C}({\rho}, \underline{\epsilon})$ is then defined as the closure of the set of all the $\underline{\epsilon}$-outage achievable rate vectors for all possible encoder-decoders tuples, under the input power constraint (cf. [@li01:it-2]). An Outage Achievable Rate Region {#sec:main} ================================ For the channel model introduced in Section \[sec:model\], we have the following result. \[prop:k-user\] For the $K$-user quasi-static fading scalar Gaussian BC without transmit CSI, and a given outage probability vector $\underline{\epsilon}$, sorting the indexes of the $K$ receivers such that $G_1(\epsilon_1) \geq G_2(\epsilon_2) \geq \ldots \geq G_K(\epsilon_K)$, an $\underline{\epsilon}$-outage achievable rate region is given by $$\begin{aligned} \label{eqn:R-inner} \mathcal{R}^\ast({\rho}, \underline{\epsilon}) := \left\{\underline{R}: \exists \underline{\gamma} = (\gamma_1, \ldots, \gamma_K) \in [0, 1]^K, \sum_{k = 1}^K \gamma_k = 1, \mbox{s.t.}\; R_k < R^\ast_k({\rho}, \underline{\gamma}, \epsilon_k), \forall k = 1, \ldots, K\right\},\end{aligned}$$ where $$\begin{aligned} \label{eqn:Rk} R^\ast_k({\rho}, \underline{\gamma}, \epsilon_k) &:=& \log\left(1 + \frac{G_k({\epsilon}_k) \gamma_k{\rho}} {G_k({\epsilon}_k)\cdot (\sum_{i = 1}^{k - 1}\gamma_i){\rho}+ 1} \right).\end{aligned}$$ [*Proof*]{}: We provide two different proofs of the achievability of $\mathcal{R}^\ast({\rho}, \underline{\epsilon})$ in Sections \[sec:bdpc\] and \[sec:ssc\], respectively. [**Q.E.D.**]{} We emphasize that, in Proposition \[prop:k-user\], the $K$ users are sorted based upon the values of $G_k(\epsilon_k)$, $k = 1, \ldots, K$. This is a crucial condition. As will be demonstrated in Section \[sec:bdpc\], for any arbitrary ordering of the $K$ users, we can obtain an $\underline{\epsilon}$-outage achievable rate region of the form given by (\[eqn:R-inner\]). 
However, the resulting region is largest only for the particular ordering specified here. Comparison with Time-Sharing {#comparison-with-time-sharing .unnumbered} ---------------------------- If we employ time-sharing to decompose a BC into $K$ non-interfering, single-user channels with time-sharing vector $\underline{\mu} = (\mu_1, \ldots, \mu_K) \in [0, 1]^K, \sum_{k = 1}^K \mu_k = 1$, and further allow power allocation among these $K$ channels with power allocation vector $\underline{\eta} = (\eta_1, \ldots, \eta_K) \in [0, \infty)^K$ such that $\sum_{k = 1}^K \mu_k \eta_k = 1$, then it follows that we can achieve an $\underline{\epsilon}$-outage achievable rate region given by $$\begin{aligned} \mathcal{R}^\mathrm{td}({\rho}, \underline{\epsilon}) := \left\{ \underline{R}: \exists \underline{\mu}, \underline{\eta}, \mbox{s.t.}\; R_k < R^\mathrm{td}_k({\rho}, \mu_k, \eta_k, \epsilon_k), \forall k = 1, \ldots, K \right\},\end{aligned}$$ where $$\begin{aligned} R^\mathrm{td}_k({\rho}, \mu_k, \eta_k, \epsilon_k) &:=& \mu_k \cdot\log\left(1 + G_k({\epsilon}_k) \eta_k{\rho}\right).\end{aligned}$$ In order to compare $\mathcal{R}^\ast({\rho}, \underline{\epsilon})$ and $\mathcal{R}^\mathrm{td}({\rho}, \underline{\epsilon})$, it is useful to introduce the following memoryless Gaussian BC without fading, $$\begin{aligned} \label{eqn:bc-nonfading} \tilde{\rvy}_k[i] = \sqrt{G_k({\epsilon}_k)} \tilde{\rvx}[i] + \tilde{\rvz}_k[i],\quad k = 1, \ldots, K,\;\; i = 1, \ldots, n,\end{aligned}$$ with $\tilde{\rvz}_k[\cdot] \sim \mathcal{CN}(0, 1)$, and with the same average power constraint $\rho$ as in the original quasi-static fading BC (\[eqn:k-user\]). We then notice that $\mathcal{R}^\ast({\rho}, \underline{\epsilon})$ coincides with the capacity region of this Gaussian BC (\[eqn:bc-nonfading\]), while $\mathcal{R}^\mathrm{td}({\rho}, \underline{\epsilon})$ corresponds to its rate region achieved by time-sharing. 
Therefore we conclude that $\mathcal{R}^\ast({\rho}, \underline{\epsilon}) \supseteq \mathcal{R}^\mathrm{td}({\rho}, \underline{\epsilon})$, and note that the two regions coincide if and only if $G_1(\epsilon_1) = G_2(\epsilon_2) = \ldots = G_K(\epsilon_K)$ (cf. [@bergmans74:it-2]). That is, Proposition \[prop:k-user\] yields an $\underline{\epsilon}$-outage achievable rate region that always contains that of time-sharing. For illustration, let us examine an example with two receivers. Both receivers experience Rayleigh fading, [*i.e.*]{}, $\rva_1, \rva_2$ are exponential random variables. We assume that the two receivers are under a near-far situation, with $\mathbf{E}[\rva_1] = 10$ and $\mathbf{E}[\rva_2] = 1$. The target outage probability vector is $\underline{\epsilon} = [0.01 \;\; 0.01]$, and the average power constraint $\rho$ is $20$dB. From these parameters, we find that $G_1(\epsilon_1) = 10\times\log(1/0.99) \approx 0.1$ and $G_2(\epsilon_2) = \log(1/0.99) \approx 0.01$, respectively. Figure \[fig:good\_dpc\] depicts the $\underline{\epsilon}$-outage achievable rate regions $\mathcal{R}^\ast({\rho}, \underline{\epsilon})$ and $\mathcal{R}^\mathrm{td}({\rho}, \underline{\epsilon})$, from which it is clear that $\mathcal{R}^\ast({\rho}, \underline{\epsilon})$ contains $\mathcal{R}^\mathrm{td}({\rho}, \underline{\epsilon})$. Blind Dirty Paper Coding (B-DPC) {#sec:bdpc} ================================ In this section, we present the first coding scheme that achieves $\mathcal{R}^\ast({\rho}, \underline{\epsilon})$ in Proposition \[prop:k-user\]. We first introduce a variant of the “writing on dirty paper” (WDP) problem and observe a robustness property of B-DPC, then utilize this property to establish the achievability of $\mathcal{R}^\ast({\rho}, \underline{\epsilon})$. Blind DPC and a Robustness Property {#subsec:robustness} ----------------------------------- Consider a variant of the WDP problem illustrated in Figure \[fig:wdp\]. 
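The boundaries of the two regions in this example can be traced directly from (\[eqn:Rk\]) by sweeping the power split, and likewise for time-sharing. A short Python sketch of this computation (rates in nats; the function names are ours, and the parameter values are those of the example above):

```python
import math

rho = 100.0                    # 20 dB average SNR
G1 = -10 * math.log(0.99)      # quantile of user 1, about 0.1005
G2 = -math.log(0.99)           # quantile of user 2, about 0.01005

def superposition_point(gamma1):
    """Boundary point (R1, R2) of R* for power fraction gamma1 given to
    user 1 (the user with the larger quantile, hence index k = 1)."""
    gamma2 = 1.0 - gamma1
    R1 = math.log(1 + G1 * gamma1 * rho)    # k = 1: no preceding users
    R2 = math.log(1 + G2 * gamma2 * rho / (G2 * gamma1 * rho + 1))
    return R1, R2

def timesharing_point(mu1, eta1):
    """Time-sharing rates: time fraction mu1 and power boost eta1 for
    user 1; user 2 gets power eta2 so that mu1*eta1 + mu2*eta2 = 1."""
    mu2 = 1.0 - mu1
    eta2 = (1.0 - mu1 * eta1) / mu2
    return (mu1 * math.log(1 + G1 * eta1 * rho),
            mu2 * math.log(1 + G2 * eta2 * rho))
```

Sweeping `gamma1` over $[0, 1]$ traces the superposition boundary of Figure \[fig:good\_dpc\], while sweeping `(mu1, eta1)` traces the time-sharing region; the containment holds only after taking the union over all parameter choices, not pointwise for matched parameters.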
The channel law satisfies $$\begin{aligned} \label{eqn:wdp} \rvy[n] = \sqrt{\rva} \cdot (\rvx[n] + \rvs_1[n] + \rvs_2[n]) + \rvz[n],~~ n = 1, \ldots, N,\end{aligned}$$ with i.i.d. additive noise $\rvz[\cdot] \sim \mathcal{CN}(0, N_0)$, and i.i.d. interference signals $\rvs_1[\cdot] \sim \mathcal{CN}(0, Q_1)$ and $\rvs_2[\cdot] \sim \mathcal{CN}(0, Q_2)$. The input $\rvx[\cdot]$ has an average power constraint $P$. The transmitter has full access to $\rvs_1$ non-causally, but neither the transmitter nor the receiver has access to $\rvs_2$; thus $\rvs_2$ acts as a (faded) noise. The fading, or resizing, random variable $\rva$ has a PDF $f(a)$ for $a \in [0, \infty)$, and remains constant over the entire coding block. Furthermore, $\rva$ is known at the receiver but not at the transmitter. We note that (\[eqn:wdp\]) reduces to the original WDP problem if and only if $\rva$ is a constant with probability one. For general distributions on $\rva$, the channel SNR $\rva P/(\rva Q_2 + N_0)$ is a random variable unknown to the transmitter due to its lack of knowledge of $\rva$. Therefore it is impossible for the transmitter to dynamically adapt its DPC scheme according to the channel realization. Nevertheless, we can still apply DPC, with a linear precoding coefficient $\alpha$ chosen independently of $\rva$, to generate the auxiliary random variable $\rvU = \rvx + \alpha \rvs_1$. We call this approach “blind” dirty paper coding (B-DPC). 
Following the DPC encoding and decoding procedures in [@costa83:it], and noting that the channel fading only affects the noise variance at the decoder, we can find that the achievable rate conditioned on $\rva$ is the random variable $$\begin{aligned} \label{eqn:cond-rate} \rvJ(\alpha, \rva) := \log \frac{P[\rva (P + Q_1 + Q_2) + N_0]}{(1 - \alpha)^2 \rva PQ_1 + (P + \alpha^2 Q_1)(\rva Q_2 + N_0)}.\end{aligned}$$ For every target rate $R \geq 0$, (\[eqn:cond-rate\]) thus enables us to evaluate the outage probability $\mathbb{P}\left[\rvJ(\alpha, \rva) \leq R\right]$, [*i.e.*]{}, the probability that the realization of $\rva$ makes the achievable rate $\rvJ(\alpha, \rva)$ insufficient to support the target rate $R$. We further adjust the linear precoding coefficient $\alpha$ to minimize the outage probability. After manipulations, we find that the minimizer of $\mathbb{P}\left[\rvJ(\alpha, \rva) \leq R\right]$ is $$\begin{aligned} \label{eqn:alpha-opt} \alpha^\ast = 1 - e^{-R},\end{aligned}$$ and that the corresponding minimum outage probability is $$\begin{aligned} \label{eqn:min-outage} \min_{\alpha} \mathbb{P}\left[\rvJ(\alpha, \rva) \leq R\right] = \mathbb{P}\left[ R \geq \log \left(1 + \frac{P \rva}{Q_2 \rva + N_0}\right) \right].\end{aligned}$$ From (\[eqn:min-outage\]), we observe that the minimum outage probability of B-DPC coincides with the minimum outage probability if the receiver also knows $\rvs_1[\cdot]$ and thus can eliminate $\sqrt{\rva} \rvs_1[\cdot]$ from the received signal. Therefore B-DPC is outage-optimal, regardless of the specific distribution of $\rva$. It is also interesting to note that the optimal choice of $\alpha$ depends upon the target rate $R$. We may introduce a virtual channel SNR ${\rho}^\ast$ satisfying $R = \log(1 + \rho^\ast)$, and rewrite (\[eqn:alpha-opt\]) as $\alpha^\ast = {{\rho}^\ast}/{(1 + {\rho}^\ast)}$. 
So for a given target rate $R$, the optimal strategy for the transmitter is to treat the channel as if it is realized to just be able to support this rate. The optimality of B-DPC can be explained by a coincidence argument as follows. The conditional achievable rate (\[eqn:cond-rate\]) is a function of two variables, $\alpha$ and $\rva$, and is monotonically increasing with $\rva$ for every $\alpha$. On the other hand, for $\rva$ known to the transmitter, the choice of $\alpha$ maximizing $\rvJ(\alpha, \rva)$ is given by $\alpha^{\mathrm{DPC}}(\rva) := \rva P/(\rva P + \rva Q_2 + N_0)$. Therefore, for a given target rate $R$, if we solve the equation $\rvJ(\alpha^{\mathrm{DPC}}(\rva), \rva) = R$ which has the unique solution $\rva = a^\ast$, and choose $\alpha^\ast = \alpha^\mathrm{DPC}(a^\ast)$ in B-DPC, we can guarantee that for every fading realization $\rva > a^\ast$, the target rate $R$ is always achievable. Proof of Proposition \[prop:k-user\] via B-DPC {#subsec:proof} ---------------------------------------------- We now proceed to proving Proposition \[prop:k-user\] using B-DPC. For every fixed $\underline{\gamma}$, we need to show that all rate vectors $\underline{R}$ satisfying (\[eqn:R-inner\]) are achievable. 
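This coincidence argument is easy to check numerically. The following Python sketch (with illustrative values of $P$, $Q_1$, $Q_2$, $N_0$, $R$) bisects for the smallest fading value at which $\rvJ(\alpha, \rva)$ of (\[eqn:cond-rate\]) reaches the target rate $R$, and confirms that $\alpha^\ast = 1 - e^{-R}$ attains the interference-free threshold implied by (\[eqn:min-outage\]), while other choices of $\alpha$ yield strictly larger thresholds:

```python
import math

def J(alpha, a, P, Q1, Q2, N0):
    """Conditional achievable rate of blind DPC, eqn. (cond-rate)."""
    num = P * (a * (P + Q1 + Q2) + N0)
    den = (1 - alpha)**2 * a * P * Q1 + (P + alpha**2 * Q1) * (a * Q2 + N0)
    return math.log(num / den)

def outage_threshold(alpha, R, P, Q1, Q2, N0):
    """Smallest a with J(alpha, a) >= R, found by bisection
    (J is monotonically increasing in a for every alpha)."""
    lo, hi = 0.0, 1e9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if J(alpha, mid, P, Q1, Q2, N0) < R:
            lo = mid
        else:
            hi = mid
    return hi

P, Q1, Q2, N0, R = 10.0, 5.0, 2.0, 1.0, 0.5     # illustrative values
alpha_star = 1 - math.exp(-R)
# Interference-free benchmark: solve R = log(1 + P a / (Q2 a + N0)) for a.
benchmark = (math.exp(R) - 1) * N0 / (P - (math.exp(R) - 1) * Q2)
```

Here `outage_threshold(alpha_star, R, ...)` returns (up to bisection precision) the value `benchmark`, so under any fading distribution the outage probability of B-DPC with $\alpha^\ast$ equals that of the interference-free channel, whereas, e.g., $\alpha = 0.2$ or $\alpha = 0.6$ produces a strictly larger threshold and hence no smaller outage probability.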
Consider the $k$th receiver, and rewrite its channel as $$\begin{aligned} \label{eqn:bdpc-kchannel} \rvy_k[n] = \sqrt{\rva_k} \rvx_k[n] + \sqrt{\rva_k} \sum_{l > k} \rvx_l[n] + (\sqrt{\rva_k} \sum_{m < k} \rvx_m[n] + \rvz_k[n]),\quad n = 1, \ldots, N.\end{aligned}$$ In (\[eqn:bdpc-kchannel\]), the encoder function $\varphi^{(N)}\left(\left\{\rvm_k\right\}_{k = 1}^K\right)$ is additive such that $$\begin{aligned} \left\{\rvx[n]\right\}_{n = 1}^N = \sum_{k = 1}^K \varphi^{(N)}_k\left(\left\{\rvm_i\right\}_{i = k}^K\right),\end{aligned}$$ and we denote $$\begin{aligned} \left\{\rvx_k[n]\right\}_{n = 1}^N = \varphi^{(N)}_k\left(\left\{\rvm_i\right\}_{i = k}^K\right), \quad k = 1, \ldots, K.\end{aligned}$$ We encode $\rvm_k$ into $\left\{\rvx_k[n]\right\}_{n = 1}^N$ following B-DPC with average power $\gamma_k{\rho}$, by treating $\sum_{l > k} \rvx_l[\cdot]$ as the non-causally known interference, and by treating $(\sqrt{\rva_k} \sum_{m < k} \rvx_m[\cdot] + \rvz_k[\cdot])$ as noise. The encoded signal $\{\rvx_k[n]\}_{n = 1}^N$ thus contains i.i.d. $\mathcal{CN}(0, \gamma_k {\rho})$ components, and is mutually independent of any other $\rvx_{k^\prime}[\cdot]$, $\forall k^\prime \neq k$. 
From the discussion in Section \[subsec:robustness\], if we choose the linear precoding coefficient in B-DPC as $\alpha^\ast_k = 1 - e^{-R_k}$ for a target rate $R_k$, the resulting outage probability of the $k$th receiver is $$\begin{aligned} \label{eqn:outage-proof} \mathbb{P}^{(\mathrm{out})}_k := \mathbb{P}\left[ \rva_k \leq \frac{e^{R_k} - 1}{\gamma_k {\rho}- (e^{R_k} - 1) \sum_{m < k}\gamma_m {\rho}} \right].\end{aligned}$$ Alternatively, for a given target outage probability $\epsilon_k$ for the $k$th receiver, it follows from (\[eqn:outage-proof\]) that the maximum achievable rate $R_k$ should satisfy $$\begin{aligned} \frac{e^{R_k} - 1}{\gamma_k {\rho}- (e^{R_k} - 1) \sum_{m < k}\gamma_m{\rho}} < G_k(\epsilon_k),\end{aligned}$$ which gives rise to $$\begin{aligned} R_k < \log \left( 1 + \frac{G_k(\epsilon_k) \gamma_k{\rho}}{G_k(\epsilon_k) \sum_{m < k}\gamma_m {\rho}+ 1} \right),\end{aligned}$$ corresponding to (\[eqn:Rk\]) for the fixed $\underline{\gamma}$. As we exhaust all the possible $\underline{\gamma}$, we obtain the rate region $\mathcal{R}^\ast({\rho}, \underline{\epsilon})$ as given by (\[eqn:R-inner\]). This concludes the proof of Proposition \[prop:k-user\]. Extension to Receivers with Multiple Antennas {#subsec:simo} --------------------------------------------- Proposition \[prop:k-user\] readily extends to the case in which each receiver has multiple antennas. This stems from the fact that DPC [@costa83:it] can be extended (by directly applying the general results in [@gelfand80:pcit]) to single-input, multiple-output (SIMO) Gaussian channels. Analogously, B-DPC still attains robustness without transmit CSI, and the steps in Sections \[subsec:robustness\] and \[subsec:proof\] carry through. 
Consider a $K$-user quasi-static fading scalar-input Gaussian BC without transmit CSI, in which the $k$th receiver is equipped with $m_k$ receive antennas and observes $$\begin{aligned} \label{eqn:simo} \rvy_{k, m}[n] = \rvh_{k, m} \rvx[n] + \rvz_{k, m}[n], \quad m = 1, \ldots, m_k,\;\; n = 1, \ldots, N.\end{aligned}$$ The additive noise vectors $\bigl[\rvz_{k, 1}[\cdot], \ldots, \rvz_{k, m_k}[\cdot]\bigr]^\mathrm{T}$ are i.i.d. $\mathcal{CN}(\mathbf{0}, \mathbf{I}_{m_k \times m_k})$. The input $\rvx[\cdot]$ satisfies average power constraint ${\rho}$. The complex-valued random variable $\rvh_{k, m}$ denotes the fading coefficient for the $m$th receive antenna of the $k$th receiver. Note that for vector fading channels we need to take the complex-valued fading coefficients into consideration. We have the following result. \[cor:simo\] For the channel model (\[eqn:simo\]), B-DPC achieves an $\underline{\epsilon}$-outage achievable rate region identical to that described by $\mathcal{R}^\ast({\rho}, \underline{\epsilon})$ for the $K$-user quasi-static fading scalar Gaussian BC model (\[eqn:k-user\]), with $\rva_k$ replaced by $\sum_{m = 1}^{m_k} |\rvh_{k, m}|^2$. [*Case Study: Receivers with Two Antennas of Spatially Correlated Rayleigh Fading*]{} In practical downlink systems, the physical size of receivers is usually limited. Consequently, the number of receive antennas is typically small and spatial correlation exists among them. Here we examine the case of two receivers, each equipped with two antennas experiencing Rayleigh fading. For each receiver, the fading coefficients of the two receive antennas are correlated with correlation coefficient $\zeta \in [-1, 1]$. We assume that the two receivers are under a near-far situation, with the mean of each fading coefficient of the first receiver being $10$ and that of the second being $1$. The target outage probability vector is $\underline{\epsilon} = [0.01 \; 0.01]$, and the average power constraint $\rho$ is $20$dB. 
Figure \[fig:simo\] depicts the $\underline{\epsilon}$-outage achievable rate regions $\mathcal{R}^\ast(\rho, \underline{\epsilon})$, for different values of the spatial correlation coefficient $\zeta$. It is clearly illustrated that multiple receive antennas, even moderately correlated, substantially enlarge the outage achievable rate region. Statistical Superposition Coding (S-SC) {#sec:ssc} ======================================= As the robustness property of B-DPC exemplifies, outage probability relates to the fading statistics rather than to individual realizations. We are therefore motivated to revisit superposition coding, focusing on its statistical properties in the context of quasi-static fading. As will be shown in this section, a modified superposition coding scheme, called statistical superposition coding (S-SC), also achieves the $\underline{\epsilon}$-outage achievable rate region $\mathcal{R}^\ast({\rho}, \underline{\epsilon})$ given by Proposition \[prop:k-user\]. Encoding and Decoding Procedures for S-SC ----------------------------------------- **Encoding:** The encoding part of S-SC is identical to conventional superposition coding for a scalar Gaussian BC [@cover72:it]. Fix a power allocation vector $\underline{\gamma} = (\gamma_1, \ldots, \gamma_K) \in [0, 1]^K$ satisfying $\sum_{k = 1}^K \gamma_k = 1$. The channel inputs are again generated as $\rvx[\cdot] = \sum_{k = 1}^K \rvx_k[\cdot]$, where the i.i.d. $\rvx_k[\cdot] \sim \mathcal{CN}(0, \gamma_k{\rho})$ encodes the message $\rvm_k$ for the $k$th receiver. We note, however, that the $K$ signal components $\{\rvx_k[\cdot]\}_{k = 1}^K$ are generated independently, without the progressive dependence introduced in B-DPC. 
**Decoding:** Consider the decoding procedure at the $k$th receiver, with its channel written as $$\begin{aligned} \rvy_k[n] = \sqrt{\rva_k} \sum_{l = 1}^K \rvx_l[n] + \rvz_k[n],\quad n = 1, \ldots, N.\end{aligned}$$ - In the first step, the decoder attempts to decode $\rvm_K$, the message for the $K$th receiver, by treating $\sqrt{\rva_k} \sum_{l = 1}^{K - 1}\rvx_l[\cdot]$ as noise. Due to the quasi-static nature of the channel, the decoder may either successfully decode $\rvm_K$, and thus reliably reconstruct $\{\rvx_K[n]\}_{n = 1}^N$, or experience an outage at this stage. - The second decoding step has two possibilities. If $\rvm_K$ has been decoded successfully, the decoder subtracts $\sqrt{\rva_k} \rvx_K[\cdot]$ from $\rvy_k[\cdot]$, and proceeds to decode $\rvm_{K - 1}$ by treating $\sqrt{\rva_k} \sum_{l = 1}^{K - 2}\rvx_l[\cdot]$ as noise; otherwise, the decoder attempts to decode $\rvm_{K - 1}$ by treating $\sqrt{\rva_k} \sum_{l = 1}^{K - 2}\rvx_l[\cdot]$ together with $\sqrt{\rva_k} \rvx_K[\cdot]$ as noise. - Continuing the step-wise decoding procedure, when the decoder at the $k$th receiver turns to decode its own message $\rvm_k$, it has already successfully decoded the messages for a random subset of the other receivers with indexes larger than $k$. The decoder thus subtracts from $\rvy_k[\cdot]$ the signals for these other receivers, and decodes $\rvm_k$ by treating all the remaining undecoded signals as noise. We note that, in the described decoding procedure, the decoder can only cancel the interfering signals of a random subset of receivers, rather than those of all the “more degraded” receivers as in conventional superposition coding. This is why we call the scheme statistical superposition coding. 
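The step-wise procedure above can be prototyped in a few lines. The Python sketch below is our own illustration: it computes the decoding-indicator of one receiver for a given fading realization (the indicator is formally defined in the next subsection), using the information-theoretic success criterion at each step; the quantiles, power splits, and target rates are placeholder values chosen to satisfy the rate conditions of Proposition \[prop:k-user\]:

```python
import math

def ssc_indicator(a, rates, gammas, rho):
    """Run S-SC decoding at a receiver with squared fading gain a.
    Streams are attempted in the order K, K-1, ..., 1; every stream with
    a smaller index, plus every higher-indexed stream that suffered an
    outage, is treated as noise.  Returns d with d[l-1] = 1 iff message
    K+1-l is decoded (success: achievable rate exceeds the target)."""
    K = len(rates)
    undecoded = 0.0     # total power of higher-indexed streams in outage
    d = []
    for k in range(K, 0, -1):
        interference = sum(gammas[:k - 1]) * rho + undecoded
        rate = math.log(1 + a * gammas[k - 1] * rho / (a * interference + 1))
        if rate > rates[k - 1]:
            d.append(1)
        else:
            d.append(0)
            undecoded += gammas[k - 1] * rho
    return d

# Placeholder setup: K = 3 Rayleigh users sorted so that G1 >= G2 >= G3,
# with target rates set slightly inside the bounds of eqn. (Rk).
rho, gammas = 10.0, (0.2, 0.3, 0.5)
G = [-m * math.log(0.9) for m in (10.0, 3.0, 1.0)]   # quantiles at eps = 0.1
rates = [0.99 * math.log(1 + G[k] * gammas[k] * rho /
                         (G[k] * sum(gammas[:k]) * rho + 1))
         for k in range(3)]
```

For every fading draw, the ones in the returned indicator form a prefix, e.g. `[1, 1, 0]` but never `[1, 0, 1]`, in line with the structural claim established below.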
Proof of Proposition \[prop:k-user\] via S-SC --------------------------------------------- In the proof, it suffices to show that for any fixed power allocation vector $\underline{\gamma}$, the $k$th receiver employing S-SC achieves an outage probability no larger than $\epsilon_k$, $k = 1, \ldots, K$, if the target rate vector $\underline{R}$ satisfies $$\begin{aligned} \label{eqn:rate-condition-ssc} R_k < \log \left( 1 + \frac{G_k(\epsilon_k) \gamma_k{\rho}}{G_k(\epsilon_k) \sum_{m < k}\gamma_m {\rho}+ 1} \right).\end{aligned}$$ We prove this statement by induction. First, the statement obviously holds true for the $K$th receiver. Next, assuming that the statement holds true for all receivers with indexes larger than $k$, consider the $k$th receiver with $k \leq K - 1$. Let us introduce a decoding-indicator for the $k$th receiver, which is a length-$(K - k + 1)$ random vector $\underline{\rvd}^{(k)} \in \{0, 1\}^{K - k + 1}$, with $l$th element $\rvd^{(k)}_l = 1$ if the decoder at the $k$th receiver has successfully decoded $\rvm_{K + 1 - l}$, and $\rvd^{(k)}_l = 0$ otherwise. For example, consider a three-user BC with the first receiver ($k = 1$) obtaining $\underline{\rvd}^{(1)} = [1 \;\;0\;\; 1]$ in one particular channel realization. This means that the first receiver has first successfully decoded the message $\rvm_3$, then experienced an outage in attempting to decode $\rvm_2$, and finally decoded its own message $\rvm_1$ successfully. In general, any $(0, 1)$-vector of appropriate length can be realized as a valid decoding-indicator due to the randomness of the fading; however, the situation is considerably simplified under the condition in Proposition \[prop:k-user\], namely, the indexes of the $K$ receivers are sorted such that $G_1(\epsilon_1) \geq G_2(\epsilon_2) \geq \ldots \geq G_K(\epsilon_K)$. 
Under the condition of Proposition \[prop:k-user\], we claim that, if $\rvd^{(k)}_l = 1$ for some $l$, then $\rvd^{(k)}_{l^\prime} = 1$ for all $l^\prime \leq l - 1$; that is, the ones in $\underline{\rvd}^{(k)}$ form a prefix. In words, if the $k$th receiver successfully decodes the message for the $l$th ($l \geq k$) receiver, then it must have successfully decoded the messages for all the receivers with indexes larger than $l$. For example, the decoding-indicator $\underline{\rvd}^{(1)} = [1 \;\;0\;\; 1]$ is impossible in this case, but $\underline{\rvd}^{(1)} = [1 \;\;1 \;\; 1], [1 \;\;1\;\; 0], [1 \;\;0\;\; 0]$, or $[0 \; 0 \; 0]$ are possible decoding-indicators. We prove the claim by contradiction. Let us assume that there exists an execution of S-SC at the $k$th receiver with $\underline{\rvd}^{(k)}$, in which $\rvd^{(k)}_{l^\prime} = 0$ is the first zero element scanning from left to right, and $\rvd^{(k)}_l = 1$ for some $l > l^\prime$ is located to the right of $\rvd^{(k)}_{l^\prime}$ in $\underline{\rvd}^{(k)}$. Since $\rvd^{(k)}_{l^\prime} = 0$ is the first zero element in $\underline{\rvd}^{(k)}$, all the messages for the receivers with index larger than $(K + 1 - l^\prime)$ have been successfully decoded and thus eliminated from the received signal, before decoding $\rvm_{K + 1 - l^\prime}$. 
We therefore have $$\begin{aligned} \label{eqn:ssc-proof-1} \log\left(1 + \frac{\rva_k \gamma_{K + 1 - l^\prime}{\rho}} {1 + \rva_k \sum_{j = 1}^{K - l^\prime}\gamma_j {\rho}}\right) \leq R_{K + 1 - l^\prime}.\end{aligned}$$ Meanwhile, the induction hypothesis assumes that the target rate of the $(K + 1 - l^\prime)$-th receiver satisfies (\[eqn:rate-condition-ssc\]), [*i.e.*]{}, $$\begin{aligned} \label{eqn:ssc-proof-2} R_{K + 1 - l^\prime} < \log\left(1 + \frac{G_{K + 1 - l^\prime}(\epsilon_{K + 1 - l^\prime}) \gamma_{K + 1 - l^\prime} {\rho}}{1 + G_{K + 1 - l^\prime}(\epsilon_{K + 1 - l^\prime}) \sum_{j = 1}^{K - l^\prime}\gamma_j{\rho}}\right).\end{aligned}$$ Comparing (\[eqn:ssc-proof-1\]) and (\[eqn:ssc-proof-2\]), we find that the channel fading realization $\rva_k$ must satisfy $\rva_k < G_{K + 1 - l^\prime}(\epsilon_{K + 1 - l^\prime})$. On the other hand, $\rvd^{(k)}_l = 1$ implies $$\begin{aligned} \log\left(1 + \frac{\rva_k \gamma_{K + 1 - l}{\rho}}{1 + \rva_k \sum_{j = 1}^{K - l}\gamma_j {\rho}+ \rva_k \sum_{j = 1}^l \gamma_{K + 1 - j} (1 - \rvd^{(k)}_j) {\rho}}\right) > R_{K + 1 - l},\end{aligned}$$ where $\rva_k \sum_{j = 1}^l \gamma_{K + 1 - j} (1 - \rvd^{(k)}_j) {\rho}\geq 0$ accounts for the effect of those undecoded messages subject to outage in the previous S-SC decoding steps. 
So we further get $$\begin{aligned} \label{eqn:ssc-proof-3} \log\left(1 + \frac{\rva_k \gamma_{K + 1 - l}{\rho}}{1 + \rva_k \sum_{j = 1}^{K - l}\gamma_j {\rho}}\right) > R_{K + 1 - l}.\end{aligned}$$ Meanwhile, since the induction should hold true for any rate vector satisfying (\[eqn:rate-condition-ssc\]), we can choose an arbitrarily small $\delta > 0$ such that $$\begin{aligned} \label{eqn:ssc-proof-4} R_{K + 1 - l} > \log\left(1 + \frac{G_{K + 1 - l}(\epsilon_{K + 1 - l}) \gamma_{K + 1 - l} {\rho}}{G_{K + 1 - l}(\epsilon_{K + 1 - l})\sum_{j = 1}^{K - l} \gamma_j {\rho}+ 1}\right) - \delta.\end{aligned}$$ Comparing (\[eqn:ssc-proof-3\]) and (\[eqn:ssc-proof-4\]), we find that $\rva_k$ must satisfy $\rva_k \geq G_{K + 1 - l}(\epsilon_{K + 1 - l})$. Combining the two bounds on $\rva_k$, we obtain $$\begin{aligned} G_{K + 1 - l}(\epsilon_{K + 1 - l}) \leq \rva_k < G_{K + 1 - l^\prime}(\epsilon_{K + 1 - l^\prime}),\end{aligned}$$ which, since $l > l^\prime$, is in contradiction with the condition $G_1(\epsilon_1) \geq G_2(\epsilon_2) \geq \ldots \geq G_K(\epsilon_K)$. So the claim is proved. Having established the claim regarding the structure of decoding-indicators, we are ready to complete the proof of Proposition \[prop:k-user\], by evaluating the probability that the $k$th receiver does not experience an outage in decoding its own message $\rvm_k$. From our claim, the occurrence of this event implies that the messages $\{\rvm_K, \rvm_{K - 1}, \ldots, \rvm_{k + 1}\}$ have all been successfully decoded. It then follows that for every $R_k$ satisfying (\[eqn:rate-condition-ssc\]), the outage probability for decoding $\rvm_k$ is no larger than $\epsilon_k$. By induction, this concludes the proof of Proposition \[prop:k-user\]. Extension to a Transmitter with Multiple Antennas {#subsec:miso} ------------------------------------------------- As with B-DPC, S-SC can be extended to BCs with SIMO links, yielding Corollary \[cor:simo\] again.
Furthermore, S-SC can also be extended to BCs with multiple-input, single-output (MISO) links. In contrast, it is unclear how to accomplish this with B-DPC, because DPC is generally suboptimal in multiple-input Gaussian channels without utilizing the channel gain vector for precoding [@caire03:it]. Consider a $K$-user quasi-static fading Gaussian BC without transmit CSI, with the transmitter equipped with $m_\mathrm{t}$ antennas. The $k$th receiver output is given by $$\begin{aligned} \label{eqn:miso} \rvy_k[n] = \sum_{m = 1}^{m_\mathrm{t}} \rvh_{k, m} \rvx_m[n] + \rvz_k[n],\quad n = 1, \ldots, N.\end{aligned}$$ The additive noises $\rvz_k[\cdot] \sim \mathcal{CN}(0, 1)$ are i.i.d. The vector inputs $\bigl[ \rvx_1[\cdot], \ldots, \rvx_{m_\mathrm{t}}[\cdot] \bigr]^\mathrm{T}$ have an average power constraint ${\rho}$, [*i.e.*]{}, $$\begin{aligned} \frac{1}{N}\sum_{n = 1}^N \sum_{m = 1}^{m_\mathrm{t}} |\rvx_m[n]|^2 \leq {\rho},\end{aligned}$$ over the coding block of length $N$. The complex-valued random variable $\rvh_{k, m}$ denotes the fading coefficient for the link from the $m$th transmit antenna to the $k$th user. We have the following result. \[cor:miso\] For the channel model (\[eqn:miso\]), S-SC achieves an $\underline{\epsilon}$-outage achievable rate region identical to that described by $\mathcal{R}^\ast({\rho}, \underline{\epsilon})$ for the $K$-user quasi-static fading scalar Gaussian BC model (\[eqn:k-user\]), with $\rva_k$ replaced by $(1/m_\mathrm{t})\sum_{m = 1}^{m_\mathrm{t}}|\rvh_{k, m}|^2$. [*Case Study: Multiple Transmit Antennas with Spatially Uncorrelated Rayleigh Fading*]{} Unlike that of the receivers in a typical downlink system, the physical size of the transmitter is usually less constrained. Consequently, multiple transmit antennas without spatial correlation may be deployed. Here we examine the case of two receivers, each equipped with a single antenna, and with each link experiencing Rayleigh fading independent of the others.
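Computing the full region $\mathcal{R}^\ast({\rho}, \underline{\epsilon})$ requires optimizing over the power split, but the underlying outage events are straightforward to estimate by Monte Carlo. Below is a minimal two-receiver sketch: the weaker receiver decodes its own message treating the other as noise, while the stronger receiver runs S-SC. The power split `g1, g2`, the target rates, the mean channel gains, and the use of base-2 logarithms are illustrative assumptions, not values taken from the analysis above.

```python
import math
import random

def outage_probs(m_t, R1, R2, rho=100.0, g1=0.2, g2=0.8,
                 mean1=10.0, mean2=1.0, trials=50_000, seed=1):
    """Monte Carlo estimate of the two per-user outage probabilities for
    superposition coding: receiver 2 treats message 1 as noise, receiver 1
    runs S-SC (decode message 2 first, cancel it on success)."""
    rnd = random.Random(seed)
    out1 = out2 = 0
    for _ in range(trials):
        # effective gains a_k = (1/m_t) sum_m |h_{k,m}|^2, with |h|^2 exponential
        a1 = sum(rnd.expovariate(1.0 / mean1) for _ in range(m_t)) / m_t
        a2 = sum(rnd.expovariate(1.0 / mean2) for _ in range(m_t)) / m_t
        # receiver 2: decode its own message with message 1 as interference
        if math.log2(1.0 + a2 * g2 * rho / (1.0 + a2 * g1 * rho)) <= R2:
            out2 += 1
        # receiver 1: S-SC; a failed first layer stays as interference
        if math.log2(1.0 + a1 * g2 * rho / (1.0 + a1 * g1 * rho)) > R2:
            sinr1 = a1 * g1 * rho                 # message 2 cancelled
        else:
            sinr1 = a1 * g1 * rho / (1.0 + a1 * g2 * rho)
        if math.log2(1.0 + sinr1) <= R1:
            out1 += 1
    return out1 / trials, out2 / trials
```

With these numbers, increasing $m_\mathrm{t}$ concentrates the effective gain $(1/m_\mathrm{t})\sum_{m}|\rvh_{k, m}|^2$ around its mean, which drives both outage probabilities down.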
We assume that the two receivers are under a near-far situation, with each squared fading magnitude $|\rvh_{1, m}|^2$ of the first receiver having mean $10$ and each $|\rvh_{2, m}|^2$ of the second having mean $1$. The target outage probability vector is $\underline{\epsilon} = [0.01 \; 0.01]$, and the average power constraint $\rho$ is $20$ dB. Figure \[fig:miso\] depicts the $\underline{\epsilon}$-outage achievable rate regions $\mathcal{R}^\ast(\rho, \underline{\epsilon})$ for different values of the number of transmit antennas $m_\mathrm{t}$. The figure clearly illustrates that multiple transmit antennas substantially enlarge the outage achievable rate region. Concluding Remarks {#sec:conclude} ================== In this paper, we consider downlink transmission modeled as a quasi-static fading Gaussian BC without transmit CSI. We identify a non-trivial outage achievable rate region which always dominates that of time-sharing. We show that there exist two distinct coding schemes, namely B-DPC and S-SC, both achieving this outage achievable rate region. The analysis of these coding schemes highlights the statistical nature of the communication problem under an outage criterion: in order to be outage-efficient, it is not the performance for individual channel realizations, but instead the performance statistics, that play a key role. Acknowledgment {#acknowledgment .unnumbered} ============== The authors wish to thank Giuseppe Caire for encouragement and useful comments in preparing this paper. [^1]: The work of W. Zhang has been supported in part by NSF through grants NRT ANI-0335302, ITR CCF-0313392, and OCE-0520324; the work of S. Kotagiri and J. N. Laneman has been supported in part by NSF through grants CCF-0546618 and CNS-0626595, and a Graduate Student Fellowship from the Notre Dame Center for Applied Mathematics. The material in this paper was presented in part at the IEEE International Symposium on Information Theory (ISIT), Nice, France, June 2007. [^2]: W.
Zhang is with the Communication Sciences Institute, Ming Hsieh Department of Electrical Engineering, University of Southern California, Los Angeles, CA. Email: [wenyizha@usc.edu]{} [^3]: S. Kotagiri and J. N. Laneman are with the Department of Electrical Engineering, University of Notre Dame, Notre Dame, IN. Email: [{skotagir, jnl}@nd.edu]{}
Introduction ============ Conservation laws during collisions are a cornerstone of Newtonian mechanics. They have been generalized to special relativity and are ubiquitous in the interpretation of collider experiments. In general relativity, the question of conservation laws during collisions is subtler because the colliding (self-gravitating) objects affect spacetime itself. Some attention has been paid in the literature to the collision of shells in general relativity [@kssm81; @wu; @dt85; @redmount85; @bi91; @nos93; @bf96; @in99]. Only particular cases, however, have been considered, and the calculations are rather involved. One can distinguish two types of approaches: one based on (not always well justified) energy-momentum conservation laws, another based on geometrical constraints, especially in the case of light-like shells. Interest in collisions in general relativity has been renewed very recently in the context of brane-world cosmology by the idea that our current universe could be the product of a collision of 3-branes in a five-dimensional spacetime [@kost01; @kkl01; @bucher01]. In this Letter, we propose a unified treatment, based on a purely geometric approach, which considerably simplifies the calculations and thus immediately applies to [*any number*]{} of $n$-branes in a $D=n+2$ dimensional spacetime. The case $n=2$ corresponds to shells in standard general relativity, whereas the case $n=3$ applies to the collision of $3$-branes in the brane cosmology scenarios. The simplicity of our treatment follows from expressing the geometrical constraint as a sum rule for angles associated with generalised Lorentz boosts between the branes and the intervening spacetime regions. Local motion of a brane ======================= In an $n+2$ dimensional spacetime, $n$-branes divide the spacetime into distinct regions.
We will consider the simplest case where each such region is empty and can be described by a metric of the form $$ds^2=-f(R)\,dT^2+ \frac{dR^2}{f(R)} +R^2d\Omega_n^2, \label{metric}$$ where the ‘orthogonal’ metric $d\Omega_n^2$ does not depend on either $T$ or $R$. The well-known case of a Schwarzschild-(anti)-de Sitter spacetime corresponds to $f(R)=k-(\mu/R^{n-1})\mp(R/\ell)^2$. A brane at the boundary of this region is described by a two-dimensional trajectory $(T(\tau), R(\tau))$, where $\tau$ is the proper time. If we define the two-dimensional velocity vector $u^a=\left(\dot T, \dot R\right)$, where the dot denotes the derivative with respect to $\tau$, then by definition of the proper time, $u^a$ is normalized so that $g_{ab}u^a u^b= -f \dot T^2+f^{-1} \dot R^2= -1$. One can make connection with the formulas of special relativity by introducing a basis of normalized vectors, ${\bf e_T}= f^{-1/2}{\partial\over \partial T}$ and ${\bf e_R}=\sqrt{f}{\partial \over\partial R}$. One can then define a Lorentz factor $\gamma=- {\bf e_T}.{\bf u}$ and a relative velocity $\beta$, given by $\gamma\beta={\bf e_R}.{\bf u}$, which yields $$\gamma=\epsilon\,\frac{\sqrt{f+\dot R^2}}{\sqrt{f}}\,,\qquad \gamma\beta=\frac{\dot R}{\sqrt{f}}\,, \label{gamma}$$ where $\epsilon=+1$ if $R$ decreases from “left” to “right”, $\epsilon=-1$ otherwise. Equation (\[gamma\]) characterizes the motion of the brane $\B$ with respect to an observer at rest in the frame $\R$ defined by (\[metric\]). It is easy to check that this implies the standard special relativistic formula $\gamma=1/\sqrt{1-\beta^2}$. At any point along the brane trajectory there is a local transformation from the bulk coordinates $T$ and $R$ to the proper time along the brane, $\tau$, and Gaussian normal coordinate, $\chi$: $$\begin{pmatrix} d\tau \\ d\chi \end{pmatrix} = \Lambda(-\alpha) \begin{pmatrix} \sqrt{f}\,dT \\ dR/\sqrt{f} \end{pmatrix}, \label{coordchange}$$ where $\Lambda(\theta)$ is a two-dimensional Lorentz matrix $$\Lambda(\theta)= \begin{pmatrix} \cosh\theta & \sinh\theta \\ \sinh\theta & \cosh\theta \end{pmatrix}, \label{Lambda}$$ and $\alpha$ in Eq. (\[coordchange\]) is the Lorentz angle associated with the motion of the brane with respect to the original coordinate system $\R$, i.e.
$$\alpha=\sinh^{-1}\!\left(\dot R/\sqrt{f}\right). \label{alpha}$$ Junction conditions =================== Being of codimension $1$, the worldsheet of each brane we consider here will separate the spacetime in two disconnected regions: the left region, which we call $\R_-$, and the right region, which we call $\R_+$, with two metrics of the form (\[metric\]) on the two sides. The coordinate $R$ must be the same on the two sides because the orthogonal part of the metric must be continuous. The junction conditions can be written in the form [@israel] $$\left[K_{AB}\right]=-\kappa^2\left(S_{AB}-\frac{S}{n}\,g_{AB}\right),$$ where the left hand side is the jump of the extrinsic curvature tensor across the brane. $S_{AB}$ is the energy-momentum tensor of the brane, $S$ its trace, and $\kappa^2$ is the coupling between matter and gravity. For the orthogonal part, the extrinsic curvature components are $K_{ij}=(\epsilon/R) \sqrt{f+\dot R^2}\, g_{ij}$, which in the junction conditions yields the expression $$\epsilon_+\sqrt{f_++\dot R^2}-\epsilon_-\sqrt{f_-+\dot R^2} =-\frac{\kappa^2}{n}\,\rho R, \label{junction1}$$ where $\rho$ is the comoving energy density on the brane. This can be translated into a Friedmann-like equation inside the brane, which reads $$\dot R^2=\frac{\kappa^4}{4n^2}\,\rho^2R^2-\frac{1}{2}\left(f_++f_-\right) +\frac{n^2}{4\kappa^4\rho^2R^2}\left(f_+-f_-\right)^2. \label{friedmann}$$ The other part of the junction conditions is equivalent to the usual energy conservation law $\dot\rho+n (\dot R/ R)(\rho+P)=0$, where $P$ is the pressure. Let us now study the coordinate transformation that relates the coordinates $(T_+, R_+)$ to the coordinates $(T_-, R_-)$ (note that, for the brane, $R_-=R_+=R$, as imposed by the continuity of the metric along the ‘orthogonal’ directions). Because for both coordinate systems the metric is of the diagonal form (\[metric\]), the coordinate transformation is necessarily given by $$\begin{pmatrix} \sqrt{f_+}\,dT_+ \\ dR/\sqrt{f_+} \end{pmatrix} = \Lambda(\alpha) \begin{pmatrix} \sqrt{f_-}\,dT_- \\ dR/\sqrt{f_-} \end{pmatrix}, \label{lorentz}$$ where $\Lambda$ is a two-dimensional Lorentz matrix as defined in Eq. (\[Lambda\]).
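As a quick numerical sanity check, the Friedmann-like equation (\[friedmann\]) can be verified against the junction condition (\[junction1\]) it descends from: substituting the value of $\dot R^2$ back must reproduce $-\kappa^2\rho R/n$. The sketch below sets $\kappa=1$; the values of $n$, $\rho$, $R$, $f_\pm$ and the orientation choice $\epsilon_+=-1$, $\epsilon_-=+1$ are illustrative assumptions.

```python
import math

# Units with kappa = 1; n, rho, R, f_plus, f_minus and the signs are assumptions.
n, rho, R = 3, 6.0, 1.0
f_plus, f_minus = 0.8, 0.9
eps_plus, eps_minus = -1.0, 1.0
A = rho * R / n                      # (kappa^2 / n) * rho * R

# Friedmann-like equation for Rdot^2
rdot2 = A**2 / 4 - (f_plus + f_minus) / 2 + (f_plus - f_minus)**2 / (4 * A**2)

# junction condition: eps_+ sqrt(f_+ + Rdot^2) - eps_- sqrt(f_- + Rdot^2) = -A
lhs = eps_plus * math.sqrt(f_plus + rdot2) - eps_minus * math.sqrt(f_minus + rdot2)
```

The residual `lhs + A` vanishes to machine precision, confirming that the two expressions are algebraically consistent for this choice of signs.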
If one evaluates the coordinate transformation [*at the brane*]{}, it is easy to see that the angle is given by $\alpha=\alpha_+-\alpha_-$, where $\alpha_+$ and $\alpha_-$ are the Lorentz angles associated with the motion of the brane with respect to the coordinate systems $\R_+$ and $\R_-$ respectively, as defined in Eq. (\[alpha\]). Intuitively, this result is very easy to understand. It simply means that to go from the coordinates of the region $\R_-$ to the coordinates of the region $\R_+$, one must perform a (pseudo-)Lorentz transformation, which is the combination of a Lorentz transformation going from $\R_-$ to a system where the brane is at rest with a Lorentz transformation from the brane system to $\R_+$. System of several branes ======================== So far, we have considered only one brane and the two regions surrounding it. To describe the collision of a system of branes in general, we now introduce a system of $N=N_{in}+N_{out}$ branes, consisting of $N_{in}$ ingoing branes colliding simultaneously and of $N_{out}$ outgoing branes, which are produced by the collision. These $N$ branes are separated by $N$ different regions of spacetime, which are assumed to be empty but can be endowed with different cosmological constants and Schwarzschild masses. To simplify the formalism, we are going to label alternately branes and regions by integers, starting from the leftmost ingoing brane and going anticlockwise around the point of collision (see Fig. 1). The branes will thus be denoted by odd integers, $2k-1$ ($1\le k \le N$), and the regions by even integers, $2k$ ($1\le k \le N$). Let us introduce, as before, the angle $\alpha_{2k-1|2k}$ which characterizes the motion of the brane $\B_{2k-1}$ with respect to the region $\R_{2k}$, and which is defined by $$\sinh\alpha_{2k-1|2k}=\frac{\dot R_{2k-1}}{\sqrt{f_{2k}}}. \label{alpha_k}$$ Of course, we can equally describe the motion of the region $\R_{2k}$ with respect to the brane by the Lorentz angle $\alpha_{2k|2k-1}=-\alpha_{2k-1|2k}$.
We will find it convenient to define a rescaled brane density $$\tilde\rho_{2k-1}=\pm\frac{\kappa^2}{n}\,\rho_{2k-1} R,$$ with the plus sign for ingoing branes ($1\le k\le N_{in}$), the minus sign for outgoing branes ($N_{in}+1\le k \le N$). An outgoing positive energy density brane thus has the same sign as an ingoing negative energy density brane. The junction condition (\[junction1\]) then takes the simple form, $$\begin{aligned} \tilde\rho_{2k-1} &=& \epsilon_{2k}\sqrt{f_{2k}}\cosh\alpha_{2k-1|2k} \nonumber\\ && \ - \epsilon_{2k-2}\sqrt{f_{2k-2}} \cosh\alpha_{2k-2|2k-1} \,\end{aligned}$$ which can be further simplified, using the definition (\[alpha\_k\]), to give $$\begin{aligned} \label{junction2} \tilde\rho_{2k-1} &=& \epsilon_{2k}\sqrt{f_{2k}}\exp{(\pm\alpha_{2k-1|2k})} \nonumber\\ && \ - \epsilon_{2k-2}\sqrt{f_{2k-2}} \exp{(\mp\alpha_{2k-2|2k-1})} \label{rho}.\end{aligned}$$ Collision and conservation law ============================== In a small neighbourhood around the collision event, one can consider the change of coordinate systems between two regions in two ways: going from one region to the next anticlockwise or clockwise. Requiring the same result in the two cases means that the composition of the pseudo-Lorentz transformations must give the identity after a complete tour around the collision event. This gives the [*consistency relation*]{} $$\prod_{k=1}^{N}\Lambda\left(\alpha_{2k-1|2k}-\alpha_{2k-1|2k-2}\right) = {\bf Id},$$ where we identify the index $i=j+2N$ with $i=j$. This condition has been obtained recently in a more complicated derivation by Neronov [@neronov01] using the existence of common null coordinates. In terms of the Lorentz angles $\alpha$, this consistency relation is simply the sum rule $$\sum_{i=1}^{2N} \alpha_{i|i+1}=0. \label{collision}$$ This relation provides [*one constraint*]{}, which can be written in many ways.
What we will show is that this relation can be expressed in an extremely intuitive form, which can look either like energy conservation or, equivalently, like momentum conservation. The main result of this Letter is that, using the junction conditions (\[junction2\]), the sum rule (\[collision\]) can be written as the conservation law $$\sum_{k=1}^{N}\tilde\rho_{2k-1}\,e^{\alpha_{2k-1|j}}=0, \label{conservation}$$ for [*any value of the index $j$*]{}, where we have introduced the generalized relative angle $$\alpha_{j|j^\prime}=\sum_{i=j}^{j^\prime-1}\alpha_{i|i+1}, \label{relativeangle}$$ if $j<j'$, and $\alpha_{j'|j}=-\alpha_{j|j'}$. To prove Eq. (\[conservation\]), one can simply use Eq. (\[junction2\]) to substitute for $\tilde\rho_{2k-1}$ and obtain a sum over exponentials minus another sum which is in fact identical by Eq. (\[collision\]); hence the two sums cancel each other out. One must be aware that although one can use Eq. (\[conservation\]) to give many different expressions, there is only one underlying geometrical constraint, embodied in (\[collision\]). Let us now point out that the conservation law (\[conservation\]) can be written as an [*energy conservation law*]{} seen in the $j$-th reference frame, $$\sum_{k=1}^{N}\tilde\rho_{2k-1}\,\gamma_{j|2k-1}=0,$$ where $\gamma_{j|j'}\equiv \cosh\alpha_{j|j'}$ corresponds to the Lorentz factor between the brane/region $j$ and the brane/region $j'$ and can be obtained, if $j$ and $j'$ are not adjacent, by combining all intermediary Lorentz factors (this is simply using the velocity addition rule of special relativity), or the relative angle formula (\[relativeangle\]). The index $j$ corresponds to the reference frame with respect to which the conservation rule is written. But the conservation law (\[conservation\]) can also be written as a [*momentum conservation law*]{} in the $j$-th reference frame, $$\sum_{k=1}^{N}\tilde\rho_{2k-1}\,\gamma_{2k-1|j}\,\beta_{2k-1|j}=0,$$ with $\gamma_{j|j'}\beta_{j|j'}\equiv \sinh\alpha_{j|j'}$.
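The point-particle analogy can be made concrete numerically. In the sketch below, two ingoing branes $a$ and $b$ merge into a single outgoing brane $c$; the rescaled densities enter with a plus sign for ingoing and a minus sign for outgoing branes, and all numbers are illustrative assumptions. Because boosting to another frame shifts every relative angle by the same constant, the exponential form of the conservation law holds in all frames once it holds in one.

```python
import math

# Illustrative 2 -> 1 collision: rescaled densities of the ingoing branes
rho_a, rho_b = 2.0, 1.5
t_a = 0.6                                     # Lorentz angle of a in frame c
# choose the angle of b so that the total "momentum" vanishes in frame c:
# rho_a sinh(t_a) + rho_b sinh(t_b) = 0
t_b = -math.asinh(rho_a * math.sinh(t_a) / rho_b)
# "energy" conservation in frame c then fixes the outgoing density
rho_c = rho_a * math.cosh(t_a) + rho_b * math.cosh(t_b)

# exponential form of the conservation law, evaluated in a frame boosted by
# delta: every relative angle is shifted by the same amount
delta = 1.3
residual = (rho_a * math.exp(t_a + delta)
            + rho_b * math.exp(t_b + delta)
            - rho_c * math.exp(delta))        # minus sign: outgoing brane
```

The residual vanishes identically (since $\cosh+\sinh=\exp$), mirroring the fact that the conservation law holds for any choice of the reference index $j$; note also that $\tilde\rho_c \geq \tilde\rho_a + \tilde\rho_b$, the analogue of the mass increase in an inelastic particle collision.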
Note that the relation (\[collision\]) implies a strong analogy between the real exponentials (and the hyperbolic cosine and sine), which we are using here, and the complex exponentials (and the usual cosine and sine), by effectively imposing a periodicity. Light-like branes ================= Our formalism can also be extended to deal with the case of light-like branes [@bi91]. One cannot then introduce a local Lorentz transformation from an adjacent region to the brane frame, as we did above for time-like branes. But we can still consider the coordinate transformation from one of the adjacent regions to the other, which is still of the form (\[lorentz\]). Since we now have $\epsilon_\B dR=\epsilon_+f_+dT_+=\epsilon_-f_- dT_-$ (with $\epsilon_\B=+1$ for a left-moving brane, $\epsilon_\B=-1$ for a right-moving brane), one finds $\epsilon_+=\epsilon_-$ (we have been implicitly assuming here that $T$ is a time-like coordinate, i.e. $f>0$, but it is straightforward to generalize to the case $f<0$) and $$e^{\epsilon_\B\alpha}= \sqrt{\frac{f_-}{f_+}}\,.$$ For example, in the case of two ingoing light-like branes and two outgoing light-like branes, defining four regions $I$, $II$, $III$ and $IV$, the substitution of the above result in the sum rule (\[collision\]) immediately yields the DTR (Dray–’t Hooft–Redmount) formula [@dt85; @redmount85] $f_If_{III}=f_{II}f_{IV}$. This can be easily generalized to any combination of time-like branes with light-like branes, using the general sum rule for angles (\[collision\]) and grouping the angles in pairs for the light-like branes. Examples ======== Let us first consider the case of two ingoing branes, $a$ and $b$, colliding to give a single brane $c$, separated by the regions $I$, $II$ and $III$ (see Fig. 2). It is most convenient to express the energy conservation law in the frame of the outgoing brane. One finds $\rho_{c}= \rho_a\gamma_{a|c}+\rho_b\gamma_{b|c}$. Note that here, the outcome of the collision is completely determined by the situation before the collision. Indeed, the outgoing brane velocity and the energy density are interdependent, as can be seen in Eq. (\[friedmann\]). If one had several outgoing branes, then our conservation law would provide only one relation, which should be completed by other information (based, for instance, on the microphysics of the collision) in order to fully determine the outcome. A slightly more complicated case, but of direct relevance to the recent ekpyrotic scenario [@kost01] or other works on brane cosmology inspired by the Horava-Witten model [@kkl01], is when one of the ingoing branes, $a$ say, is a $Z_2$-symmetric orbifold fixed point. We assume that the second brane, $b$, is not $Z_2$-symmetric, otherwise one dimension of spacetime would disappear at the collision. Because of the mirror symmetry about $a$, one must consider two copies, $b$ and $b'$, of the incoming brane (see Fig. 3). We finally assume that the product of the collision is a single $Z_2$-symmetric brane, labelled $c$. Then, $Z_2$-symmetry combined with Eq. (\[collision\]) implies that $\alpha_{a|c}=0$, i.e. there is no redshift between the ingoing and outgoing $Z_2$-symmetric branes. As a consequence, the energy conservation law reads simply $\rho_c=\rho_a+2\rho_b\gamma_{b|a}$. Note that the total momentum is automatically zero in the frame comoving with $a$ or $c$; however, $\dot{R}_c$ is only zero if $\rho_c$ has the critical value given by Eq. (\[friedmann\]). Of course, if one considers the peeling off from an initial $Z_2$-symmetric brane, $a$, of a brane $b$, then one gets the conservation law $\rho_a=\rho_c+2\rho_b\gamma_{b|c}$, where the $Z_2$-symmetric brane after collision is labelled $c$. Further applications of our results will be discussed in a separate publication [@lmw01b]. Conclusions =========== We have presented here a general treatment of the collision of branes in a vacuum spacetime.
We have extended to the general case of time-like branes the geometrical constraint characterizing the collision, which was known before only in the special case of light-like branes [@dt85; @redmount85]. This constraint can be expressed in the simple form of a sum rule for hyperbolic angles. The general relativistic junction conditions (\[junction2\]) allow us to relate these angles to the energy or momentum of branes at the collision. In this way we obtain extremely simple and intuitive energy and momentum conservation laws, which are analogous to those for the collision of point particles in two-dimensional special relativity. One can envisage extending our formalism to the case where the spacetime regions between the branes need be neither empty nor static. One immediate generalization would be to consider a Reissner–Nordström–(anti-)de Sitter metric in (\[metric\]). The formalism would then be unchanged, but one would have to supplement the conservation law (\[collision\]) with the conservation of the brane charges. In general, however, the generalization will be complicated by the need to take into account the junction conditions for the bulk fields and by the possibility that part of the energy at the collision might dissipate into excitations of the bulk field. Nonetheless, we believe that our approach, by its simplicity, is likely to be a useful starting point for such a generalization. One application of our formalism will be to shed some light on the evolution of cosmological perturbations through a 3-brane collision [@lmw01b]. We thank C. Barrabes and J. A. Vickers for discussions. DL thanks the Portsmouth RCG for its hospitality and KM thanks both the IAP and the Portsmouth RCG for their hospitality. This work was supported in part by the Yamada Foundation. DW is supported by the Royal Society. [99]{} H. Kodama, M. Sasaki, K. Sato, and K. Maeda, Prog. Theor. Phys. [**66**]{}, 2052 (1981). S. W. Hawking, I. G. Moss and J. M. Stewart, Phys. Rev.
D [**26**]{}, 2681 (1982); Z. C. Wu, Phys. Rev. D [**28**]{}, 1898 (1983). T. Dray and G. ’t Hooft, Commun. Math. Phys. [**99**]{}, 613 (1985); T. Dray and G. ’t Hooft, Class. Quant. Grav. [**3**]{}, 825 (1986). I.H. Redmount, Prog. Theor. Phys. [**73**]{}, 1401 (1985). C. Barrabes, W. Israel, Phys. Rev. D [**43**]{}, 1129 (1991). D. Nunez, H.P. de Oliveira, J. Salim, Class. Quantum Grav. [**10**]{}, 1117 (1993). C. Barrabes, V. Frolov, Phys. Rev. D [**53**]{}, 3215 (1996). D. Ida, K. Nakao, Prog. Theor. Phys. [**101**]{}, 989 (1999). J. Khoury, B. A. Ovrut, P. J. Steinhardt and N. Turok, Phys. Rev. D [**64**]{}, 123522 (2001) \[arXiv:hep-th/0103239\]. R. Kallosh, L. Kofman and A. D. Linde, Phys. Rev. D [**64**]{}, 123523 (2001) \[arXiv:hep-th/0104073\]. M. Bucher, “A braneworld universe from colliding bubbles”, hep-th/0107148 W. Israel, Nuovo Cim. [**44B**]{}, 1 (1966). A. Neronov, JHEP [**0111**]{}, 007 (2001) \[arXiv:hep-th/0109090\]. D. Langlois, K. Maeda, D. Wands, in preparation.
--- abstract: 'The matrices of spanning rooted forests are studied as a tool for analyzing the structure of digraphs and measuring their characteristics. The problems of revealing the basis bicomponents, measuring vertex proximity, and ranking from preference relations / sports competitions are considered. It is shown that the vertex accessibility measure based on spanning forests has a number of desirable properties. An interpretation for the normalized matrix of out-forests in terms of information dissemination is given.' address: | Institute of Control Sciences of the Russian Academy of Sciences\ 65 Profsoyuznaya str., Moscow 117997, Russia author: - Pavel Chebotarev - Rafig Agaev bibliography: - 'c:/pavel/bibli/pavel/all2.bib' title: Matrices of Forests and the Analysis of Digraphs --- Keywords: Laplacian matrix, spanning forest, matrix-forest theorem, proximity measure, bicomponent, ranking, incomplete tournament, paired comparisons. Introduction ============ The matrices of routes between vertices are useful for analyzing the structure of graphs and digraphs. These matrices are the powers of the adjacency matrix. In this paper, we consider the matrices of spanning rooted forests as an alternative tool for analyzing digraphs. We show how they can be used for finding the basis (source) bicomponents of a digraph (Section \[stru\]), for measuring vertex proximity (Section \[dosti\]), and for ranking on the base of preference relations / sports competitions (Section \[leader\]). In the initial sections, we introduce the necessary notation (Section \[Notatio\]), present recurrent formulae for calculating the “forest matrices” (Section \[sec\_constr\]), and list some properties of spanning rooted forests and forest matrices (Section \[sect\_prop\]). Three features that distinguish the matrices of forests from the matrices of routes are notable.
First, all column sums (or row sums) of the forest matrices are the same; therefore, these matrices can be considered as matrices of [*relative*]{} accessibility. Second, there are matrices of “out-forests” and matrices of “in-forests”, enabling one to distinguish “out-accessibility” from “in-accessibility”, which is intuitively defensible. Third, the total weights of maximum spanning forests are closely related to the Cesàro limiting probabilities of Markov chains determined by the digraph under consideration. Notation and simple facts {#Notatio} ========================= Weighted digraphs, components, and bases ---------------------------------------- Suppose that $\G$ is a weighted digraph without loops, $V(\G)=\{\1n\},$ $n>1,$ is its set of vertices and $E(\G)$ the set of arcs. Let $W=(\e\_{ij})$ be the matrix of arc weights. Its entry $\e\_{ij}$ is zero if there is no arc from vertex $i$ to vertex $j$ in $\G$; otherwise $\e\_{ij}$ is strictly positive. In what follows, $\G$ is fixed, unless otherwise specified. If $\G'$ is a subgraph of $\G$, then the weight of $\G'$, $\e(\G')$, is the product of the weights of all its arcs; if $E(\G')=\varnothing$, then $\e(\G')=1$ by definition. The weight of a nonempty set of digraphs $\GG$ is $$\e(\GG)=\sum_{H\in\GG}\e(H);\qquad \e(\varnothing)=0. \label{set_weight}$$ [*A spanning subgraph*]{} of $\G$ is a subgraph with vertex set $V(\G)$. The [*indegree*]{} $\id(\ve)$ and [*outdegree*]{} $\od(\ve)$ of a vertex $\ve$ are the number of arcs that come [*in*]{} $\ve$ and [*out of*]{} $\ve$, respectively. A vertex $\ve$ is called a [*source*]{} if $\id(\ve)=0$. A vertex $\ve$ is [*isolated*]{} if $\id(\ve)=\od(\ve)=0$. A [*route*]{} ([*semiroute*]{}) is an alternating sequence of vertices and arcs $\ve\_0,e\_1,$ $\ve\_1\cdc e\_k, \ve\_k$ with every arc $e\_i$ being $(\ve\_{i-1},\ve\_i)$ (resp., either $(\ve\_{i-1},\ve\_i)$ or $(\ve\_i,\ve\_{i-1})$). A [*path*]{} is a route with distinct vertices.
A [*circuit*]{} is a route with $\ve\_0=\ve\_k$, the other vertices being distinct and different from $\ve\_0$. Vertex $\ve$ [*is reachable*]{} from vertex $z$ in $\G$ if $\ve=z$ or $\G$ contains a path from $z$ to $\ve$. A digraph is [*strongly connected*]{} (or [*strong*]{}) if all of its vertices are mutually reachable and [*weakly connected*]{} if any two different vertices are connected by a semiroute. Any maximal strongly connected (weakly connected) subgraph of $\G$ is a [*strong component*]{}, or a [*bicomponent*]{} (resp., a [*weak component*]{}) of $\G$. Let $\G_1\cdc\G_r$ be all the strong components of $\G$. The [*condensation*]{} (or [*factorgraph*]{}, or [*leaf composition*]{}, or [*Hertz graph*]{}) $\G^{\circ}$ of digraph $\G$ is the digraph with vertex set $\{\G_1\cdc\G_r\}$, where arc $(\G_i,\G_j)$ belongs to $E(\G^{\circ})$ iff $E(\G)$ contains at least one arc from a vertex of $\G_i$ to a vertex of $\G_j$. The condensation of any digraph $\G$ obviously contains no circuits. A [*vertex basis*]{} of a digraph $\G$ is any minimal (by inclusion) collection of vertices such that every vertex of $\G$ is reachable from at least one vertex of the collection. If a digraph does not contain circuits, then its vertex basis is obviously unique and coincides with the set of all sources [@Harary69; @Zykov69]. That is why the bicomponents of $\G$ that correspond to the sources of $\G^{\circ}$ are called the [*basis bicomponents*]{} [@Zykov69] or [*source bicomponents*]{} of $\G$. In this paper, the term [*source knot of*]{} $\G$ will stand for the set of vertices of any source bicomponent of $\G.$ In [@FiedlerSedlacek58], source knots are called [*W-bases*]{}. The following statement [@Harary69; @Zykov69] characterizes all the vertex bases of a digraph. \[proZy\] A set $U\subseteq V(\G)$ is a vertex basis of $\G$ if and only if $U$ contains exactly one vertex from each source knot of $\G$ and no other vertices. 
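In computational terms, the source knots can be obtained from the condensation: compute the strong components (e.g., by Kosaraju's two-pass algorithm) and keep those with no incoming arc from another component. The sketch below and its example digraph are illustrative assumptions, not part of the original development.

```python
def strong_components(adj):
    """Kosaraju's two-pass algorithm; adj[u] lists the out-neighbours of u.
    Returns (comp, c): the component label of every vertex and the label count."""
    n = len(adj)
    order, seen = [], [False] * n
    def dfs(u):                       # first pass: record finishing order
        seen[u] = True
        for v in adj[u]:
            if not seen[v]:
                dfs(v)
        order.append(u)
    for u in range(n):
        if not seen[u]:
            dfs(u)
    radj = [[] for _ in range(n)]     # reversed digraph
    for u in range(n):
        for v in adj[u]:
            radj[v].append(u)
    comp, c = [-1] * n, 0
    for u in reversed(order):         # second pass on the reversed digraph
        if comp[u] == -1:
            stack, comp[u] = [u], c
            while stack:
                x = stack.pop()
                for y in radj[x]:
                    if comp[y] == -1:
                        comp[y] = c
                        stack.append(y)
            c += 1
    return comp, c

def source_knots(adj):
    """Vertex sets of the basis (source) bicomponents of the digraph."""
    comp, c = strong_components(adj)
    has_in = [False] * c
    for u in range(len(adj)):
        for v in adj[u]:
            if comp[u] != comp[v]:    # arc between distinct bicomponents
                has_in[comp[v]] = True
    return {frozenset(v for v in range(len(adj)) if comp[v] == i)
            for i in range(c) if not has_in[i]}

# example: bicomponents {0,1}, {2,3}, {4}; only {0,1} and {4} are source knots
knots = source_knots([[1], [0, 2], [3], [2], [3]])
```

By Proposition \[proZy\], every vertex basis of this example consists of vertex $4$ together with exactly one of the vertices $0$ and $1$.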
\[sec2\] Schwartz [@Schwartz86] referred to the source knots of a digraph as [*minimum $P$-undominated sets*]{}. According to his [*Generalized Optimal Choice Axiom*]{} (GOCHA), if a digraph represents a preference relation on a set of alternatives, then the [*choice*]{} should be the union of its minimum $P$-undominated sets.[^1] This choice is interpreted as the set of “best” alternatives. A review of choice rules of this kind can be found in [@Vol'skii88]; for “fuzzy” extensions, see [@Roubens96FSS]. Matrices of forests ------------------- A [*diverging tree*]{} is a weakly connected digraph in which one vertex (called the [*root*]{}) has indegree zero and the remaining vertices have indegree one. A diverging tree is said to [*diverge from*]{} its root. Spanning diverging trees are sometimes called [*out-arborescences*]{}. A [*diverging forest*]{} (or [*diverging branching*]{}) is a digraph all of whose weak components are diverging trees. The roots of these trees are called the roots of the diverging forest. A [*converging tree*]{} ([*converging forest*]{}) is a digraph that can be obtained from a diverging tree (resp., diverging forest) by the reversal of all arcs. The roots of a converging forest are its vertices that have outdegree zero. In what follows, spanning diverging forests in $\G$ will be called [*out-forests*]{} of $\G$; spanning converging forests in $\G$ will be called [*in-forests*]{} of $\G$. \[De-Max\] An out-forest $F$ of a digraph $\G$ is called a [*maximum out-forest*]{} of $\G$ if $\G$ has no out-forest with a greater number of arcs than in $F$. It is easily seen that every maximum out-forest of $\G$ has the minimum possible number of diverging trees; this number will be called the [*out-forest dimension*]{} of $\G$ and denoted by $\di$. It can be easily shown that the number of arcs in any maximum out-forest is $n-\di$; in general, the number of weak components in a forest with $k$ arcs is $n-k$. 
By $\FF^{\rto}(\G)=\FF^{\rto}$ and $\FF^{\rto}_k(\G)=\FF^{\rto}_k$ we denote the set of all out-forests of $\G$ and the set of all out-forests of $\G$ with $k$ arcs, respectively; $\FF^{i\rto j}_k$ will designate the set of all out-forests with $k$ arcs where $j$ belongs to a tree diverging from $i$; $\FF^{i\rto j}=\bigcup_{k=0}^{n-\di}\FF^{i\rto j}_k$ is the set of such out-forests with all possible numbers of arcs. The notation like $\FF^{\rto}_{(k)}$ will be used for sets of out-forests that consist of $k$ trees, so $\FF^{\rto}_{(k)}=\FF^{\rto}_{n-k},\;k=\1n.$ Thus, the ${*}\hspace{-.4em}\to$ sign relates to out-forests; the corresponding notation with $\to\hspace{-.3em}{*}\,$, such as $\FF^{\tor},$ relates to in-forests, i.e., $\ast$ images the root(s). Let \[sik\] $\si_k=\e(\FF^{\rto}_k),\quad k=0,1,\ldots;\qquad \si=\e(\FF^{\rto})=\sum_{k=0}^{n-\di}\si_k.$ \[si\] By (\[sik\]) and (\[set\_weight\]), $\si_k=0$ whenever $k>n-\di;$ $\si_0=1.$ We also introduce the parametric value \[sitau\] $\si(\tau)=\sum_{k=0}^{n-\di}\e(\FF^{\rto}_k)\,\tau^k=\sum_{k=0}^{n-\di}\si_k\tau^k,\quad\tau>0,$ which is the total weight of out-forests in $\G$ provided that all arc weights are multiplied by $\tau$. Consider the [*matrices $Q_k=(q_{ij}^k),\; k=0,1,\ldots,$ of out-forests of $\G$ with $k$ arcs*]{}: the entries of $Q_k$ are \[qijk\] $q_{ij}^k=\e(\FF_k^{i\rto j}).$ By (\[qijk\]) and (\[set\_weight\]), $Q_k=0$ whenever $k>n-\di;$ $Q_0=I.$ The [*matrix of all out-forests*]{} is \[qij\] $Q=(q_{ij})=\sum_{k=0}^{n-\di}Q_k,\qquad q_{ij}=\e(\FF^{i\rto j}).$ We will also consider the [*normalized matrices of out-forests*]{}: \[Jk\] \[J\] $J_k=\si_k^{-1}Q_k,\quad k=\0n-\di;\qquad J=(J_{ij})=\si^{-1}Q$ and the parametric matrices \[Qtau\] \[Jtau\] $Q(\tau)=\sum_{k=0}^{n-\di}Q_k\tau^k,\qquad J(\tau)=\si^{-1}(\tau)\,Q(\tau),\quad \tau>0,$ where $\si_k,$ $\si,$ and $\si(\tau)$ are defined by (\[sik\]) and (\[sitau\]). The [*normalized matrix of maximum out-forests*]{} $J_{n-\di}\/$ will also be denoted by $\J=(\J_{ij})$: \[Jbar\] $\J=J_{n-\di}.$
Counting forests by means of linear algebra {#sec_constr} =========================================== \[Sec\_Poli\] An algorithmic description of maximum out-forests was given in [@AgaChe00]. Another algorithm for the enumeration of out-forests can be obtained by adding a fictitious vertex $0$ along with the arcs $(0,v)$ for all vertices $v$ of $\G$ and enumerating the diverging trees in the supplemented digraph. The [*column Laplacian*]{} matrix of $\G$ is the $n\times n$ matrix $L=L(\G)=(\l\_{ij})$ with entries $\l\_{ij}=-\e\_{ij}$ whenever $j\ne i$ and $\l\_{ii}=-\suml_{k\ne i}\l\_{ki}$, $i,j=\1n$. This matrix has zero column sums; in [@CheAga02a+] we denoted it $L'$; here, for simplicity, the designation $L$ is used. A [*row Laplacian*]{} matrix differs from the column Laplacian matrix in the diagonal only: its diagonal is such that the row sums are zero. The row and column Laplacian matrices are singular M-matrices (see, e.g., [@CheAga02a+ p. 258]). Their index is 1 [@CheAga02a+ Proposition 12]. The spectra of the row Laplacian matrices were studied in [@AgaChe05LAA]. The following theorem provides a method to calculate the forest matrices by means of linear algebra. \[pro.allk\] $\!\!\!$[ [@CheAga02a+].]{} ${\displaystyle\; Q_{k+1}=\si\_{k+1}\!I-LQ_{k};\;\;\; \si_{k+1}=\frac{\tr(LQ\_k)}{k+1}, \;\;\;k=0,1,\ldots.}$ Hence, ${\displaystyle Q_{k+1}=\frac{\tr(LQ_k)}{k+1}\,I-LQ_k,\;\;\;k=0,1,\ldots.}$
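As a numerical sketch of this recurrence (Python with numpy; the three-vertex path digraph and its column Laplacian are an invented example), starting from $Q_0=I$ one obtains all the $\si_k$ and $Q_k$; as a cross-check, the sum of the $\si_k$ equals $\det(I+L)$, in accordance with the matrix-forest theorem.

```python
import numpy as np

def forest_matrices(L):
    """sigma_k and Q_k for k = 0..n via the recurrence of the theorem:
    Q_{k+1} = sigma_{k+1} I - L Q_k,  sigma_{k+1} = tr(L Q_k) / (k + 1),
    starting from Q_0 = I."""
    n = L.shape[0]
    Q = np.eye(n)
    sigmas, Qs = [1.0], [Q]
    for k in range(n):
        LQ = L @ Q
        s = float(np.trace(LQ)) / (k + 1)
        Q = s * np.eye(n) - LQ
        sigmas.append(s)
        Qs.append(Q)
    return sigmas, Qs

# Column Laplacian of the invented path digraph 1 -> 2 -> 3 (unit weights).
# Its out-forests have total weights sigma_0, sigma_1, sigma_2 = 1, 2, 1.
L = np.array([[0., -1., 0.],
              [0.,  1., -1.],
              [0.,  0.,  1.]])
sigmas, Qs = forest_matrices(L)
```

For this digraph the single maximum out-forest is the path itself, so the top matrix `Qs[2]` has ones exactly in row 1 (everything diverges from the source) and zeros elsewhere.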
Consider the matrices $L\_{k}\stackrel{{\rm def}}{=}\si\_{k}\!I-Q\_{k},$ $k=0,1,\ldots.$ Obviously, $L\_{k}$ is the Laplacian matrix $L(\G^{k})$ of the [*digraphs $\G^{k}$ of out-forests of $\G$*]{}: the arc weights in $\G^{k}$ are the off-diagonal entries of $Q\_{k}.$ Then, by Theorem \[pro.allk\], we have $L\_{k+1}=LQ\_k$ and $\,\tr(L\_k)=k\si\_k,\;k=0,1,\ldots.$ This implies the following recurrent formula for $L\_{k+1}$: ${\displaystyle L\_{k+1}=L\left(\frac{\tr(L\_k)}{k}I-L\_k\right),\;k=1,2,\ldots.}$ Properties of the forest matrices {#sect_prop} ================================= A number of results on the forest matrices are presented in [@CheAga02a+]. Some of them are collected in the following theorem. \[th4\] \[sumsum7\] \[teo.allk\] $\!\!\!$[ [@CheAga02a+].]{} [1.]{} Matrices $J\_k,\;k=\0n-\di,$ $J,$ and $J(\tau)$ are column stochastic.\ [2.]{} For any $\tau>0,$ $Q(\tau)=\adj(I+\tau L)$ and $\si(\tau)=\det(I+\tau L)$ hold$,$ whence$,$ $J(\tau)=(I+\tau L)^{-1}.$\ [3.]{} $L\q=\q L=0$.\ [4.]{} $\vj$ is idempotent$:$ $\;\q^{2}=\q.$\ [5.]{} $\J =\lim_{\tau\to\infty}J(\tau) =\lim_{\tau\to\infty} (I+\tau\,L)^{-1}.$\ [6.]{} $\rank(\J)=\di;\;\rank(L)=n-\di$.\ [7.]{} ${Q\_{k} =\sum_{i=0}^k\si\_{k-i}(-L)^i,\;\;\; k=0,1,\ldots.}$\ [8.]{} $\J$ is the eigenprojection of $L$. Item 2 of Theorem \[th4\] is a parametric version of the [*matrix-forest theorem*]{} [@CheSha97]. To formulate the topological properties of the matrix $\J$, the following notation is needed. Let $\ktil=\bigcup_i K\_i$, where $K\_i$ are all the source knots of $\G$; let $K_i^{+}$ be the set of all vertices reachable from $K\_i$ and unreachable from the other source knots. For any $k\in\ktil$, $K(k)$ will designate the source knot that contains $k$. For any source knot $K$ of $\G,$ denote by $\G_K$ the restriction of $\G$ to $K$ and by $\G_{-K}$ the subgraph with vertex set $V(\G)$ and arc set $E(\G)\setminus E(\G_K)$. 
For a fixed $K$, $\TT$ will designate the set of all spanning diverging trees of $\G_K$, and $\PP$ the set of all maximum out-forests of $\G_{-K}$. By $\TT^k,$ $k\in K,$ we denote the subset of $\TT$ consisting of all trees that diverge from $k$, and by $\PP^{K\rto j},$ $j\in V(\G),$ the set of all maximum out-forests of $\G_{-K}$ such that $j$ is reachable from some vertex that belongs to $K$ in these forests. $\q_{k\bullet}$ is the $k$th row of $\q$. \[th2\] $\!\!\!$[ [@AgaChe00].]{} [Let $K$ be a source knot in $\G$. Then the following statements hold.\ [1.]{} $\q_{ij}\ne 0\;\Leftrightarrow\; (i\in\ktil$ and $j$ is reachable from $i$ in $\G).$\ [2.]{} Let $k\!\in\!K.\!$ For any $j\in V(\G),$ $\q_{kj}=\e(\TT^k)\e(\PP^{K\rto j})\slash \e(\FF^{\rto}_{(\di)}).\!$ Furthermore$,$ if $j\in K^+,$ then $\q_{kj}=\q_{kk}=\e(\TT^k)\slash \e(\TT).$\ [3.]{} $\suml_{k\in K}\q_{kk}=1.$ In particular$,$ if $k$ is a source$,$ then $\q_{kk}=1.$\ [4.]{} For any $k\_1,k\_2\in K$, $\q_{k\_2\!\bullet}=(\e(\TT^{k\_2}) \slash\e(\TT^{k\_1}))\q_{k\_1\!\bullet}$ holds$,$ i.e.$,$ the rows $k\_1$ and $k\_2$ of $\q$ are proportional.]{} We say that a weighted digraph $\G$ and a finite homogeneous Markov chain with transition probability matrix $P$ [*inversely correspond*]{} to each other if \[G\_M\_cor\] $I-P=\a L^{\interca},$ where $\a$ is any nonzero real number. If a Markov chain inversely corresponds to $\G,$ then the probability of transition from $j$ to $i\ne j$ is proportional to the weight of arc $(i,j)$ in $\G$ and is $0$ if $E(\G)$ does not contain $(i,j).$ We consider such an [*inverse*]{} correspondence in order to model preference digraphs in Section \[leader\]: in this case, the transitions in the Markov chain are performed from “worse” objects to “better” ones, so the Markov chain stochastically “searches the leaders.” \[M\] For any finite Markov chain$,$ its matrix of Cesàro limiting probabilities coincides with the matrix $\J$ of any digraph inversely corresponding to this Markov chain.
Theorem \[M\] follows from the [*Markov chain tree theorem*]{} [@LeightonRivest83; @LeightonRivest86], which, in turn, can be immediately proved using item 8 of Theorem \[th4\] and a result of [@Rothblum76a] (see [@CheAga02a+]). Another proof of Theorem \[M\] can be found in [@chebotarev02spanning]. A review of forest representations of Markov chain probabilities is given in [@Che04MCTT]. For an interpretation of $J(\tau)$ in terms of Markov chains we refer to [@AgaChe01]. Detecting the source knots of a digraph {#stru} ======================================= In this section, we show that entry $ij$ of $(I+\tau L)^{-1}$, where $\tau>0$, is nonzero if and only if $j$ is reachable from $i$ in $\G$ and that $\J$ points out the source knots of $\G$ and the vertices reachable from each of them. The [*reachability matrix*]{} of a digraph is the matrix $R=(r\_{ij})_{n\times n}$ with entries $$r\_{ij}= \cases{ 1, &if $j$ is reachable from $i$,\cr 0, &otherwise. }$$ It follows from the definition (\[Jtau\]) of $J(\tau)$ that for every $\tau>0$, $R=\sgn(J(\tau))$, where the signum function operates entrywise. Recall that, by item 2 of Theorem \[teo.allk\], $J(\tau)$ can be calculated as follows: $J(\tau)=(I+\tau L)^{-1}.$ So we obtain \[reach\] For every $\tau>0,$ $R=\sgn\left((I+\tau L)^{-1}\right)$. An algebraic way to recover the bicomponents of a digraph is to calculate the [*mutual reachability matrix*]{}, which is the Hadamard (entrywise) product of the reachability matrix and its transpose. Note that the standard algebraic means of finding the reachability matrix of a digraph is to compute $(I+A)^{n-1}$, where $A$ is the adjacency matrix, or to successively calculate the power matrices $(I+A)^k$ until the positions of nonzero entries stabilize; then, in both cases, the nonzero entries of the resulting matrix should be replaced by ones [@Zykov69]. The matrices $(I-\a A)^{-1}$ with sufficiently small $\a>0$ can also be used to this end.
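Proposition \[reach\] translates directly into a few lines of code. A small sketch (Python with numpy; the path digraph is an invented example, and a tolerance replaces the exact zero test in floating point):

```python
import numpy as np

def reachability(L, tau=1.0, tol=1e-12):
    """Reachability matrix from the column Laplacian:
    R = sgn((I + tau*L)^{-1}) entrywise, valid for any tau > 0."""
    n = L.shape[0]
    J = np.linalg.inv(np.eye(n) + tau * L)
    return (np.abs(J) > tol).astype(int)

# Invented path digraph 1 -> 2 -> 3: vertex 1 reaches everything,
# vertex 2 reaches {2, 3}, and vertex 3 only itself.
L = np.array([[0., -1., 0.],
              [0.,  1., -1.],
              [0.,  0.,  1.]])
R = reachability(L)
```

Since the entries of $J(\tau)$ are nonnegative weights of forest sets, thresholding their absolute values against a small tolerance is a faithful floating-point substitute for the signum function.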
The following result does not provide an effective way to find the reachability matrix, but it contributes to the understanding of the nature of the forest matrices. By (\[Jtau\]), $J(\tau)=\si^{-1}(\tau)\suml_{k=\di}^{n} Q\_{(k)}\tau^{n-k}$, where $Q\_{(k)}\stackrel{{\rm def}}{=}Q\_{n-k}.$ It turns out that all information about the digraph reachability is accumulated in $Q\_{(\di)}$ and $Q\_{(\di+1)}$. This follows from \[p1.2\] For any $i,j\in V(\G)$ and any path from $i$ to $j$ in $\G,$ there exists an out-forest in $\FF^{i\rto j}_{(\di)}\cup\FF^{i\rto j}_{(\di+1)}$ that contains this path. [For the given path and any maximum out-forest in $\G$, consider their join and remove all arcs of the out-forest that enter the vertices of the path but do not belong to the path. The resulting subgraph contains neither circuits nor vertices $\ve$ with $\id(\ve)>1,$ i.e., it is an out-forest. It is rooted at $i$ and contains at least $n-\di-1$ arcs, including the arcs of the given path. Hence, it belongs to $\FF^{i\rto j}_{(\di)} \cup\FF^{i\rto j}_{(\di+1)}.$]{} This implies \[p1.3\] $R=\sgn\!\left(J\_{(\di)}+J\_{(\di+1)}\right),$ where $J\_{(k)}\stackrel{{\rm def}}{=}J\_{n-k}.$ In some cases, the main goal is to find the source knots and the vertices reachable from each of them. For example, as was noted in Section \[sec2\], this is the case in choice theory if the Generalized Optimal Choice Axiom (GOCHA) is adopted. Then the union $\ktil=\bigcup^{\di}_{i=1}K_i$ of the source knots is the set of “best” alternatives chosen on the basis of a preference relation or a digraph of preferences [@Schwartz86] (cf. [@Vol'skii88; @Laslier97; @Roubens96FSS]). It turns out that $\q$ immediately reveals $\ktil$.
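As a numerical illustration of this idea (Python with numpy; the four-vertex digraph below is an invented example), $\J$ can be approximated by $(I+\tau L)^{-1}$ with a large $\tau$, and the source knots then appear as the nonzero diagonal blocks of the entrywise product of $\sgn(\J)$ with its transpose:

```python
import numpy as np

def mutual_top_reachability(L, tau=1e8, tol=1e-6):
    """Approximate Jbar = lim_{tau -> inf} (I + tau*L)^{-1}; then
    Rhat = sgn(Jbar), and the Hadamard product Rhat * Rhat^T marks
    exactly the pairs of vertices lying in the same source knot."""
    n = L.shape[0]
    Jbar = np.linalg.inv(np.eye(n) + tau * L)
    Rhat = (np.abs(Jbar) > tol).astype(int)
    return Rhat * Rhat.T

# Invented example: arcs (1,2), (2,1), (1,3), (4,3) with unit weights;
# the source knots are {1, 2} (a 2-cycle) and {4}.  Column Laplacian:
L = np.array([[ 1., -1., -1.,  0.],
              [-1.,  1.,  0.,  0.],
              [ 0.,  0.,  2.,  0.],
              [ 0.,  0., -1.,  0.]])
M = mutual_top_reachability(L)
# The nonzero diagonal blocks of M single out the knots {1, 2} and {4}.
```

The tolerance separates genuine entries of $\J$ (here of order $1/4$ and larger) from the $O(1/\tau)$ remainders of the lower forest matrices and from numerical noise.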
\[De2\] The [*top reachability matrix*]{} of $\G$ is the matrix $\widehat R=(\widehat r\_{ij})_{n\times n}$ with entries \[reachM\] $$\widehat r\_{ij}= \cases{ 1, &if $i\in\ktil$ and $j$ is reachable from $i$,\cr 0, &otherwise. }$$ It follows from item 1 of Theorem \[th2\] that \[TR\] $\widehat R=\sgn(\J).$ The source knots of $\G$ can be disclosed by computing the [*mutual top reachability matrix*]{}, which is the Hadamard product of $\widehat R$ and ${\widehat R}^{\interca}$. It follows from Proposition \[TR\] that \[p\_for\_re\] $\!$Vertices $i$ and $j$ belong to the same source knot iff ${\J\_{i\!j}\J\_{ji}\!\ne\!0.}$ If $\widehat R$ is found by means of approximate calculation based on item 5 of Theorem \[th4\], the following statement can be of help. Recall that, by item 2 of Theorem \[th4\], $\si=\det(I+L).$ \[approx\] Let $\G$ be a digraph with all arc weights 1. Then $$\widehat r\_{ij}=1 \;\Leftrightarrow\; J\_{ij}(\si^2)>\si^{-1}, \qquad i,j=\1n,$$ where $J\_{ij}(\tau)$ is the $ij$-entry of $J(\tau)=(I+\tau L)^{-1}.$ Proposition \[approx\] is formulated for unweighted digraphs, since the reachability relation does not depend on the weights of arcs. If $\di=n,$ the claim is obvious. Let $\di<n.$ Then, by (\[sitau\]) and (\[Jtau\]), $J(\tau)=B(\tau)+C(\tau)$ holds, where $ B(\tau)=(b_{ij})={\si^{-1}(\tau)}\sum_{k=0}^{n-\di-1}\,\tau^k\,Q_k $ and $ C(\tau)=(c_{ij})=\si^{-1}(\tau)\,\tau^{n-\di}\,Q_{n-\di}. $ By item 5 of Theorem \[th4\], we have $B(\tau)\to 0,\;$ $J(\tau)\to\J$ and $C(\tau)\to\J$ as $\tau\to\infty.$ Let $\tau=\si^2.$ Since the weights of all arcs are 1, $\si\_k\ge1,$ $k=\0n-\di,$ and $\tau\ge1$ hold. Let $\widehat r\_{ij}=1.$ Then, by Proposition \[TR\], $\J\_{ij}>0,$ hence, $q_{ij}^{n-\di}\ge1$, where $\left(q_{ij}^k\right)=Q_k,\;k=\0n-\di.$ Therefore, $ J\_{ij}(\tau) >c\_{ij}(\tau) \ge\si^{-1}(\tau)\tau^{n-\di} =(\sum_{k=0}^{n-\di}\si\_k\tau^k)^{-1}\tau^{n-\di} >(\si\tau^{n-\di})^{-1}\tau^{n-\di}=\si^{-1}.
$ Now let $\widehat r\_{ij}=0.$ Then $\J\_{ij}=0,$ hence, $c\_{ij}(\tau)=0$ and $ J\_{ij}(\tau) =b\_{ij}(\tau) \le(\sum_{k=0}^{n-\di}\si\_k\tau^k)^{-1} \sum_{k=0}^{n-\di-1}\tau^{n-\di-1}q_{ij}^k <(\tau^{n-\di})^{-1}\tau^{n-\di-1}\si =\si/\tau =\si^{-1}. $ Forest based accessibility measures {#dosti} =================================== Formally, by an [*accessibility measure*]{} for digraph vertices we mean any function that assigns a matrix $P=(p\_{ij})\_{n\times n}$ to every weighted digraph $\G,$ where $n={\left|V(\G)\right|}.$ Entry $p\_{ij}$ is interpreted as the accessibility (or connectivity, relatedness, proximity, etc.) of $j$ from $i.$ Consider the accessibility measures $P^{{\rm out}}_{\tau}=J(\tau),$ where $J(\tau)$ is defined by (\[Jtau\]), and $P^{{\rm in}}_{\tau}=(p^{{\rm in}}_{ij})$ with $p^{{\rm in}}_{ij}=\e(\FF^{i\tor j}(\tau))/\e(\FF^{\tor}(\tau)),$ where $\FF^{i\tor j}(\tau)$ and $\FF^{\tor}(\tau)$ are, respectively, the $\FF^{i\tor j}$ and $\FF^{\tor}$ for the digraph $\G(\tau)$ obtained from $\G$ by the multiplication of all arc weights by $\tau.$ Parameter $\tau$ specifies the relative weight of short and long ties in $\G$. \[dua\] Accessibility measures $P^{(1)}$ and $P^{(2)}$ are [*dual*]{} if for every $\G$ and every $i,j\in V(\G),\;$ $p^{(1)}_{ij}(\G)=p^{(2)}_{ji}(\G'),$ where $\G'$ is obtained from $\G$ by the reversal of all arcs (preserving their weights). The following proposition results from the fact that the reversal of all arcs in $\G$ transforms all out-forests into in-forests and vice versa. \[pr\_dual\] For every $\tau>0,$ the measures $P^{{\rm out}}_{\tau}$ and $P^{{\rm in}}_{\tau}$ are dual. What is the difference in interpretation between $P^{{\rm out}}_{\tau}$ and $P^{{\rm in}}_{\tau}$? A partial answer is as follows.
$P^{{\rm out}}_{\tau}$ can be interpreted as the relative weight of $i\to j$ connections among the out-connections of $i,$ whereas $P^{{\rm in}}_{\tau}$ is the relative weight of $i\to j$ connections among the in-connections of $j.$ Naturally, these relative weights need not coincide. For example, a connection between an average man and a celebrity is usually more important for the average man. This example demonstrates that self-duality is not an imperative requirement for accessibility measures. The properties of several self-dual measures have been studied in [@CheSha98]. The following conditions, proposed in part in [@CheSha98], can be considered as desirable properties of vertex accessibility measures. [**Nonnegativity. **]{} [$p\_{ij}\ge0,\;\:i,j\in V(\G)$.]{} [**Reachability condition. **]{} [For any $i,j\in V(\G),\;$ ($p\_{ij}=0 \Leftrightarrow j$ is unreachable from $i$).]{} [**Self-accessibility condition. **]{} [$\!\!\!$For any distinct $i,j\in V(\G),$ (A) $p\_{ii}>p\_{ij}$ and (B) $p\_{ii}>p\_{ji}$ hold.]{} [**Triangle inequalities for proximities. **]{} [For any $i,k,t\in V(\G)$, (A) $p\_{ki}-p\_{ti}$ $\le p\_{kk}-p\_{tk}$ and (B) $p\_{ik}-p\_{it}$ $\le p\_{kk}-p\_{kt}$ hold. ]{} The triangle inequalities for proximities are counterparts of the ordinary triangle inequality that characterizes distances (cf. [@CheSha98a]). Let $k,i,t\in V(\G)$. We say that $k$ [*mediates between $i$ and $t$*]{} if $\G$ contains a path from $i$ to $t,$ $i\ne k\ne t,$ and every path from $i$ to $t$ includes $k.$ [**Transit property. **]{} [If $k$ mediates between $i$ and $t,$ then (A) $p\_{ik}>p\_{it}$ and (B) $p\_{kt}>p\_{it}.$ ]{} [**Monotonicity. **]{} [Suppose that the weight $\e\_{kt}$ of some arc $(k,t)$ is increased or a new $(k,t)$ arc is added to $\G$, and $\D p\_{ij},\;i,j\in V(\G),$ are the resulting increments of the accessibilities.
Then[:]{}\ [(1)]{} $\D p\_{kt}>0;$\ [(2)]{} If $t$ mediates between $k$ and $i$, then $\D p\_{ki}>\D p\_{ti};$ if $k$ mediates between $i$ and $t$ then $\D p\_{it}>\D p\_{ik};$\ [(3)]{} (A) If $t$ mediates between $k$ and $i$, then $\D p\_{kt}>\D p\_{ki};$\ (B) If $k$ mediates between $i$ and $t$, then $\D p\_{kt}>\D p\_{it}$. ]{} [**Convexity. **]{} [(A) If $p\_{ki}>p\_{ti}$ and $i\ne k,$ then there exists a $k$ to $i$ path such that the difference $p\_{kj}-p\_{tj}$ strictly decreases as $j$ advances from $k$ to $i$ along this path. (B) If $p\_{ik}>p\_{it}$ and $i\ne k,$ then there exists an $i$ to $k$ path such that the difference $p\_{jk}-p\_{jt}$ strictly increases as $j$ advances from $i$ to $k$ along this path.]{} The results of testing $P^{{\rm out}}_{\tau}$ and $P^{{\rm in}}_{\tau}$ are collected in \[otledostup\] The measures $P^{{\rm out}}_{\tau}$ and $P^{{\rm in}}_{\tau}$ satisfy all the above conditions not partitioned into $(A)$ and $(B)$. Furthermore$,$ $P^{{\rm out}}_{\tau}$ obeys all $(A)$ conditions and $P^{{\rm in}}_{\tau}$ all $(B)$ conditions. Let $P=P^{{\rm out}}_{\tau}.$ [*Nonnegativity*]{} follows from the definition of $P^{{\rm out}}_{\tau}$ and the positivity of arc weights. [*Reachability condition*]{} follows from Proposition \[reach\]. Item (A) of the [*self-accessibility condition*]{} is true because, by (\[Qtau\]), $p\_{ii}=\a\e(\FF^{i\rto i}(\tau)),$ $p\_{ij}=\a\e(\FF^{i\rto j}(\tau)),$ and $\FF^{i\rto j}(\tau)\subset\FF^{i\rto i}(\tau)$ (strictly), where $\a=\e^{-1}(\FF^{\rto}(\tau))$ and $\FF^{i\rto j}(\tau)$ is the $\FF^{i\rto j}$ for the digraph $\G(\tau)$ obtained from $\G$ by the multiplication of all arc weights by $\tau$; the same with $\FF^{\rto}(\tau).$ Item (A) of the [*transit property*]{} is proved similarly. To prove (A) of [*convexity*]{}, rewrite item 2 of Theorem \[th4\] in the form $I=J(\tau)\,(I+\tau L).$ Consider entries $ki$ and $ti$ of $J(\tau)\,(I+\tau L)$. 
Since $i\ne k$ by assumption, we get $p\_{ki}=\tau\sum_{j\ne i}\e\_{ji}(p\_{kj}-p\_{ki})$ and $p\_{ti}=\tau\sum_{j\ne i}\e\_{ji}(p\_{tj}-p\_{ti})+\delta\_{it},$ where $\delta\_{it}=1$ if $i=t$ and $\delta\_{it}=0$ otherwise. Hence, $p\_{ki}-p\_{ti}+\delta\_{it} =\tau\sum_{j\ne i}\e\_{ji}\bigl((p\_{kj}-p\_{tj})-(p\_{ki}-p\_{ti})\bigr).$ Since $p\_{ki}-p\_{ti}>0$, there exists $j\in V(\G)$ such that $\e\_{ji}\ne 0$ (and thus $(j,i)\in E(\G)$) and $p\_{kj}-p\_{tj}>p\_{ki}-p\_{ti}$. Applying the same argument to $j$ instead of $i$, and so forth, we finally obtain $k$ as the terminal vertex of this path, as desired. The [*triangle inequality*]{} follows from (A) of [*convexity*]{} (taking $j=k$). For the proof of [*monotonicity*]{} (items 1, 2, and 3A) we refer to [@AgaChe01 Proposition 11]. The corresponding statements for $P^{{\rm in}}_{\tau}$ follow similarly or by duality. It will be shown elsewhere that for a sufficiently small positive $\tau,$ $P^{{\rm out}}_{\tau}$ additionally satisfies (B) conditions, whereas $P^{{\rm in}}_{\tau}$ satisfies (A) conditions, and they both satisfy the following [*addition to monotonicity*]{}: [Suppose that the weight $\e\_{kt}$ of some arc $(k,t)$ increases or a new $(k,t)$ arc is added; then for any $i,j\in V(\G),$ ($i\ne j$ or $k\ne t$) implies $\D p\_{kt}>\D p\_{ij}.$ ]{} Consider now the accessibility measures $\widetilde P^{{\rm out}}=(p\_{ij})=\vj =\lim_{\tau\to\infty}P^{{\rm out}}_{\tau}\/$ and $\widetilde P^{{\rm in}}=\lim_{\tau\to\infty}P^{{\rm in}}_{\tau}$. Having in mind Theorem \[M\], we call $\q_{ij}$ the [*limiting out-accessibility of $j$ from $i$*]{}. Let us say that a condition [*is satisfied in the nonstrict form*]{} if it is not generally satisfied, but it becomes true after the substitution of $\ge$ for $>,$ $\le$ for $<$ and “nonstrictly” for “strictly” in the conclusion of this condition. 
Similarly to Proposition \[pr\_dual\] we have \[pr\_dual1\] The accessibility measures $\widetilde P^{{\rm out}}$ and $\widetilde P^{{\rm in}}$ are dual. The results of testing $\widetilde P^{{\rm out}}$ and $\widetilde P^{{\rm in}}$ are collected in \[bliz\] The accessibility measures $\widetilde P^{{\rm out}}$ and $\widetilde P^{{\rm in}}$ satisfy nonnegativity and the [“$\Leftarrow$”]{} part of reachability condition$,$ but they violate the [“$\Rightarrow$”]{} part of reachability condition. Moreover$,$ $\widetilde P^{{\rm out}}$ satisfies$,$ in the nonstrict form$,$ items $(A)$ of self-accessibility condition$,$ transit property$,$ monotonicity$,$ and convexity$,$ whereas $\widetilde P^{{\rm in}}$ satisfies in the nonstrict form items $(B)$ of these conditions. $\widetilde P^{{\rm out}}$ satisfies $(A)$ and $\widetilde P^{{\rm in}}$ satisfies $(B)$ of triangle inequality for proximities. By virtue of Theorem \[bliz\], the limiting accessibility measures only “marginally” correspond to the conception of accessibility that underlies the above conditions. [The nonstrict satisfaction of the conditions listed in the theorem follows from Theorem \[otledostup\], Proposition \[pr\_dual1\] and item 5 of Theorem \[th4\]. To prove that the strict forms of these conditions and the “$\Rightarrow$” part of reachability condition are violated, it suffices to consider the digraph $\G$ with $n\ge3$, $E(\G)=\{(1,2),(2,3)\}$, and $\e\_{12}=\e\_{23}=1$. ]{} Let us mention one more class of accessibility measures, $(I+\a\J)^{-1}$, $0<\a<\si\_{(\di)}/\si\_{(\di+1)}$. These measures are “intermediate” between $P^{{\rm out}}_{\tau}$ and $\widetilde P^{{\rm out}}$, because they are positive linear combinations of $J\_{(\di)}$ and $J\_{(\di+1)}$ [@AgaChe01]. That is why we termed them the [*matrices of dense out-forests*]{}. In the terminology of [@MeyerStadelmaier78 p.
152], $(I+\a\J)^{-1}$ with various sufficiently small $\a>0$ make up a class of [*nonnegative nonsingular commuting weak inverses*]{} for $L$. These measures and the dual measures have been studied in [@AgaChe01] (see also [@CheAga02a+ p. 270–271]). Other interesting related topics are the forest distances [@chebotarev02forest] and the forest based centrality measures [@CheSha97]. Rooted forests and the problem of leaders {#leader} ========================================= Ranking from tournaments or irregular pairwise contests is an old, but still intriguing problem. Its statistical version is ranking objects on the basis of paired comparisons [@David88]. Analogous problems of the analysis of individual and collective preferences arise in the contexts of policy, economics, management science, sociology, psychology, etc. Hundreds of methods have been proposed for handling these problems (for a review, see, e.g., [@David88; @DavidAndrews93; @CookKress92; @BelkinLevin90; @CheSha97a; @CheSha99; @Laslier97]). In this section, we consider a weighted digraph $\G$ that represents a competition (which need not be a round robin tournament, i.e., can be “incomplete”) with weighted pairwise results. The digraph can also represent an arbitrary weighted preference relation. The result we present below can be easily extended to multidigraphs. One popular and elegant method for assigning scores to the participants in a tournament was independently proposed by Daniels [@Daniels69], Moon and Pullman [@MoonPullman69; @MoonPullman70], and Ushakov [@Ushakov71; @Ushakov76] and reduces to finding nonzero and nonnegative solutions to the system of equations $$Lx=0. \label{D}$$ Entry $x_i$ of a solution vector $x=(x\_1\cdc x\_n)$ is considered as a sophisticated “score” attached to vertex $i$. This method has been rediscovered many times with different motivations (some references are given in [@CheSha99]).
As Berman [@Berman80] noticed (although, in other contexts, similar results had been obtained by Maxwell [@Maxwell1892] and other writers, see [@CaplanZeilberger82]), if a digraph is strong, then the general solution to (\[D\]) is provided by the vectors proportional to $t=(t\_1\cdc t\_n)^{\interca},$ where $t\_j$ is the weight of the set of spanning trees (out-arborescences) diverging from $j$. This fact can be easily proved as follows. By the matrix-tree theorem for digraphs (see, e.g., [@Harary69]), $t\_j$ is the cofactor of any entry in the $j$th column of $L$. Then for every $i\in V(\G),$ $\sum^n_{j=1}\,\l\_{ij}\,t_j=\det L$ (the row expansion of $\det L$) and, since $\det L=0,\;$ $t$ is a solution to (\[D\]). As $\rank(L)=n-1$ (since the cofactors of $L$ are nonzero), any solution to (\[D\]) is proportional to $t$. Berman [@Berman80] and Berman and Liu [@BermanLiu96] asserted that this result is sufficient to rank the players in an arbitrary competition, since the strong components of the corresponding digraph supposedly “can be ranked such that every player in a component of higher rank defeats every player in a component of lower rank. Now by ranking the players in each component we obtain a ranking of all the players.” While the statement about the existence of a natural order of the strong components is correct in the case of round-robin tournaments, it need not be true for arbitrary digraphs that may have, for instance, several source knots. That is why, the solution devised for strong digraphs does not enable one to rank the vertices of an arbitrary digraph. Let us consider the problem of interpreting, in terms of forests, the general solution to (\[D\]) and the problem of choosing a particular solution that could serve as a reasonable score vector in the case of arbitrary digraph $\G$. If $\G$ contains more than one source knot, there is no spanning diverging tree in $\G$. 
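For a strong digraph, the tree-weight solution discussed above can be computed directly from cofactors of $L$. A small sketch (Python with numpy; the weighted 3-cycle is an invented example):

```python
import numpy as np

def tree_scores(L):
    """For a strong digraph with column Laplacian L, return the vector t
    with t_j = weight of the spanning trees diverging from j, computed as
    the cofactor of entry (0, j) of L (matrix-tree theorem for digraphs).
    All cofactors in column j of L coincide, so row 0 is as good as any."""
    n = L.shape[0]
    t = np.empty(n)
    for j in range(n):
        minor = np.delete(np.delete(L, 0, axis=0), j, axis=1)
        t[j] = (-1) ** j * np.linalg.det(minor)
    return t

# Invented 3-cycle 1 -> 2 -> 3 -> 1 with arc weights 1, 2, 3.
# Column Laplacian: off-diagonal l_ij = -eps_ij, zero column sums.
L = np.array([[ 3., -1.,  0.],
              [ 0.,  1., -2.],
              [-3.,  0.,  2.]])
t = tree_scores(L)   # spanning-tree weights (2, 6, 3); satisfies L t = 0
```

Here vertex 2 gets the highest score: the tree diverging from it uses the two heaviest arcs, and one checks directly that $Lt=0$.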
Recall that $K\_1\cdc K\_{\di}$ are the source knots of $\G,$ where $\di$ is the out-forest dimension of $\G,$ and $\ktil=\bigcup_{s=1}^{d'}K\_s$. Suppose, without loss of generality, that the vertices of $\G$ are numbered as follows. The smallest numbers are attached to the vertices in $K\_1$, the following numbers to the vertices in $K\_2$, etc., and the largest numbers to the vertices in $V(\G)\setminus\ktil.$ We call such a numeration [*standard*]{}. \[d\_sol\] Any column of $\J$ is a solution to $(\ref{D})$. Suppose that the numeration of vertices is standard and $j\_1\in K\_1\cdc j\_{\di}\in K\_{\di}$. Then the columns $\J\_{\bullet j\_1}\cdc\J\_{\bullet j\_{\di}}$ of $\J$ make up an orthogonal basis in the space of solutions to $(\ref{D})$ and $\J\_{\bullet j\_s} =\e^{-1}(\TT\_s) \bigl(0\cdc 0,\e(\TT_s^{i\_s+1})\cdc$ $\e(\TT_s^{i\_s+k\_s}),0\cdc 0\bigr)^{\interca},$ where $\{i\_s+1\cdc i\_s+k\_s\}=K\_s$ and $\TT_s$ is the set of out-arborescences of $K\_s,$ $s=1\cdc \di.$ By virtue of Theorem \[d\_sol\], the general solution to (\[D\]) is the set of all linear combinations of partial solutions that correspond to each source knot of $\G$. [The first statement follows from $L\q=0$ (item 3 of Theorem \[th4\]). By item 6 of Theorem \[th4\], $\rank(\q)=\di$ and $\rank(L)=n-\di$. Hence, $\di$ is the dimension of the space of solutions to (\[D\]). Let $j\_s\in K\_s,\;s=1\cdc\di.$ Then, by items 1 and 2 of Theorem \[th2\], $$\J\_{\bullet j\_s} =\e^{-1}(\TT\_s) \bigl(0\cdc 0,\e(\TT_s^{i\_s+1})\cdc\e(\TT_s^{i\_s+k\_s}), 0\cdc 0\bigr)^{\interca}.$$ These $\di$ solutions to (\[D\]) are orthogonal and thus linearly independent. ]{} As a reasonable ultimate score vector, the arithmetic mean $x={1\over n}\J\cdot(1\cdc 1)^{\interca}$ of the columns of $\q$ can be considered.
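A quick numerical check of this score vector (Python with numpy; the path digraph and the choice $\a=1/2$ are invented for the example): the mean of the columns of $\J$ is compared with the limiting distribution of a Markov chain inversely corresponding to $\G$, started from the uniform distribution, in the spirit of Theorem \[M\].

```python
import numpy as np

# Column Laplacian of the invented path digraph 1 -> 2 -> 3 (unit weights).
L = np.array([[0., -1., 0.],
              [0.,  1., -1.],
              [0.,  0.,  1.]])
n = L.shape[0]

# Jbar approximated as the limit of (I + tau*L)^{-1} for large tau.
Jbar = np.linalg.inv(np.eye(n) + 1e8 * L)
x = Jbar @ np.full(n, 1.0 / n)          # mean of the columns of Jbar

# Markov chain inversely corresponding to G: I - P = alpha * L^T, alpha = 1/2.
# The chain moves against the arcs (3 -> 2 -> 1), "searching the leader."
P = np.eye(n) - 0.5 * L.T
mu = np.full(n, 1.0 / n)                # uniform initial distribution
for _ in range(200):
    mu = mu @ P                         # one more step of the chain
# Both x and mu approach (1, 0, 0): all mass ends in the single source knot.
```

This also illustrates the remark below that solutions of $Lx=0$ assign zero scores to all vertices outside the source knots.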
A nice interpretation of this vector is given by \[c-lim\_distr\] [(from Theorem \[M\]).]{} For any Markov chain inversely corresponding to $\G,$ $x={1\over n}\J\cdot(1\cdc 1)^{\interca}$ is the limiting state distribution$,$ provided that the initial state distribution is uniform. It can be mentioned, however, that the ranking method based on $\J$ takes into account long paths in $\G$ only. That is why, in any solution to (\[D\]), the vertices that are not in the source knots are assigned zero scores, which is questionable. The estimates based on the matrices $Q(\tau),$ instead of $\q,$ are free of this feature. On the other hand, both methods violate the [*self-consistent monotonicity*]{} axiom [@CheSha99], and so do the methods that count the [*routes*]{} between vertices. This axiom is satisfied by the [*generalized Borda method*]{} [@Che89; @Che94] that produces the score vectors $J'(\tau)\!\cdot\!(\od(1)-\id(1)\cdc\od(n)-\id(n))^{\interca}$, where $J'(\tau)$ is the matrix $J(\tau)$ of the undirected graph corresponding to $\G$ [@Sha94]. In our opinion, the latter method can be recommended as a well-grounded approach to scoring objects on the basis of arbitrary weighted preference relations, incomplete tournaments, irregular pairwise contests, etc. A concluding remark: a communicatory interpretation of some forest matrices {#a-concluding-remark-a-communicatory-interpretation-of-some-forest-matrices .unnumbered} =========================================================================== In closing, let us mention an interpretation of forest matrices in terms of information dissemination. Consider the following metaphorical model. First, a plan of information transmission along a digraph is chosen. Such a plan is a diverging forest $F\in\FF^{\rto}$: the information is injected into the roots of $F$; then it ought to come to the other vertices along the arcs of $F$.
Suppose that $\e\_{ij}\in ]0,1]$ is the probability of successful information transmission along the $(i,j)$ arc, $i,j\in V(\G),$ and that the transmission processes in different arcs are statistically independent. Then $\e(F)$ is the probability that plan $F$ is successfully realized. Suppose now that each plan is selected with the same probability ${\left|\FF^{\rto}\right|}^{-1}.$ Then $J\_{ij}$ (see (\[J\])) is the probability that the information came to $j$ from root $i$, provided that the transmission was successful. As a result, if one knows that the information was corrupted at root $i$ and the transmission was successful, then $J\_{ij}$ is the probability that this corrupted information came to $j.$ Similarly, interpretations of this kind can be given to other normalized forest matrices. This model is compatible with that of centered partitions [@Lenart98] and comparable with some models of [@Pavlov00]. [^1]: This union is also called the [*top cycle*]{} and the [*strong basis*]{} of $\G$.
--- abstract: | We study light coherent transport in the weak localization regime using magneto-optically cooled strontium atoms. The coherent backscattering cone is measured in the four polarization channels using light resonant with a ${\it J_g=0}\rightarrow{\it J_e=1}$ transition of the Strontium atom. We find an enhancement factor close to 2 in the helicity preserving channel, in agreement with theoretical predictions. This observation confirms the effect of internal structure as the key mechanism for the contrast reduction observed with a cold Rubidium cloud (see: Labeyrie et al., PRL **83**, 5266 (1999)). Experimental results are in good agreement with Monte-Carlo simulations taking into account geometry effects. author: - 'Y. Bidel' - 'B. Klappauf' - 'J.C. Bernard' - 'D. Delande' - 'G. Labeyrie' - 'C. Miniatura' - 'D. Wilkowski' - 'R. Kaiser' title: Coherent light transport in a cold Strontium cloud --- During the past twenty years, the outstanding development of mesoscopic physics led to a critical inspection of coherent effects in wave transport. First motivated by electronic transport in conducting devices [@electrons], the underlying physical ingredients proved to be relevant to any linear waves and in particular to light. This triggered active research in the field of optics during the past two decades [@Kuzmin] leading to the observation of coherent backscattering [@Wolf] and universal conductance fluctuations [@Scheffold] to quote a few. A challenge in this field is still the observation of strong localization of visible light. It was recently reported for near-infrared light using semiconductor powders [@Wiersma], but the interpretation of the experiment in terms of Anderson localization was questioned [@scheffold99]. Cold atoms have been quite recently considered as promising scattering media to achieve strong localization [@Nieuwenhuizen].
Indeed, they constitute perfectly monodisperse samples of resonant point-dipole scatterers with large cross-sections. Moreover, high spatial densities can be achieved with adequate trapping techniques [@bec; @katori]. In this letter we report the observation of coherent backscattering (CBS) of light from cold strontium atoms in the weak localization regime $kl \gg 1$ ($k$ is the light wavenumber and $l$ the elastic mean free path). CBS is an interference enhancement of the *average* scattered intensity reflected off a disordered scattering medium [@qqchose]. It originates from a two-wave constructive interference (near exact backscattering) between waves travelling along a given scattering path and its reversed counterpart. For classical scatterers, based on general symmetry arguments valid in the absence of any magnetic field, the CBS interfering amplitudes have been shown to have equal weights at exact backscattering in the so-called parallel polarization channels [@Bart]. In the $lin\| lin$ channel the incoming and detected light fields have the same linear polarization. In the $h\| h$ channel, both light fields are circularly polarized with the same helicity, that is, opposite polarizations (because the CBS signal is emitted in the backward direction). In the perpendicular channels, nothing ensures the equality of the two interfering amplitudes and the contrast of the interference is reduced. Single scattering events require a separate treatment: the direct and reversed paths coincide and do not contribute to the CBS enhancement in the backward direction. For spherically symmetric scatterers, single scattering does not contribute in the $lin \perp lin$ and $h \| h$ channels. Thus, the CBS contrast (peak-to-background ratio) is predicted, and has been observed, to be exactly 2 in the helicity preserving polarization channel $h \| h$ [@Wiersma2]. 
Using an atomic gas at resonance, a dynamic breakdown of the CBS effect can occur because the scatterers move during the transit time of a photon inside the medium. This requires the RMS velocity $\delta v$ to remain below a critical velocity $v_c= \Gamma /k$ (where $\Gamma$ is the width of the atomic dipole resonance), a condition which is well fulfilled for a laser-cooled atomic gas [@labeyrie9900]. The quantum internal structure of atoms also has severe consequences for coherent light transport in atomic media. A degeneracy in the groundstate induces a dramatic scrambling of the CBS effect [@jonckmuller]. This was first observed experimentally with a cold rubidium sample on a $J_{g}=3 \rightarrow J_{e}=4$ transition [@labeyrie9900]. These results strongly motivated the use of atoms with a nondegenerate groundstate, like strontium, to benefit from full interference effects in coherent transport. The cold strontium (Sr) cloud is produced in a magneto-optical trap (MOT). The transverse velocity of an effusive atomic beam, extracted from a $500^{\circ}$C oven, is first compressed with a 2D optical molasses. A 27 cm long Zeeman slower then reduces the longitudinal velocity to within the capture velocity range of the MOT ($\sim$ 50 m/s). The Zeeman slower, molasses, MOT, and probe laser beams at 461 nm are generated from the same frequency-doubled source. Briefly, a single-mode grating-stabilized diode laser and a tapered amplifier are used in a master-slave configuration to produce 500 mW at 922 nm. The infrared light is then frequency doubled in a semi-monolithic standing-wave cavity with an intra-cavity KNbO$_\textrm{3}$ crystal. The cavity is resonant for the infrared light, while the second harmonic exits through a dichroic mirror, providing 150 mW of tunable single-mode light which is then frequency locked to the 461 nm $^{1}$S$_{0}$-$^{1}$P$_{1}$ strontium line in a heat pipe. We use acousto-optic modulators for subsequent amplitude and frequency variations. 
The MOT is made of six independent trapping beams of 5.2 mW/cm$^{2}$ each, red-detuned by $\delta=-\Gamma$ from resonance. The saturation intensity is 42.5 mW/cm$^{2}$ and the natural width of the transition is $\Gamma /2\pi=32$ MHz. Two anti-Helmholtz coils create a 100 G/cm magnetic field gradient to trap the atoms. A small population loss to metastable states is repumped to the ground state using two additional red lasers. The best achieved optical thickness of our Sr MOT is $b\approx3$. It is deduced from transmission measurements of a resonant probe through the cloud shortly after switching the MOT off. Note that because the optical thickness of the atomic cloud is larger than one, the image of the cloud is not proportional to the atomic density (it flattens at the center), and the whole procedure thus overestimates the size of the cloud (see discussion below). The number of trapped atoms $N \simeq 10^{7}$ is derived from the MOT fluorescence signal. From a CCD image the RMS radius of the cloud has been estimated at 0.65 mm, yielding a mean free path $l\approx0.5$ mm ($kl \simeq 7000$). The RMS velocity of the atoms is less than 1 m/s, well below the critical velocity $v_c=$ 15 m/s. The detailed experimental procedure for the CBS observation has been published elsewhere [@labeyrie9900]. For the Sr experiment, the signal is obtained using a collimated resonant probe beam with a waist of 3 mm. To avoid any effects linked to saturation of the optical transition (non-linearities, inelastic radiation spectrum) [@nonlinear], the probe intensity is kept weak (saturation parameter $s=0.02$). The scattered light is collected in the backward direction by placing a CCD camera in the focal plane of an achromatic doublet. The angular resolution of our apparatus is 0.1 mrad, roughly twice the CCD pixel angular resolution. To avoid recording the MOT fluorescence signal together with the CBS signal, a time-sequenced experiment was developed. 
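As a quick consistency check, the two figures of merit quoted above, the weak-localization parameter $kl$ and the critical velocity $v_c=\Gamma/k$, can be recomputed from the stated numbers ($\lambda=461$ nm, $l\approx0.5$ mm, $\Gamma/2\pi=32$ MHz). This is our own sketch; the variable names are not from the paper:

```python
import math

lam = 461e-9                 # probe wavelength (m), Sr 1S0-1P1 line
l_mfp = 0.5e-3               # elastic mean free path (m), from the MOT parameters
gamma = 2 * math.pi * 32e6   # natural linewidth Gamma (rad/s)

k = 2 * math.pi / lam        # light wavenumber (1/m)
kl = k * l_mfp               # weak-localization parameter, kl >> 1
v_c = gamma / k              # critical velocity Gamma/k (m/s)

print(f"kl  ~ {kl:.0f}")
print(f"v_c ~ {v_c:.1f} m/s")
```

Both outputs reproduce the quoted values ($kl \simeq 7000$, $v_c \simeq 15$ m/s), and the measured RMS velocity of 1 m/s sits more than an order of magnitude below $v_c$.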
The trapping beams and the magnetic field gradient are switched off during the CBS acquisition sequence (duration $100~\mu s$) and then switched on again to recapture the atoms (95% of the 6 ms total cycle time). This procedure also eliminates any possible unwanted nonlinear wave-mixing processes. The whole time sequence is repeated as long as necessary for a good signal-to-noise ratio (typically 15 minutes in the experiment). During the CBS sequence, the image field is opened (and then closed during the MOT sequence) by a mechanical chopper. During the CBS probe interaction time, each atom scatters about 200 photons on average but always remains in resonance, since the mean atomic velocity increase is far below $v_c$. Consequently, most of the atoms are recaptured during the following MOT sequence. The CBS images (see Fig.\[image\]) are finally obtained by subtracting a background image taken without cold atoms. This background image is recorded in the absence of the magnetic gradient during the whole acquisition time. We checked that the fluorescence signal from residual Sr atoms was negligible. In the helicity preserving channel ($h\| h$), the enhancement factor is found to be $\alpha=1.86\pm0.10$ at an optical thickness of $b=2.9$ (see Fig. \[coupe\]), slightly lower than the theoretical prediction $\alpha=2$. Several experimental issues can explain the difference. First, the finite angular resolution of the detection apparatus lowers the CBS enhancement factor by an amount evaluated at $\delta\alpha\approx0.06$. Second, because single scattering contributes more than 90% of the total signal in the two allowed channels (see Table \[result\]), the reduction of the cone contrast due to imperfect polarization channel isolation in the $h\| h$ channel is not negligible. 
We have measured, in the limit of low optical thickness where single scattering dominates over multiple scattering, the fraction of detected light in the forbidden $h \| h$ channel with respect to the total scattered light. We found a channel isolation of about $5\times10^{-4}$, leading to $\delta\alpha\approx0.03$. Note that single scattering depolarization induced by stray magnetic fields acts here like an imperfect polarization isolation. For this reason, its impact on the cone reduction has been minimized during the channel isolation procedure. Another possible source of contrast reduction is a Faraday effect induced by the residual magnetic field [@lenke00]. It turns out that, despite the huge Verdet constant of the atomic gas medium [@labeyrie01], its effect should be smaller than the previously discussed ones. We also checked that the finite transverse size of the laser beam has no significant influence on the signal. Taking these systematic errors into account, the expected CBS enhancement factor is rather $\alpha=$1.91, consistent with the measured value. A remaining (as yet uncontrolled) source of error in determining $\alpha$ is an imperfect estimation of the background level, measured at angles large compared to the cone width $\theta \gg \Delta\theta_{\mathrm{CBS}}$. In the other polarization channels, we observe lower enhancement factors, as predicted by theory (see Table \[result\]). In the $lin \| lin$ and $h \perp h$ channels, the small enhancement factors are mainly due to the strong single scattering contribution (see the relative incoherent background values given in Table \[result\]), which is important since the optical thickness is not very large. In the $ lin \perp lin$ channel (where single scattering is absent), the relatively high contrast is explained by the low optical thickness. 
Indeed, in this situation, short scattering paths dominate, and double scattering is known to exhibit full interference contrast in all polarization channels [@albada87]. In Table \[result\], we also show data obtained with a Monte-Carlo (MC) calculation, where the amplitude of a multiple scattering path is computed as a function of the initial and final polarizations and of the geometrical positions of the various scatterers. We use a Gaussian distribution for the spatial density of scatterers and take into account the spatial variations of the mean free path during the photon propagation. Our numerical method amounts to computing the integral involved in the configuration average using an MC procedure. Given a spatial configuration of the scatterers, we compute simultaneously the various scattering contributions at different scattering orders using the “partial photon” trick [@partial_photon]. Typically, launching fewer than 1 million photons on the medium is enough to get a good signal-to-noise ratio for the CBS peak. For all polarization channels, there is good agreement for the cone height between the experiments and MC simulations adjusted to take into account the polarization channel isolation and angular resolution effects. The experimental values $\Delta\theta_{\mathrm{CBS}}$ of the FWHM CBS angular cone width are systematically higher (by a factor 1.4) than the ones given by the MC simulation using the measured optical thickness $b$ and size of the atomic cloud. As discussed above, our experimental procedure slightly overestimates the size of the cloud. Modifying the size of the cloud (keeping $b$ constant) results only in a global rescaling of the angular scale, leaving both the enhancement factor and the cone shape unchanged. We are thus inclined to think that the actual RMS radius of the cloud is 0.45 mm instead of 0.65 mm. 
With this corrected value, we observe an excellent agreement between MC and experimental data [*in all polarization channels*]{} (see Fig. \[coupe\] and Table \[result\]). The angular dependence of the cone shape in the linear channels reflects the anisotropy of the scattering pattern [@albada87]. In the $lin \| lin$ channel, an elliptical shape with major axis parallel to the incident polarization is predicted and indeed observed (Fig. \[image\]c). In the $lin\perp lin$ channel, the directions of maximum scattering are tilted at $45^{\circ}$ from the incident polarization, yielding a “clover-leafed” CBS cone shape (Fig. \[image\]d). To summarize, we have measured the coherent backscattering cone in four different characteristic polarization channels. Our results are in good agreement with a Monte-Carlo calculation. The restoration of full interference contrast in coherent multiple scattering with atomic gases (as exemplified by the maximum enhancement factor of 2 obtained in the helicity preserving channel) holds interesting potential for wave localization experiments with cold atoms. For example, in the quest for Anderson localization (which could be reached only at high density, where $kl \approx 1$), where interference plays a crucial role, a $J_{g}=0 \to J_{e}=1$ transition appears to be a good choice, since a degenerate internal structure is known to scramble the interference [@jonckmuller]: a maximum enhancement factor of only 1.2 was found in the Rb experiment [@labeyrie9900]. Is it now possible to increase the cloud density to reach the Anderson localization threshold? For this purpose, cooling strontium on the intercombination line in a dipole trap appears to be a promising technique [@katori]. The authors thank the CNRS and the PACA region for their financial support. Laboratoire Kastler Brossel is laboratoire de l’Universit[é]{} Pierre et Marie Curie et de l’Ecole Normale Sup[é]{}rieure, UMR 8552 du CNRS. 
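The role played above by the optical thickness — single scattering dominating the background in the allowed channels at low $b$ — can be illustrated with a toy Monte-Carlo, far cruder than the polarization-resolved partial-photon calculation used in the paper. The sketch below (our own, assuming a homogeneous spherical cloud, isotropic scattering, and ignoring polarization altogether) simply tallies scattering orders:

```python
import math, random

def scattering_orders(b=2.0, n_photons=20000, seed=1):
    """Toy Monte-Carlo: homogeneous sphere of radius R filled with isotropic
    point scatterers, pencil beam through the centre, optical thickness
    b = 2R/l.  Returns the fraction of *scattered* photons that scattered
    exactly once."""
    random.seed(seed)
    R = 1.0
    l = 2.0 * R / b                         # mean free path
    single = scattered = 0
    for _ in range(n_photons):
        x, y, z = 0.0, 0.0, -R              # enter at the south pole
        ux, uy, uz = 0.0, 0.0, 1.0          # heading along +z
        n_scat = 0
        while True:
            step = -l * math.log(1.0 - random.random())  # exponential free path
            x += step * ux; y += step * uy; z += step * uz
            if x * x + y * y + z * z >= R * R:           # photon left the cloud
                break
            n_scat += 1
            uz = 2.0 * random.random() - 1.0             # isotropic re-emission
            phi = 2.0 * math.pi * random.random()
            s = math.sqrt(1.0 - uz * uz)
            ux, uy = s * math.cos(phi), s * math.sin(phi)
        if n_scat > 0:
            scattered += 1
            if n_scat == 1:
                single += 1
    return single / scattered

print(round(scattering_orders(b=2.0), 2))   # single-scattering fraction at b=2
```

Even this caricature shows the trend exploited in the discussion: the single-scattering fraction grows as $b$ decreases, which is why the $lin \perp lin$ cone (fed by short paths) keeps a high contrast at low optical thickness.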
  Channel                              Background   $\alpha$        $\Delta\theta_{CBS}$ (mrad)
  --------------------------- -------- ------------ --------------- -----------------------------
  ${\it h} \|{\it h}$         Exp.     7.5%         $1.77\pm0.13$   $0.52\pm0.07$
                              MC       7.8%         2               0.48
                              MC$^*$   7.8%         1.87            0.52
  ${\it h} \perp{\it h}$      Exp.     92.5%        $1.17\pm0.03$   $0.71\pm0.10$
                              MC       92.2%        1.20            0.69
                              MC$^*$   92.2%        1.19            0.75
  ${\it lin} \|{\it lin }$    Exp.     96.0%        $1.17\pm0.03$   $0.9\pm0.2$
                              MC       95.5%        1.24            0.92
                              MC$^*$   95.5%        1.22            0.98
  ${\it lin} \perp{\it lin }$ Exp.     4.0%         $1.59\pm0.20$   $0.5\pm0.3$
                              MC       4.5%         1.74            0.48
                              MC$^*$   4.5%         1.62            0.49

  : Comparison of the CBS enhancement factor and peak width measured in the experiment with the results of a Monte-Carlo calculation, for optical thickness $b=2$. In each polarization channel, the experimental enhancement factor $\alpha$ is given with a $\pm 2\sigma$ error bar. For the linear polarization channels, the $\Delta\theta_{CBS}$ values are given only for scans parallel to the incident polarization. The results of the MC simulation (noted MC) are given for a Gaussian cloud with an RMS radius of 0.45 mm. Experimental imperfections, namely the polarization channel isolation and the angular resolution, are taken into account in the MC simulation values noted MC$^*$. The “Background” column shows the relative contribution of each channel to the total incoherent scattered intensity in the backward direction.[]{data-label="result"}

[99]{} D.K. Ferry and S.M. Goodnick, [*Transport in Nanostructures*]{}, Cambridge University Press, New York (1997); S. Datta, [*Electronic Transport in Mesoscopic Systems*]{}, Cambridge University Press, Cambridge (1995). V.L. Kuz’min and V.P. Romanov, Physics-Uspekhi **39**, 231 (1996); M.C.W. van Rossum and Th.M. Nieuwenhuizen, Rev. Mod. Phys. **71**, 313 (1999). P.E. Wolf and G. Maret, Phys. Rev. Lett. **55**, 2696 (1985); M.P. van Albada and A. Lagendijk, Phys. Rev. Lett. **55**, 2692 (1985). F. Scheffold and G. Maret, Phys. Rev. 
Lett. **81**, 5800 (1998). D.S. Wiersma, P. Bartolini, A. Lagendijk and R. Righini, Nature **390**, 671 (1997). F. Scheffold, R. Lenke, R. Tweer and G. Maret, Nature **398**, 207 (1999) (comment on [@Wiersma]). Th.M. Nieuwenhuizen, A.L. Burin, Yu. Kagan and G.V. Shlyapnikov, Phys. Lett. A **184**, 360 (1994). T. Ido, Y. Isoya and H. Katori, Phys. Rev. A **61**, R061403 (2000); H. Katori, T. Ido and M. Gonokami, J. Phys. Soc. Jpn. **68**, 2479 (1999). , K. Burnett ed., OSA Trends in Optics and Photonics Series **7** (OSA, 1996). E. Akkermans, P.E. Wolf, R. Maynard and G. Maret, J. Phys. (Paris) **49**, 77 (1988). B.A. van Tiggelen and R. Maynard, in [*Wave Propagation in Complex Media*]{}, IMA **96**, edited by G. Papanicolaou (Springer, New York), 252 (1997). D.S. Wiersma, M.P. van Albada, B.A. van Tiggelen and A. Lagendijk, Phys. Rev. Lett. **74**, 4193 (1995). G. Labeyrie, F. de Tomasi, J.C. Bernard, C. Müller, C.A. Miniatura and R. Kaiser, Phys. Rev. Lett. **83**, 5266 (1999); G. Labeyrie, C. Müller, D. Wiersma, C. Miniatura and R. Kaiser, J. Opt. B: Quantum Semiclass. Opt. **2**, 672 (2000). T. Jonckheere, C.A. Müller, R. Kaiser, C. Miniatura and D. Delande, Phys. Rev. Lett. **85**, 4269 (2000); C. Müller, T. Jonckheere, C. Miniatura and D. Delande, Phys. Rev. A **64**, 053804 (2001). V.M. Agranovich and V.E. Kravtsov, Phys. Rev. B **43**, 13691 (1991); A. Heiderich, R. Maynard and B. van Tiggelen, Opt. Comm. **115**, 392 (1995). R. Lenke and G. Maret, Eur. Phys. J. B **17**, 171 (2000). G. Labeyrie, C. Miniatura and R. Kaiser, Phys. Rev. A **64**, 033402 (2001). M.P. van Albada, B. van Tiggelen and A. Lagendijk, Phys. Rev. Lett. **58**, 361 (1987). R. Lenke and G. Maret, in [*Scattering in Polymeric and Colloidal Systems*]{}, edited by W. Brown and K. Mortensen (Gordon and Breach, Reading, 2000), p. 1-72.
--- abstract: 'The AC frequency in electrical power systems is conventionally regulated by synchronous machines. The gradual replacement of these machines by asynchronous renewable-based generation, which provides little or no frequency control, increases system uncertainty and the risk of instability. This imposes hard limits on the proportion of renewables that can be integrated into the system. In this paper we address this issue by developing a framework for performing frequency control in power systems with arbitrary mixes of conventional and renewable generation. Our approach is based on a robust stability criterion that can be used to guarantee the stability of a full power system model on the basis of a set of decentralised tests, one for each component in the system. It can be applied even when using detailed heterogeneous component models, and can be verified using several standard frequency response, state-space, and circuit theoretic analysis tools. Furthermore the stability guarantees hold independently of the operating point, and remain valid even as components are added to and removed from the grid. By designing decentralised controllers for individual components to meet these decentralised tests, every component can contribute to the regulation of the system frequency in a simple and provable manner. Notably, our framework certifies the stability of several existing (non-passive) power system control schemes and models, and allows for the study of robustness with respect to delays.' author: - 'Richard Pates and Enrique Mallada [^1]' bibliography: - 'Refs.bib' - 'Refs-em.bib' title: 'Robust Scale-Free Synthesis for Frequency Control in Power Systems' --- Power systems, frequency control, robust stability, decentralised control synthesis. Introduction ============ The composition of the electric grid is in a state of flux [@Milligan:2015ju]. 
Motivated by the need to reduce carbon emissions, conventional synchronous generators, with relatively large inertia, are being replaced by renewable energy sources with little (wind) or no (solar) inertia at all [@Winter:2015dy]. In addition, the steady increase of power electronics on the demand side is gradually diminishing the sensitivity of loads to frequency variations [@WoodWollenberg1996]. As a result, rapid frequency fluctuations are becoming a major source of concern for several grid operators [@Boemer:2010wa; @Kirby:2005uy]. Besides increasing the risk of frequency instabilities, this dynamic degradation also places limits on the total amount of renewable generation that can be sustained by today’s electric grids. Ireland, for instance, is already resorting to wind curtailment whenever wind production exceeds $50\%$ of existing demand in order to preserve grid stability. One approach that has been proposed to mitigate this degradation is to use inverter-based generation to mimic synchronous generator behaviour, by implementing so-called virtual inertia [@Driesen:ft]. The rationale is that, by mimicking synchronous generator dynamics, virtual inertia will restore the robust frequency regulation that the system used to enjoy. However, it is unclear whether this particular choice of control is the most suitable for the task. Unlike generator dynamics, which physically set the grid frequency, virtual inertia controllers estimate the grid frequency and its derivative from noisy and delayed measurements, which can lead to noise amplification and instability [@m2016cdc; @jpm2017cdc]. Furthermore, inverter-based control can be significantly faster than that available from conventional generators. Using inverters merely to mimic generator behaviour therefore does not take advantage of their full potential. 
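The noise-amplification concern is easy to quantify. A virtual inertia term has the form $m_v s$ filtered by some low-pass element, say $m_v s/(\tau s + 1)$; its high-frequency gain is $m_v/\tau$, so speeding up the filter directly amplifies measurement noise. This is an illustration of our own (the transfer function and values are not taken from the cited works):

```python
import numpy as np

def hf_gain(m_v, tau, w_max=1e3):
    """Peak frequency-response gain of the filtered derivative term
    m_v * s / (tau * s + 1), evaluated on a grid up to w_max rad/s."""
    w = np.logspace(-1, np.log10(w_max), 1000)
    s = 1j * w
    return np.abs(m_v * s / (tau * s + 1)).max()

# halving the filter time constant by 10x raises the noise gain by ~10x
print(hf_gain(2.0, 0.1), hf_gain(2.0, 0.01))
```

The gain approaches $m_v/\tau$ at high frequency, so a virtual inertia loop fast enough to be useful necessarily feeds amplified measurement noise back into the power injection.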
This poses a new challenge for the control system engineer: develop control systems to regulate frequency in power systems that exploit the capabilities of inverters, and that overcome the issues introduced by renewable generation, including uncertainty in supply, measurement delays, network topology changes, and heterogeneity among components. To achieve this goal, new methods for controller synthesis are required. The crux of the issue is that in the power system context, in order to ensure secure operation, control systems must be able to guarantee **in advance** that adequate levels of robustness are maintained even if the operating point changes, and as components join and leave the grid. Given their uncertain nature, increasing the number of renewable sources vastly increases the number of ways this can happen. It then becomes very difficult to apply conventional control design methods, since one cannot determine which model to use, or identify a tractable set of operating points or network configurations to consider. This is an issue even for many specialised methods for large systems, such as those based on small gain or dissipativity theory [@BL+06; @arcak2016networks], because these still typically require the verification of the feasibility of a condition that scales with the size of the network, and this test would have to be rechecked for every operating point and change in network configuration. In this paper, we argue that the best way to address the challenge of achieving robustness and scalability is ‘to get the local design right’. To do so, we look to follow, and further extend, the philosophy of passivity based design, and find conditions on the subsystems in the network that guarantee robust stability **independently** of how they are interconnected. These conditions can then be used as a principled basis for **scale-free** design that addresses the requirements of the network setting. 
In particular, by designing controllers to meet a local stability requirement, strong a-priori guarantees – that hold even as the operating point changes, and as components are added to or removed from the network – can be given. Our main contribution is to derive a decentralised stability criterion that is tailored to frequency control problems in power systems. The condition allows stability of a full power system model to be deduced on the basis of a set of tests on the individual components in the network, in a manner that is independent of operating point and interconnection configuration. The condition allows for detailed, heterogeneous component models, and can include the effect of delays. The criterion is robust, and can be verified using several standard frequency response, state-space, and circuit theory analysis tools. Furthermore, it allows for the synthesis of controllers using only local models. The design can be conducted using standard frequency response intuition, as well as with off-the-shelf tools from $\Hfty{}$ optimal control. Standard passivity based design criteria arise as a special case, and essentially no better criteria exist that can be used as a basis for decentralised design with a-priori stability guarantees. We illustrate the results on several standard power system models and controller architectures. ### Notation {#notation .unnumbered} $\Hfty$ denotes the space of transfer functions of stable linear, time-invariant systems. This is the Hardy space of functions that are analytic on the open right half plane $\C_+$ with bounded norm $\norm{g\s}_\infty\coloneqq{}\sup_{s\in\C_+}\abs{g\s}$. $\discalg$ denotes the subset of $\Hfty$ whose elements are continuous on the extended imaginary axis [@Par97]. $\Rat$ denotes the set of real rational functions, and $\RH\coloneqq{}\Rat\cap\Hfty$. 
Finally, we denote the lower linear fractional transformation (LFT) as $\lft{l}{G}{C}\coloneqq{}G_{11}+G_{12}C\funof{I-G_{22}C}^{-1}G_{21}$. Problem Description =================== [Figure: block diagram of the linearised power system model, with the bus dynamics $g_1,\ldots{},g_n$ in negative feedback with the network $\frac{1}{s}L_B$; $P_d$ enters as a disturbance and $P_N$ is the network power flow.] In this section we describe the power system model used in this paper. We model the power system as a set of $n$ buses, indexed by $i\in\{1,\dots,n\}$, which are coupled through an AC network. Assuming operation around an equilibrium, the linearised dynamics are represented by the block diagram above. The transfer function $g_i\s$ describes the dynamics of the components connected at the *i*th bus. The input to each $g_i\s$ is the net power flow into the bus, relative to its equilibrium value. This includes the variation $P_{N,i}$ in electrical power drawn from the network and an external disturbance $P_{d,i}$, which reflects, for example, variations in power drawn by local loads. The output of each $g_i\s$ is the rate of change of voltage angle (frequency) at the given bus. The network power fluctuations $P_N$ are given by a linearised DC model of the power flow equations. 
More precisely, $$\label{eq:network} P_N\s = \frac{1}{s}L_B\dot\theta\s$$ where $L_B$ is an undirected weighted Laplacian matrix with entries given by $$\label{eq:Lap} L_{B,ij}=\frac{\partial}{\partial{}\theta_j}{\sum_{l=1}^nV_{i0}V_{l0}b_{il}\sin\funof{\theta_i-\theta_l}}\Bigr|_{\theta=\theta_0}.$$ In the above, $V_{0}\in\R^n$ and $\theta_0\in\R^n$ denote the voltage magnitudes and angles at the buses in steady state, and $b_{il}\geq{}0$ the susceptance of the transmission line connecting buses *i* and *l* ($b_{il}=0$ if there is no line). Finally, to allow for the design of local controllers, we further open the loop at each $g_i\s$ and define a generalized plant model $G_i\s$ for each bus as $$\label{eq:generalized-plant} \begin{bmatrix} \dot\theta_i\s\\z_i\s \end{bmatrix}= \begin{bmatrix} G_{i,11}\s \!\!&\! G_{i,12}\s\\ G_{i,21}\s \!\!&\! G_{i,22}\s \end{bmatrix}\!\! \begin{bmatrix} P_{d,i}\s-P_{N,i}\s\\P_{c,i}\s \end{bmatrix}\!.$$ The entries of $G_i\s$ capture the internal dynamics at the bus. The signal $z_i\s$ collects the measurements available for implementing the local controller, and $P_{c,i}\s$ is the controller’s power injection. These signals are related through $$\label{eq:controller} P_{c,i}\s=c_i\s z_i\s,$$ where $c_i\s$ is the transfer function of the controller to be designed. The transfer functions $g_i,G_i$ and $c_i$ are related through the lower LFT according to $g_i=\lft{l}{G_i}{c_i}$. Note that in general $G_i$ and $c_i$ need not be scalar, though $g_i$ always is. Combining \[eq:network,eq:generalized-plant,eq:controller\] leads to the following generic linearised power system model: $$\label{eq:model} \begin{aligned} \begin{bmatrix} \dot\theta_i\s\\z_i\s \end{bmatrix}&= G_i\s \begin{bmatrix} P_{d,i}\s-P_{N,i}\s\\P_{c,i}\s \end{bmatrix},\\ P_{c,i}\s&=c_i\s{}z_i\s,\\ P_N\s&=\frac{1}{s}L_B\dot{\theta}\s. 
\end{aligned}$$ [Figure: the generalized plant $G_i$ at bus $i$: the power imbalance $P_{d,i}-P_{N,i}$ and the controller power injection $P_{c,i}$ enter as inputs; the bus frequency $\dot{\theta}_i$ and the measurements $z_i$ are the outputs, with $z_i$ fed back through the local controller $c_i$.] Although \[eq:model\] is rather generic and can account for many bus models, when illustrating our approach we will use models based on the classical swing equations. That is, we will consider the bus dynamics described by $$m_i \ddot{\theta_i}+d_i\dot \theta_i = P_{c,i}+P_{d,i}-P_{N,i},$$ where $m_i$ and $d_i$ are the generator’s inertia and damping respectively. This leads to a generalised plant transfer function $$G_i\s=\begin{bmatrix} \frac{1}{m_is+d_i}&\frac{1}{m_is+d_i}\\ G_{i,21}\s \!\!&\! G_{i,22}\s \end{bmatrix},$$ where the particular transfer functions $G_{i,21}\s$ and $G_{i,22}\s$ depend on the measured signal $z_i\s$. For example, if angular velocity measurements are available, then $G_{i,21}\s=G_{i,22}\s=\frac{1}{m_is+d_i}$. The network model in \[eq:model\] implicitly makes the following assumptions, which are standard and well-justified for frequency control in transmission networks [@kundur_power_1994]: (i) bus voltage magnitudes are constant for all $i$, (ii) transmission lines are lossless, and (iii) reactive power flows do not affect bus voltage phase angles and frequencies. See, e.g., [@Zhao:2014bp; @Li:2016tcns; @mallada2017optimal] for applications of similar models for frequency control within the control literature. 
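The model \[eq:model\] with uncontrolled swing-equation buses ($P_{c,i}=0$) can be exercised directly. The sketch below (our own; all parameter values are illustrative, not from the paper) builds $L_B$ from \[eq:Lap\] and integrates $m_i \ddot{\theta_i}+d_i\dot \theta_i = P_{d,i}-P_{N,i}$, recovering the textbook outcome that, absent integral action, a step disturbance leaves all buses at a common frequency offset $\sum_i P_{d,i}/\sum_i d_i$:

```python
import numpy as np

def network_laplacian(V0, theta0, b):
    """L_B from eq. (Lap): off-diagonal entries are
    -V0_i V0_j b_ij cos(theta0_i - theta0_j); diagonal fixed by zero row sums."""
    n = len(V0)
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                L[i, j] = -V0[i] * V0[j] * b[i, j] * np.cos(theta0[i] - theta0[j])
    np.fill_diagonal(L, -L.sum(axis=1))
    return L

# three buses in a line; susceptances, inertias, etc. are made-up numbers
b = np.zeros((3, 3))
b[0, 1] = b[1, 0] = 1.0
b[1, 2] = b[2, 1] = 2.0
L_B = network_laplacian(np.ones(3), np.zeros(3), b)

# uncontrolled swing dynamics: m_i ddtheta_i + d_i dtheta_i = P_d,i - (L_B theta)_i
m = np.array([2.0, 1.5, 1.0])
d = np.array([1.0, 0.8, 1.2])
P_d = np.array([0.3, 0.0, -0.1])        # step disturbance
theta, omega = np.zeros(3), np.zeros(3)
dt = 1e-3
for _ in range(200_000):                # 200 s of simulated time, explicit Euler
    domega = (P_d - d * omega - L_B @ theta) / m
    theta += dt * omega
    omega += dt * domega

print(omega, P_d.sum() / d.sum())       # frequencies settle at a common offset
```

Note that $L_B$ comes out symmetric and positive semidefinite (all equilibrium angle differences being below $90^{\circ}$), which is what justifies the normalised class $\mathcal{L}$ used in the stability analysis that follows.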
Results {#sec:res} ======= A Scale-Free Stability Criterion {#sec:res1} -------------------------------- [Figure: the feedback interconnection \[eq:seceq\]: the diagonal subsystems $p_1,\ldots{},p_n$, driven by $e-u$, produce the output $y$, which is fed back through the network $\frac{1}{s}L$ to give $u$.] In this section we will present a scale-free stability criterion for the feedback interconnection $$\label{eq:seceq} \begin{aligned} y_i\s&=p_i\s{}\funof{e_i\s-u_i\s}\\ u\s&=\frac{1}{s}Ly\s. \end{aligned}$$ This interconnection is illustrated above. In particular we will show that, given any $L$ in the set $$\mathcal{L}\coloneqq{}\cfunof{L:L=L^T,0\preceq{}L\preceq{}I},$$ stability[^2] of \[eq:seceq\] can be guaranteed on the basis of decentralised tests on each of the transfer functions $p_i\s$. We will show how to use this to guarantee stability of the linearised power system model in the next section. Our criterion is written in terms of positive real and strictly positive real functions. This establishes strong connections to many well established areas of control theory, including: 1. Multiplier methods and absolute stability criteria; 2. $\Hfty$ optimal control; 3. The Nyquist stability criterion; 4. Classical circuit theory. We will highlight these connections throughout the rest of the paper. We now formally define these function classes. \[def:pr\] A (not necessarily proper or rational) transfer function $g\s$ is positive real if: (i) $g\s$ is analytic in $\mathrm{Re}\funof{s}>0$; (ii) $g\s$ is real for all positive real $s$; (iii) $\mathrm{Re}\funof{g\s}\geq{}0$ for all $\mathrm{Re}\funof{s}>0$. 
If in addition $g\in\discalg$ and there exists an $\epsilon>0$ such that $g\funof{s}-\epsilon$ is $\PR$, then $g\s$ is extended strictly positive real ($\ESPR$). The following theorem, which is inspired by the results for scalar systems from [@BW65 Theorem 2], shows that provided $L\in\mathcal{L}$ and the elements in the diagonal transfer function are drawn from the parametrised class $$\mathcal{P}_h\coloneqq{}\cfunof{p\in\Hfty:p\funof{0}\neq{}0,h\s\funof{1+\frac{p\s}{s}}\in\ESPR},$$ the feedback interconnection in \[eq:seceq\] is stable. \[thm:main\] If $h\in\PR\cap\discalg{}$, then for any $p_1,\ldots{},p_n\in\mathcal{P}_h$ and any $L\in\mathcal{L}$, the feedback interconnection in \[eq:seceq\] is stable. The function $h\s$ in is typically referred to as a multiplier. A useful class of multipliers that we will use in all our examples is given by $$\cfunof{\frac{s}{s+T}\prod_{k=1}^N\frac{s+\alpha_k}{s+\beta_k}:0<\beta_1<\alpha_1<\beta_2<\ldots{}<T}.$$ There is an extensive literature supporting the design of multipliers [@BW65], and (as we will discuss in ) the choice of $h\s$ has a graphical interpretation. Nonlinear extensions of are also possible, using for example the Popov or Zames-Falb multipliers, though this will not be pursued here (see [@PV17] for ideas along these lines). Let $P={\mbox{diag}}\funof{p_1,\ldots{},p_n}$. Since $P\in\Hfty^{n\times{}n}$, the interconnection of $P$ and $\frac{1}{s}L$ is stable if and only if $$\tfrac{1}{s}L\funof{I+\tfrac{1}{s}PL}^{-1}\in\Hfty^{n\times{}n}.$$ Since $L\in\mathcal{L}$, we can factorize it as $L=QXQ^*$, where $\epsilon{}I\preceq{}X\preceq{}I$, $Q\in\C^{n\times{}\funof{n-m}}$, $m>0$, $Q^*Q=I$, $\epsilon>0$. Hence $$\begin{aligned} \tfrac{1}{s}L\funof{I+\tfrac{1}{s}PL}^{-1}&=QXQ^*\funof{sI+PQXQ^*}^{-1},\\ &=QX\funof{sI+Q^*PQX}^{-1}Q^*.
\end{aligned}$$ Clearly then it is sufficient to show that $$\label{eq:intintint1} \funof{sI+Q^*PQX}^{-1}\in\Hfty^{\funof{n-m}\times{}\funof{n-m}}.$$ The above can be immediately recognised as an eigenvalue condition: $-s\notin\lambda\funof{Q^*P\s{}QX},\forall{}s\in\overline{\mathbb{C}}_+$. By Theorem 1.7.6 of [@HJ91], for any $s\in\C$: $$\lambda\funof{Q^*P\s{}QX}\subset\text{Co}\funof{kp_i\s:i\in \cfunof{1,\ldots{},n},\epsilon\leq{}k\leq{}1}.$$ Therefore it is sufficient to show that $$\label{eq:convsets} 0\notin\text{Co}\funof{s+kp_i\s:i\in \cfunof{1,\ldots{},n},\epsilon\leq{}k\leq{}1},$$ for all $s\in\overline{\mathbb{C}}_+$. Observe that since each $p_i\s$ is bounded, this condition is trivially satisfied for large $s$. It is therefore enough to check that this holds for $s\in\overline{\mathbb{C}}_+,\abs{s}<R,$ for sufficiently large $R$. This can be done using the separating hyperplane theorem, applied pointwise in $s$. In particular, \[eq:convsets\] holds for any given $s$ if and only if there exists a nonzero $\alpha\in\C$ and $\gamma>0$ such that $\forall{}i\in\cfunof{1,\ldots{},n}$: $$\label{eq:prtest} \text{Re}\funof {\alpha{}\funof{s+kp_i\s}}\geq{}\gamma{},\forall{}\;\epsilon{}\leq{}k\leq{}1.$$ We will now use a minor adaptation of the argument in Theorem 2 of [@BW65] to show that such an $\alpha{}$ is guaranteed to exist.
From the conditions of the theorem and the maximum modulus principle, for any $R\geq{}0$, there exists a $\delta>0$ such that $\forall{s}\in\overline{\C}_+,\abs{s}\leq{}R$: $$\label{eq:finaltest} \text{Re}\funof{h\s\funof{1+p_i\s/s}}\geq{}\delta.$$ Since $h\s$ is $\PR$, for all $k^*\geq{}0$, $\text{Re}\funof{k^*h\s}\geq{}0$, and therefore $$\text{Re}\funof{h\s\funof{1+p_i\s/s}+k^*h\s}\geq\delta{}.$$ Dividing through by $\funof{1+k^*}$ and rearranging shows that under these conditions $$\text{Re}\funof{\tfrac{h\s}{s}\funof{s+\funof{1+k^*}^{-1}p_i\s}}\geq{}\funof{1+k^*}^{-1}\delta{}.$$ Therefore setting $\alpha\equiv{}h\s/s$ and $\gamma\equiv\epsilon{}\delta{}$ shows that \[eq:prtest\] is satisfied for the required values of $k$ and $s$. Consequently \[eq:intintint1\] is satisfied, and the result follows. Applying to Linearised Power System Models {#sec:appthm} ------------------------------------------ [Figure: the loop transform used to rescale the interconnection: the forward path $\Gamma^{-\frac{1}{2}}$, $\Gamma^{\frac{1}{2}}$, $G$, $\Gamma^{\frac{1}{2}}$, $\Gamma^{-\frac{1}{2}}$ from $P_d$ to $\dot{\theta}$ is equivalent to $\text{diag}\funof{p_1,\ldots{},p_n}$ with $p_i\in\mathcal{P}_h$, and the feedback path $\Gamma^{-\frac{1}{2}}\frac{1}{s}L_B\Gamma^{-\frac{1}{2}}$ from $y$ to $u$ is equivalent to $\frac{1}{s}L$ with $L\in\mathcal{L}$.] In this section we will show that a set of decentralised conditions can be used to guarantee stability of the full linearised power system model in \[eq:model\]. These guarantees are valid for every operating point that satisfies the following mild assumption. \[ass:1\] At equilibrium, the angle difference $\abs{\theta_{0,i}-\theta_{0,j}}$ across each transmission line is less than , and the voltage magnitude at each bus is at most $V_{\max,i}$. This assumption is essentially without loss of generality, since thermal and voltage drop limitations for transmission lines preclude load angles anywhere near and equilibrium bus voltages above 1.05 p.u. [@kundur_power_1994]. We will now show that given any $h\in\PR\cap\discalg{}$, the power system model in \[eq:model\] is guaranteed to be stable if every bus model satisfies $$\label{eq:basic} \gamma_i\lft{l}{G_i}{c_i}\in\mathcal{P}_h,$$ where $$\label{eq:gam1} \gamma_i\coloneqq{}2\sum_{j=1}^nV_{\max,i}V_{\max,j}b_{ij}.$$ Note that $\gamma_i$ is a constant that depends only on the susceptances of the transmission lines connected to the *i*th bus and the largest allowable voltage magnitudes at their endpoints. Therefore this condition is local, independent of the operating point, and guarantees stability even as components are connected to and disconnected from the buses. This makes \[eq:basic\] an ideal basis for conducting scale-free design. In order to verify stability of the power system model using , we need to connect \[eq:model,eq:seceq\].
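To make \[eq:gam1\] and the rescaling it supports concrete, here is a small numerical sketch (our illustration: the three-bus susceptances, voltage caps and equilibrium angles are assumed, as is the weighted-Laplacian form of $L_B$ with line stiffnesses $V_iV_jb_{ij}\cos(\theta_{0,i}-\theta_{0,j})$):

```python
import numpy as np

# Hypothetical 3-bus network: line susceptances b_ij (symmetric, 0 = no line).
b = np.array([[0.0, 5.0, 2.0],
              [5.0, 0.0, 4.0],
              [2.0, 4.0, 0.0]])
V_max = np.array([1.05, 1.05, 1.05])   # voltage caps from the assumption above
theta0 = np.array([0.0, 0.1, -0.05])   # equilibrium angles, well within limits

# Assumed linearised line stiffnesses and the weighted Laplacian L_B:
#   w_ij = V_i V_j b_ij cos(theta0_i - theta0_j)  <=  V_max,i V_max,j b_ij
W = np.outer(V_max, V_max) * b * np.cos(theta0[:, None] - theta0[None, :])
L_B = np.diag(W.sum(axis=1)) - W

# gamma_i = 2 * sum_j V_max,i * V_max,j * b_ij   (eq:gam1)
gamma = 2.0 * (np.outer(V_max, V_max) * b).sum(axis=1)

# The rescaled Laplacian lies in the set {L : L = L^T, 0 <= L <= I}.
S = np.diag(gamma ** -0.5)
eig = np.linalg.eigvalsh(S @ L_B @ S)
print(eig.min() >= -1e-9, eig.max() <= 1.0)   # True True
```

Note that $\gamma_i$ is computed from the line data alone, so the scaling does not change when the operating point does.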
As can be seen from , by closing all the local control loops the interconnection in \[eq:model\] simplifies to $$\label{eq:contclosed} \begin{aligned} \dot{\theta}_i\s&=\lft{l}{G_i\s}{c_i\s}\funof{P_{d,i}\s-P_{N,i}\s}\\ P_N\s&=\frac{1}{s}L_B\dot{\theta}\s. \end{aligned}$$ This feedback configuration has the same form as \[eq:seceq\] (compare ), however cannot yet be applied since $L_B$ is not necessarily in $\mathcal{L}$. The following simple lemma, which is proved in , shows that we can rescale \[eq:contclosed\] so that it is of the appropriate form. \[lem:rescale\] Suppose that $L_B$ as given by \[eq:Lap\] satisfies , and let $\Gamma={\mbox{diag}}\funof{\gamma_1,\ldots{},\gamma_n}$, where the $\gamma_i$’s are given by \[eq:gam1\]. Then given any conformal partitioning of $\Gamma$ and $L_B$ such that $$\Gamma=\begin{bmatrix} \Gamma_1&0\\0&\Gamma_2 \end{bmatrix},\,L_B=\begin{bmatrix} L_{B,11}&L_{B,12}\\ L_{B,21}&L_{B,22} \end{bmatrix},$$ $0\preceq{}\Gamma_1^{-\frac{1}{2}}\funof{L_{B,11}-L_{B,12}L_{B,22}^{-1}L_{B,21}}\Gamma_1^{-\frac{1}{2}}\preceq{}I$. The most basic consequence of is that given any operating point satisfying , $$\Gamma^{-\frac{1}{2}}L_B\Gamma^{-\frac{1}{2}}\in\mathcal{L}.$$ This suggests that in order to rescale \[eq:contclosed\] so that can be applied, we should use the loop transform in . This shows that stability of \[eq:contclosed\] is equivalent to that of $$\begin{aligned} y_i\s&=\gamma_i{}\lft{l}{G_i\s}{c_i\s}\funof{e_i\s-u_i\s}\\ u\s&=\frac{1}{s}\Gamma^{-\frac{1}{2}}L_B\Gamma^{-\frac{1}{2}}y\s. \end{aligned}$$ In the above the signals $y,u,e$ are re-scaled versions of $\dot{\theta},P_N$ and $P_{d}$. can now be applied by setting $$p_i\s\equiv{}\gamma_i{}\lft{l}{G_i\s}{c_i\s}\,\text{and}\,L\equiv{}\Gamma^{-\frac{1}{2}}L_B\Gamma^{-\frac{1}{2}}.$$ This proves that \[eq:basic\] is sufficient for stability of \[eq:model\] for every operating point meeting .
Therefore all that remains is to show that these claims hold even as components are disconnected from the buses. Suppose for now that we disconnect the components at the last $m$ buses. These buses are now ‘floating’, and may be eliminated using Kron reduction in the usual way. If this is done we obtain the following ‘reduced’ version of \[eq:contclosed\]: $$\begin{aligned} \dot{\theta}_i\s&=\lft{l}{G_i\s}{c_i\s}\funof{P_{d,i}\s-P_{N,i}\s}\\ P_N\s&=\frac{1}{s}\funof{L_{B,11}-L_{B,12}L_{B,22}^{-1}L_{B,21}}\dot{\theta}\s, \end{aligned}$$ where $L_{B,22}\in\R^{m\times{}m}$. shows that **exactly the same** loop transform will also re-scale the reduced model so that can be applied. Therefore satisfying \[eq:basic\] also implies stability when these components are removed. By simply re-indexing the buses, the same argument can be used to show that \[eq:basic\] also implies stability even as any combination of components is removed. Stability as we have defined it implies that if the external signals (the disturbances $P_d$) are bounded and tend to zero, then the internal signals $P_N,\dot{\theta}$ will tend to zero. This does not necessarily mean that the ‘state variables’ $\theta$ will tend to their equilibrium values $\theta_0$, since they do not appear explicitly in the internal signals. However, since $$P_N=L_B\funof{\theta-\theta_0},$$ it is clear that if $\lim_{t\rightarrow{}\infty}P_N\tm=0$, then $\lim_{t\rightarrow{}\infty}\theta\tm-\theta_0\in\text{Ker}\funof{L_B}$. Therefore because $L_B$ is a weighted Laplacian matrix, satisfying \[eq:basic\] ensures that the phase differences (and hence power flows) across the transmission lines will return to their equilibrium values. A Scale-Free Analysis Method {#sec:an} ---------------------------- shows that given a function $h\in\PR\cap\discalg$, stability can be guaranteed on a component by component basis using \[eq:basic\].
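The Kron-reduction step can be spot-checked numerically. The sketch below (network weights assumed for illustration) eliminates one floating bus and confirms that the Schur complement is again a weighted Laplacian, as the argument above requires:

```python
import numpy as np

# Illustrative 4-bus weighted Laplacian (edge weights are assumed values).
W = np.zeros((4, 4))
for i, j, w in [(0, 1, 5.0), (1, 2, 4.0), (2, 3, 2.0), (0, 3, 1.0)]:
    W[i, j] = W[j, i] = w
L = np.diag(W.sum(axis=1)) - W

# Kron-reduce the last (floating) bus: L11 - L12 L22^{-1} L21.
L11, L12, L21, L22 = L[:3, :3], L[:3, 3:], L[3:, :3], L[3:, 3:]
Lred = L11 - L12 @ np.linalg.inv(L22) @ L21

print(bool(np.allclose(Lred.sum(axis=1), 0.0)))       # rows sum to zero: True
print(bool(np.linalg.eigvalsh(Lred).min() >= -1e-9))  # still PSD: True
```

Both properties are generic: the Schur complement of a weighted Laplacian with respect to an invertible block is again a weighted Laplacian on the remaining buses.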
The true strength of this result is that it can be used to design controllers based only on local models with a-priori guarantees that hold independently of operating point and network configuration. However before considering synthesis questions, it is first instructive to understand how to check \[eq:basic\]. Rather than simply checking that \[eq:basic\] holds, we instead propose to find the largest $\gamma$ such that $\gamma\lft{l}{G_i}{c_i}\in\mathcal{P}_h$. This is justified by the following lemma, and useful because it will give our criteria robustness guarantees. It will also provide a synthesis objective as discussed in . The proof is given in . \[lem:marg\] Let $h\in\PR$ and $p\in\mathcal{P}_h$. If $0<\gamma\leq{}1$, then $\gamma{}p\in\mathcal{P}_h$. Based on the above, we define the following scale-free analysis problem. \[prob:analysis\] Given $h,G_i,c_i$ $$\begin{aligned} \text{maximize} \quad{}&\gamma\\ \text{subject to}\quad&\gamma\lft{l}{G_i}{c_i}\in\mathcal{P}_h. \end{aligned}$$ Denoting the solution to this problem as $\gamma_i^*$, it follows from that if $\gamma_i\leq{}\gamma^*_i$, then $\gamma{}_i\lft{l}{G_i}{c_i}\in\mathcal{P}_h$ (i.e. \[eq:basic\] is satisfied), and the difference $\gamma^*_i-\gamma_i$ gives a measure of robustness. We now summarise some techniques for solving . These illustrate how to solve the problem, and give insight into how the function $h\s$ should be selected. Robustness with respect to other standard classes of uncertainty can also be guaranteed by adding more constraints to , see for example [@Jon01]. ### Frequency response methods {#sec:fr} Probably the simplest way to check that a function is $\ESPR$ is to plot its frequency response. These methods are also the most insightful, since they give $h\s$ and \[eq:basic\] a graphical interpretation. The required result is the following, and is proved in . \[lem:fr\] Let $g\in\discalg$.
Then $g\in\ESPR$ if and only if there exists an $\epsilon>0$ such that $$\text{Re}\funof{g\jw}\geq{}\epsilon,\;\forall\omega\in\R\cup\cfunof{\infty}.$$ This suggests a simple frequency gridding approach for solving . In particular it shows that is equivalent to $$\begin{aligned} \text{maximize}\;&\gamma\\ \text{subject to}\;&\text{Re}\funof{\!h\!\jw\!\funof{\!1+\frac{\gamma{}\lft{l}{G_i\jw}{c_i\jw}}{j\omega}}\!}\!\geq{}\epsilon,\forall\omega. \end{aligned}$$ This optimisation problem is easily tackled with a host of numerical methods. Perhaps more importantly the frequency domain characterization shows that the choice of $h\s$ has a graphical interpretation. To understand this, observe that for a fixed $\omega$, finding an $\epsilon>0$ such that the constraint in the above is satisfied is equivalent to checking whether $$\label{eq:rotplane} \text{Re}\funof{e^{j\angle{}h\jw}\funof{1+z}}>0,$$ where $z={\gamma{}\lft{l}{G_i\jw}{c_i\jw}}/{j\omega}$. This corresponds to checking whether the point $z\in\C$ lies in a half-plane whose boundary cuts through the point $-1$ with slope $\angle{}h\jw$. This is illustrated in . The significance of this observation is that it shows that graphical frequency domain tools, robustness measures, and intuition can be used to design both $h\s$ and the controllers $c_i\s$. This will be discussed further in . It also connects to the results from [@LV06; @PV12]. ### State-space methods If we restrict ourselves to the space of real rational transfer functions, state-space techniques can also be employed. The following simple extension of the lemma is the required result. It shows that if we have a state-space realisation of the component model and $h$, we can solve by checking a linear matrix inequality (LMI). This proof is given in .
\[lem:kyp\] Let $p,h\in\Rat$, $\gamma>0$, and suppose that $p\s,\frac{h\s}{s}$ have minimal realisations $$p\s=\sqfunof{\begin{array}{c|c} A_1 & B_1 \\\hline C_1 & D_1 \end{array}},\;\frac{h\s}{s}=\sqfunof{\begin{array}{c|c} A_2 & B_2 \\\hline C_2 & 0 \end{array}}.$$ The following are equivalent: (i) $\gamma{}p\in\mathcal{P}_h$. (ii) There exists an $X\succ{}0$ such that $$\begin{bmatrix} A^TX+XA&C^T-XB\\C-B^TX&-\funof{D+D^T} \end{bmatrix}\prec{}0,$$ where $$A=\begin{bmatrix} A_1&B_1C_2\\0&A_2 \end{bmatrix},\;B=\begin{bmatrix} 0\\B_2 \end{bmatrix},$$ and $ C=\begin{bmatrix} \gamma{}C_1&\gamma{}D_1C_2+C_2A_2 \end{bmatrix},\; D=C_2B_2. $ Observe in particular that the LMI in is affine in $\gamma$. This means that we may address by solving an optimisation problem of the form $$\begin{aligned} \text{maximize}\quad&\gamma\\ \text{subject to}\quad& \begin{bmatrix} A^TX+XA&C^T-XB\\C-B^TX&-\funof{D+D^T} \end{bmatrix}\prec{}0\\ &X\succ{}0, \end{aligned}$$ where $A,B,C,D,\gamma{}$ are as in (ii). ### Circuit theory methods {#sec:circtheory} The $\PR$ functions have also been extensively studied in the context of classical circuit theory. One consequence of this was the development of algebraic tests for positive realness that can be applied to simple functions. For example, excluding the degenerate case $b_0=b_1=b_2=0$, the function $$\frac{a_2s^2+a_1s+a_0}{b_2s^2+b_1s+b_0}\in\PR$$ if and only if all the coefficients are non-negative, and $$\label{eq:biquad} \begin{aligned} \funof{\sqrt{a_2b_0}-\sqrt{a_0b_2}}^2\leq{}a_1b_1. \end{aligned}$$ For this result, historical context, and results for other rational functions, see [@CS09]. Such tests give a convenient method for solving when $\lft{l}{G_i}{c_i}$ is given by a simple parametrised model. We will illustrate this in .
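As an illustration of how the frequency gridding and circuit-theoretic approaches complement one another, the sketch below (our illustration; $T$, $m$ and $d$ are assumed values) solves the analysis problem for the uncontrolled swing model $\gamma/(ms+d)$ with the multiplier $h\s=s/(s+T)$ by bisection over a frequency grid, and cross-checks the answer against the algebraic condition \[eq:biquad\]:

```python
import math
import numpy as np

# Multiplier h(s) = s/(s+T) and uncontrolled swing model p(s) = 1/(m*s + d);
# T, m and d are assumed illustrative values.
T, m, d = 1000.0, 0.16, 0.02

w = np.logspace(-4, 4, 4000)
jw = 1j * w

def margin(gamma):
    """Grid minimum of Re( h(jw) * (1 + gamma*p(jw)/(jw)) )."""
    h = jw / (jw + T)
    p = 1.0 / (m * jw + d)
    return np.min(np.real(h * (1.0 + gamma * p / jw)))

# Bisect for the largest gamma keeping the test satisfied on the grid.
lo, hi = 0.0, 1e6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if margin(mid) > 0.0 else (lo, mid)

# Cross-check: h(s)*(1 + gamma/(s*(m*s+d))) is the biquad
# (m s^2 + d s + gamma)/(m s^2 + (d+T*m) s + T*d), so eq:biquad reads
# (sqrt(m*T*d) - sqrt(gamma*m))^2 <= d*(d+T*m), giving the closed form below.
gamma_alg = (math.sqrt(m * T * d) + math.sqrt(d * (d + T * m))) ** 2 / m
print(abs(lo - gamma_alg) < 1.0)   # the two estimates agree (about 80 here)
```

For such a simple parametrised model the algebraic route gives the answer in closed form; the gridding route generalises to arbitrary $\lft{l}{G_i}{c_i}$.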
A Scale-Free Design Method {#sec:res3} -------------------------- The true strength of is that it can be used as a basis for decentralised design with a-priori guarantees that hold for all operating points and network configurations. In this section we will discuss both how to design the function $h\s$, and the local controllers $c_i\s$. ### Designing $h\s$ The objective here is **not to design the perfect $h\s$**, but rather to get a sensible starting point for designing the decentralised controllers. In we saw that testing \[eq:basic\] with respect to any given $h\s$ is equivalent to checking that the frequency responses of $\gamma_i\lft{l}{G_i\s}{c_i\s}/s$ lie in a frequency dependent half-plane. Therefore if we know roughly how these responses will look, by for example plotting their Nyquist diagrams for some nominal parameter values, we can use this graphical intuition to design a suitable function $h\s$. As illustrated in , this is extremely easy to do with respect to a fixed half-plane, since a half-plane can be identified directly from the Nyquist diagrams. A function that will certify \[eq:basic\] for any set of models with Nyquist diagrams in this half-plane is then guaranteed to exist by the following simple extension of the off-axis circle criterion [@CN68], which is proved in . \[lem:offaxiscirc\] Let $p_1,\ldots{},p_n\in\discalg$ and assume that $p_i\funof{0}>0$. If there exists a $\theta\in[0,\pi/2)$ such that for all $i$ $$\text{Re}\funof{e^{j\theta}\funof{1+p_i\jw/j\omega}}>0,\,\forall{}\omega>0,$$ then there exists an $h\in\PR\cap\discalg$ such that $p_1,\ldots{},p_n\in\mathcal{P}_h$. Even if a fixed half-plane cannot be used, this process can be used to identify frequency ranges where different slopes are suitable. An $h\s$ to match these slopes in these frequency ranges can then be obtained using a lead-lag design. Alternatively other graphical or computational methods for multiplier design can be used, for example Popov plots.
For further discussions about the design of half-planes from the perspective of robustness and performance, see [@Pat15]. ### Synthesis of Controllers Consider the synthesis counterpart to . \[prob:synthesis\] Given $G_i,h$, $$\begin{aligned} \text{maximize}\quad&\gamma\\ \text{subject to}\quad&\gamma{}\lft{l}{G_i}{c_i}\in\mathcal{P}_h\\ &c_i\in\Rat_{c_i} \end{aligned}$$ where $\Rat_{c_i}\subseteq{}\Rat$ denotes the set of possible designs for $c_i$. Solving the above maximizes the robustness margin introduced in . In the power system context, simple controllers are typically desired. In this case the most effective way to solve is probably to solve the analysis problem in for a range of controller gains, and then select those that maximize $\gamma$. This will be illustrated for design in . Lead-lag design with respect to diagrams such as offers another simple alternative. Formal synthesis methods can also be used. In fact, when $\Rat_{c_i}=\Rat$ and $G_i\in\Rat$, can be solved using the $\Hfty$-based tools of [@SKS94]. \[thm:hfty\] Let $$M=\sqfunof{ \begin{array}{c|cc} A&B_1&B_2 \\\hline C_1&D_{11}&D_{12}\\ C_2&D_{21}&0 \end{array} },$$ and assume that $\funof{A,B_2}$ is stabilizable and that $\funof{C_2,A}$ is detectable. Then there exists a strictly proper controller $c\s$ such that $\lft{l}{M}{c}\in\ESPR$ if and only if there exist matrices $X_1,X_2,Y_1,Y_2$ such that $$\begin{aligned} \begin{bmatrix} AX_1+B_2X_2&0\\ C_1X_1+D_{12}X_2-B_1^T&-D_{11} \end{bmatrix}+\funof{\star}^T&\prec{}0,\\ \begin{bmatrix} Y_1A+Y_2C_2&Y_1B_1+Y_2D_{21}-C_1^T\\ 0&-D_{11} \end{bmatrix}+\funof{\star}^T&\prec{}0,\\ \begin{bmatrix} -X_1&I\\I&-Y_1 \end{bmatrix}&\prec{}0, \end{aligned}$$ where $\funof{\star}^T$ denotes the transpose of the matrix on its left. In [@SKS94] an explicit realisation of a controller that renders $\lft{l}{M}{c}\in\ESPR$ is also given, though due to space limitations we omit this. allows to be solved as follows.
By computing a minimal realisation $M_\gamma{}$ of the transfer function $$\begin{bmatrix} \frac{\gamma{}h\s}{s}&0\\0&I_n \end{bmatrix}G_i\s+\begin{bmatrix} h\s&0\\0&0 \end{bmatrix},$$ and checking the LMI in , the optimal solution to can be computed to arbitrary precision using a bisection over $\gamma$. Synthesis with further performance and robustness guarantees is also possible by adding more constraints to . Again, see [@Jon01] for an introduction. Do There Exist Better Scale-Free Design Criteria? {#sec:d} ------------------------------------------------- does not offer the only way to conduct scale-free design. For example, passivity theory shows that if for all $i$ $$\gamma_i\lft{l}{G_i}{c_i}\in\ESPR,$$ then the power system model is stable[^3]. This condition could also be used to conduct decentralised design, and gives the same types of guarantees as \[eq:basic\]. In this section we will show both that this passivity based condition is a special case of \[eq:basic\], and that in some sense the criteria from are the best possible. The following demonstrates the first claim, and is proved in . \[lem:passive\] If $p_1,\dots{},p_n\in\ESPR$, then there exists an $h\in\PR\cap\discalg{}$ such that $p_1,\dots{},p_n\in\mathcal{P}_h$. The converse of is not true. Indeed the models considered in are not passive, but do satisfy \[eq:basic\] for wide ranges of parameter values. In order to investigate whether there are better decentralised stability criteria than \[eq:basic\], suppose that for some frequency $$\label{eq:violate} \text{Re}\funof{h\jw\funof{1+\frac{\gamma_1{}\lft{l}{G_1\jw}{c_1\jw}}{j\omega}}}<0.$$ That is, \[eq:basic\] does not hold for the first bus, but perhaps only by an $\epsilon{}$ amount (compare \[eq:violate\] with the conditions in ). The idea is that if a better decentralised condition existed, it would have to allow for \[eq:violate\] to hold.
The following theorem shows that for a broad class of functions $h\s$ (which includes all the multipliers used in the examples) this is not possible, since if \[eq:violate\] holds then there exist $\gamma_2\lft{l}{G_2}{c_2},\dots{},\gamma_n\lft{l}{G_n}{c_n}\in\mathcal{P}_h$ and an $L_B$ meeting such that the power system model is unstable. This means that we cannot even relax the decentralised requirement for a single component by an $\epsilon{}$ amount and still obtain a-priori stability guarantees in a decentralised manner. \[thm:converse\] Let $p_1\in\Hfty$, $$h\s=\frac{s}{s+T}g\s\in\PR\cap\discalg,$$ where $T>0$ and $g,g^{-1}\in\discalg$, and assume that \[eq:violate\] holds for some $\omega>0$. Then given any $n\geq{}2$ there exist $p_2,\ldots{},p_n\in\mathcal{P}_h$ and an $L\in\mathcal{L}$ such that \[eq:seceq\] is unstable. The interconnection in \[eq:seceq\] is stable only if $$M\jw=\funof{I+L{\mbox{diag}}\funof{p_1\jw,\ldots{},p_n\jw}/j\omega}$$ is invertible. Now suppose that $p_2\!\s\!=\!\ldots{}\!=\!p_n\!\s\!=\!p\!\s$, and $$L=\begin{bmatrix} 1/\sqrt{2}\\-\sqrt{\frac{1}{2\funof{n-1}}}\mathbf{1_{n-1}} \end{bmatrix}\begin{bmatrix} 1/\sqrt{2}\\-\sqrt{\frac{1}{2\funof{n-1}}}\mathbf{1_{n-1}} \end{bmatrix}^T.$$ Under these conditions $L\in\mathcal{L}$ and $$\det{}M\jw=1+\tfrac{1}{2}p_1\jw/j\omega+\tfrac{1}{2}p\jw/j\omega.$$ Letting $x=p_1\jw/j\omega$, we see that if $p\jw/j\omega\equiv{}-x-2$ then $\det{}M\jw=0$, and therefore $M\jw$ is not invertible. Therefore all we need to do is find a $p\in\mathcal{P}_h$ such that $p\jw/j\omega=-x-2$. Equivalently we can find a $q\in\ESPR$ such that $q\jw=h\jw\funof{-1-x}$ and $q\funof{\infty}=h\funof{\infty}$, and then set $$p\s=\funof{s+T}\funof{q\s-h\s}g\s^{-1}.$$ Provided $\text{Re}\funof{h\jw\funof{-1-x}}>0$, such a $q$ can always be found using well known interpolation results (for example [@Vin00 Lemma 1.14]).
Observing that by assumption $$\text{Re}\funof{h\jw\!\funof{-1-x}}\!=\!-\text{Re}\funof{h\jw\!\funof{1+\tfrac{p_1\jw}{j\omega}}}>0$$ completes the proof. Examples {#sec:examples} ======== The three examples in this section show that our conditions can be used to: (i) demonstrate stability of existing power system models; (ii) give delay robustness guarantees for the swing dynamics with delayed droop control; and (iii) analyse the robust stability of automatic generation control (AGC) and design novel AGC controllers. Stability of the Swing Equations {#sec:ex1} -------------------------------- In this example we will show that our criteria can be used to verify stability of the swing equations when there is no control. It is of course no great surprise that this model is stable, and many other tools can be used to prove this. It is nevertheless reassuring that our conditions can easily cover this case. If we have a swing equation model with no control, then for all $i$, $c_i=0$, and consequently $$\label{eq:ex1show} \lft{l}{G_i}{c_i}=\frac{1}{m_is+d_i}.$$ Therefore in this case, \[eq:basic\] simplifies to $$\label{eq:ex1} \frac{\gamma_i}{m_is+d_i}\in\mathcal{P}_h.$$ The following corollary shows that there exists an $h$ such that the above holds for arbitrarily large $\gamma_i$ given any $m_i\geq{}0$ and $d_i>0$. Therefore the swing equation model is stable by for any possible parameter values, operating point and interconnection configuration. The proof uses the tools from circuit theory discussed in , illustrating their strength when simple parametrised models are considered. \[ex:1\] Let $p_1\s=\gamma_1/\funof{m_1s+d_1},\ldots{},p_n\s=\gamma_n/\funof{m_ns+d_n}$. If for all $i$ $$m_i\geq{}0,\, d_i>0\,\text{and}\,\gamma_i>0,$$ then there exists an $h\in\PR\cap\discalg{}$ such that $p_1,\ldots{},p_n\in\mathcal{P}_h$. Let $h\s=\frac{s}{s+T}$.
It is sufficient to show that for all $i$ there exists an $\epsilon>0$ such that $$\frac{s}{s+T}\funof{1+\frac{\gamma_i}{s\funof{m_is+d_i}}}-\epsilon\in\PR.$$ Multiplying out the above shows that it is equivalent to $$\frac{\funof{1-\epsilon}m_is^2+\funof{d_i-d_i\epsilon{}-T\epsilon{}m_i}s+\gamma_i-Td_i\epsilon{}}{m_is^2+\funof{d_i+Tm_i}s+Td_i}\in\PR.$$ We can show that the above holds by applying \[eq:biquad\]. Note however that $\funof{\sqrt{a_2b_0}-\sqrt{a_0b_2}}^2\leq{}\max\cfunof{a_2b_0,a_0b_2}$, and that if $T$ is sufficiently large and $\epsilon$ sufficiently small, then for all $i$ $$\funof{1-\epsilon}m_iTd_i\geq{}m_i\funof{\gamma_i-Td_i\epsilon{}}.$$ Therefore it is sufficient to show that $\funof{1-\epsilon}m_iTd_i\leq{}\funof{d_i-d_i\epsilon{}-T\epsilon{}m_i}\funof{d_i+Tm_i}$. Multiplying out this expression yields $$d_i^2-\funof{d_i\funof{d_i+Tm_i}+T^2m_i^2}\epsilon\geq{}0.$$ We can always pick $\epsilon$ small enough so that the above holds for all $i$, which completes the proof. Stability of Droop Control Subject to Delay {#ex:2} ------------------------------------------- In this example we will use our criteria to verify stability of the swing equations when there is droop control subject to delays. In order to get simpler criteria we will neglect governor and turbine dynamics (these can easily be included, and will be in the next example). This model is described by $$\label{eq:delay-swing} \begin{aligned} G_i=\frac{1}{m_is+d_i}\begin{bmatrix} 1&1\\1&1 \end{bmatrix},\;c_i=-\frac{1}{r_i}e^{-s\tau_i},\\\Longrightarrow{}\lft{l}{G_i}{c_i}=\frac{1}{m_is+d_i+\frac{1}{r_i}e^{-s\tau_i}}. \end{aligned}$$ In the above $r_i>0$ is the droop constant, and $\tau_i\geq{}0$ a measurement delay. 
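Before deriving the delay bounds, here is a numerical spot-check of the half-plane test that will be used to certify this model (our sketch: $\gamma$, $m$ and the scaling factors are assumed, the bound on $r$ anticipates the inequality derived below, and the direction $\pi+6j$ is the one used in the lemma at the end of this example):

```python
import numpy as np

# Spot-check of the delayed-droop half-plane condition from this example.
# Assumed values: gamma and m fixed, r chosen just inside r <= sqrt(2/(gamma*m)),
# tau just inside tau < pi*m*r/4, and zero natural damping d (the worst case).
m, gamma, d = 0.16, 10.0, 0.0
r = 0.99 * np.sqrt(2.0 / (gamma * m))
tau = 0.95 * np.pi * m * r / 4.0

w = np.logspace(-3, 4, 20000)
jw = 1j * w
p = gamma / (m * jw + d + np.exp(-jw * tau) / r)   # gamma * lft(G_i, c_i) on the axis
val = (np.pi + 6j) * (1.0 + p / jw)
print(bool(np.min(val.real) > 0.0))   # the half-plane test holds: True
```

Repeating this with $\tau$ pushed past the bound gives a negative minimum, which is the graphical picture behind the delay limit.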
In the following we will show that if for all $i$ $$\label{eq:scfrdroop} r_i\leq{}\sqrt{2/\gamma_im_i},$$ then stability of the power system model is guaranteed by for any values of the delays that satisfy $$\label{eq:scfrdroop1} 0\leq{}\tau_i<\pi{}m_ir_i/4,$$ and for any non-negative values of the natural damping constants $d_i$ (which are typically unknown). This perfectly illustrates the strength of our approach for conducting design in the network setting. By using , the task of synthesizing decentralised controllers to guarantee robust stability to delays in a large uncertain system –a daunting task– has been simplified to picking a set of constant gains that satisfy a simple inequality. Such constants always exist, and the resulting controllers are simple to implement. Furthermore the design comes with a-priori guarantees about robustness to delays and levels of natural damping, that hold entirely independently of operating point and interconnection configuration. To derive this result we will use the approach outlined in . As suggested there, in order to choose a suitable $h\s$, we plot the Nyquist diagrams of $\lft{l}{G_i\s}{c_i\s}/s$ for a range of parameter values. This is shown in . This not only shows that passivity tools cannot be used, even for arbitrarily small values of the delay, but also that the Nyquist diagrams lie within the same half-plane for wide ranges of parameter values. This suggests that we can use to verify the decentralised stability requirement in \[eq:basic\]. In fact this requirement can be turned into parameter dependent inequalities, as shown in below. For ease of presentation we only give the result for the special choice of half-plane that leads to \[eq:scfrdroop,eq:scfrdroop1\]. For generalizations of these inequalities and the proof, see . \[lem:delays\] Let $m\geq{}0,r>0$ and $\gamma{}>0$. 
If $$r\leq{}\sqrt{2/{\gamma}m},$$ then for all $0\leq{}\tau<\pi{}mr/4,d\geq{}0$ and $\omega>0$, $$\text{Re}\funof{\funof{\pi+6j}\funof{1+\frac{\gamma{}}{j\omega\funof{mj\omega+d+\frac{1}{r}e^{-j\omega{}\tau}}}}}>0.$$ Stability of Automatic Generation Control (AGC) {#ex:3} ----------------------------------------------- [Figure: AGC architecture at the $i$th bus: the input $P_{d,i}-P_{N,i}$ and the frequency feedback $-\beta_i\dot{\theta}_i$ are summed and integrated by $\frac{k_i}{s}$; the result is summed with the droop feedback $-\frac{1}{r_i}\dot{\theta}_i$ and passed through the governor $\frac{1}{1+sT_{g,i}}$ and the turbine $\frac{1}{1+sT_{t,i}}$; the turbine output and $P_{d,i}-P_{N,i}$ then drive the generator $\frac{1}{m_is+d_i}$, producing $\dot{\theta}_i$.]

  $m$    $d$    $T_g$   $T_t$   $r$    $\beta$   $k$
  ------ ------ ------- ------- ------ --------- ------
  0.16   0.02   0.08    0.40    3.00   0.33      0.30
  0.20   0.02   0.06    0.44    2.73   0.40      0.20
  0.12   0.02   0.07    0.30    2.82   0.38      0.40

AGC is an extension of droop control. The primary objective of AGC is to regulate system frequency to the specified nominal value (50/60 Hz), while maintaining the flow of power between buses at their scheduled values. A typical controller architecture is shown in [@Bev14]. From the control perspective, the synthesis task is to design the parameters $\beta_i,k_i$.
It is common to select $\beta_i\approx{}1/r_i+d_i$, with $k_i$ selected based on simulation studies to act on the time scale of 1-10 minutes (see e.g. [@kundur_power_1994 §[11.1.5]{}]), and it has been observed that when ‘large’ $\beta_i$’s are chosen, stability issues can arise. Within our framework, the generalised plant is $$G_i\s={\begin{bmatrix} \frac{1}{m_is+d_i}&\frac{1}{\funof{m_is+d_i}\funof{1+sT_{g,i}}\funof{1+sT_{t,i}}}\\ \frac{1}{m_is+d_i}&\frac{1}{\funof{m_is+d_i}\funof{1+sT_{g,i}}\funof{1+sT_{t,i}}}\\ 1&0 \end{bmatrix}},$$ and the standard controller is $$c_i\s=\begin{bmatrix} -\frac{1}{r_i}&0 \end{bmatrix}+\frac{k_i}{s}\begin{bmatrix} -\beta_i&1 \end{bmatrix}.$$ To formally address the design of the controller, we solved the analysis problem in for a range of values of the control parameters. For the first set of generator parameters this is shown in . From this figure we see that the nominal design, which is marked by a cross, is a reasonable choice, though the robustness margin could be further improved by reducing $\beta_i$ or increasing $k_i$. We also see that increasing $\beta_i$ will reduce the optimal $\gamma$, justifying the observation that ‘large’ $\beta_i$’s can cause stability problems. We can also design controllers by solving the synthesis problem in using $\Hfty$ methods. Given the need for simple controllers, the value here is more in finding out what levels of robustness are possible, rather than in the controllers themselves. To this end we fixed the controller parameters $r_i,\beta_i$ to their values from . Selecting the best possible $k_i\in\R$ gives an optimal solution of around $11$. However, replacing the constant $k_i$ with a transfer function $k_i\in\Rat$ and solving the synthesis problem using the $\Hfty{}$ method from yields an optimal solution of around $10^4$. This shows that the use of dynamic control has the potential to greatly increase the robustness margin.
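As a basic sanity check on the nominal design (a single-bus check only; the network-level guarantee is exactly what \[eq:basic\] adds on top of it), one can verify that $\lft{l}{G_i}{c_i}$ is stable for the first row of parameters in the table. The polynomial form below is our own algebra from the block diagram:

```python
import numpy as np

# Closed-loop transfer function lft(G_i, c_i) for the AGC loop:
#   p(s) = (s*D_H(s) + k) / ( s*D_H(s)*(m*s + d) + s/r + k*beta ),
# with D_H(s) = (1 + s*T_g)(1 + s*T_t); first table row of parameters.
m, d, Tg, Tt, r, beta, k = 0.16, 0.02, 0.08, 0.40, 3.00, 0.33, 0.30

DH = np.polymul([Tg, 1.0], [Tt, 1.0])                     # (1+s*Tg)(1+s*Tt)
sDH = np.polymul([1.0, 0.0], DH)                          # s*D_H(s)
den = np.polyadd(np.polymul(sDH, [m, d]),                 # s*D_H*(m*s+d)
                 np.polyadd([1.0 / r, 0.0], [k * beta]))  # + s/r + k*beta

poles = np.roots(den)
print(bool(np.all(poles.real < 0)))   # the nominal single-bus loop is stable: True
```

Such a pole check says nothing about other operating points or network configurations, which is why the decentralised certificate is computed instead.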
It is interesting to consider how this can be exploited in the design of inverters, where the use of more complex controllers is a more realistic prospect.

Conclusions
===========

A decentralised analysis and design framework for frequency control in power systems has been presented. Our framework allows for the design of decentralised controllers using only local models, and provides strong a priori robust stability guarantees that hold independently of operating point, even as components are added to and removed from the grid. Furthermore our conditions can be applied even when the network consists of complex heterogeneous components, and can be checked using standard frequency response, state-space, and circuit theoretic tools. We illustrate the suitability of the framework for power systems by: (i) showing that the robustness of existing schemes can be analysed and further improved using the newly developed tools; and (ii) providing novel delay robustness criteria for the classical swing equations.

[Richard Pates]{} received the M.Eng degree in 2009, and the Ph.D. degree in 2014, both from the University of Cambridge. He is currently a Researcher at Lund University. His research interests include scale-free methods for control system design, stability and control of electrical power systems, and fundamental performance limitations in large-scale systems.

[Enrique Mallada]{} is an assistant professor of electrical and computer engineering at Johns Hopkins University. Before joining Hopkins in 2016, he was a post-doctoral fellow at the Center for the Mathematics of Information at the California Institute of Technology from 2014 to 2016. He received his ingeniero en telecomunicaciones degree from Universidad ORT, Uruguay, in 2005 and his Ph.D. degree in electrical and computer engineering with a minor in applied mathematics from Cornell University in 2014.

Proof of {#app:lem1}
---------

implies that $0\preceq{}L_B$, from which standard arguments (using e.g.
Gershgorin discs) show that $$0\preceq{}\begin{bmatrix} \Gamma_1&0\\0&\Gamma_2 \end{bmatrix}^{-\frac{1}{2}}\begin{bmatrix} L_{B,11}&L_{B,12}\\ L_{B,21}&L_{B,22} \end{bmatrix}\begin{bmatrix} \Gamma_1&0\\0&\Gamma_2 \end{bmatrix}^{-\frac{1}{2}}\preceq{}I.$$ The result then follows immediately from [@Smi92 Theorem 5].

Proof of {#app:1}
---------

Since $p\in\mathcal{P}_h$, there exists an $\epsilon$ such that $h\s\funof{1+\frac{p}{s}}-\epsilon\in\PR$. Therefore $$\frac{1-\gamma}{\gamma}h\s+h\s\funof{1+\frac{p\s}{s}}-\epsilon\in\PR.$$ This implies that $h\s\funof{1+\gamma{}\frac{p\s}{s}}-\gamma\epsilon\in\PR$. Consequently $\gamma{}p\s\in\mathcal{P}_h$ for all $0<\gamma\leq{}1$ as required.

Proof of {#app:2}
---------

Denote $\phi\s=\frac{1-s}{1+s}$, and let $z=\phi\s$ and $G\funof{z}=g\funof{\phi^{-1}\funof{z}}$. Since $\phi$ maps the open right half plane to the open unit disc, $$\sup_{\text{Re}\funof{s}>0}\text{Re}\funof{g\s}=\sup_{\abs{z}<1}\text{Re}\funof{G\funof{z}}.$$ Since $g\s\in\discalg$, $G\funof{z}$ is analytic on the open unit disc, and continuous on the closed unit disc [@Par97]. Therefore by the maximum modulus principle $$\sup_{\abs{z}<1}\text{Re}\funof{G\funof{z}}=\!\!\max_{t\in\sqfunof{0,2\pi}}\text{Re}\funof{G\funof{e^{jt}}}=\!\!\!\!\!\max_{\omega\in\R\cup\cfunof{\infty}}\text{Re}\funof{g\s}.$$ The result is now immediate from .

Proof of {#app:3}
---------

By [@SKS94 Lemma 2.3], if $$G\s=\sqfunof{\begin{array}{c|c} A & B \\\hline C & D \end{array}},$$ then the condition $G\in\ESPR$ is equivalent to the existence of an $X\succ{}0$ such that $$\begin{bmatrix} A^TX+XA&C^T-XB\\C-B^TX&-\funof{D+D^T} \end{bmatrix}\prec{}0.$$ Therefore we need only show that $$h\s\funof{1+\frac{\gamma{}p\s}{s}}=\sqfunof{\begin{array}{c|c} A & B \\\hline C & D \end{array}},$$ where $A,B,C,D$ are given as in (ii).
Applying standard formulae for multiplying state-space realisations shows that $h\s\gamma{}p\s/s$ and $sh\s$ have realisations $$\begin{aligned} \sqfunof{\begin{array}{cc|c} A_1&B_1C_2 & 0 \\ 0&A_2&B_2\\\hline \gamma{}C_1&\gamma{}D_1C_2 & 0 \end{array}}\;\text{and}\; \sqfunof{\begin{array}{c|c} A_2 & B_2 \\\hline C_2A_2 & C_2B_2 \end{array}} \end{aligned}$$ respectively, from which the result immediately follows.

Proof of {#app:offaxis}
---------

Let $g_i=\funof{s/\funof{s+T}}\funof{1+p_i\s/s}$. It is easily shown that $g_i\in\discalg$, and that for $T$ sufficiently large there exists an $\epsilon>0$ such that for all $i$ and $\omega\geq{}0$ $$\text{Re}\funof{-je^{j\funof{\theta+1/T}}g_i\jw}\geq{}\epsilon{}.$$ Therefore by [@CN68 Theorem 2] there exists an ‘$RC$’ multiplier $h_{RC}$ such that $h_{RC}g_i\in\ESPR$. Consequently if $h\s=h_{RC}\s{}s/\funof{s+T}$, then $p_1,\ldots{},p_n\in\mathcal{P}_h$ as required.

Proof of {#app:7}
---------

Since $p_i\in\ESPR$ there exists an $\epsilon>0$ and a $\gamma>0$ such that for all $i$ and $\omega\in\R\cup\cfunof{\infty}$, $$\text{Re}\funof{p_i\jw}\geq{}\epsilon{},\,\abs{\text{Im}\funof{p_i\jw}}\leq{}\gamma.$$ Let $h\s=s/\funof{s+T}$. By , $p_i\in\mathcal{P}_h$ if and only if there exists a $\delta>0$ such that $$\text{Re}\funof{j\omega/\funof{j\omega+T}\funof{1+p_i\jw/j\omega}}\geq{}\delta{}.$$ This is equivalent to $$\begin{aligned} \frac{T\text{Re}\funof{p_i\jw}+\omega\funof{\omega+\text{Im}\funof{p_i\jw}}}{\omega^2+T^2}&\geq{}\delta{}&\Longleftarrow{}\\ \frac{T\epsilon+\omega\funof{\omega-\text{sign}\funof{\omega}\gamma{}}}{\omega^2+T^2}&\geq{}\delta{}. \end{aligned}$$ By picking $T$ sufficiently large there will always exist a $\delta>0$ such that the above is satisfied, which completes the proof.
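The final "T sufficiently large" step can be illustrated numerically. Taking hypothetical bounds $\epsilon=1$ and $\gamma=5$, the numerator $T\epsilon+\omega\funof{\omega-\gamma}$ has minimum $T\epsilon-\gamma^2/4$ at $\omega=\gamma/2$, so any $T>\gamma^2/4=6.25$ suffices; the sketch below confirms that a strictly positive $\delta$ exists for $T=10$.

```python
import numpy as np

# Illustration of the "T sufficiently large" step with hypothetical bounds
# eps = 1, gamma = 5: the numerator T*eps + w*(w - gamma) has minimum
# T*eps - gamma**2/4, so any T > 6.25 works; take T = 10.
eps, gamma, T = 1.0, 5.0, 10.0
w = np.linspace(0, 100, 100001)
f = (T * eps + w * (w - gamma)) / (w**2 + T**2)
print(f.min() > 0)   # a strictly positive delta exists -> True
```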
Proof of {#app:6}
---------

(Figure: the curves $j\tilde{\omega}\funof{j\tilde{\omega}+\tilde{d}+e^{-tj\tilde{\omega}}}$ for $t=0.68$, $0.71$ and $\pi/4$, together with the circle of radius $\sqrt{1+\frac{36}{\pi^2}}$ centred at $-1-\frac{6j}{\pi}$.)

First note that by putting $k=1/{m\gamma{}r^2}$, $\tilde{\omega}=mr\omega$ and $t={\tau}/{mr}$, we obtain the following canonical form: $$\frac{1}{j\omega{}\funof{mj\omega{}+d+\frac{1}{r}e^{-j\omega{}\tau}}}=\frac{\nicefrac{1}{k}}{j\tilde{\omega}\funof{j\tilde{\omega}+d/mrk+e^{-tj\tilde{\omega}}}}.$$ From the conditions of the theorem $k\geq{}\frac{1}{2}$, and therefore it is sufficient to show that $$\label{eq:suffirst} \text{Re}\funof{\funof{\pi+6j}\funof{\frac{1}{2}+\frac{1}{j\tilde{\omega}\funof{j\tilde{\omega}+d/mrk+e^{-tj\tilde{\omega}}}}}}>0.$$ Given $z\neq{}0$ it is simple to show that $$\begin{aligned} \text{Re}\funof{\funof{\pi+6j}\funof{1/2+1/z}}&>0&\Longleftrightarrow{}\\ \funof{\text{Re}\funof{z}+1}^2+\funof{\text{Im}\funof{z}+6/\pi}^2&>1+36/\pi^2. \end{aligned}$$ Therefore if the curve ${j\tilde{\omega}\funof{j\tilde{\omega}+d/mrk+e^{-tj\tilde{\omega}}}}$ lies strictly outside a circle with centre $-1-6j/\pi$ and radius $\sqrt{1+36/\pi^2}$, then the theorem holds. A lengthy but routine geometric argument then shows that this is the case for all $d\geq{}0$ and $\tilde{\omega}>0$ if and only if $t<\pi/4$, from which the result follows. This is illustrated in . The following generalization allows for arbitrary half-planes ( corresponds to the case $\alpha=\pi/4$). Reducing $\alpha$ allows for stronger delay robustness guarantees at the expense of requiring larger droop constants $r$. Let $m\geq{}0,r>0,\gamma{}>0$ and $\pi/2>\alpha>0$.
If $$r\leq{}\sqrt{\frac{\pi\funof{\pi-2\alpha{}}}{4\alpha^2m\gamma}},$$ then for all $0\leq{}\tau<\alpha{}mr,d\geq{}0$ and $\omega>0$, $$\text{Re}\!\funof{\!\!\funof{\pi{}\!+\!\frac{2j\funof{\pi-\alpha}}{\alpha}}\!\!\funof{1\!+\!\frac{\gamma{}}{j\omega\funof{mj\omega+d+\frac{1}{r}e^{-j\omega{}\tau}}}}\!\!}\!\!>\!0.$$

[^1]: R. Pates is a member of the LCCC Linnaeus Center and the ELLIIT Excellence Center at Lund University, Lund, Sweden. Email: . E. Mallada is with the Department of ECE at Johns Hopkins University, Baltimore, Maryland, USA. Email:. This work was supported by the Swedish Foundation for Strategic Research, the Swedish Research Council through the LCCC Linnaeus Center, and NSF through grants CNS 1544771, EPCN 1711188, AMPS 1736448, and CAREER 1752362. A preliminary version of this work has been presented in [@PM17].

[^2]: We say the interconnection is stable if $$\begin{bmatrix} P\s\\I \end{bmatrix}\funof{I+\tfrac{1}{s}LP\s{}}^{-1}\begin{bmatrix} \tfrac{1}{s}L&I \end{bmatrix}\in\Hfty{}^{2n\times{}2n},$$ where $P\s={\mbox{diag}}\funof{p_1\s,\ldots{}p_n\s}$.

[^3]: This is because $\frac{1}{s}L$ is passive for all $L\in\mathcal{L}$, and the negative feedback interconnection of a passive and strictly passive system is stable (e.g. [@BL+06]).
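The generalised delay bound can be spot-checked numerically. The sketch below takes $\alpha=\pi/8$ with illustrative machine parameters (not from the paper), chooses $r$ and $\tau$ comfortably inside the stated bounds, and scans the real part of the left-hand side over a frequency grid.

```python
import numpy as np

# Numerical spot-check of the generalised delay bound with alpha = pi/8.
# Parameter values m, gamma, d are illustrative; r and tau are chosen
# inside the bounds r <= sqrt(pi*(pi-2a)/(4a^2*m*gamma)) and tau < a*m*r.
alpha = np.pi / 8
m, gamma, d = 0.16, 1.0, 0.02
r = 6.0     # bound: ~8.66 for these m, gamma
tau = 0.2   # bound: alpha*m*r ~= 0.377

w = np.logspace(-3, 3, 200000)            # grid of omega > 0
den = 1j * w * (m * 1j * w + d + np.exp(-1j * w * tau) / r)
mult = np.pi + 2j * (np.pi - alpha) / alpha
expr = mult * (1 + gamma / den)
print(expr.real.min() > 0)                # theorem predicts strictly positive
```

At small $\omega$ the positive imaginary part of the multiplier dominates and at large $\omega$ the expression tends to $\pi$, so the scan probes the intermediate frequencies where the delay term matters.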